Co-Director of Equilibria Network: https://eq-network.org/
I try to write as if I were having a conversation with you in person.
I would like to claim that my current safety beliefs are a mix of Paul Christiano's, Andrew Critch's, and def/acc's.
Yeah, for sure. I think the devil might be in the details here around how things are run and what the purpose of the national organisation is. Since Sweden and Norway each have about an eighth of Germany's population, the effect of a "nation-wide group" might be different?
In my experience, EA Sweden focuses on and provides a lot of the things you listed, so I would be very curious to hear what the difference between a local and a national organisation would be. Is there a difference in the dynamics of them being motivated to sustain themselves because of the scale?
You probably have a lot more experience than me in this so it would be very interesting to hear!
I like that decomposition.
There's something about a prior in favour of having democratic decision making as part of this, because it usually allows for better community engagement? Representation often leads to feelings of inclusion, and whilst I've only dabbled in the sociology here, it seems like the option of saying no is quite important for members to feel heard?
My guess would be that the main pros of democratic deliberation don't come when the going is normal but rather as a resilience mechanism? Democracies tend to react late to major changes and rarely change path, but when they do, they do it properly? (I think this statement is true, but it might as well be a cultural myth that I've heard in the social-choice-adjacent community.)
First and foremost, I think the thoughts expressed here make sense and this comment is more just expressing a different perspective, not necessarily disagreeing.
I wanted to bring up an existing framework for thinking about this from Raghuram Rajan's "The Third Pillar," which provides economic arguments for why local communities matter even when they're less "efficient" than centralized alternatives.
The core economic benefits of local community structures include:
So when you bring up the question of efficiency and adherence to optimal reflective practices I start thinking about it from a more systemic perspective.
Here's a question that comes to mind: if local EA communities make people 3x more motivated to pursue high-impact careers, or make it much easier for newcomers to engage with EA ideas, then even if these local groups operate at only 75% efficiency compared to some theoretical global optimum, you still get significant net benefit.
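To make the arithmetic explicit, here's a back-of-envelope sketch of that trade-off. All the numbers are illustrative assumptions, not estimates of anything real:

```python
# Does a local group's motivational boost outweigh its efficiency loss?
# Numbers are purely illustrative.

def net_impact(baseline, motivation_multiplier, efficiency):
    """Impact per member relative to a no-local-community baseline."""
    return baseline * motivation_multiplier * efficiency

without_group = net_impact(1.0, 1.0, 1.0)   # theoretical global optimum
with_group = net_impact(1.0, 3.0, 0.75)     # 3x motivation, 75% efficiency

print(with_group / without_group)  # 2.25, i.e. a 2.25x net benefit
```

The point is just that a multiplicative motivation effect dominates a modest efficiency discount.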
I think this becomes a governance design problem rather than a simple efficiency question. The real challenge is building local communities that capture these motivational benefits while maintaining mechanisms for critical self-evaluation. (Which I think happens through impact evaluations and similar at least in EA Sweden.)
I disagree with the pure globalization solution here. From a broader macroeconomic perspective, we've seen repeatedly that dismantling local institutions in favor of "more efficient" centralized alternatives often destroys valuable social infrastructure that's hard to rebuild. The national EA model might be preserving something important that pure optimization would eliminate.
This is very nice!
I've been thinking that there's a nice, generalisable analogy between Bayesian updating and forecasting. (It's fairly obvious once you think about it, but it feels like people aren't exploiting it?)
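A minimal sketch of what I mean by the analogy: a forecaster moving their probability after seeing evidence is just doing a Bayesian update in odds form (the numbers here are made up for illustration):

```python
# A forecaster's revision as a Bayesian update in odds form.

def update(prior_p, likelihood_ratio):
    """Posterior probability from a prior and P(E|H) / P(E|not H)."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A forecaster at 30% sees evidence twice as likely under their hypothesis:
print(update(0.30, 2.0))  # ~0.46, the "new forecast"
```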
I'm doing a project, called Predictive Liquid Democracy (PLD), on simulating a version of this idea in a way that utilizes democratic decision making, and I would love to hear if you have any thoughts on the general setup. It is model parameterization, but within a specific democratic framing.
PLD is basically saying the following:
What if we could set up a trust-based, meritocratic voting network based on predictions about how well a candidate will perform? It is futarchy with some twists.
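As a toy illustration of the kind of mechanism I mean (this is my own simplified sketch, not the actual PLD design; the scoring rule and names are illustrative):

```python
# Toy sketch: voting weight allocated in proportion to forecasting merit.

def brier_score(forecasts):
    """Mean squared error of past (probability, outcome) predictions; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def pld_weights(track_records):
    """Give each voter weight proportional to (1 - Brier score)."""
    merit = {voter: 1 - brier_score(record) for voter, record in track_records.items()}
    total = sum(merit.values())
    return {voter: m / total for voter, m in merit.items()}

records = {
    "alice": [(0.9, 1), (0.2, 0)],  # well-calibrated forecaster
    "bob":   [(0.4, 1), (0.7, 0)],  # poorly calibrated forecaster
}
weights = pld_weights(records)
# alice ends up with more voting weight than bob
```

In the full setup, delegation would flow along a trust graph rather than being computed directly, but the core idea of weighting votes by predictive track record is the same.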
Now for the generalised framing in terms of graphs that I'm thinking of:
I'm currently writing a paper setting up the variational mathematics behind this, and another on some more specific simulations to run, so I'm very grateful for any thoughts you might have on this setup!
Some people might find that this post is written from a place of agitation, which is fully okay. Even if you do, there are two things I would want to point out as really good points:
I think there's a very, very interesting project of democratizing the EA community in a way that makes it more effective. There are lots of institutional designs we could apply to ourselves, and I would be very excited to see more work in this direction!
Edit:
Clarification on why I believe it to cause some agitation for some people:
This isn't just a technical issue. This is a design philosophy, one that rewards orthodoxy, punishes dissent, and enforces existing hierarchies.
I liked the post, I think it made a good point, I strong upvoted it but I wanted to mention it as a caveat.
I felt that this post might be relevant for longtermism and person-affecting views, so I had Claude write up a quick report on that:
In short: Rejecting the SWWM 💸11% pledge's EV calculation logically commits you to person-affecting views, effectively transforming you from a longtermist into a neartermist.
Example: Bob rejects investing in a $500 ergonomic chair despite the calculation showing 10^50 * 1.2*10^-49 = 12 lives saved due to "uncertainty in the probabilities." Yet Bob still identifies as a longtermist who believes we should value future generations. This is inconsistent, as longtermism fundamentally relies on the same expected value calculations with uncertain probabilities that SWWM uses.
The 🔮 Badge
If you've rejected the SWWM 💸11% Pledge while maintaining longtermist views, we'd appreciate it if you could add the 🔮 "crystal ball" emoji to your social media profiles to signal your epistemic inconsistency.
FAQ
Why can't I reject SWWM but stay a longtermist?
Both longtermism and SWWM rely on the same decision-theoretic framework of accepting tiny probabilities of affecting vast future populations. Our analysis shows the error bars in SWWM calculations (±0.0000000000000000000000000000000000000000000001%) are actually narrower than the error bars in most longtermist calculations.
What alternatives do I have?
According to our comprehensive Fermi estimate, maintaining consistency between your views on SWWM and longtermism is approximately 4.2x more philosophically respectable.
First and foremost, I'm low confidence here.
I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.
What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regards to AI, we think of humans going extinct, but I believe that to be shorthand for wise, compassionate decision making (at least in the EA sphere).
Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our current governance conditions a bit like correction systems for individual errors. If the errors pile up, we go off the rails, increasing x-risk as well as the chances of a bad future.
So a good decision-making system should account for both x-risk and value estimation; therefore the solution is the same, and it is a false dichotomy?
(I might be wrong and I appreciate the slider question anyway!)
First and foremost, I agree with the point. I think looking at this especially through a lens of transformative AI might be interesting. (Coincidentally, this is something I'm currently doing using agent-based models (ABMs) with LLMs.)
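For anyone unfamiliar with the ABM-with-LLMs setup, here's a rough skeleton of what it looks like. The agent "policy" would be an LLM call in a real project; here it's a hypothetical stub, and the behaviours are made up for illustration:

```python
# Skeleton of an agent-based model where each agent's decision rule
# stands in for an LLM call (the stub below is purely hypothetical).
import random

def llm_policy(agent, neighbours):
    """Stand-in for an LLM choosing an agent's next action."""
    if random.random() < 0.1:  # occasional exploration/noise
        return random.choice(["cooperate", "defect"])
    votes = [n["action"] for n in neighbours]
    return max(set(votes), key=votes.count)  # copy the local majority

def step(agents):
    """One synchronous update of all agents."""
    return [
        {"action": llm_policy(a, agents[:i] + agents[i + 1:])}
        for i, a in enumerate(agents)
    ]

agents = [{"action": random.choice(["cooperate", "defect"])} for _ in range(10)]
for _ in range(5):
    agents = step(agents)
```

Swapping the stub for a real LLM call is what makes these models interesting for studying transformative-AI dynamics, at the cost of much slower simulation.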
You probably know this one but here's a link to a cool project: https://effectiveinstitutionsproject.org/
Dropping some links below. I've been working on this with a couple of people in Sweden for the last two years; we're building an open-source platform for better democratic decision making using prediction markets:
https://digitaldemocracy.world/flowback-the-future-of-democracy/
The people I'm working with there are also working on:
I know the general space here so if anyone is curious I'm happy to link to people doing different things!
You might also want to check out:
Yeah, I think you're right, and I also believe it can be a both-and?
You can have a general non-profit board and, at the same time, a form of representative democracy going on, which seems like the best we can currently do for this?
I think it is fundamentally a more timeless trade-off between hierarchical organisations, which are generally able to act with more "commander's intent", and democratic models, which are more of a flat voting structure. The democratic models suffer when there is a lot of single-person linear thinking involved but do well at providing direct information about what people care about, whilst the inverse is true for the hierarchical ones. The project of good governance is, to some extent, somewhere in between.