Camille

Head of Projects @ Mieux Donner
354 karma · Joined · Working (0-5 years) · 94110 Arcueil, France
www.effectivedisagreement.org

Bio


Currently working for Mieux Donner. I do many things, but I mostly write content.

Background in cognitive science. I run a workshop that teaches methods for managing strong disagreements (including to non-EA people). I also do community building.

Interested in cyborgism and AIS via debate.

https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4

How others can help me

I often get tremendous help from people who know how to program and are enthusiastic about helping out for an evening.

Comments

This helped clarify things a bit, thanks, not least because I've interacted with some of you lately.

I'm still mostly skeptical because of the number of implicit conjunctions (a solution that solves A & B & C & D & E seems less likely to exist than several specialised solutions), how common it is for extremely effective ideas to be specialised (rather than the knock-on result of a general idea), and the vague similarity with "Great Idea" death spiral traits. All of this said, I'm in favor of keeping the discussion open. Fox mindset rules.
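(To make the first worry explicit, as a rough sketch: a conjunction can never be more probable than its least probable conjunct, so, all else equal,

$$P(\text{solves } A \land B \land \dots \land E) \;\le\; \min\big(P(\text{solves } A), \dots, P(\text{solves } E)\big).$$

A "solve everything" intervention pays this conjunction penalty unless the problems share a common cause.)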

For those who need clarification, I think I understand four non-exclusive example avenues of what "solving the metacrisis" looks like (all the names are 100% made up, but useful for me to think about it):

1-"Systemism" postulates the solution has to do with complex system stuff. You really want to solve AI X-risk ? Don’t try to build OpenAI, says the systemist, instead, do :

1.1-A single gigabrained intervention that truly addresses all the problems at the same time (maybe "Citizen Assemblies but better")

1.2-A conjunction of interventions that mutually reinforce each other in a feedback loop (maybe "Citizen Assemblies" + "UBI" + "Empowerment in social help")
 

Will this intervention solve one problem, or all of them? Again, opinions diverge:

1.3-Each intervention should solve one problem, but a bigger one, and to a better extent, than conventional solutions do.

1.4-This intervention should solve all the problems.

1.5-The intervention should solve one problem, but you can "copy-paste" it; it has high transferability.
 

2-"Introspectivism" postulates the solution has to do with changing the way we relate to ourselves and the rest of the world. You really want to solve AI X-risk ? Again, don’t build OpenAI, but go meditate, learn NVC, use Holacracy.
 

3-"Integralism" postulates the solution is to incorporate criticism of all different paradigms. You really want to solve AI X-risk ? Make your plans consistent with Marxism, Heidegerian Phenomenology and Buddhism, and you’ll get there.


4-"Culturalism" postulates that cultural artifacts (workshops, books, a social movement, and/or memes in general) will suceed to change how people act and coordinate, such that reducing X-risks becomes feasible. Don't try to build OpenAI, think about cultural artifacts -but not books about AI Risk, more like books about coordination and communication.

Separately, I think discussing disagreements around the meta-crisis is going to be hard. 

Why? I think there is a disparity in relevance heuristics between EA and... well, the rest of the world. EA has analytical and pragmatic relevance heuristics. "Meta-crisis people" have a social relevance heuristic, and some other streams of thought have a phenomenological relevance heuristic.

Think Cluster Headache. Many people have attempted to say that Cluster Headaches could matter a great deal. But they said things we (or at least I) didn't necessarily understand, like "it's very intense. You become pain. You can't understand." Then, decades later, someone says "maybe suffering is exponential, not linear, and CH is an intensity where this insight is very clear." And then (along with other numerate considerations) we progressively started caring (or at least, I started caring).

All these systems can communicate with EA if and only if they succeed in formalizing / pragmatizing themselves to some degree, and I personally think this is what people like Jan Kulveit, Andres G. Emilsson, or Richard Ngo are (inadvertently?) doing. I'd suggest doing this for the meta-crisis (math > dialogues); otherwise it may backfire.

I remember reading, in the comments on the World Malaria Day post, that some malaria vaccines could turn out to be more cost-effective than bednets.

If that's plausibly the case for Sanaria, I can't help but stand behind this ask (I'm sadly not in a position to help much more than that).

I personally feel the website is more engaging, but unexpectedly, I also feel my own conception of EA is more accurately represented!

I'd be excited to experiment with some of these insights, and these methods in general, for other community building practices.

As an ex-group organiser, I feel that my fallback plans have just been described with extreme precision.

As someone who sometimes hesitates way too much, this is unironically helpful x)

Thanks for posting this. It makes me think: "Oh, someone is finally mentioning this."

Observation: I think your model rests on the hypotheses I'd expect someone from Silicon Valley to suggest, based on Silicon-Valley-originating observations. I don't think of Silicon Valley as 'the place' for politics, even less so for epistemically accurate politics (not evidence against your model, of course, but my inner simulator points at this feature as a potential source of confusion).

We might very well need a better approach than our usual tools for thinking about this. I'm not even sure current EAs are better at this than the few bottom-lined social science teachers I met in the past: being truth-seeking is one thing; knowing the common pitfalls of (non-socially-reflexive) truth-seeking in political thinking is another.

For reasons I won't expand on, I think people working on hierarchical agency are really worth talking to on this topic, and they tend to avoid the sort of issues 'Bayesian' rationalists fall into.

I think I can confidently state that:
1-Some people will be heavily reluctant to attend BlueDot because it is an online course. Some people likewise have their needs better served by alternatives (whether in terms of pedagogical style, UX, or information bandwidth).
2-Opening an AIS class at a university can unlock a surprising amount of respectability.

Thank you for writing this! I've been trying to find a good example of "translating between philosophical traditions" for some time, one that is both epistemically correct and well executed. This one is really good!

What I keep from this is the idea of making additional distinctions: acknowledging that EA (or whichever cause area one wants to defend) really is different from the initial "style", while being able to explain this difference with a shared vocabulary.

Good question. The international coalition is still being built right now, which means no official dates have been decided. I've heard a credible source say the assemblies are planned to start in June. I'll update the post and Discord server as soon as I get more information.

Answer by Camille

[This does not represent the opinion of my employer] 

I currently mostly write content for an Effective Giving Initiative, and I think it would be somewhat misleading to write that we recommend animal charities that defend animal rights: people would misconstrue what we're talking about. Avoided suffering is what we think about when explaining who "made it" to the home page; it's part of the methodology, and my estimates ultimately weigh in on that. It's also the methodology of the evaluators who do all the hard work.

My guess would be that EA has a vast majority of consequentialists, whose success criterion is wellbeing, and whose methodology is [feasible because it is] welfare-focused (e.g. animal-adjusted QALYs per dollar spent). This probably sedimented itself early, and people plausibly haven't questioned it much so far. EA-aligned rights-focused interventions exist, but they're ultimately measured according to their gains in terms of welfare.
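For the sake of illustration, here is a minimal sketch of what that welfare-focused comparison looks like in practice. The charity names and every figure are hypothetical placeholders, not real estimates from us or from any evaluator:

```python
# Hypothetical sketch of a welfare-focused cost-effectiveness comparison.
# All figures are made up for illustration; real evaluations involve far
# more uncertainty and many more adjustments.

charities = {
    # name: (welfare-adjusted QALYs gained per animal helped,
    #        animals helped, total cost in USD)
    "Charity A": (0.02, 5_000_000, 100_000),
    "Charity B": (0.10, 400_000, 100_000),
}

for name, (qalys_per_animal, animals, cost) in charities.items():
    # The whole methodology bottoms out in a single comparable number.
    qalys_per_dollar = qalys_per_animal * animals / cost
    print(f"{name}: {qalys_per_dollar:.2f} welfare-adjusted QALYs per dollar")
```

The point of the sketch is that everything reduces to one commensurable welfare number; a rights-based framework gives no obvious analogue for the last line.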

On my side, I think it's already hard as it is to select cost-effective charities within a consequentialist framework (and sell them to people!), and "rights" add a lot of additional distinctions (e.g. rights as means vs. as ends) that make them hard to operationalize. I can write an article about why we recommend animal welfare charity X in terms of avoided counterfactual suffering, but I'd be clueless if I had to recommend it in terms of avoided rights infringements, because that's harder to measure, and I'm not even sure what I'm talking about.

I'd be happy to see people in other positions give their opinions; this is a strictly personal view.
