Luke Kemp and I just published a paper which criticises the field of existential risk for lacking a rigorous and safe methodology:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225
It could be a promising sign of epistemic health that critiques of leading voices come from early-career researchers within the community. Unfortunately, the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.
We lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset.
We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.
What you see here is version 28. We have had approximately 20+ reviewers, around half of whom we sought out as scholars who would be sceptical of our arguments. We believe it is time to accept that many people will disagree with several points we make, regardless of how these are phrased or nuanced. We hope you will voice your disagreement based on the arguments, not the perceived tone of this paper.
We always saw this paper as a reference point and platform to encourage greater diversity, debate, and innovation. However, the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views. Making the case for democracy was heavily contested, despite reams of supporting empirical and theoretical evidence. In contrast, the ideas of differential technological development and the NTI framework have been adopted wholesale despite almost no underpinning peer-reviewed research. I wonder how many of the ideas we critique here would have seen the light of day if the same suspicious scrutiny had been applied to more orthodox views and their authors.
We wrote this critique to help progress the field. We do not hate longtermism, utilitarianism, or transhumanism. In fact, we personally agree with some facets of each. But our personal views should barely matter. We ask of you what we have assumed to be true for all the authors that we cite in this paper: that the author is not equivalent to the arguments they present, that arguments will change, and that it doesn’t matter who said it, but instead that it was said.
The EA community prides itself on being able to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.
Many EAs we showed this paper to exemplified the ideal. They assessed the paper’s merits on the basis of its arguments rather than group membership, engaged in dialogue, disagreed respectfully, and improved our arguments with care and attention. We thank them for their support and for meeting the challenge of reasoning in the midst of emotional discomfort. Others accused us of lacking academic rigour and harbouring bad intentions.
We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant. It was these same people that then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA.
These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.
The greatest predictor of how negatively a reviewer would react to the paper was their personal identification with EA. Writing a critical piece should not incur negative consequences for one’s career options, personal life, and social connections in a community that is supposedly great at inviting and accepting criticism.
Many EAs have privately thanked us for "standing in the firing line" because they found the paper valuable to read but would not dare to write it. Some tell us they have independently thought of and agreed with our arguments but would like us not to repeat their name in connection with them. This is not a good sign for any community, never mind one with such a focus on epistemics. If you believe EA is epistemically healthy, you must ask yourself why your fellow members are unwilling to express criticism publicly. We too considered publishing this anonymously. Ultimately, we decided to support a vision of a curious community in which authors should not have to fear their name being associated with a piece that disagrees with current orthodoxy. It is a risk worth taking for all of us.
The state of EA is what it is due to structural reasons and norms (see this article). Design choices have made it so, and they can be reversed and amended. EA fails not because the individuals in it are not well intentioned; good intentions just only get you so far.
EA needs to diversify funding sources by breaking up big funding bodies and by reducing each org’s reliance on EA funding and tech billionaire funding; it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, and stop classifying everything as info hazards…amongst other structural changes. I now believe EA needs to make such structural adjustments in order to stay on the right side of history.
Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that.
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.
I just want to say that I think this is a beautifully accepting response to criticism. Not defensive. Says hey yes maybe there is a problem here. Concretely offers time and money and a plan to look into things more. Really lovely, thank you Will.
Thanks for stating this publicly here Will!
Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.
+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.
Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.
+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.
Thanks for saying this publicly too Nick, this is helpful for anyone who might worry about funding.
I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist, along with one or two citations for which it is hard to evaluate whether they represent a consensus. Then, while I thought the original description of the TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technological development? Not a single mention of the difficulty of incorporating future generations into the democratic process?
Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I bel...
I would agree that the article is too wide-ranging. There's a whole host of content, ranging from criticisms of expected value theory and arguments for degrowth to arguments for democracy and criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics.
The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.
"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don't seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.
There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
"Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as ... (read more)
Point taken. Thank you for pointing this out.
I think this is more about stopping the development of specific technologies - for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn't necessarily related to degrowth - for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.
I think that they didn't try to oppose the TUA in the paper, or make the argument against it themselves. To quote: "We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field."
I also think that they don't need to prove that others are wrong to show that the lack of diversity has harms - as you agreed.
That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.
If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I'm not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if on top of that those bad arguments are harmful to EA causes, that should expedite the decision.
To be clear, that is absolutely not to say that publishing Democratizing Risk is/was justification for firing or cutting funding; I am still very much talking abstractly.
I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that's good, since he's spent more time thinking about it, is smarter, and has more insight into what is happening. Second, in fact, movements have leaders, and egalitarianism is great for rights, but direct democracy is a really bad way to run anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)
Hi Carla,
Thanks for taking the time to engage with my reply. I'd like to engage with a few of the points you made.
First of all, my point prefaced with 'speaking abstractly' was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits, such that it should not result in withdrawn funding. On a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low-quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.
You saw "we do not argue against the TUA, but point out the unanswered questions we observed. .. but highlight assumptions that may be incorrect or smuggle in values". Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemic... (read more)
I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.
In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.
Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!
You claim that EA needs to...
Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in th...
While I appreciate that we're all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.
Especially given the claim that "EA needs to make such structural adjustments in order to stay on the right side of history".
The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true.
Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:
"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on... (read more)
That was my reading of VWH too - as a pro tanto argument for extreme surveillance and centralized global governance, provided that the VWH is true. However, many of its proponents seem to believe that the VWH is likely to be true. I do agree that the authors ought to have interpreted the paper more carefully, though.
It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis”17, which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?
Thanks for laying this out so clearly. One frustrating aspect of having a community comprised of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around Beckstead's (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1]
I do understand the value of hypothetical inquiry as part of analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the efforts to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.
In my opinion, it's insuffici...
Here's a Q&A which answers some of the questions by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here)
"Do you not think we should work on x-risk?"
"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"
"Do you hate longtermism?"
"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"
- It doesn't matter whether Nick Bostrom speculates about or wants to implement surveillance globally. With respect to what we talk about (the justification of extreme actions), what matters is how readers perceive his work and who the readers are.
- There’s some hedging i
It’s been interesting to re-read the discussion of this post in light of new knowledge that Emile P Torres was originally a co-author. For example, Cremer instructs reviewers to ask why they might have felt like the paper was a hostile attack. Well, I’d certainly see why readers could have had this perception if they read it after Emile had already started publicly insinuating that various longtermists are sympathetic to white supremacy or are plagiarists.
Cremer also says some reviewers asked, “Do you hate longtermism?"
The answer she gives above is “No. We are both longtermists (probs just not the techno utopian kind)”, but it seems like the answer would in fact have been “Two of us do not, but one of the authors does hate longtermism and has publicly called it incredibly dangerous”.
Just noting that I strongly endorse both this format for responding to questions, and the specific responses.
Emile Torres (formerly Phil) just admitted on their Twitter that they were a co-author of a penultimate version of this paper. It is extremely deceptive not to disclose this contribution in the paper or in the Forum post. At the point this paper was written, Torres had been banned from the EA Forum and multiple people in the community had accused Torres of harassing them. Do you think that might have contributed to the (alleged) reception of your paper?
The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about initial drafting with Torres. At that stage, the paper was completely different and a paper between Kemp and me. None of the critiques about it or the conversations about it involved concerns about Torres, co-authoring with Torres, or arguments by Torres, except in so far as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.
There was a very early draft by Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major re-writing of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don't recall that we were forced to take Torres off the collaboration (I’m not sure who knew about the conversations about collaborations we had): we decided to part because we wanted to move the content and tone i...
The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.
Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions.
I really dislike it when left-anarchist-leaning folks put scare quotes around "anarcho" in anarcho-capitalist. In my experience it's a strong indicator that someone isn't arguing in good faith.
I'm not an ancap (or a left-anarchist), but David Friedman and his ilk are very clearly trying to formulate a way for a capitalist society to exist without a state. You might think their plans are unfeasible or undesirable (and I do), but that doesn't mean they're secretly not anarchists.
I don't think most people outside left-anarchism would equate "state" with the existence of any unjust hierarchies. Indeed, defining a state in that way seems to be begging the question with regard to anarchy's desirability and feasibility.
Whether or not Friedman provides ways to organise society without a state, he is clearly trying to do so, at least by any definition of "state" that a non-(left-anarchist) would recognise (e.g. an entity with a monopoly on legitimate violence).
First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.
The most important updates I got from the paper:
The section on expected value theory seemed unfairly unsympathetic to TUA proponents
So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.
One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.
Some highlights:
I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.
On the science side I would be enthusiastic about seeing more work on eg models of catastrophic biorisk infection, macroeconomic analysis on ways artificial intelligence might affect society and expansions of IPCC models that include permafrost methane release feedback loops.
On the humanities side I would want to see, for example, more work on historical, psychological and anthropological evidence for long-term effects and for successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys ...
Everything written in the post above strongly resonates with my own experiences, in particular the following lines:
I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:
- Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (eg here and here) to this critique to be dismissive and to miss the point. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stu
I do think there is a difference between this article and stuff from people like Torres, in terms of good faith
I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.
I've seen "in bad faith" used in two ways:
While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.
(See this comment for more.)
I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unr...
The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins.
However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")
Of course, there are many levels to what a "personal vendetta" might entail, and there are real trade-offs to whatever policy you establish. But I'm wary of taking the most extreme approach in any direction ("let's just ignore Phil entirely")....
Yes I think that is fair.
At the time (before he wrote his public critique) I had not yet realised that Phil Torres was acting in bad faith.
Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).
Strong upvote from me - you’ve articulated my main criticisms of EA.
I think it’s particularly surprising that EA still doesn’t pay much attention to mental health and happiness as a cause area, especially when we discuss pleasure and suffering all the time, Yew Kwang Ng focused so much on happiness, and Michael Plant has collaborated with Peter Singer.
In your view, what would it look like for EA to pay sufficient attention to mental health?
To me, it looks like there's a fair amount of engagement on this:
I can't easily find engagement with mental health from Open Phil or GiveWell, but this doesn't seem like an obvious sign of neglect, given the variety of other health interventions they haven't closely engaged with.
I'm limited her...
I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.
TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.
I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn't seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time ("we're not sure you can measure feelings", "we're worried about experimenter demand effect", etc.). I'd typically point out their concerns had already been addressed in the literature, but that still didn't seem to make them more interested. (I don't recall anyone ever mentioning 'item response theory', which Luke raises as his objection.) In the end, I got the ... (read more)
Really sad to hear about this, thanks for sharing. And thank you for keeping at it despite the frustrations. I think you and the team at HLI are doing good and important work.
To me (as someone who has funded the Happier Lives Institute) I just think it should not have taken founding an institute and 6 years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.
I think expecting orgs and donors to change direction is certainly a very high bar. But then I don’t think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.
FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.
At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.
That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team...
Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.
I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in another comment, my experience with GiveWell staff was of being ignored by people who weren't all that familiar with the relevant literature - FWIW, I don't recall the concerns you raise in your notes being raised with me. I've not had interactions with Open Phil staff prior to 2021 - for those reading, Luke and I have never spoken - so I'm not able to comment regarding that.
Onto the substantive issues. Would you be prepared to state more precisely what your concerns are, and what sort of evidence would change your mind? Reading your comments and your notes, I'm not sure exactly what your objections are and, in so far as I do, they ...
Hi Michael,
I don't have much time to engage on this, but here are some quick replies:
This is an interesting conversation, but it’s veering off into a separate topic. I wish there were a way to “rebase” these spin-off discussions into a different place, for better organisation.
Thank you Luke – super helpful to hear!!
Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area? (Founder's Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)
I can't say much more here without knowing the details of how Michael/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was taken seriously, but seen as a weaker thing to fund than the things that were actually funded" (which is, in turn, compatible with "funders were correct in their assessment" and "funders were incorrect in their assessment").
That Michael felt dismissed is moderate evidence for "not taken seriously enough". That his work (and other work like it) got a bunch of engagement on the Forum is weak evidence for "taken seriously" (what the Forum cares about =/= what funders care about, but the correlation isn't 0). I'm left feeling uncertain about this example, but it's certainly reasonable to argue that mental health and/or SWB hasn't gotten enough attention.
(Personally, I find the case for additional work on SWB more compelling than the case for additional work on mental health specifically, and I don't know the extent to which HLI was trying to get funding for one vs. the other.)
Tl;dr. Hard to judge. Maybe: Yes for GW. No for Open Phil. Mixed for EA community as a whole.
I think I will slightly dodge the question and answer the separate question – are these orgs doing enough exploratory-type research? (I think this is a more pertinent question, and although I think subjective wellbeing is worth looking into as an example, it is not clear it is at the very top of the list of things to look into more that might change how we think about doing good).
Firstly, to give a massive caveat: I do not know for sure. It is hard to judge, and knowing exactly how seriously various orgs have looked into topics is very hard to do from the outside. So take the below with a pinch of salt. That said:
- OpenPhil – AOK.
- OpenPhil (neartermists) generally seem good at exploring new areas and experimenting (and as Luke highlights, did look into this).
- GiveWell – hmmm could do better.
- GiveWell seem to have a pattern of saying they will do more exploratory research (e.g. into policy) and then not doing it (mention
In light of recent events, I came back to take another look at this paper. It’s a shame that so much of the discussion ended up focusing on the community’s reaction rather than the content itself. I think the paranoid response you describe in the post was both unjust and an overreaction. None of the paper’s conclusions seems hugely damaging or unfair to me.
That said, like other commenters, I’m not wholly convinced by your arguments. You’ve asked people to be more specific about this, and I can give two specific examples.
On technological determinism
You write that people in the EA community “disregard controlling technology on the grounds of a perceived lack of tractability” (p. 17). But you think this is probably the wrong approach, since technological determinism is “derided and dismissed by scholars of science and technology studies” and “unduly curtails the available mitigation options” (p. 18).
I’m a bit skeptical of this because I know many AI safety and biosecurity workers who would be stoked to learn that it’s possible to stop the development of powerful technologies. You write that “we have historical evidence for collective action and coordination on technological progress a...
A few thoughts on the democracy criticism. Don't a lot of the criticisms here apply to the IPCC? "A homogenous group of experts attempting to directly influence powerful decision-makers is not a fair or safe way of traversing the precipice." IPCC contributors are disproportionately white very well-educated males in the West who are much more environmentalist than the global median voter, i.e. "unrepresentative of humanity at large and variably homogenous in respect to income, class, ideology, age, ethnicity, gender, nationality, religion, and professional background." So, would you propose replacing the IPCC with something like a citizen's assembly of people with no expertise in climate science or climate economics, that is representative wrt some of the demographic features you mention?
You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone aiming to do good with their resources, and not just EAs, should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.
You seem to assume that we should be especially suspicious of a view if it is not held by a majority of the global population. Over history, the views of the global majority seem to me to have been an extremely poor guide to accurate moral beliefs. For example, a few hundred years ago, most people had abhorrent views about animals, women and people of other races. By the arguments here, do you think that people like Benjamin Lay, Bentham and Mill should not have advocated for change in these areas, including advocating for changes in policy?
As I said in a different but related context earlier this week, "If a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past."
I do think we should worry about failure modes and being wrong. But I think the main reason to do that is that people are often wrong, they are bad at reasoning, and they are subject to a host of biases. The fact that we are in a minority of the global population is an extremely weak indicator of being wrong. The majority has been gravely wrong on many moral and empirical questions in the past and today. It's not at all clear whether the base rate of being wrong is higher for 'minority views' than for 'majority views', and that question is extremely difficult to answer because there are lots of ways of slicing up the minority you are referring to.
I feel like there's just a crazy number of minority views (in the limit a bunch of psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than majority views.
On the other hand I think there's some distinction to be drawn between "minority view disagrees with strongly held majority view" and "minority view concerns something that majority mostly ignores / doesn't have a view on".
That is a fair point. Departures from global majority opinion still seem like a pretty weak 'fire alarm' for being wrong. Taking a position that is eg contrary to most experts on a topic would be a much greater warning sign.
I see how this could be misread. I'll reformulate the statement:
"If our small, non-representative group comes to a conclusion, we should wonder, given base rates about correctness in general and the outside view, about which failure modes have affected similar small groups in the past, and consider if they apply, and how we might be wrong or misguided."
So yes, errors are common to all groups, and being a minority isn't an indicator of truth, which I mistakenly implied. But the way in which groups are wrong is influenced by group-level reasoning fallacies and biases, which are a product of both individual fallacies and characteristics of the group. That's why I think that investigating how previous similar groups failed seems like a particularly useful way to identify relevant failure modes.
My argument here is about whether we should be more suspicious of a view if it is held by the majority or the minority. Whether a view is true seems to me to be mainly dependent on the object-level quality of the belief and not on whether it is held by the majority - that is a very weak indicator, as the examples of slavery, the treatment of women, racism, and homosexuality illustrate.
I don't think your piece argues that TUA reinforces existing power relations. The main things that proponents of TUA have diverted resources to are: engineered pandemics, AI alignment, nuclear war and to a lesser extent climate change. How does any of this entrench existing power relations?
Nitpick, but it is also not true that the view you criticise is mainly advocated for by billionaires. Obviously, a tiny minority of billionaires are longtermists and a tiny minority of longtermists are billionaires.
I think it is very unclear whether it is true that diverting money to these organisations would entrench wealth disparity. Examining the demographics of the organisations funded is a faulty way to assess the overall effect on global wealth inequality - the main effect these organisations will have is via the actions they take rather than the take home pay of their staff.
Consider pandemic risk. Open Phil has been the main funder in this space for several years and if they had their way, the world would have been much better prepared for covid. Covid has been a complete disaster for low and middle-income countries, and has driven millions into extreme poverty. I don't think the net effect of pandemic preparedness funding is bad for the global poor. Similarly, with AI safety, if you actually believe that transformative AI will arrive in 20 years, then ensuring the development of transformative AI goes well is extremely consequential for people in low and middle-income countries.
I did not mean the demographic composition of organisations to be the main contributor to their impact. Rather, what I'm saying is that that is the only impact we can be completely sure of. Any further impact depends on your beliefs regarding the value of the kind of work done.
I personally will probably go to the EA Long Term Future Fund for funding in the not so distant future. My preferred career is in beneficial AI. So obviously I believe the work in the area has value that makes it worth putting money into.
But looking at it as an outsider, it's obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. Rather, if you think AI risk - or even existential risk as a whole - is some orders of magnitude less important than it's laid out to be in EA - then the only straightforward impact of supporting X-risk research is in who gets the money and who does not. If you think any AI research is actually harmful, then the expected value of funding this is even worse.
It's also not the claim being made:
I seem to remember learning about rampant racism in China helping to cause the Taiping rebellion? And there are enormous amounts of racism and sectarianism today outside Western countries - look at the Rohingya genocide, the Rwanda genocide, the Nigerian civil war, the current Ethiopian civil war, and the Lebanese political crisis for a few examples.
Every one of these examples should be taken with skepticism as this is far outside my area of expertise. But while I agree with the sentiment that we often conflate the history of the world with the history of white people, I'm not sure it's true in this specific case.
Yeah, you're probably right. It's just I got a strong "history=Western history" vibe from the comment I was responding to, but maybe that was unfair!
I'd be pretty surprised if almost everyone didn't have strongly racist views in 1780. Anti-black views are very prevalent in India and China today, as I understand it. eg Gandhi had pretty racist attitudes.
Minor point, but I don't think you've described citizen's assemblies in the most charitable way. Yes, it is a representative sortition of the public, so they don't necessarily have expertise in any particular field, but there is generally a lot of focus on experts from various fields who inform the assembly. So in reality, a citizen's assembly on climate would be a random selection of representative citizens who would be informed/educated by IPCC (or similar) scientists, who would then deliberate amongst themselves to reach their conclusions. These conclusions, one would hope, would be similar to what the scientists would recommend themselves, as they are based on information largely provided by them.
For people that might be interested, here is the report of the Climate Assembly (a citizen's assembly on climate commissioned by the UK government) that, in my opinion, had some fairly reasonable policy suggestions. You can also watch a documentary about it by the BBC here.
The paper never spoke about getting rid of experts or replacing experts with citizens. So no.
Many countries now run citizen assemblies on climate change, which I'm sure you're aware of. They do not aim to replace the role of IPCC.
EA or the field of existential risk cannot be equated with the IPCC.
To your second point, no, this does not follow at all. Democracy as a procedure is not to be equated with (and thus limited to) governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the last section of the paper, which focusses on democracy.
It would help for clarity if I understood your stance on central bank independence. This seems to produce better outcomes but also seems undemocratic. Do you think this would be legitimate?
It still seems like, if I were Gates, donating my money to the US govt would be more democratic than eg spending it on climate advocacy? Is the vision for Open phil that they set up a citizen's assembly that is representative of the global population and have that decide how to spend the money, by majority vote?
As in the discussion above, I think you're being disingenuous by claiming government is "more democratic."
And if you were Gates, I'd argue that it would be even more democratic to allow the IPCC, which is more globally representative and less dominated by special interests than the US government, to guide where you spend your money than it would be to allow the US government to do so. And given how much the Gates Foundation engages with international orgs and allows them to guide its giving, I think that "hand it to the US government" would plausibly be a less democratic alternative than the current approach, which seems to be to allow GAVI, the WHO, and the IPCC to suggest where the money can best be spent.
And having Open Phil convene a consensus driven international body on longtermism actually seems somewhat similar to what the CTLR futureproof report co-written by Toby Ord suggests when it says the UK should lead by, "creating and then leading a global extreme risks network," and push for "a Treaty on the Risks to the Future of Humanity." Perhaps you don't think that's a good idea, but I'm unclear why you would treat it as a reductio, except in the most straw-man form.
Hi David, I wasn't being disingenuous. Here, you say "I think you're being disingenuous by claiming government is "more democratic." In your comment above you say "One way to make things more democratic is to have government handle it, but it's clearly not the only way." Doesn't this grant that having the government decide is more democratic? These statements seem inconsistent.
So, to clarify before we discuss the idea, is your view that all global climate philanthropy should be donated to the IPCC?
I think there is a difference between having a citizen's assembly decide what to do with all global philanthropic money (which as I understand it, is the implication of the article), and having a citizen's assembly whose express goal is protecting the long-term (which is not the implication of the article). If all longtermist funding was allocated on the first mechanism, then I think it highly likely that funding for AI safety, engineered pandemics and nuclear war would fall dramatically.
The treaty in the CTLR report seems like a good idea but seems quite different to the idea of democratic control proposed in the article.
Hi David. We were initially discussing whether giving the money to govts would be more democratic. You suggested this was a patently mad idea but then seemed to agree with it.
Here is how the authors define democracy: "We understand democracy here in accordance with Landemore as the rule of the cognitively diverse many who are entitled to equal decision-making power and partake in a democratic procedure that includes both a deliberative element and one of preference aggregation (such as majority voting)"
You say: "You also might want to look into the citations Zoe suggested that you read above, about what "democratic" means, since you keep interpreting in the same simplistic and usually incorrect way, as equivalent to having everyone vote about what to do."
Equal political power and preference aggregation entail majority rule or lottery voting or sortition. Your own view that equal votes aren't a necessary condition of democracy seems to be in tension with the authors of the article.
A lot of the results showing the wisdom of democratic procedures depend on certain assumptions, especially about voters not being systematically biased. In the real world, this isn't true, so so...
You're using a word differently from how they explicitly say they are using it. I agree that it's confusing, but will again note that consensus decision making is democratic in the sense they use, and yet is none of the options you mention. (And again, the IPCC is a great example of a democratic deliberative body which seems to fulfill the criteria you've laid out, and it's the one they cite explicitly.)
On the validity and usefulness of democracy as a method of state governance, you've made a very reasonable case that it would be ineffective for charity, but in the more general sense that Landemore uses it, which includes how institutions other than governments can account for democratic preferences, I'm not sure that the same argument applies.
That said, I strongly disagree with Cremer and Kemp about the usefulness of this approach on very different grounds. I think that both consensus and other democratic methods, if used for funding, rather than for governance, would make hits based giving and policy entrepreneurship impossible, not to mention being fundamentally incompatible with finding neglected causes.
Hi James, I do think it would be interesting to see what a true global citizen's assembly with complete free rein would decide. I would prefer that the experiment were not done with Open Phil's money as the opportunity cost would be very high. A citizen's assembly with longtermist aims would also be interesting, but would be different to what is proposed in the article. Pre-setting the aims of such an assembly seems undemocratic.
I would be pretty pessimistic about convincing lots of people of something like longtermism in a citizens' assembly - at least I think funding for things like AI, engineered viruses and nuclear war would fall a fair amount. The median global citizen is someone who is strongly religious, probably holds strong nationalist and socialist beliefs (per the literature on voter preferences in rich countries, which probably also holds in poorer countries), is unwilling to pay high carbon taxes, is homophobic, etc.
For what it's worth, I wasn't genuinely saying we should hold a citizens' assembly to decide what we do with all of Open Phil's money; I just thought it was an interesting thought experiment. I'm not sure I agree that the pre-setting of the aims of an assembly is undemocratic, however, as surely all citizens' assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).
To play devil's advocate, I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where they should allocate funding for existential risk. I can't see any relationship between beliefs about which existential risks are the most severe and attitudes towards queer people, religion, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn't affect their taxes).
Also, I don't think you've given much convincing evidence that citizens' assemblies would lead to funding for key issues falling a fair amount vs decisions by OP program officers, besides your intuition. I can't say I have much evidence myself except for th... (read more)
I think you're unaware of the diversity and approach of the IPCC. It is incredibly interdisciplinary, consensus driven, and represents stakeholders around the world faithfully. You should look into what they do and their process more carefully before citing them as an example.
Then, you conflated "democratically" with "via governments, through those government's processes" which is either a bizarre misunderstanding, or a strange rhetorical game you're playing with terminology.
As mentioned, the vast majority of the authors are from a similar demographic background to EAs. The IPCC also produces lots of policy-relevant material on eg the social costs of climate change and the best paths to mitigation, which are mainly determined by white males.
Here is a description of climate philanthropy as practiced today in the United States. Lots of unelected rich people who disproportionately care about climate change spend hundreds of millions of pounds advocating for actions and policies that they prefer. It would be a democratic improvement to have that decision made by the US government, because at least politicians are subject to competitive elections. So, having the decision made by the US government would be more democratic. Which part of this do you disagree with?
It seems a bit weird to class this as a 'bizarre misunderstanding' since many of the people who make the democracy criticism of philanthropy, such as Rob Reich, do in fact argue that the money should be spent by the government.
First, are you backing away from your initial claims about the IPCC, since it is in fact consensus-based with stakeholders, rather than being either a direct democracy or a unilateralist decision?
Second, I'm not interested in debating what you say Luke Kemp thinks about climate philanthropy, nor do I know anything about his opinions, nor is it germane to this discussion.
But in your claims that you say are about his views, you keep claiming and assuming that the only democratic alternatives to whatever we're discussing are a direct democracy, control by a citizens' assembly (without expertise), or handing things to governments. Regardless of Luke's views elsewhere, that's certainly not what they meant in this paper. Perhaps this quote will be helpful:
As Landemore, who the paper cites several times, explains, institutions work better when the technocratic advice is within the context of an inclusive decision procedure, rather than either having technocrats in charge or having a direct democracy.
Hello. Yes, I think it would be fair to back away a bit from the claims about the IPCC. It remains true that most climate scientists and economists are white men and that they have a disproportionate influence on the content of the IPCC reports. Nonetheless, the case was not as clear-cut as I initially suggested.
I find the second point a bit strange. Isn't it highly relevant to understand whether the views of the author of the piece we are discussing are consistent or not?
It's also useful to know what the implications of the ideas expressed actually are. They explicitly give a citizens' assembly as an example of a democratic procedure. Even if it is some other deliberative mechanism followed by a majority vote, I would still like to know what they think about stopping all climate philanthropy and handing decisions over all that money to such a body. It's pretty hard to square a central role for expertise with a normative idea of political equality.
I do think it is germane to the discussion, because it helps to clarify what the authors are claiming and whether they are applying their claims consistently.
I'm not fully sure that deciding which risks to take seriously in a democratic fashion logically leads to donating all of your money to the government. Some reasons I think this:
- That implies that we all think our governments are well-functioning democracies, but I (amongst many others) don't believe that to be true. I think it's a fairly common sentiment, and common knowledge, that political myopia among politicians, vested interests, and other influences mean that governments don't implement the policies that are best for their populations.
- As I mentioned in another comment, I think the authors are saying that as existential risks affect the entirety of humanity in a unique way, this is one particular area where we should be deciding things more democratically. This isn't necessarily the case for spending on education, healthcare, animal welfare, etc, so there it would make sense you donate to institu
... (read more)
Regarding the risk that longtermism could lead people to violate rights, it seems to me like you could make exactly the same argument for any view that prioritises between different things. For instance, as Peter Singer has pointed out, billions of animals are tortured and killed every year. By exactly analogous reasoning, one could say that other problems 'dwindle into irrelevance' as other values are sacrificed at the altar of the astronomical expected value of preventing factory farming. So, by the same logic, this would justify animal rights terrorism and other abhorrent actions.
"Don't be fanatical about utilitarian or longtermist concerns and don't take actions that violate common sense morality" is a message that longtermists have been emphasizing from the very beginnings of this social movement, and quite a lot.
Some examples:
More generally, there's often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.
I haven't read the corresponding section in the paper that the OP refers to, yet, but I skimmed the literature section and found none of the sources I linked to above. If the paper criticizes longtermism on grounds of this sort of implication and fails to mention that longtermists have been aware of this and are putting in a lot o... (read more)
I also agree with this. There are many reasons for consequentialists to respect common sense morality.
I was just making the point that the rhetorical argument about rights can pretty much be made about any moral view. E.g., the authors seem to believe that degrowth would be a good idea, and it is a built-in feature of degrowth that it would have enormous humanitarian costs.
I agree that there is an analogy to animal suffering here, but there's a difference in degree I think. To longtermists, the importance of future generations is many orders of magnitude higher than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore other non-longtermist considerations than animal welfare advocates would be.
The longtermist could then argue that an analogous argument applies to "other-defence" of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)
Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.
In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don't have different but comparably serious problems. But this assumption can't be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.
@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating the debate about community norms from discussion of the arguments within your piece.
I also wanted to express that I'm sorry this experience has been so stressful. It's crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.
Points where I agree with the paper:
Points where I disagree with the paper:
- The paper argues that "for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent". I think it is completely clear, given that in pre-industrial times most people lived in societies that were rather unfree and unequal (harder to say about "virtue", since different people would argue for very different conceptions of what virtue is). Moreover, although intellectuals argued for all sorts of positions (words are cheap, after all), few people are trying to return to pre-industrial life in practice. Finally, techno-utopian visions of the future are usually very pro-freedom and are entirely consistent with groups of people voluntarily choosing to live in primitivist communes or whatever.
- If ideas are promoted by an "elitist" minority, that doesn't automa
... (read more)
Hey Zoe and Luke, thank you for posting this and for writing the paper! I just finished reading it and found it thoughtful and detailed, and it gave me a lot to think about. It is the best piece of criticism I have read, and I will recommend it to others looking for that going forward. I can see the care, time, and revisions that went into the piece. I am very sorry to hear about your experience of writing it. I think you contributed something important, and wish you had been met with more support. I hope the community can read this post and learn from it so we can get a little closer to that ideal of how to handle, incorporate, and respond to criticism.
Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.
It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.
From the original post:
While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper.
Still, to the degree that ther... (read more)
This is a great comment, thank you for writing it. I agree - I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I would caution against centering the discussion only around the question of whether or not OpenPhil would reduce funding in res... (read more)
Thanks for writing this reply, I think this is an important clarification.
I enjoyed some of the discussion of emergency powers. It could be good to mention the response to covid. Leaving to one side whether such policies were justified (they do seem to have saved many lives), country-wide lockdowns were surely among the most illiberal policies enacted in history, and explicitly motivated by trying to address a global disaster. Outside of genocide and slavery, I struggle to think of many greater restrictions on individuals' freedom than confining essentially the entire population to semi house arrest. In many cases these rules were brought in under special emergency powers, and sometimes later determined to be illegal after judicial review. However, these policies were often extremely popular with the general population, so I'm not sure they fit the democracy-vs-illiberalism dichotomy the article is sort of going for.
I think it is disappointing that so many comments are focusing on arguing with the paper rather than discussing the challenges outlined in the post. From a very quick reading I don't find any of the comments here unreasonable, but I do find them to be talking about a different topic. It would be better if we could separate out the discussion of "red teaming" EA from the discussion of this particular paper.
The paper is very well written, crisp and communicates its points very well.
The paper includes characterizations of longtermists that seem schematic and many would find unfair.
In the post itself, there are serious statements that add a lot of heat to the issue and are hard to approach.
I think that this is a difficult time where many people are stepping away, or performing emotional labor, over what are genuinely difficult experiences for the OP.
This isn't ideal for truthseeking.
If I were in a different cause area with a similar issue, I wouldn't want a lot of longtermists coming in and pulling on these threads; I don't think that is the ideal or right thing to do.
Interesting, I was thinking the opposite! I was thinking, "There's so many interesting specific suggestions in this paper and people are just caught up on whether or not they like diversity initiatives generally and what they think of the tone on this paper, how annoying."
I just mean this could have been two posts - one about the paper and one about the experience of publishing the paper. Both would be very valuable.
I agree it would have been better to have this as two posts – I'm personally finding it difficult to respond to either the paper or the post, because when I focus on one I feel like I'm ignoring important points in the other.
That said, the fact that both are being discussed in a single post is down to the authors, not the commenters. I think it's reasonable for any given commenter to focus on one without justifying why they're neglecting the other.
Yeah I agree. I disagree with most of the paper, but I find the claims about pressures not to publish criticism troubling.
Quick thoughts from my phone:
Thanks for writing this post, Carla and Luke. I am sorry to hear about your experiences, that sounds very challenging.
I also understand why people would object to your work, as many may have had high confidence in it having negative expected value.
It was surely a very difficult situation for all parties.
I am glad you are voicing concerns, I like posts like this.
At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.
I think that it's unavoidable that there will be a lot of strong disagreement in the EA community. It seems unavoidable in any group of diverse individuals who are passionately working together towards important goals. Of course, we should try to handle conflict well, but we shouldn't expect that it can ever be avoided or be completely pleasant.
I also understand why people don't express criticism publicl... (read more)
I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see work that doesn't get published, so this is basically a catch-22 (either it gets published, in which cases there isn't censorship, or it doesn't get published, in which case no one ever hears about it).
Also, most censorship is soft rather than hard, and comes via chilling effects.
(I'm not intending this response to make any further object-level claims about the current situation, just that the quoted argument is not a good argument.)
I agree with most of what you say other than it being reasonable for some people to have acted in self-interest.
While I do think it is unavoidable that there will be attempts to shut down certain ideas and arguments out of the self-interest of some EAs, I think it's important that we have a very low tolerance of this.
Ah okay.
I think I interpreted this as ‘pressure’ to not publish, and my definition of ‘shutting down ideas’ includes pressure / strong advice against publishing them, while yours is restricted to forcing people not to publish them.
EDIT: See Ben's comment in the thread below on his experience as Zoe's advisor and confidence in her good intentions.
(Opening disclaimer: this was written to express my honest thoughts, not to be maximally diplomatic. My response is to the post, not the paper itself.)
I'd like to raise a point I haven't seen mentioned (though I'm sure it's somewhere in the comments). EA is a very high-trust environment, and has recently become a high-funding environment. That makes it a tempting target for less intellectually honest or pro-social actors.
If you just read through the post, every paragraph except the last two (and the first sentence) is mostly bravery claims (from SSC's "Against Bravery Debates"). This is a major red flag for me reading something on the internet about a community I know well. It's much easier to start an online discussion about how you're being silenced than to defend your key claims on the merits. Smaller red flags were: explicit warnings of impending harms if the critique is not heeded, and anonymous accounts posting mostly low-quality comments in support of the critique (shoutout to "AnonymousEA").
A lot of EAs have a natural tendency to defend someone wh... (read more)
I believe these are authors already working at EA orgs, not "brave lone researchers" per se.
Thanks - I meant "lone" as in one or two researchers raising these concerns in isolation, not to say they were unaffiliated with an institution.
I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above, and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me.
And since I've stated my suspicions, I apologize to Zoe if their claims turn out to be substantiated. This is an extremely important post if true, although I remain skeptical.
In particular, a post of the form:
I have written a paper (link).
(12 paragraphs of bravery claims)
(1 paragraph on why EA is failing)
(1 paragraph call to action)
Strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their f... (read more)
(Hopefully I'm not overstepping; I’m just reading this thread now and thought someone ought to reply.)
I’ve worked with Zoe and am happy to vouch for her intentions here; I’m sure others would be as well. I served as her advisor at FHI for a bit more than a year, and have now known her for a few years. Although I didn’t review this paper, and don’t have any detailed or first-hand knowledge of the reviewer discussions, I have also talked to her about this paper a fe... (read more)
Thanks for sharing this! Responding to just some parts of the object-level issues raised by the paper (I only read parts closely, so I might not have the full picture)--I find several parts of this pretty confusing or unintuitive:
Other thoughts:
How do we solve this?
If I imagine myself dependent on the funding of someone, that would change my behaviour. Anyone have any ideas of how to get around this?
- Tenure is the standard academic approach, but does that lead to better work overall?
- A wider set of funders who will fund work even if it attacks the other funders?
- OpenPhil making a statement that they will fund high-quality work they disagree with
- Some kind of way to anonymously survey EA academics to get a sense of whether there is a point that everyone thinks but is too scared to say
- Some kind of prediction market on views that are likely to be found to be wrong in the future.
I think offering financial incentives specifically for red teaming makes sense. I tend to think red teaming is systematically undersupplied because people are concerned (often correctly in my experience with EA) that it will cost them social capital, and financial capital can offset that.
I'm a fan of the CEEALAR funding model -- giving small amounts to dedicated EAs, with less scrutiny and less prestige distribution. IMO it is less incentive-distorting than more popular EA funding models.
Most of these ideas sound interesting to me. However —
I'm not quite sure what this means? I'm reading it as "funding work which looks set to make good progress on a goal OP don't believe is especially important, or even net bad". And that doesn't seem right to me.
Similar ideas that could be good —
I'm especially keen on the latter!
Sounds good. At the more granular and practical end, this sounds like red-teaming, which is often just good practice.
Good for you!
I'm sad that this seemed necessary, and happy to see that despite some opposition, it was written and published. I sincerely hope that the cynics saying it could damage your credibility or careers are wrong, and that most of the criticisms are not as severe as they may seem - but if so, it's great that the issues are being pointed out, and if not, it's critical that they are.
Sorry if this is a bit of a tangent but it seems possible to me to frame a lot of the ideas from the paper as wholly uncontroversial contributions to priorities research. In fact I remember a number of the ideas being raised in the spirit of contributions by various researchers over the years, for which they expected appreciation and kudos rather than penalty.
(By “un-/controversial” I mean socially un-/controversial, not intellectually. By socially controversial I mean the sort of thing that will lead some people to escalate from the level of a truth-seeking discussion to the level of interpersonal conflict.)
It think it’s more a matter of temperament than conviction that I prefer the contribution framing to a respectful critique. (By “respectful” I mean respecting feelings, dignity, and individuality of the addressees, not authority/status. Such a respectful critique can be perfectly irreverent.) Both probably have various pros and cons in different contexts.
But one big advantage of the contribution framing seems to be that it makes the process of writing, review, and publishing a lot less stressful because it avoids antagonizing people – even though they ideally shouldn’t feel ant... (read more)
Strong upvote, especially to signal my support of
Maybe my models are off but I find it hard to believe that anyone actually said that. Are we sure people said "Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?"
That sounds to me like a thing only cartoon villains would say.
"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" I have heard this multiple times from different sources in EA.
This is interesting if true. With respect to this paper in particular, I don't really get why anyone would advise the authors not to publish it. It doesn't seem like it would affect CSER's funding, since as I understand it (maybe I'm wrong) they don't get much EA money and it's hard to see how it would affect FHI's funding situation. The critiques don't seem to me to be overly personal, so it's difficult to see why publishing it would be overly risky.
I might be able to provide a bit of context:
I think the devil is really in the details here. I think there are some reasonable versions of this.
The big question is why and how you're criticizing people, and what that reveals about your beliefs (and what those beliefs are).
As an extreme example, imagine if a trusted researcher came out publicly, saying,
"EA is a danger to humanity because it's stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA down."
If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.
It's possible to use criticism to improve a field or try to destroy it.
I'm a big fan of positive criticism, but I think that some kinds of criticism can be destructive (see a lot of politics, for example).
I know less about this particular circumstance; I'm just pointing out how the other side would see it.
This is all reasonable but none of your comment addresses the part where I'm confused. I'm confused about someone saying something that's either literally the following sentence, or identical in meaning to:
"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding."
That part of the example makes sense to me. What I don't understand is the following:
In your example, imagine you're a friend, colleague, or an acquaintance of that researcher who considers publishing their draft about how EA needs to be stopped because it's slowing down AGI. What do you tell them? It seems like telling them "The reason you shouldn't publish this piece is that you [or "we," in case you're affiliated with them] might no longer get any funding" is a strange non sequitur. If you think they're right about their claim, it's really important to publish the article anyway. If you think they're wrong, there are still arguments in favor of discussin... (read more)
Very happy to have a private chat and tell you about our experience then.
I'm curious about this and would be happy to hear more about it if you're comfortable sharing. I'll get in touch (and would make sure to read the full article before maybe chatting)!
Update: Zoe and I had a call and the private info she shared with me convinced me that some people with credentials or track record in EA/longtermist research indeed discouraged publication of the paper based on funding concerns. I realized that I originally wasn't imaginative enough to think of situations where those sorts of concerns could apply (in the sense that people would be motivated to voice them for common psychological reasons and not as cartoon villains). When I thought about how EA funding generates pressure to conform, I was much too focused on the parts of EA I was most familiar with. That said, the situation in question arose because of specific features coming together – it wouldn't be accurate to say that all areas of the EA ecosystem face the same pressures to conform. (I think Zoe agrees with this last bit.) Nonetheless, looking forward I can see similar dynamics happening again, so I think it's important to have identified this as a source of bias.
When I wrote my comment, I worried it would be unkind to Zoe because I'm also questioning her recollection of what people said.
Now that it looks like people did in fact say the thing exactly the way I quoted it (or identical to it in meaning and intent), my comment looks more unkind toward Zoe's critics.
Edit: Knowing for sure that people actually said the comment, I obviously no longer think they must be cartoon villains. (But I remain confused.)
fwiw I was not offended at all.
I haven't seen any quotes but Joey saying he had the same experience, Zoe confirming that she didn't misremember this part, and none of the reviewers speaking up saying "This isn't how things happened," made me update that maybe one or more people actually did say the thing I considered cartoonish.
And because people are never cartoon villains in real life, I'm now trying to understand what their real motivations were.
For instance, one way I thought of how the comment could make sense is if someone brought it up because they are close to Zoe and care most about her future career and how she'll be doing, and they already happen to have a (for me very surprising) negative view of EA funders and are pessimistic about bringing about change. In that scenario, it makes sense to voice the concerns for Zoe's sake.
Initially, I simply assumed that the comment must be coming from the people who have strong objections to (parts of) Zoe's paper. And I was thinking "If you think the paper is really unfair, why not focus on that? Why express a concern about funding that only makes EA look even worse?"
So my new model is that the people who gave Zoe this sort of advice may not have been defend... (read more)
It might be useful to hear from the reviewers themselves as to the thought process here. As mentioned above, I don't really understand why anyone would advise the authors not to publish this. For comparison, I have published several critiques of the research of several Open Phil-funded EA orgs while working at an Open Phil-funded EA org. In my experience, I think if the arguments are good, it doesn't really matter if you disagree with something Open Phil funds. Perhaps that is not true in this domain for some reason?
This is also how I interpreted the situation.
(In my words: Some reviewers like and support Zoe and Luke but are worried about the sustainability of their funding situation because of the model that these reviewers have of some big funders. So these reviewers are well-intentioned and supportive in their own way. I just hope that their worries are unwarranted.)
I think a third hypothesis is that they really think funding whatever we are funding at the moment is more important than continuing to check whether we are right; and don't see the problems with this attitude (perhaps because the problem is more visible from a movement-wide, longterm perspective rather than an immediate local one?).
As a moderator, I thought Lukas's comment was fine.
I read it as a humorous version of "this doesn't sound like something someone would say in those words", or "I cast doubt on this being the actual thing someone said, because people generally don't make threats that are this obvious/open".
Reading between the lines, I saw the comment as "approaching a disagreement with curiosity" by implying a request for clarification or specification ("what did you actually hear someone say"?). Others seem to have read the same implication, though Lukas could have been clearer in the first place and I could be too charitable in my reading.
Compared to this comment, I thought Lukas's added something to the conversation (though the humor perhaps hurt more than helped).
*****
On a meta level, I upvoted David's comment because I appreciate people flagging things for potential moderation, though I wish more people would use the Report button attached to all comments and posts (which notifies all mods automatically, so we don't miss things):
Thanks for writing this! It seems like you've gone through a lot in publishing this. I am glad you had the courage and grit to go through with it despite the backlash you faced.
A lot of these comments are at heart debating technocracy vs populism in decision-making. A separate conversation on this topic has been started here: https://forum.effectivealtruism.org/posts/yrwTnMr8Dz86NW7L4/technocracy-vs-populism-including-thoughts-on-the
Thanks for sharing this, Zoe!
I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don't agree with all your points or the ways you frame them.
Things that would make me excited to read future work, and IMO would make that work stronger:
What does TUA stand for?
Techno-utopian approach (via paper abstract)
How could we solve this?
Singer started the Journal of Controversial Ideas, which lets people publish under pseudonyms.
https://journalofcontroversialideas.org/
Maybe more people should try to publish criticisms there, or there could be funding for an EA-specific journal with similar rules.
I guess there are problems with this suggestion, let me know what they are.
I like the idea of setting up a home for criticisms of EA/longtermism. Although I guess the EA Forum already exists as a natural place for anyone to post criticisms, even anonymously. So I guess the question is — what is the forum lacking? My tentative answer might be prestige / funding. Journals offer the first. The tricky question on the second is: who decides which criticisms get awarded? If it's just EAs, this would be disingenuous.
I think people don't appreciate how much upvotes and especially downvotes can encourage conformity.
Suppose a forum user has drafted "Comment C", and they estimate a 90% chance that it will be upvoted to +4, and a 10% chance it will be downvoted to -1.
Do we want them to post the comment? I'd say we do -- if we take score as a proxy for utility, the expected utility is positive.
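A minimal worked version of that arithmetic, using the illustrative probabilities and scores above and treating karma score as the proxy for utility:

```latex
% Expected karma of posting the hypothetical Comment C
E[\text{score}] = 0.9 \times (+4) + 0.1 \times (-1) = 3.6 - 0.1 = 3.5 \;>\; 0
```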
However, I submit that for most people, the 10% chance of being downvoted to -1 is much more salient in their mind -- the associated rejection/humiliation of -1 is a bigger social punishment than +4 is a social reward, and people take those silly "karma" numbers surprisingly seriously.
It seems to me that there are a lot of users on this forum who have almost no comments voted below 0, suggesting a revealed preference to leave things like "Comment C" unposted (or even worse, they don't think the thoughts that would lead to "Comment C" in the first place). People (including me) just don't seem very willing to be unpopular. And as a result, we aren't just losing stuff that would be voted to -1. We're losing stuff which people thought might be voted to -1.
(I also don't think karma is a great proxy for utility... (read more)
Alternatively, the "Long Reflection" has already begun, it's just not very evenly distributed. And humanity has a lot of things to hash out.
At an object level, I appreciate this statement on page 15:
At a meta level, thank you for your bravery and persistence in publishing this paper. I've added some tags to this post, including Criticism of the effective altruism community.
I'm happy with more critiques of total utilitarianism here. :)
For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering focused. This often takes the form of negative utilitarianism, but other variants of suffering focused ethics exist.
I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.").
I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk are translated into arguments for reducing s-risks.
I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people's preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.
Clarificatory question - are you arguing here that stagnation at the current level of technology would be a good thing?
If so, there seem to be several problems with this. This seems like a very severe bound on human potential. Even in the richest countries in the world, most people work for about a third of their lives in jobs they find incredibly boring.
It also seems like this would expose us to indefinite risk from engineered pandemics. What do you make of that risk?
It also seems unlikely that climate change will be fixed without very strong technological progress in things like zero carbon fuels, energy storage etc.
In my view, covid is a very dramatic counter-example to the benefits of technological stagnation/degrowth for the climate. Millions of people died, billions of people were locked in their homes for months on end and travel was massively reduced. In spite of that, emissions in 2020 merely fell to 2011 levels. The climate challenge is to get to net zero emissions. A truly enormous humanitarian cataclysm would be required to make that happen without improved technology.
On your last paragraph, the instinct you characterise as techno-utopian here just seems to me to be clearly correct. It just seems true that we are more likely to solve climate change by making better low carbon tech than we are to get everyone to get all countries to agree to stop all technological progress. Consider emissions from cars. Suppose for the sake of argument that electric cars were as advanced as they were ten years ago and were not going to improve. What, then, would be involved in getting car emissions to zero? On your approach, the only option seems to be for billions of people to give up their cars, and for them only to be accessible to people who can afford a $100k Tesla. That approach is obviously less likely to succeed than the 'techno-optimist' one of making electric cars better (which is the path we have taken, with significant success)
Hi David, I was arguing against this point:
"I'm saying that the instinct to judge coming up with a magic technology to allow economic growth and the current state of life while fixing climate change as more likely than global coordination to use existing technology in more sustainable ways feels techno-utopian to me."
So, the author was saying that s/he thinks we are more likely to solve climate change by global coordination with zero technological progress than we are through continued economic growth and technological progress. I argued that this wasn't true. This isn't a false dichotomy, I was discussing the dichotomy explicitly made by the author in the first place.
My claim is that without technological progress in electricity, industry and transport we are extremely unlikely to solve climate change, which is the point that Luke Kemp seems to disagree with.
How is this a false dilemma?
Technically it omits a third option (technological progress in areas other than low carbon technology) but it certainly seems to cover all the relevant possibilities to me. Whether we have carbon taxes and so on is a somewhat separate issue: Halstead is arguing that without technological progress, sufficiently high carbon taxes would be ruinously expensive.
But this is differential technological development, which the authors strongly reject. The author and commenter explicitly ask us to consider how well we would fare if we stopped technological progress entirely.
"it's more nuanced than that".
Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers and that gap means that the field can be ignored as 'far fetched'. Democratisation is a critical component (along with apoliticisation).
I must say that it was a bit of a surprise to me that TUA is seen as the paradigm approach to ERS. I've worked in this space for about 5-6 years and never really felt that I was drawn to strong-longtermism or transhumanism, or technological progress. ERS seems like the limiting case of ordinary risk studies to me. I've worked in healthcare quality and safety (risk to one person at a time), public health (risk to members of populations) and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact in peer reviewed work I've prev... (read more)
I haven't opened the paper yet - this is a reply to the content of the forum post.
Thank you for writing it. I completely agree with you: EA has to not only tolerate critics, but also encourage critical debate among its members.
Disabling healthy thought processes for fear of losing funding is disastrous, and puts into question the effectiveness of funding obtained in that way.
I furthermore agree with all the changes you suggested the movement should make.
Is there a non-PDF version of the paper available? (e.g. html)
From skimming, a couple of the arguments seem to be the same ones I brought up here, so I'd like to read the paper in full, but knowing myself I won't have the patience to get through a 35-page PDF.
I'm not affiliated with EA research organizations at all (I help run a local group in Finland and am looking at industry and other EA-adjacent career options more so than specifically research).
However, I have had multiple discussions with fellow local EAs where it was deemed problematic that some x-risk papers are subject to quite "weak" standards of criticism relative to how much they often imply. Heartfelt thanks to you both for publishing and discussing this topic, and for starting a conversation on the important meta-topic of how EA research topics, funding decisions, and standards are set.
Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.
What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.
Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational. In fact, they are extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of min... (read more)
I'm genuinely not sure why I'm being downvoted here. What did I say?
I think it's because you're making strong claims without presenting any supporting evidence. I don't know what reading lists you're referring to; I have doubts about not asking questions being an 'unspoken condition' about getting access to funding; and I have no idea what you're conspiratorially alluding to regarding 'quasi-censorship' and 'emotional blackmail'.
I also feel like the comment doesn't seem to engage much with the perspective it criticizes (in terms of trying to see things from that point of view). (I didn't downvote the OP myself.)
When you criticize a group/movement for giving money to those who seem aligned with their mission, it seems relevant to acknowledge that it wouldn't make sense to not focus on this sort of alignment at all. There's an inevitable, tricky tradeoff between movement/aim dilution and too much insularity. It would be fair if you wanted to claim that EA longtermism is too far on one end of that spectrum, but it seems unfair to play up the bad connotations of taking actions that contribute to insularity, implying that there's something sinister about having selection criteria at all, without acknowledging that taking at least some such actions is part of the only sensible strategy.
I feel similar about the remark about "techbros." If you're able to work with rich people, wouldn't it be wasteful not to do it? It would be fair if you wanted to claim that the rich people in EA use their influence in ways that... what is even the claim here? That their idiosyncrasies end up having an outsized effect? ... (read more)
My apologies, specific evidence was not presented with respect to...
- ...the quasi-censorship/emotional blackmail point because I think it's up to the people involved to provide as much detail as they are personally comfortable with. All I can morally do is signal to those out of the loop that there are serious problems and hope that somebody with the right to name names does so. I can see why this may seem conspiratorial without further context. All I can suggest is that you keep an ear to the ground. I'm anonymous for a reason.
- ...the funding issue because either it fits the first category of "areas where I don't have a right to name names" (cf. "...any critique of central figures in EA would result in an inability to secure funding from EA sources..." above) or because the relevant information would probably be enough to identify me and thus destroy my career.
- ...the reading list issue because I thought the point was self-evident. If you would like some examples, see a very brief selection below, but this criticism applies to all relevant reading lists I have seen and is an area where I'm afraid we have prior form - see https://www.simonknutsson.com/problems-in-effective-altruism-an
... (read more)
Personally, I more or less agreed with you, and I don't think you were as insensitive as people suggested. I work in machine learning, yet I feel shining a light on the biases and the outsized control of people in the tech industry is warranted and important.
IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.
If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP's point for them...
I think that the dismissive and insulting language is at best unhelpful - and signaling your affiliations by being insulting to people you see as the outgroup seems like a bad strategy for engaging in conversation.
The "content" here is that you refer to the funders you dislike with slurs like "techbro". It's reasonable to update negatively in response to that evidence.
Priors should matter! For example, early rationalists were (rightfully) criticized for being too open to arguments from white nationalists, believing they should only look at the argument itself rather than the source. It isn't good epistemics to ignore the source of an argument and their potential biases (though it isn't good epistemics to dismiss them out of hand either based on that, of course).
I think it's plausible that it's hard to notice this issue if your personal aesthetic preferences happen to be aligned with TUA. I tried to write here a little questioning how important aesthetic preferences may be. I think it's plausible that people can unite around negative goals even if positive goals would divide them, for instance, but I'm not convinced.
>the idea of [...] the NTI framework [has] been wholesale adopted despite almost no underpinning peer-review research.
I argue that the importance-tractability-crowdedness framework is equivalent to maximizing utility subject to a budget constraint.
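For readers who haven't followed the link, here is one standard way to write that equivalence (a sketch in my own notation, not necessarily the linked post's): let U be utility, P the fraction of a problem that is solved, and R the resources (e.g. dollars) devoted to it. The chain rule then factorises the marginal utility of an extra unit of resources into the three terms of the framework:

```latex
% Sketch of the chain-rule decomposition behind the importance-tractability-crowdedness framework.
% U = utility, P = fraction of the problem solved, R = resources devoted to the problem.
\frac{dU}{dR}
  \;=\;
  \underbrace{\frac{dU}{dP}}_{\text{importance}}
  \;\cdot\;
  \underbrace{\frac{dP}{d\ln R}}_{\text{tractability}}
  \;\cdot\;
  \underbrace{\frac{d\ln R}{dR}}_{\text{crowdedness}\,=\,1/R}
```

Spending a fixed budget wherever dU/dR is highest is then just utility maximisation under a budget constraint, which I take to be the sense of "equivalent" in the linked argument.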
Re the undue influence of TUA on policy, you say
"An obvious retort here would be that these are scholars, not decision-makers, that any claim of elitism is less relevant if it refers to simple intellectual exploration. This is not the case. Scholars of existential risk, especially those related to the TUA, are rapidly and intentionally growing in influence. To name only one example noted earlier, scholars in the field have already had “existential risks” referenced in a vision-setting report of the UN Secretary General. Toby Ord has been referenced, ... (read more)
"Toby Ord shouldn't seek to influence policy" is not the message I get from that paragraph, fwiw.
It comes across to me as "Toby Ord and other techno-optimists already have policy influence [and so it's especially important for people who care about the long-term future to fund researchers from other viewpoints as well]."
I'm obviously not the authors; maybe they did mean to say that you and Toby Ord should stop trying to influence policy. But that wasn't my first impression.
I thought it was clear, in context, that the point made was that a minority shouldn't be in charge, especially when ignoring other views. (You've ignored my discussion of this in the past, but I take it you disagree.)
That doesn't mean they shouldn't say anything, just that we should strive for more representative views to be presented alongside theirs - something that Toby and Luke seem to agree with, given what they have suggested in the CTLR report, in this paper, and elsewhere.