Should we fund people for more years at a time? I've heard that various EA organisations and individuals with substantial track records still need to apply for funding one year at a time, because they are either refused longer-term funding or expect that they would be.
For example, the LTFF page asks for applications to be "as few as possible", but clarifies that this means "established organizations once a year unless there is a significant reason for submitting multiple applications". Even the largest organisations seem to only receive OpenPhil funding every 2-4 years. For individuals, even if they are highly capable, ~12 months seems to be the norm.
Offering longer (2-5 year) grants would have some obvious benefits:
The biggest benefit, though, I think, is this:
Job security is something people value immensely. This is especially true as you get older (something I've noticed tbh), and would be even more so for someone trying to raise kids. In the EA economy, many people get by on short-term gr... (read more)
Lots of my favorite EA people seem to think this is a good idea, so I'll provide a dissenting view: job security can be costly in hard-to-spot ways.
Another consideration is turnaround time. Grant decisions are slow, deadlines are imperfectly timed, and distribution can be slow. So you need to apply many months before you need the money, which means well before you've used up your current grant, which means planning your work to have something to show ~6 months into a one-year grant. Which is just pretty slow.
I don't know what the right duration is, but the process needs to be built so that continuous funding for good work is a possibility, and possible to plan around.
I feel like reliability is more important than speed here, and it's ~impossible to get this level of reliability from orgs run by unpaid volunteers with day jobs. Especially when volunteer hours aren't fungible and the day jobs are demanding.
I think Lightspeed set a fairly ambitious goal it has struggled to meet. I applied for a fast turnaround and got a response within a week, but two months later I haven't received the check. The main grants were supposed to be announced on 8/6 and AFAIK still haven't been. This is fine for me, but if people didn't have to plan around it or risk being screwed I think it would be better.
Based on some of Ozzie's comments here, I suspect that using a grant process made for organizations to fund individuals is kind of doomed, and you either want to fund specific projects without the expectation that it's someone's whole income (which is what I do), or do something more employer-like, with regular feedback cycles and rolling payments. And if you do the former, it needs to be enough money to compensate for the risk and inconvenience of freelancing.
I kind of get the arguments against paying grantmakers, but from my perspective I'd love to see you paid more with a higher reliability level.
I agree - I think the financial uncertainty created by having to renew funding each year is significantly costly and stressful, and makes it hard to commit to longer-term plans.
I think that the OpenPhil situation can be decent. If you have a good manager/grantmaker whom you have a good relationship with and a lot of trust in, then I think this could provide a lot of the benefit. You don't have assurance, but you should have a solid understanding of what you need to accomplish to get further funding - and thus a good idea of your chances.
I think that the non-OP funders are pretty amateurish here and could improve a lot.
I think the situation now is pretty lacking. However, the most obvious/suggested fix is more like "better management and ongoing relationships", rather than 3+ year grants. The former could basically be the latter, if desired (once you start getting funding, it's understood to be very likely to continue, absent huge mess-ups).
Just for the record, I don't think OP has been doing well in this respect since the collapse of FTX. My sense is few Open Phil grantees know the conditions under which they would get more funding, or when they might receive an answer. At least I don't, and none of the Open Phil grantees I've talked to in the past few months about this felt like this was handled well either.
I've wondered about this for independent projects and there's some previous discussion here.
See also the shadows of the future term that Michael Nielsen uses.
Comments on Jacy Reese Anthis' Some Early History of EA (archived version).
Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case.
I'll follow the chronological structure of Jacy's post, focusing first on 2008-2012, then 2012-2021. Finally, I'll discuss "founders" of EA, and sum up.
2008-2012
Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great - so far I agree.
What is important to note, however, are the contributions that these various groups made. For the first decade of EA, most of EA's key community institutions came from (4) - the Oxford community, including GWWC, 80k, and CEA - and secondly from (2), although Givewell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostl... (read more)
Thanks for this, and for your work on Felicifia. As someone who's found it crucial to have others around me setting an example for me, I particularly admire the people who basically just figured out for themselves what they should be doing and then started doing it.
Fwiw re THINK: I might be wrong in this recollection, but at the time it felt like very clearly Mark Lee's organisation (though Jacy did help him out). It also was basically only around for a year. The model was 'try to go really broad by contacting tonnes of schools in one go and getting hype going'. It was a cool idea which had precedent, but my impression was the experiment basically didn't pan out.
That's very nice of you to say, thanks Michelle!
Regarding THINK, I personally also got the impression that Mark was the sole founder, albeit one who managed other staff. I had just taken Jacy's claim of co-founding THINK at face value. If his claim was inaccurate, then clearly Jacy's piece was more misleading than I had realised.
I agree with the impression that Mark Lee seemed the sole founder. I was helping Mark Lee with some minor contributions at THINK in 2013, and Jacy didn't strike me as one of the main contributors at the time. (Perhaps he was more involved with a specific THINK group, but not the overall organization?)
I think I agree with essentially all of this, though I would have preferred it if you had given this feedback when you were reading the draft, because I would have worded my comments to ensure they don't give the impression you're worried about.
If it seemed to you like I was raising different issues in the draft, then each to their own, I guess. But these concerns were what I had in mind when I wrote comments like the following:
> 2004–2008: Before I found other EAs
If you're starting with this, then you should probably include "my" in the title (or similar) because it's about your experience with EA, rather than just an impartial historical recount... you allocate about 1/3 of the word count to autobiographical content that is only loosely related to the early history of EA...
> In general, EA emerged as the convergence from 2008 to 2012 at least 4 distinct but overlapping communities
I think the "EA" name largely emerged from (4), and it's core institutions mostly from (4) with a bit of (2). You'd be on more solid ground if you said that the EA community - the major contributors - emerged from (1-4), or if you at least clarified this somehow.
The FTX crisis through the lens of wikipedia pageviews.
(Relevant: comparing the amounts donated and defrauded by EAs)
1. In the last two weeks, SBF has had about 2M views to his wikipedia page. This absolutely dwarfs the number of pageviews to any major EA previously.
2. Viewing the same graph on a logarithmic scale, we can see that even before the recent crisis, SBF was the best known EA. Second was Moskovitz, and roughly tied at third are Singer and MacAskill.
3. Since the scandal, many people will have heard about effective altruism in a negative light. The effective altruism page has been accumulating pageviews at about 10x its normal rate. If pageviews are a good guide, then about 2% of everyone who has ever heard of effective altruism first heard of it in the last two weeks, through the FTX implosion.
4. Interest in "longtermism" has been only weakly affected by the FTX implosion, and "existential risk" not at all.
Given this and the fact that two books and a film are on the way, I think that "effective altruism" is more likely than not to lose all its brand value. Whereas "existential risk" is far enough removed that it is untainted by these events. "Longt... (read more)
Updated pageview figures:
There are apparently five films/series/documentaries coming up on SBF - these four, plus Amazon.
My impression is that the coverage of EA has been more negative than you suggest, even though I don't have hard data either. It could be useful to look into.
The NYT article isn't an opinion piece but a news article, and I guess it's a bit less clear how to classify news articles. Potentially one should distinguish between news articles and opinion pieces. But in any event, I think that if someone who didn't know about EA before reads the NYT article, they're more likely to form a negative than a positive opinion.
A case of precocious policy influence, and my pitch for more research on how to get a top policy job.
Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (centre-left think tank). Got a JD from Yale in 2017, and published work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, became an associate professor of law at Columbia. This year - 2021 - she was appointed by Biden.
The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind-of makes sense that she should be able to get such a role: she has elite academic credentials that are highly relevant for the role, has ridden the hipster antitrust wave, and has experience of and willingness to work in government.
I think biosecurity and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials, while also engaging with regulatory issues and working for regulators, or more broadly, in the executive branch of government. ... (read more)
What's especially interesting is that the one article that kick-started her career was, by truth-orientated standards, quite poor. For example, she suggested that Amazon was able to charge unprofitably low prices by selling equity/debt to raise more cash - but you only have to look at Amazon's accounts to see that they have been almost entirely self-financing for a long time. This is because Amazon has actually been cashflow positive, in contrast to the impression you would get from Khan's piece. (More detail on this and other problems here).
Depressingly this suggests to me that a good strategy for gaining political power is to pick a growing, popular movement, become an extreme advocate of it, and trust that people will simply ignore the logical problems with the position.
My impression is that a lot of her quick success was because her antitrust stuff tapped into progressive anti Big Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist but otherwise, I think it would take a lot of thought to figure out how an EA could replicate this.
Agreed that in her outlying case, most of what she's done is tap into a political movement in ways we'd prefer not to. But is that true for high-performers generally? I'd hypothesise that elite academic credentials + policy-relevant research + willingness to be political is enough to get people into elite political positions - maybe a tier lower than hers, a decade later - but it'd be worth knowing how all the variables in these different cases contribute.
Yep - agree with all that, especially that it would be cool for somebody to look into the general question.
Putting things in perspective: what is and isn't the FTX crisis, for EA?
In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is really severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll now list some important things that are, in my opinion, badly damaged, and some that aren't, or that might not be.
What in EA is badly damaged:
I also can't think of a bigger scandal in the 223-year history of utilitarianism
I feel like there's been a lot here, though not as "one sudden shock".
Really, every large ideology I can think of has some pretty massive scandals associated with it. The political left, political right, lots of stuff. FTX is tame compared to a lot of that. (Still really bad, of course.)
I think the FTX stuff is a bigger deal than Peter Singer's views on disability, and for me to be convinced about the England and enlightenment examples, you'd have to draw a clearer line between the philosophy and the wrongful actions (cf. in the FTX case, we have a self-identified utilitarian doing various wrongs for stated utilitarian reasons).
I agree that every large ideology has had massive scandals, in some cases ranging up to purges, famines, wars, etc. I think the problem for us, though, is that there aren't very many people who take utilitarianism or beneficentrism seriously as an action-guiding principle - there are only ~10k effective altruists, basically. What happens if you scale that up to 100k and beyond? My claim would be that we need to tweak the product before we scale it, in order to make sure these catastrophes don't scale with the size of the movement.
Fwiw I'm not sure it badly damages the publishability. It might lead to more critical papers, though.
Translating EA into Republican. There are dozens of EAs in US party politics, Vox, the Obama admin, Google, and Facebook. Hardly any in the Republican party, working for the WSJ, appointed under Trump, or working for Palantir. A dozen community groups in places like NYC, SF, Seattle, Berkeley, Stanford, Harvard, Yale. But none in Dallas, Phoenix, Miami, the US Naval Laboratory, the West Point Military Academy, etc - the libertarian-leaning GMU economics department being a sole possible exception.
This is despite the fact that people passing through military academies would be disproportionately likely to work on technological dangers in the military and public service, while the competition there is less intense than at more liberal colleges.
I'm coming to the view that similarly to the serious effort to rework EA ideas to align with Chinese politics and culture, we need to translate EA into Republican, and that this should be a multi-year, multi-person project.
I thought this Astral Codex Ten post, explaining how the GOP could benefit from integrating some EA-aligned ideas like prediction markets into its platform, was really interesting. Karl Rove retweeted it here. I don't know how well an anti-classism message would align with EA in its current form though, if Habryka is right that EA is currently "too prestige-seeking".
My favorite example of Slate Star Codex translating into Republican is the passage on climate change starting with "In the 1950s": https://slatestarcodex.com/2014/10/16/five-case-studies-on-politicization/
SBF's views on utilitarianism
After hearing about his defrauding of FTX, like everyone else, I wondered why he did it. I haven't met Sam in over five years, but one thing that I can do is take a look at his old Felicifia comments. At that time, back in 2012, Sam identified as an act utilitarian, and said that he would follow rules (such as abstaining from theft) only if and when there was a real risk of getting caught. You can see this in the following pair of quotes.
Quote #1. Regarding the Parfit's Hitchhiker thought experiment, he said:
> I'm not sure I understand what the paradox is here. Fundamentally if you are going to donate the money to THL and he's going to buy lots of cigarettes with it it's clearly in an act utilitarian's interest to keep the money as long as this doesn't have consequences down the road, so you won't actually give it to him if he drives you. He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one. Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism, because ... (read more)
Sam Bankman-Fried (~$25B) is currently estimated to be about twice as rich as Dustin Moskovitz (~$13B). The rest of committed EA money is <$10B, so SBF and colleagues constitute close to, if not more than, half of all EA funds. I don't think people have fully reoriented toward this reality. For example, we should care more about top talent going to the FTX Foundation, and worry less if OpenPhil won't fund a pet project.
Obviously, crypto is volatile, so this may change!
Five recruitment ideas.
Here are five ideas, each of which I suspect could improve the flow of EA talent by at least a few percent.
1. A top math professor who takes on students in alignment-relevant topics
A few years ago, this was imperative in CS. Now we have some AIS professors in CS, and a couple in stats, but none in pure math. But some students obsessed with pure math, and interested in AIS, are very bright, yet don't want to drop out of their PhDs. Thus having a top professor could be a good way to catch these people.
2. A new university that could hire people as professors
Because some academics don't want to leave academia.
3. A recruitment ground for politicians. This could involve top law and policy schools, and would not be explicitly EA-branded.
Because we need more good candidates to support. And some distance between EAs and the politicians we support could help with both epistemic and reputational contamination.
4. Mass scholarships, awarded based on testing, for students at non-elite, non-US high schools and undergraduate programs. This could award thousands of scholarships per year.
A lot of top scientists study undergrad in their own country, so it would make sense to either fund them... (read more)
Getting advice on a job decision, efficiently (five steps)
When using EA considerations to decide between job offers, asking for help is often a good idea, even if those who could provide advice are busy and their time is valued. This is because advisors can spend minutes of their time to guide years of yours. It's not disrespecting their "valuable" time if you do it right. I've had some experience both as an advisor and as an advisee, and I think a safe bet is to follow these steps:
People often tell me that they encountered EA because they were Googling "How do I choose where to donate?", "How do I choose a high-impact career?" and so on. Has anyone considered writing up answers to these topics as WikiHow instructionals? It seems like it could attract a pretty good amount of traffic to EA research and the EA community in general.
I recently published six new wikiHow articles to promote EA principles: How to Make a Difference in Your Career, How to Help Farmed Animals, How to Launch a High Impact Nonprofit, How to Reduce Animal Cruelty in Your Diet, How to Help Save a Child's Life with a Malaria Bed Net Donation, and How to Donate Cryptocurrency to Effective Charities.
Some titles might change soon in case you can't find them anymore (e.g., How to Reduce Animal Cruelty in Your Diet --> How to Have a More Ethical Diet Towards Animals, and How to Help Save a Child's Life with a Malaria Bed Net Donation --> How to Help Save a Child's Life from Malaria).
Three more are in the approval process (you have to wait a few days before seeing them): How to Fight Climate Change by Donating to the Best Charities, How to Donate to the Most Effective Animal Welfare Charities, and How to Help the World's Poorest People by Sending Money. I will publish some more articles in the following weeks.
Let me know if you have feedback on the articles, and I'll be glad to improve them :)
Also, thank you for writing this shortform, as it inspired my mentor Cillian Crosson to ask me about writing these wikiHows :)
EA Highschool Outreach Org (see Catherine's and Buck's posts, my comment on EA teachers)
Running a literal school would be awesome, but seems too consuming of time and organisational resources to do right now. Assuming we did want to do that eventually, what could be a suitable smaller step? Founding an organisation with vetted staff, working full-time on promoting analytical and altruistic thinking to high-schoolers - professionalising in this way increases the safety and reputability of these programs. Its activities should be targeted at top schools, and could include, in increasing order of duration:
I'm not confident this would go well, given the various reports from Catherine's recap and Buck's further theorising. But targeting the right students, and bri... (read more)
Affector & Effector Roles as Task Y?
Longtermist EA seems relatively strong at thinking about how to do good, and raising funds for doing so, but relatively weak in affector organs that tell us what's going on in the world, and effector organs that influence the world. Three examples of ways that EAs can actually influence behaviour are:
- working in & advising US nat sec
- working in UK & EU governments, in regulation
- working in & advising AI companies
But I expect this is not enough, and our (a/e)ffector organs are bottlenecking our impact. To be clear, it's not that these roles aren't mentally stimulating - they are. It's just that their impact lies primarily in implementing ideas, and uncovering practical considerations, rather than in an ivory tower's pure, deep thinking.
The world is quickly becoming polarised between US and China, and this means that certain (a/e)ffector organs may be even more neglected than the others. We may want to promote: i) work as a diplomat ii) working at diplomat-adjacent think tanks, such as the Asia Society, iii) working at relevant UN bodies, relating to disarmament and bioweapon control, iv) working at UN... (read more)
[Maybe a bit of a tangent]
A Brookings article argues that (among other things):
This updated me a little bit further towards thinking it might be useful:
Here's the part of the article which... (read more)
High impact teachers? (Teaching as Task Y). More recent thoughts at EA Highschool Outreach Org. See also An EA teaching pathway?
The typical view, here, on high-school outreach seems to be that:
So I think high-school outreach should be done, but done differently. Involving some teachers would be a useful step toward professionalisation (separating the outreach from the rationalist community would be another).
But (1) also suggests that teaching at a school for gifted children could be a priority activity in itself. The argument is that if a teacher can inspire a bright student to try to do good in their career, then the student might be manyfold more effective than the teacher themselves would have been if they had tried to work directly on the world's problems. And students at such schools are exceptional enough (Z>2) that this could happen many ... (read more)
Overzealous moderation?
Has anyone else noticed that the EA Forum moderation is quite intense of late?
Back in 2014, I'd proposed quite limited criteria for moderation: "spam, abuse, guilt-trips, socially or ecologically destructive advocacy". I'd said then: "Largely, I expect to be able to stay out of users' way!" But my impression is that at some point after 2017, the moderators took to advising and sanctioning users based on their tone - for example, here (Halstead being "warned" for unsubstantiated true comments) - with "rudeness" and "Other behavior that interferes with good discourse" being criteria for content deletion. Generally, I get the impression that we need more, not fewer, people directly speaking harsh truths, and that it's rarely useful for a moderator to insert themselves into such conversations, given that we already have other remedies: judging a user's reputation, counterarguing, or voting up and down. Overall, I'd go as far as to conjecture that if moderators did 50% less (by continuing to delete spam, but standing down in the less clear-cut cases) the forum would be better off.
Speaking as the lead moderator, I feel as though we really don’t make all that many visible “warning” comments (though of course, "all that many" is in the eye of the beholder).
I do think we’ve increased the number of public comments we make, but this is partly due to a move toward public rather than private comments in cases where we want to emphasize the existence of a given rule or norm. We send fewer private messages than we used to (comparing the last 12 months to the 18 months before that).
Since the new Forum was launched at the end of 2018, moderator actions (aside from deleting spam, approving posts, and other “infrastructure”) have included:
I generally think more moderation is good, but have also pushed back on a number of specific moderation decisions. In general I think we need more moderation of the type "this user seems like they are reliably making low-quality contributions that don't meet our bar" and less moderation of the type "this was rude/impolite but its content was good", of which there have been a few recently.
I don't have a view of the level of moderation in general, but think that warning Halstead was incorrect. I suggest that the warning be retracted.
It also seems out of step with what the forum users think - at the time of writing, the comment in question has 143 Karma (56 votes).
The Safety/Capabilities Ratio
People who do AI safety research sometimes worry that their research could also contribute to AI capabilities, thereby hastening a possible AI safety disaster. But when might this be a reasonable concern?
We can model a researcher i as contributing intellectual resources of sᵢ to safety and cᵢ to capabilities, both real numbers. We let the total safety investment (of all researchers) be s = ∑ᵢ sᵢ, and the total capabilities investment be c = ∑ᵢ cᵢ. Then, we assume that a good outcome is achieved if s > c/k, for some constant k, and a bad outcome otherwise.
The assumption that the threshold is s > c/k could be justified by safety and capabilities research having diminishing returns. Then you could have log-uniform beliefs (over some interval) about the level of capabilities c′ required to achieve AGI, and the amount of safety research c′/k required for a good outcome. Within the support of c′ and c′/k, linearly increasing s/c will linearly increase the chance of safe AGI.
In this model, having a positive marginal impact doesn't require us to completely abstain ... (read more)
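Here is a minimal sketch of the marginal-impact point in Python. The field-wide totals, the researcher's contributions, and the variable names are illustrative assumptions of mine, not figures from the model:

```python
# Minimal sketch of the threshold model above (illustrative numbers only).
# A good outcome needs s > c/k, so what matters at the margin is whether a
# contribution raises or lowers the field-wide ratio s/c.

def marginal_impact_positive(s_total, c_total, s_i, c_i):
    """True iff adding researcher i's contributions raises the ratio s/c."""
    return (s_total + s_i) / (c_total + c_i) > s_total / c_total

# Hypothetical field-wide totals and one researcher's contributions.
s_total, c_total = 10.0, 1000.0   # total safety and capabilities investment
s_i, c_i = 2.0, 50.0              # this researcher's safety and capabilities output

helps = marginal_impact_positive(s_total, c_total, s_i, c_i)
print(f"field ratio s/c        = {s_total / c_total:.4f}")
print(f"researcher ratio si/ci = {s_i / c_i:.4f}")
print("positive marginal impact" if helps else "negative marginal impact")

# Algebraically, (s + s_i)/(c + c_i) > s/c reduces to s_i/c_i > s/c:
# a contribution helps as long as its own safety:capabilities ratio
# beats the field-wide ratio.
assert helps == (s_i / c_i > s_total / c_total)
```

Under these assumptions, a researcher is net-positive whenever their own safety:capabilities ratio exceeds the field-wide ratio, which is the sense in which one needn't completely abstain from capabilities-relevant work.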
AI Seems a Lot More Risky Than Biotech
We tend to think that AI x-risk is mostly from accidents because, well, few people are omnicidal, and alignment is hard, so an accident is more likely. We tend to think that in bio, on the other hand, it would be very hard for a natural or accidental event to cause the extinction of all humanity. But the arguments we use for AI ought to also imply that the risks from intentional use of biotech are quite slim.
We can state this argument more formally using three premises:
It follows from (1-3) that x-risk from AI is >10x larger than that of biotech. We ought to believe that (1) and (3) are true for reasons given in the first paragraph. (2) is, in my opinion, a topic too fraught with infohazards to be fit for public debate. That said, it seems plausible due to AI being generally more powerful than biotech. So I lean toward thinking the conclusion is correct.
In The Precipice, the risk from AI was rated as merely 3x greater. But if the difference is >10x, then almost all longtermists who are not much more competent in bio than in AI should prefer to work on AIS.
I like this approach, even though I'm unsure of what to conclude from it. In particular, I like the introduction of the accident vs non-accident distinction. It's hard to get an intuition of what the relative chances of a bio-x-catastrophe and an AI-x-catastrophe are. It's easier to have intuitions about the relative chances of:
That's what you're making use of in this post. Regardless of what one thinks of the conclusion, the methodology is interesting.
Which longtermist hubs do we most need? (see also: Hacking Academia)
Suppose longtermism already has some presence in SF, Oxford, DC, London, Toronto, Melbourne, Boston, New York, and is already trying to boost its presence in the EU (especially Brussels, Paris, Berlin), UN (NYC, Geneva), and China (Beijing, ...). Which other cities are important?
I think there's a case for New Delhi, as the capital of India. It's the third-largest country by GDP (PPP), soon-to-be the most populous country, high-growth, and a neighbour of China. Perhaps we're neglecting it due to founder effects, because it has lower average wealth, because its universities aren't thriving, and/or because it currently has a nationalist government.
I also see a case for Singapore - its government and universities could be a place from which to work on de-escalating US-China tensions. It's physically and culturally not far from China. As a city-state, it benefits a lot from peace and global trade. It's by far the most-developed member of ASEAN, which is also large, mostly neutral, and benefits from peace. It's generally very technocratic with high historical growth, and is also the HQ of APEC.
I feel Indonesia / Jakarta is perhaps overlooked / neglected sometimes, despite it being expected to be the world's 4th largest economy by 2050:
Hacking Academia.
Certain opportunities are much more attractive to the impact-minded than to regular academics, and so may be good value relative to how competitive they are.
Thinking about an academic career in this way makes me think more people should pursue tenure at UMD, Georgetown, and Johns Hopkins (good for both biosecurity and causal models of AI), than I thought beforehand.
Making community-building grants more attractive
An organiser from Stanford EA asked me today how community building grants could be made more attractive. I have two reactions:
EAs have reason to favour Top-5 postdocs over Top-100 tenure?
Related to Hacking Academia.
A bunch of people face a choice between being a postdoc at one of the top 5 universities, and being a professor at one of the top 100 universities. For the purpose of this post, let's set aside the possibilities of working in industry, grantmaking and nonprofits. Some of the relative strengths (+) of the top-5 postdoc route are accentuated for EAs, while some of the weaknesses (-) are attenuated:
+greater access to elite talent (extra-important for EAs)
+larger university-based EA communities, many of which are at top-5 universities
-less secure research funding (less of an issue in longtermist research)
-less career security (less important for high levels of altruism)
-can't be a sole supervisor of a PhD student (less important if one works with a full professor who can supervise, e.g. at Berkeley or Oxford).
-harder to set up a centre (this one does seem bad for EAs, and hard to escape)
There are also considerations relating to EAs' ability to secure tenure. Sometimes, this is decreased a bit due to the research running against prevailing trends.
Overall, I think that some EAs should... (read more)
A quite obvious point that may still be worth making is that the balance of the considerations will look very different for different people. E.g. if you're able to have a connection with a top university while being a professor elsewhere, that could change the calculus. There could be numerous idiosyncratic considerations worth taking into account.
This is probably overstated—at most major US research universities, tenure outcomes are fairly predictable, and tenure is granted in 80-95% of cases. This obviously depends on your field and your sense of your fit with a potential tenure-track job, though.
That said, it is much easier to do research when you're at an institution that is widely considered to be competitive/credible in your field and subfield, and the set of institutions that gets that distinction can be smaller than the (US) top 100 in many cases. So, it may often make sense to go for a postdoc if you think it'll increase your odds of getting a job at a top-10 or top-50 institution.
How the Haste Consideration turned out to be wrong.
In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it’s important to do it sooner rather than later. After the passage of nine years, no one in the EA movement seems to believe it anymore, but it feels useful to recap what I view as the three main reasons why:
I have a few thoughts here, but my most important one is that your (2), as phrased, is an argument in favour of outreach, not against it. If you update towards a much better way of doing good, and any significant fraction of the people you 'recruit' update with you, you presumably did much more good via recruitment than via direct work.
Put another way, recruitment defers the question of how to do good into the future, and is therefore particularly valuable if we think our ideas are going to change/improve particularly fast. By contrast, recruitment (or deferring to the future in general) is less valuable when you 'have it all figured out'; you might just want to 'get on with it' at that point.
***
It might be easier to see with an illustrated example:
Let's say in the year 2015 you are choosing whether to work on cause P, or to recruit for the broader EA movement. Without thinking about the question of shifting cause preferences, you decide to recruit, because you think that one year of recruiting generates (e.g.) two years of counterfactual EA effort at your level of ability.
In the year 2020, looking back on this choice, you observe that you now work on cause Q, whic... (read more)
EA Tweet prizes.
Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.
Reasons this might be better than the EA Forum Prize:
1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively
2) The prizes could go to EAs who are not regular forum users, which could also help to promote EA more effectively.
One would have to check the rules and regulations.