
I recently attended EAGxOxford, EAGxBoston, and EAG London. In total, I had about 101 one-on-one conversations (±10, depending on how you count casual/informal 1-1s). The vast majority of these were with people interested in longtermist community-building or AI safety. 

Here are three of my biggest takeaways:

1) There are many people doing interesting work who aren’t well-connected

I recently moved to Berkeley, one of the major EA hubs for AI safety research and longtermist movement-building. I live with several people who are skilling up in alignment research, and I work out of an office with people who regularly talk about the probability of doom, timelines, takeoff speeds, MIRI’s agenda, Paul and Eliezer dialogues, ELK, agent foundations, interpretability, and, well, you get it.

At the EAGs, I was excited to meet many people (>30) interested in AI safety and longtermist community-building. Several (>10) of them were already dedicating a large portion of their time to AI safety or longtermist community-building (e.g., by spending a summer on AI safety research, leading a local EA chapter, or contracting for EA orgs).

One thing stood out to me, though: Many of the people I spoke to, including those who were already investing >100 hours into EA work, weren’t aware of the people/models/work in Berkeley and other EA hubs. 

Here’s a hypothetical example:

  • Alice: I’ve spent the last several semesters skilling up in AI safety research. I took several ML classes, and I’m going to be spending the summer working under [professor] at [my university] on [ML project that doesn’t really have to do with AI safety].
  • Me: Wow, that’s great! Have you considered working on the Eliciting Latent Knowledge challenge, or reading Evan Hubinger’s Risks from Learned Optimization sequence, or trying to distill alignment articles? 
  • Alice: Oh, no! I haven’t heard of most of these things… do you think it would be useful for me to do that instead?
  • Me: Well, I’m not sure what you should do. But I would encourage you to at least consider these options and be familiar with these resources and opportunities. As an example, did you ever consider applying to the Long-Term Future Fund to skill up on your own, and maybe visit Berkeley and some other EA hubs?

The point here is not that Alice should immediately drop what she’s doing. But I found it interesting how many people didn’t even realize what options they had available. Alice, for example, could apply for a grant to skill up in AI safety research in an EA hub. But she often doesn’t realize this, and even if she does, she doesn’t seriously consider it when thinking about her summer plans. 

I don’t think people should blindly defer to the people/models in EA hubs. But I do think that exposure to these people/models will generally help people make more informed decisions. Two quick examples:

  • People skilling up in AI safety research would generally benefit from understanding the major criticisms/concerns with current alignment research agendas. It seems useful to at least be exposed to some of the doomy people and understand why they’re so doomy.
  • People skilling up in longtermist community-building could benefit from understanding the major criticisms/concerns with general community-building efforts. It seems useful to at least be exposed to the arguments around impact being heavy-tailed, mass outreach compromising the epistemics of the community and making it less attractive for people who take ideas seriously, and concerns around community-builders not knowing enough about the issues they are community-building for.

One of the easiest ways to do this, I claim, is to talk directly to people doing this kind of work. After 1:1s with people who were doing (or seriously considering) longtermist work, I often asked, “Who would be good for this person to talk to?” and then I immediately threw them into some group chats.

More broadly, I’ve updated in the direction of the following claim: There are people doing (or capable of doing) meaningful longtermist work outside of major EA hubs. I’m excited about interventions that try to find these individuals and connect them to people who can support their work, challenge their thinking, and introduce them to new opportunities.

2) Considering wide action spaces is rare and valuable

It’s extremely common for people to think about the opportunities that are in front of them, rather than considering the entire action space of possibilities.

A classic example is when I met Bob, a community-builder at Peter Singer University.

  • Bob: I’ve been running the PSU group for the last year, and it’s been going pretty well. We have an AI safety group now, and we’re thinking about ways to do more projects and contests. I graduate this year, and I’m planning to do community-building full-time at PSU.
  • Me: Wow, great work at PSU, Bob! Out of curiosity, have you considered any other ways you could use your community-building aptitudes? Like, what if you take a step back… what do you think are the most important challenges that we’re facing? And how could a community-builder—not necessarily you—how could some imaginary person who just gets dropped in from the sky make the biggest impact?
  • Bob: [Mentions something pretty cool and ambitious].
  • Me: Yeah, that seems worth thinking more about. I also wonder if you’ve thought about supporting community-building efforts at MIT, or in India, or running a global alignment competition, or running a research scholars program, or… 
  • Bob: Woah, I haven’t thought about those, but like, why me? I don’t know anything about [India/MIT/competitions/research programs].
  • Me: Sure… and I’m not saying you would be a good fit for any of this. I barely know you! But I’d be pretty excited for you to seriously consider some of these wilder options, for at least 10-60 minutes, before you fully dismiss them. And I think people often underestimate how much they could learn about a particular topic if they really tried.

I think “considering wide action spaces” and “taking weird ideas seriously” are two of the traits that I most commonly see in highly impactful people. To be clear, I think considerations of personal fit are important, and we don’t want everyone trying everything. But I claim that people generally default to dismissing ideas prematurely and failing to seriously consider what it would look like to do something that deviates from the natural, intuitive, default pathways.

If you are a student at PSU, I encourage you to think seriously about projects, internships, research projects, skilling-up quests, and other opportunities that exist outside of PSU. Maybe the best thing for you to do is to stay, but you won’t know unless you consider the wide action space.

3) People should write down their ideas

At least 10 times during the EAGs, someone was describing something they had thought about in some detail (examples: a project proposal, a grant idea, comparisons between career options they had been considering). 

And I asked, “Wow, have you written any of this up?”

And the person (usually) responded, “Oh… uh. No—well, not yet! I might write it up later/I’m planning to write it up/Maybe after the conference I’ll write it up/I’m nervous to write it up/I don’t have enough to actually write up…”

Some benefits of writing that I’ve noticed:

  • Writing helps me think better. For instance, writing often forces me to be more concrete about my ideas, and it often helps me identify new uncertainties/confusions.
  • Writing improves the quantity and quality of feedback that I receive. Some EAs are much better at critiquing ideas in writing than in conversation.
  • When other people share their writing with me, I find it useful to be able to reflect on the ideas before conversing with them.
  • When other people share their writing with me, I generally take them more seriously. I update in the direction of “ah, they have thought seriously about this, and they might actually want to do this!”

If you’re reading this, I encourage you to take 30-60 minutes to start writing something. Here are some examples of things that I’ve been encouraging my friends (and myself) to write up:

  • How I’m Currently Thinking about My Career & Path to Impact
  • What do I think are the World’s Biggest Problems, and what are my Biggest Uncertainties?
  • How to Support Me When I am Upset
  • Should I Take this Job/Internship?
  • Bugs, How I am Working on them, and How my Friends can Help

If you write something down by April 30, feel free to submit to the Community Builder Writing Contest.

Miscellaneous Reflections

  • I think EAGxOxford was more valuable for me than EAGxBoston. This was mostly because I am based in the US, so there were many people in the UK who didn’t know me, or others in my network, or ideas that have been swimming around the US community-building/AI alignment scene. Slight update toward going to conferences that cause me to meet people outside my “bubble” or generally visiting non-US EA Hubs. 
  • I learned a lot about S-risks. Grateful to Linh Chi Nguyen for explaining multi-AI scenarios, spiteful preferences, “near-miss” scenarios, and astronomical misuse. I went from thinking “yeah s-risk stuff seems important” to “oh wow, there are some very specific and tangible problems in AI safety that are especially important from an s-risk perspective.”
  • I’ve also been thinking about s-risks in light of the Death with Dignity post and follow-up posts (like this, this, and this). If nothing else, it reminds me that there are outcomes even worse than “everyone dies.” If we fail to produce aligned AGI, maybe we can at least produce AGI that doesn’t torture anyone. (I imagine some people have written about this—please link in the comments if you know more about this!)
  • A lot of people are interested in alignment contests! Aris Richardson (from UC Berkeley) is currently running an alignment distillation contest for college students. If you want to talk about alignment contests, feel free to reach out to me (or her!)
  • A lot of people are interested in supporting AI safety researchers! Redwood Research is hiring for some exciting roles, including Head of Community, Operations Manager, Recruiting & MLAB Lead, and IT Analyst. I encourage you to apply, even if you’re not sure about your fit.
  • If you liked this piece, you might also like this reflection from the EA student summit (somewhat outdated) and this reflection from a few months ago (less outdated).

I’m grateful to Madhu Sriram, Luise Wöhlke, Lara Thurnherr, and Harriet Patterson for feedback on a draft of this post.

Comments

I know very little about this, but at a recent conference, I heard from informed people that S-risk seems to be one area in longtermism and AI risk that isn’t as well funded as others right now. 

As the OP says, S-risks are one of the few areas that are relevant to worldviews or theories of change with “very short timelines”; their “tractability” might rise with the underlying likelihood of AGI emergence or “short timelines”.

These S-risks seem particularly important under one view of these “short timelines”, which draws on a certain perspective about AI risk, described below:

If you or someone you know are seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects ~in the tens of $ millions (per project) happen. And if you have something more ambitious than that, I'm also always keen to hear it :)

This sounds great!

I heard from informed people that S-risk seems to be one area in longtermism or AI-risk that isn't as well funded as others right now

Would you say this statement is wrong, or a bad characterization of the funding situation? 

I want to be corrected so I don't spread misinformation.

I didn't run this by anyone else in the s-risk funding space, so please don't hold others to these numbers/opinions.
 

Tl;dr: I think this is probably right in direction but with lots of caveats. In particular, it's still the case that s-risk has a lot of money (~low hundreds $m) compared to ideas/opportunities at least right now and at least possibly more so than general longtermism. I think this might change soon since I expect s-risk money to grow less than general longtermist money.

edit: I think s-risk is ideas constrained when it comes to small grants and funding (and ideas) constrained for large grants/investments.

I'd estimate s-risk to have something in the low hundreds $m in expected value (not time-discounted) of current assets specifically dedicated to it. Your question is slightly hard to answer since I'm guessing OpenPhil and FTXF would fund at least some s-risk projects if there were more proposals/more demand for money in s-risk. Also, a lot of funded people and projects who don't work directly on s-risk still care about s-risk. Maybe that should be counted somehow. Naively not counting these people and OpenPhil/FTXF money at all and comparing current total assets in general longtermism vs. s-risk:

In absolute terms: Yup, general longtermism definitely has much more money (~two orders of magnitude). My guess is that this ratio will grow bigger over time and that it will in expectation grow bigger over time. (~70% credence for each of the claims? Again confused about how to count OpenPhil and FTX F money and how they'll decide to spend money in the future. If I stick to not counting them as s-risk money at all, then >70% credence.)

Per person working on s-risk/general longtermism: Would still say yes although I don't have a good way to count s-risk people and general longtermist people. Could be closer to even and probably not (much) more than an order of magnitude difference. Again, quick and wild guess is that the difference will in expectation grow larger over time, but less confident in this than my guess about how the ratio of absolute money will develop. (55%?)

Per quality-adjusted idea/opportunity to spend money: Unsure. I'd (much) rather have more money-eating ideas/opportunities to reduce s-risk than more money to reduce s-risk but I'm not sure if this is more or less the case compared to general longtermism (s-risk has both fewer ideas/opportunities and less money). Also don't know how this will develop. Arguably, the ratio between money and idea/opportunity also isn't a great metric because you might care more about absolutes here. I think some people might argue that s-risk is less funding constrained compared to ideas-constrained than general longtermism. This isn't exactly what you've asked for but still seems relevant. OTOH, having less absolute money does mean that the s-risk space might struggle to fund even one really expensive project.

edit: I do think if we had significantly more money right now, we would be spending more money now-ish.

Per "how much people in the EA community care about this issue": Who knows :) I'm obviously both biased and in a position that selects for my opinion.

Funding infrastructure: Funding in s-risk is even more centralized than in general longtermism, so if you think diversification is good, more s-risk funders are good :) There are also fewer structured opportunities for funding in s-risk and I think the s-risk funding sources are generally harder to find. Although again, I assume one could easily apply with an s-risk motivated proposal to general longtermist places, so it's kind of weird to compare the s-risk funding infrastructure to the general longtermist funding infrastructure.

 

I wrote this off the cuff and in particular, might substantially revise my predictions with 15 minutes of thought.

Wow, thanks for the reply!

Ok, so for me, the takeaway and socially best message (for a proponent of S-risk) is probably:

"For strong ideas/founders/leaders, there is ample funding for top new initiatives in S-risk."

Also, if you might revise this with "15 minutes of thought", that implies that you wrote this detailed, thoughtful comment in comparable time, which seems really impressive.

Haha, no, it took me quite a bit longer to phrase what I wrote, but I didn't have dedicated non-writing thinking time. E.g., the claim about the expected ratio of future assets seems like something I could sanity-check and get a better number for with pen and paper in a few minutes, but I was too lazy to do that :)

(And I can't let false praise of me stand)

edit to also comment on the substantial part of your comment: Yes, that takeaway seems good to me!

edit edit: Although I'd caveat that s-risk is less mature than general longtermism (more "pre-paradigmatic" for people who like that word), so there might be less (obvious) to do for founders/leaders right now and that can be very frustrating. We still always want to hear about such people.

last edit?: And as in general longtermism, if somebody is interested in s-risk and has really high EtG potential, I might sometimes prefer that. Especially given what I said above about founder/leader type people. Something within an order of magnitude or two of FTX F for s-risk reduction would obviously be a huge win for the space and I don't think it's crazy to think that people could achieve that.

Comment: "Very short timelines" might be conflated with "inevitability". 

(The following isn't my idea, I've heard about it several times now. It seems good to share, even though my explanation is really basic.)

For many people with short timelines, it's less that they view AGI as coming in "15 or 50 years", but more that they view the "shape of the path" of the emergence of AGI as inevitable in some deep sense. 

To explain it one way: to these people, watching civilization try to avoid dangerous AGI is sort of like watching a drunkard walk forward through a landscape with deep, dangerous holes. These holes get bigger and bigger as the drunkard walks. 

Eventually, the holes get so big, and develop such vast, slippery slopes, that even a skilled walker won't be able to avoid slipping into them.

To get more "gearsy", these people with negative views believe that AI hardware and models/patterns/training will get much better and widely distributed. Government regulation will be highly inadequate (e.g. due to "moloch") and won't even come close to being effective in preventing or regulating AGI.

If you hold this belief, things look even worse once you consider other civilizations ("grabby aliens"). If aggressive AGI is inevitable, if it's a "lower entropy" state, then it must also be so for any civilization. Even if your civilization manages to escape it, some other will come across it. It then seems likely that some aggressive AGI will always emerge, prevail, and grab the other civilizations.

 

This all might be relevant to S-risk because, even if you can't prevent AGI, you can shape the path by which it emerges, and you might avoid extremely dark S-risk scenarios. 

If you believe AGI is this inevitable, then it is logical to believe you can find it (and that focused efforts can find it ahead of everyone else). This explains why some subset of people might be "trying to find AGI" or taking certain other interventions that might seem wild to someone without these perspectives.

Note that some people with these beliefs might not put that high a probability on S-risk, or even hold specific timelines for AGI. It's more that they view S-risk as extremely bad, in a way that warrants serious attention (certainly more than it gets right now). This is worth pointing out because the actual probability of S-risk might be low, and understanding this lower risk might make the presentation/explanation of this view more effective and reasonable.

Just wanted to add that at 80k we notice a lot of people around who can benefit from these three things, even people who are pretty interested in EA. In fact, I'd say these three things are a pretty good summary of the main value-adds and aims of 80k's one-on-one team.

I really liked this post, one of the best things that I have read here in a while.

+1 for taking weird ideas seriously and considering wide action spaces being underrated.


Thank you, Caleb! 

Really enjoyed this post and the takeaways, which I thought were insightful and ~fairly novel (at least amongst EAG(x) reflections). I'm a big proponent of 3) and definitely think it can be useful to have things written up in advance of the conference, too. People may not be inclined to read it at the conference but at least they'll have something to refer to after!

Thanks for this, Akash!


Miranda, your FB profile & EA profile are great examples of #3 :) 

This is great! Awesome work, Akash!


Thank you, Chana!

I really really loved section 2 of this post!! It articulates a mindset shift that I think is important and valuable, and I've not seen it written out like that before. 


Thanks, Evie!
