Hi everyone!
Managers of the EA Infrastructure Fund will be available for an Ask Me Anything session. We'll start answering questions on Friday, June 4th, though some of us will only be able to answer questions the week after. If you would like to make sure that all fund managers can consider your question, you may want to post it by early Friday morning (UK time).
What is the EA Infrastructure Fund?
The EAIF is one of the four EA Funds. While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities.
Who are the fund managers, and why might you want to ask them questions?
The fund managers are Max Daniel, Michelle Hutchinson, and Buck Shlegeris. In addition, EA Funds Executive Director Jonas Vollmer is temporarily taking on chairperson duties, advising, and voting consultatively on grants. Ben Kuhn was a guest manager in our last grant round. They will all be available for questions, though some may have spotty availability and might post their answers as they have time throughout next week.
One particular reason why you might want to ask us questions is that we are all new in these roles: All fund managers of the EAIF have recently changed, and this was our first grant round.
What happened in our most recent grant round?
We have made 26 grants totalling about $1.2 million. They include:
- Two grants totalling $139,200 to Emma Abele, James Aung, Bella Forristal, and Henry Sleight. They will work together to identify and implement new ways to support EA university groups – e.g., by giving high-quality introductory talks about EA and creating other content for workshops and events. University groups have historically been one of the most important sources of highly engaged EA community members, and we believe there is significant untapped potential for further growth. We are also excited about the team, based significantly on their track record – e.g., James and Bella previously led two of the most successful university groups globally.
- $41,868 to Zak Ulhaq to develop and implement workshops aimed at helping highly talented teenagers apply EA concepts and quantitative reasoning to their lives. We are excited about this grant because we generally think that educating pre-university audiences about EA-related ideas and concepts could be highly valuable; e.g., we’re aware of (unpublished) survey data indicating that in a large sample of highly engaged community members who learned about EA in the last few years, about ¼ had first heard of EA when they were 18 or younger. At the same time, this space seems underexplored. Projects that are mindful of the risks involved in engaging younger audiences therefore have a high value of information – if successful, they could pave the way for many more projects of this type. We think that Zak is a good fit for efforts in this space because he has a strong technical background and experience with both teaching and EA community building.
- $5,000 to the Czech Association for Effective Altruism to give away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar. We believe that this is a highly cost-effective way to engage a high-value audience; long-form content allows for deep understanding of important ideas, and surveys typically find books have helped many people become involved with EA (e.g., in the 2020 EA Survey, more than ⅕ of respondents said a book was important for getting them more involved).
- $248,300 to Rethink Priorities to allow Rethink to take on nine research interns (7 FTE) across various EA causes, plus support for further EA movement strategy research. We have been impressed with Rethink’s demonstrated ability to successfully grow their team while maintaining a constant stream of high-quality outputs, and think this puts them in a good position to provide growth opportunities for junior researchers. They also have a long history of doing empirical research relevant to movement strategy (e.g., the EA survey), and we are excited about their plans to build upon this track record by running additional surveys illuminating how various audiences think of EA and how responsive they are to EA messaging.
For more detail, see our payout report. It covers all grants from this round and provides more detail on our reasoning behind some of them.
The application deadline for our next grant round will be the 13th of June. After this round is wrapped up, we plan to accept rolling applications.
Ask any questions you like; we'll respond to as many as we can.
---------------
Re your 19 interventions, here are my quick takes on all of them:
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor others in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, e.g. by mentoring someone for an hour or two a week on a project that the mentor finds inside-view interesting (and will therefore actually be bought into helping with). I think that in situations like this, it's very helpful for the mentor to be judged, as Andrew Grove suggests, by the output of their organization plus the output of neighboring organizations under their influence. That is, mentors should treat it as one of their key goals that their research interns do things the mentors actually think are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.
Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
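To sanity-check that arithmetic, here's a minimal back-of-the-envelope sketch. It uses only the figures stated above (2 hours of grantmaker time per grant, 4-month grants, 40 people); the conversion of 4 months to roughly 17 weeks is my own added assumption.

```python
# Back-of-the-envelope check of the grantmaking-time estimate above.
# Stated assumptions: 2 hours per grant (30 min with the grantee,
# 30 min with the mentor, 60 min overhead), 40 grantees, 4-month grants.
# Added assumption: 4 months is roughly 17 weeks.

hours_per_grant = 0.5 + 0.5 + 1.0      # grantee call + mentor call + overhead
grants_per_cycle = 40
weeks_per_cycle = 4 * 52 / 12          # ~17.3 weeks in 4 months

total_hours = hours_per_grant * grants_per_cycle   # 80 hours per cycle
hours_per_week = total_hours / weeks_per_cycle     # ~4.6 hours per week

print(f"{total_hours:.0f} hours per cycle ≈ {hours_per_week:.1f} hours/week")
# Output: 80 hours per cycle ≈ 4.6 hours/week — consistent with the
# "five hours a week for 40 people" figure above.
```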
I think that grantmaking capacity is more of a bottleneck for things other than research output.
I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I'm not confident.
The post doesn't seem to exist yet so idk
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.
I feel pessimistic, but idk maybe elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.
I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.
I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.
I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Seems cool. I think a major bottleneck here is people who are extremely extroverted, have lots of background, and are willing to spend a huge amount of time talking to a huge number of people. I think that the job "spend many hours a day talking, for 30 minutes each, with EAs who aren't as well connected as would be ideal, in the hope of answering their questions, connecting them to people, and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.
Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
I'm not sure; it seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs.
I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.
Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are, e.g., in the top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that people who are between the 25th and 90th percentile of intellectual promise among students at top schools should first try to acquire some specific and useful skill (like programming really well, doing machine learning, doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers, and they then have more of a value proposition to those senior researchers later.
This seems almost entirely useless; I don't think this would help at all.
Seems like a good use of someone's time.
---------------
This was a pretty good list of suggestions. I guess my takeaways from this are: