
LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.

I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th). 

We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.

About the Fund

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.

In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.

About the Team

  • Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She's also written for the AI alignment newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
  • Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy.
  • Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
  • Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk and crypto much earlier than other comparable communities.

You can find a list of our fund managers in our request for funding here.

Ask Us Anything

We’re happy to answer any questions – marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.

There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.

Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.

If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.

Comments

What fraction of the best projects that you currently can't fund have applied for funding from Open Philanthropy directly? Reading this, it seems that many would qualify.

Why doesn't Open Philanthropy fund these hyper-promising projects if, as one grantmaker writes, they are "among the best historical grant opportunities in the time that I have been active as a grantmaker?" Open Philanthropy writes that LTFF "supported projects we often thought seemed valuable but didn’t encounter ourselves." But since the chair of the LTFF is now a Senior Program Associate at Open Philanthropy, I assume that this does not apply to existing funding opportunities.

I have many disagreements with the funding decisions of Open Philanthropy, so some divergence here is to be expected. 

Separately, my sense is Open Phil really isn't set up to deal with the grant volume that the LTFF is dealing with, in addition to its existing grantmaking. My current guess is that the Open Phil longtermist community building team makes like 350-450 grants a year, in total, with 7-8 full-time staff [edit: previously said 50-100 grants on 3-4 staff, because I forgot about half of the team, I am sorry. I also clarified that I was referring to the Open Phil longtermist community building team, not the whole longtermist part]. The LTFF makes ~250 grants per year, on around 1.5 full-time equivalents, which, if Open Phil were to try to take them on additionally, would require more staff capacity than they have available. 

Also, Open Phil already has been having a good amount of trouble getting back to their current grantees in a timely manner, at least based on conversations I've had with various OP grantees, so I don't think there is a way Open Phil could fill the relevant grant opportunities, without just directly making a large grant to the LTFF (and also, hon... (read more)

Linch
I suspect your figures for Open Phil are pretty off on both the scale of people and the scale of the number of grants. I would guess (only counting people with direct grantmaking authority) OP longtermism would have:

  • 5-6 people on Claire's team (longtermist CB)
  • 1-2 people on alignment
    • (Yes, this feels shockingly low to me as well)
  • 2-5 people on biosecurity
  • 3-6 people on AI governance
  • probably other people I'm missing

Also, looking at their website, it looks like there's a lag for when grants are reported (similar to us), but before May 2023 there appear to be 10-20 public grants reported per month (just looking at their grants database and filtering on longtermism). I don't know how many non-public grants they give out, but I'd guess it's ~10-40% of the total.

To first order, I think it's reasonable to think that OP roughly gives out a similar number of grants to us but at 10-20 times the dollar amount per grant. This is not accounting for how some programs that OP would classify as a single program would be counted as multiple grants by our ontology, e.g. the Century Fellowship.
Habryka
Sorry, I meant to just refer to the Open Phil longtermist community building team, which felt like the team that would most likely be able to take over some of the grant load, and I know much less about the other teams. Edited to correct that. Agree that I underestimated things here. Agree that OP grants are vastly larger, which makes up a good amount of the difference in grant capacity per staff member. It's also the case that OP seems particularly low on AI alignment grant capacity, which is where most of the grants that I am most excited about would fall; that formed a bunch of my aggregate impression.
abergal
[Speaking for myself, not Open Philanthropy] Empirically, I've observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy's programs; I'd estimate around 10%. And my guess is the "best historical grant opportunities" that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of the MATS scholars in particular). More broadly, many grantmakers at Open Philanthropy (including myself, and Ajeya, who is currently the only person full-time on technical AI safety grantmaking), are currently extremely capacity-constrained, so I wouldn't make strong inferences that a given project isn't cost-effective purely on the basis that Open Philanthropy hasn't already funded it. 1. ^  I don’t know exactly which grants this refers to and haven’t looked at our current highest-rated grants in-depth; I’m not intending to imply that I necessarily agree (or disagree) with Habryka’s statement.
AnonymousTurtle
Thank you for the detailed reply; that seems surprisingly little, I hope more apply. Also really glad to hear that OP may fund some of the MATS scholars, as the original post mentioned that "some of [the unusual funding constraint] is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality". Thank you again for taking the time to reply given the extreme capacity constraints.
calebp
Responding specifically to the point about the LTFF chair working at Open Phil: having a chair who works at Open Phil has helped less than one might naively think. My impression is that Open Phil doesn't want to commit to evaluating LTFF applications that the LTFF thinks are good but doesn't have the ability to fund. We are working out how to more systematically share applications going forward in a way that doesn't create an obligation for Open Phil to evaluate them (or the impression that Open Phil has this obligation to the public), but I think that this will look more like Open Phil having the option to look at some grant applications we think are good, as opposed to Open Phil actually checking every application that we share with them.

How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?

Here are my guesses for the most valuable qualities:

  1. Deep technical background and knowledge in longtermist topics, particularly in alignment. 
    1. Though I haven't studied this area myself, my understanding of the history of good funding for new scientific fields (and other forms of research "leadership"/setting strategic direction in highly innovative domains) is that usually you want people who are quite good at the field you want to advance or fund, even if they aren't the very top scientists. 
      1. Basically you might not want the best scientists at the top, but for roles that require complex/nuanced calls in a deeply technical field, you want second-raters who are capable of understanding what's going on quickly and broadly. You don't want your research agenda implicitly set by mediocre scientists, or worse, non-technical people.
    2. Because we give more grants in alignment than other technical fields, I think a deep understanding of alignment and other aspects of technical AI safety should be prioritized over (eg) technical biosecurity or nuclear security or forecasting or longtermist philosophy.
      1. The other skillsets are still valuable ofc, and would be a plus in a fund manager.
  2. Consi
... (read more)

How does the team weigh the interests of non-humans (such as animals, extraterrestrials, and digital sentience) relative to humans? What do you folks think of the value of interventions to help non-humans in the long-term future specifically relative to that of interventions to reduce x-risk?

Linch
I don't think there is a team-wide answer, and there certainly isn't an institutional answer that I'm aware of.

My own position is a pretty-standard-within-EA form of cosmopolitanism, where a) we should have a strong prior in favor of moral value being substrate-independent, and b) we should naively expect people to (wrongly) underestimate the moral value of beings that look different from ourselves. Also, as an empirical belief, I do expect the majority of moral value in the future to be held in minds that are very different from my own. The human brain is just such a narrow target in the space of possible designs, it'd be quite surprising to me if a million years from now the most effective way to achieve value is via minds-designed-just-like-2023-humans, even by the lights of typical 2023-humans. There are some second-order concerns like cooperativeness (I have a stronger presumption in favor of believing it's correct to cooperate with other humans than with ants, or with aliens), but I think cosmopolitanism is mostly correct.

However, I want to be careful in distinguishing the moral value or moral patiency of other beings from their interests. It is at least theoretically possible to imagine agents (eg designed digital beings) with strong preferences and optimization ability but not morally relevant experiences. In those cases, I think there are cooperative reasons to care about their preferences, but not altruistic reasons. In particular, I think the case for optimizing for the preferences of non-existent beings is fairly weak, but the case for optimizing for their experiences (eg making sure future beings aren't tortured) is very strong.

That said, in practice I don't think we often (ever?) get competitive grant applications that specialize in helping non-humans in the LT future; most of our applications are about reducing risks of extinction or other catastrophic outcomes, with a smattering of applications that are about helping individuals and organization
calebp
I think we've funded some work on digital sentience before. I would personally be excited about seeing some more applications in this area. I think marginal work in this area could be competitive with AIS grants if the bar lowers (as I expect it will).
BrownHairedEevee
Thanks for the responses, @Linch and @calebp! There are several organizations that work on helping non-humans in the long-term future, such as Sentience Institute and Center on Long-Term Risk; do you think that their activities could be competitive with the typical grant applications that LTFF gets? Also, in general, how do you folks decide how to prioritize between causes and how to compare projects?
Linch
I'm confused about the prudence of publicly discussing specific organizations in the context of being potential grantees, especially ones that we haven't (AFAIK) given money to.
Linch
Okay, giving entirely my own professional view as I see it, absolutely not speaking for anybody else or the fund writ large: to be honest, I'm not entirely sure what most of these organizations actually do research on, on a day-to-day basis. Here are some examples of what I understand to be the one-sentence pitch for many of these projects:

  • figure out models of digital sentience
  • research on cooperation in large worlds
  • how to design AIs to reduce the risk that unaligned AIs will lead to hyperexistential catastrophes
  • moral circle expansion
  • etc.

Intuitively, they all sound plausible enough to me. I can definitely imagine projects in those categories being competitive with our other grants, especially if and when our bar lowers to where I think the longtermist bar overall "should" be. That said, the specific details of those projects, individual researchers, and organizational structure and leadership matter as well[1], so it's hard to give an answer writ large.

From a community building angle, I think junior researchers who try to work on these topics have a reasonably decent hit rate of progressing to doing important work in other longtermist areas. So I can imagine a reasonable community-building case to fund some talent development programs as well[2], though I haven't done a BOTEC and again the specific details matter a lot.

  1. ^ For example, I'm rather hesitant to recommend funding to organizations where I view the leadership as having a substantially higher-than-baseline rate of being interpersonally dangerous.
  2. ^ I happen to have a small COI with one of the groups, so were they to apply, I will likely recuse myself from the evaluation.

I've heard that you have a large delay between when someone applies to the fund and when they hear back from you. How large is this delay right now? Are you doing anything in particular to address it?

I think last time we checked, it was ~a month in the median and ~2 months on average, with moderately high variance. This is obviously very bad. Unfortunately, our current funding constraints probably make things worse[1], but I'm tentatively optimistic that with a) new guest fund managers, b) more time to come up with better processes (now that I'm onboard ~full-time, at least temporarily), and c) hopefully incoming money (or at least greater certainty about funding levels), we can do somewhat better going forwards.

(Will try to answer other parts of your question/other unanswered questions on Friday).

  1. ^

    Because we are currently doing a mix of a) holding on to grants that are above our old bar but below our current bar while waiting for further funding, and b) trying to refer them to other grantmakers, both of which take up calendar time. Also, the lower levels of funding mean we are, or at least I am, prioritizing other aspects of the job (eg fundraising, public communications) over getting back to applicants quickly.

The level of malfunctioning that is going on here seems severe:

  • The two month average presumably includes a lot of easy decisions, not just hard ones.
  • The website still says LTFF will respond in 8 weeks (my emphasis)
  • The website says they may not respond within an applicant's preferred deadline. But what it should actually say is that LTFF also may not respond within their own self-imposed deadline.
  • And then the website should indicate when, statistically, it does actually tend to give a response.
  • Moreover, my understanding is that weeks after these self-imposed deadlines, you still may have to send multiple emails and wait weeks longer to figure out what is going on.

Given all of the above, I would hope you could aim to get more than "somewhat better", and have a more comprehensive plan of how to get there. I get that LTFF is pretty broke rn and that we need an OpenPhil alternative, and that there's a 3:1 match going on, so probably it makes sense for LTFF to receive some funding for the time being. Also that you guys are trying hard to do good, probably currently shopping around unfunded grants etc. but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.

The website still says LTFF will respond in 8 weeks (my emphasis)

Oof. Apologies, I thought we'd fixed that everywhere already. Will try to fix asap.

but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.

Yeah, I think this is very fair. I do think the funding ecosystem is pretty broken in a bunch of ways, and of course we're a part of that; I'm reminded of Luke Muehlhauser's old comment about how MIRI's operations got a lot better after he read Nonprofit Kit for Dummies.

We are trying to hire a new LTFF chair, so if you or anybody you know is excited to try to right the ship, please encourage them to apply! There are a number of ways we suck, and a new chair could prioritize speed at getting back to grantees as the first thing to fix.

I can also appreciate wanting a new solution rather than via fixing LTFF. For what it's worth people have been consistently talking about shutting down LTFF in favor of a different org[1] approximately since I started volunteering here in early 2022; over the last 18 months I've gotten more pessimistic about replacements, which is one... (read more)

calebp
Fwiw, I think that this is not "very fair". Whilst I agree that we are slower than I'd like and slower than our website indicates, I think it's pretty unclear that Open Phil is generally faster than us; I have definitely heard similar complaints about Open Phil, SFF, and Longview (all the current EA funders with a track record > 1 year). My sense is that Ryan has the impression that we are slower than the average funder, but I don't have a great sense of how he could know this. If we aren't particularly bad relative to some average of funders that have existed for a while, I think the claim that "we don't have it together on a basic level" is pretty unfair.

(After some discussion with Linch, I think we disagree on what "get it together on a basic level" means. One thing that Linch and I both agree on is that we should be setting more accurate expectations with grantees (e.g. in some of the ways Ryan has suggested), though even if we had set more accurate expectations, we would not be having more than 10% more impact.)

Here we say that the LTFF between Jan 22 - April 23:

  • had a median response time of 29 days
  • evaluated >1000 applications
  • recommended ~$13M of funding across >300 grants

Whilst using mostly part-time people (meaning our overheads are very low), dealing with complications from the FTX crash, running always-open general applications (which aim to be more flexible than round-based funds or specialised programs), and making grants in complex areas that don't just directly funge with Open Phil (unlike, for example, Longview's longtermism fund).

It was pretty hard to get a sense of how much grantmaking Open Phil, SFF, Founders Pledge, and Longview have done over a similar timeframe (and a decent amount of what I do know isn't sharable), but I currently think we stack up pretty well. I'm aware that my general tone could leave you with the impression that I am not taking the delays seriously, when I do actually directionally agree. I do think we could be mu

Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.

My impression from what I think is a pretty large sample of EA funders and grants is also that EA Funds is the fastest turnaround time on average compared to the list you mention (which exceptions in some cases in both directions for EA Funds and other funders)

RyanCarey
I think the core of the issue is that there's unfortunately somewhat of a hierarchy of needs for a grantmaking org. That you're operating at size, and in diverse areas, with always-open applications, and using part-time staff is impressive, but people will still judge you harshly if you're struggling to perform your basic service. Regarding these basics, we seem to agree that an Open Phil alternative should accurately represent their evaluation timelines on the website, and should give an updated timeline when the stated grant decision time passes (at least on request). With regard to speed, just objectively, LTFF is falling short of the self-imposed standard - "within eight weeks, and typically in just four weeks". And I don't think that standard is an inappropriate one, given that LTFF is a leaner operation than Open Phil, and afaict, past LTFF, past SFF, and Fast Grants all managed to be pretty quick. That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".

That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".

Just FWIW, this feels kind of unfair, given that like, if our grant volume didn't increase by like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics". 

Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF performance on the basis of such an unprecedented context. My guess is things will settle into some healthy rhythm again when there is a new fund chair, and the basics will be better covered again, when the funding ecosystem settles into more of an equilibrium again.

RyanCarey
Ok, it makes sense that a temporary 5x in volume can really mess you up.

If someone told me about a temporary 5x increase in volume that understandably messed things up, I would think they were talking about a couple month timeframe, not 8 months to 2 years. Surely there’s some point at which you step back and realise you need to adapt your systems to scale with demand? E.g. automating deadline notifications.

It’s also not clear to me that either supply or demand for funding will go back to pre-LTFF levels, given the increased interest in AI safety from both potential donors and potential recipients.

Linch
We already have automated deadline notifications; I'm not sure why you think they'd be especially helpful. One potential hope is that other funders will step up in the longer term so they can reduce LTFF's load; as an empirical matter, I've gotten more skeptical about the short-term viability of such hopes in the last 18 months.[1]

  1. ^ Not long after I started, there were talks about sunsetting LTFF "soon" in favor of a dedicated program to do LTFF's work hosted in a larger longtermist org. Empirically, this still hasn't happened, and LTFF's workload has very much increased rather than decreased.
Rebecca
Partially based on Asya’s comment in her reflections post that there was difficulty keeping track of deadlines, and partially an assumption that the reason some applicants had no communication by their stated time-sensitive deadline was that the deadline wasn't being tracked. It’s good to hear you were keeping track of this, although it's confusing to me that it didn’t help here.
Linch
There are probably process fixes needed in addition to personnel constraints; like once you ignore the first deadline, it becomes a lot easier to ignore future deadlines, both individually and as a cultural matter. This is why I agreed with Ryan on "can't even get it together on a basic level"; certainly as a fund manager I often felt like I didn't have it together on a basic level, and I doubt that this opinion is unique. I think Caleb disagreed because from his vantage point other funders weren't clearly doing better given the higher load across the board (and there's some evidence they do worse); we ended up not settling the question of whether "basic level" should be defined in relation to peer organizations or in relation to how we internally feel about whether and how much things have gone wrong. Probably the thing we want to do (in addition to having more capacity) is clearing out the backlog first and then assigning people to be responsible for other people's deadlines. Figuring this out is currently one of our four highest priorities (but not the highest).
Rebecca
By ‘the above’ I meant my comment rather than your previous one. Have edited to make this clearer.
dan.pandori
I deeply appreciate the degree to which this comment acknowledges issues and provides alternative organizations that may be better in specific respects. It has given me substantial respect for LTFF.
abergal
Hey Ryan:

- Thanks for flagging that the EA Funds form still says that the funds will definitely get back in 8 weeks; I think that's real bad.
- I agree that it would be good to have a comprehensive plan. Personally, I think that if the LTFF fails to hire additional FT staff in the next few months (in particular, a FT chair), the fund should switch back to a round-based application system. But it's ultimately not my call.
NunoSempere
This blogpost of mine, Quick thoughts on Manifund’s application to Open Philanthropy, might be of interest here.
Daniel_Eth
Another reason that the higher funding bar is likely increasing delays: borderline decisions are higher stakes, as we're deciding between higher-EV grants. It seems to me like this is leading to more deliberation per grant, for instance.

Thank you for hosting this! I'll repost a question on Asya's retrospective post regarding response times for the fund.

our median response time from January 2022 to April 2023 was 29 days, but our current mean (across all time) is 54 days (although the mean is very unstable)

I would love to hear more about the numbers and information here. For instance, how did the median and mean change over time? What does the global distribution look like? The disparity between the mean and median suggests there might be significant outliers; how are these outliers addressed? I assume many applications become desk rejects; do you have the median and mean for the acceptance response times?

Continuing my efforts to annoy everyone who will listen with this genre of question, what value of X would make this proposition seem true to you?

It would be better in expectation to have $X dollars of additional funding available in the field in the year 2028 than an additional full time AI safety researcher starting today.

Feel free to answer based on concrete example researchers if desired. Earlier respondents have based their answer on people like Paul Christiano.

I'd also be interested in hearing answers for a distribution of different years or different levels of research impact.

(This is a pretty difficult and high variance forecast, so don't worry, I won't put irresponsible weight on the specifics of any particular answer! Noisy shrug-filled answers are better than none for my purposes.)

Lauro Langosco
This is a hard question to answer, in part because it depends a lot on the researcher. My wild guess for a 90%-interval is $500k-$10m
Daniel_Eth
Annoy away – it's a good question! Of course, standard caveats to my answer apply, but there's a few caveats in particular that I want to flag:

  • It's possible that by 2028 there will be one (or more) further longtermist billionaires who really open up the spigot, significantly decreasing the value of marginal longtermist money at that time
  • It's possible that by 2028, AI would have gotten "weird" in ways that affect the value of money at that time, even if we haven't reached AGI (e.g., certain tech stocks might have skyrocketed by then, or it might be possible to turn money into valuable research labor via AI)
  • You might be considering donation opportunities that significantly differ in value from other large funders in the field
  • This is all pretty opinionated and I'm writing it on the fly, so others on the LTFF may disagree with me (or I might disagree with myself if I thought about it at another time).

In principle, we could try to assign probability distributions to all the important cruxes and Monte Carlo this out. Instead, I'm just going to give my answer based on simplifying assumptions that we still have one major longtermist donor who prioritizes AI safety to a similar amount as today, things haven't gotten particularly weird, your donation opportunities don't look that different from others' and roughly match donation opportunities now,[1] etc.

One marginal funding opportunity to benchmark the value of donations against would be funding the marginal AI alignment researcher, which probably costs ~$100k/yr. Assuming a 10% yearly discount rate (in line with the long-term, inflation-adjusted returns to equities within the US), funding this in perpetuity is equivalent to a lump-sum donation now of $1M, or a donation in 2028 of ($1M)*(1.1^5) = $1.6M.[2] Then the question becomes, how valuable is the marginal researcher (and how would you expect to compare against them)? Borrowing from Linch's piece on the value of marginal grants to the LTFF, the mar
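The discounting arithmetic in the comment above can be sketched in a few lines. This is a toy calculation under the comment's stated assumptions (a ~$100k/yr marginal researcher cost and a 10% yearly discount rate); it is not an official LTFF model.

```python
# Toy sketch of the lump-sum-equivalent arithmetic from the comment above.
# Assumptions (taken from the comment, not official figures): funding a
# marginal alignment researcher costs ~$100k/yr in perpetuity, and money
# is discounted at 10%/yr.

annual_cost = 100_000      # $/yr for the marginal researcher
discount_rate = 0.10       # roughly long-run, inflation-adjusted US equity returns
years_until_2028 = 5

# Present value of a perpetuity paying `annual_cost` each year:
lump_sum_now = annual_cost / discount_rate

# Equivalent donation in 2028, compounding the lump sum forward:
lump_sum_2028 = lump_sum_now * (1 + discount_rate) ** years_until_2028

print(f"today: ${lump_sum_now:,.0f}; 2028: ${lump_sum_2028:,.0f}")
```

This reproduces the comment's figures of a $1M lump sum today and ($1M)*(1.1^5) ≈ $1.6M in 2028.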
porby
Thanks for breaking down details! That's very helpful. (And thanks to Lauro too!)

What are some types of grant that you'd love to fund, but don't tend to get as applications?

Lawrence Chan
I'd personally like to see more well-thought out 1) AI governance projects and 2) longtermist community building projects that are more about strengthening the existing community as opposed to mass recruitment. 

Why did the LTFF/EAIF chairs step down before new chairs were recruited?

Habryka
The LTFF chair at least hasn't stepped down yet! Asya is leaving in October, IIRC, and by then we hope to have found a new chair. I can't comment much on the EAIF. It does seem kind of bad that they didn't find a replacement chair before the current one resigned, but I don't know the details.
3
calebp
(Re the EAIF chair specifically.) We are hoping to publish some more posts about the EAIF soon; this is just an AMA for the LTFF. I am currently acting as the interim EAIF chair and am trying to work out what the strategy of the EAIF should be over the next few months. It's plausible to me that we'll want to make substantive changes, in part due to FTX and shifts in resources between cause areas. Before we hire a chair (or potentially as part of hiring a chair), I am planning to spend some time thinking about this, whilst keeping the EAIF moving with its current strategy.
1
Grumpy Squid
Thanks for the clarification!

I'd love to see a database of waitlisted grant applications publicly posted and endorsed by LTFF, ideally with the score that LTFF evaluators have assigned. Would you consider doing it? 

By waitlisted, I mean those that LTFF would have funded if it wasn't funding constrained.

5
calebp
Is it important to see identifiable information (so that a donor could fund that grant specifically), or are you more interested in the types of projects/grantees we'd like to fund? Here is a fictional example of the thing I have in mind:

Funding Request for AI Safety Community Events

Location: New York

Project Goals:
  • Strengthen communication and teamwork among AI safety researchers via workshops, social gatherings, and research retreats.
  • Promote academic understanding and reach in the realm of AI safety through seminars and workshops.
  • Enhance the skills of budding researchers and students via reading groups, workshops, and tutorials.

Applicant Background:
  • Ongoing 3rd-year PhD with multiple publications in AI safety/alignment.
  • Part of the Quantum Strategies Group and has connections with leading innovators in Quantum Computing.
  • Mentored several students on AI safety projects.
  • Key roles in organizing several AI safety events and workshops.
  • Conducted lectures and tutorials on ethical considerations and technical facets of AI.

Budget: Between £5,000 and £20,000. Costs encompass event necessities like food, venue, travel, and recognition awards for achievements.

Current Funding: A grant of $4,000 from FLI designated for the STAI workshop, with an additional £400 from other sources. No intention to use the funds for personal expenses.

Alternative Funding Sources: Considering application to the Nonlinear Network.
4
Mckiev 🔸
I meant real projects, so that potential donors could fund them directly. Both Manifund and Nonlinear Network gathered applications, but evaluating them remains a challenging task. Having a project publicly endorsed by LTFF would be a strong signal to potential funders, in my opinion.

What kinds of grants tend to be most controversial among fund managers?

4
Habryka
Somewhat embarrassingly, we've been overwhelmed enough with grant requests in the past few months that we haven't had much time to discuss grants, so there hasn't been much opportunity for things to be controversial among the fund managers. But guessing at what kinds of things I disagree most with other people on: grants that are very PR-risky, and grants that are oriented around a theory of change that involves people getting better at thinking and reasoning (e.g. "rationality development") rather than directly helping to solve technical problems or acquire resources for the broader longtermist community, tend to be the two most controversial categories. But again, I want to emphasize that I don't have a ton of data here, since the vast majority of grants are currently just evaluated by one fund manager and then sanity-checked by the fund chair, so there aren't a lot of contexts in which disagreements like this could surface.
2
calebp
I am not sure these are the most controversial, but I have had several conversations when evaluating AIS grants where I disagreed substantively with other fund managers. I think there are some object-level disagreements (what kinds of research do we expect to be productive) as well as meta-level disagreements (like "what should the epistemic process look like that decides what types of research get funded" or "how do our actions change the incentives landscape within EA/rationality/AIS").
2
Linch
I've answered both you and Quadratic Reciprocity here.

What are your AI timelines and p(doom)? Specifically:
1. What year do you think there is a 10%[1] chance that we will have AGI by? (P(AGI by 20XX)=10%).
2. What chance of doom do we have on our current trajectory given your answer to 1? P(doom|AGI in year 20XX).

[I appreciate that your answers will be subject to the usual caveats about definitions of AGI and doom, spread of probability distributions, and model uncertainty, so no need to go into detail on these if pushed for time. Also feel free to give more descriptive, gut-feel answers.]

  1. ^

    I put 50% originally, but think 10% is more salient (recalling last year's blog).

Presumably this will differ a fair bit for different members of the LTFF, but speaking personally, my p(doom) is around 30%,[1] and my median timelines are ~15 years (though with high uncertainty). I haven't thought as much about 10% timelines, but it would be some single-digit number of years.

  1. ^

    Though a large chunk of the remainder includes outcomes that are much "better" than today but which are also very suboptimal – e.g., due to "good-enough" alignment + ~shard theory + etc, AI turns most of the reachable universe into paperclips but leaves humans + our descendants to do what we want with the Milky Way. This is arguably an existential catastrophe in terms of opportunity cost, but wouldn't represent human extinction or disempowerment of humanity in the same way as "doom."

2
Greg_Colbourn
Interesting that you give significant weight to non-extinction existential catastrophes (such as the AI leaving us the Milky Way). By what mechanism would that happen? Naively, all or (especially) nothing seem much more likely. It doesn't seem like we'd have much bargaining power with not perfectly-aligned ASI. If it's something analogous to us preserving other species, then I'm not optimistic that we'd get anything close to a flourishing civilisation confined to one galaxy. A small population in a "zoo"; or grossly distorted "pet" versions of humans; or merely being kept, overwhelmingly inactive, in digital storage, seem more likely.
2
Daniel_Eth
So I'm imagining, for instance, AGIs with some shards of caring about human ~autonomy, but also other (stronger) shards that are for caring about (say) paperclips (also this was just meant as an example). I was also thinking that this might be what "a small population in a 'zoo'" would look like – the Milky Way is small compared to the reachable universe! (Though before writing out my response, I almost wrote it as "our solar system" instead of "the Milky Way," so I was imagining a relatively expansive set within this category; I'm not sure if distorted "pet" versions of humans would qualify or not.)
2
Greg_Colbourn
Why wouldn't the stronger shards just overpower the weaker shards?
1
Greg_Colbourn
Please keep this in mind in your grantmaking.
4
Daniel_Eth
FWIW, I think specific changes here are unlikely to be cruxy for the decisions we make. [Edited to add: I think if we could know with certainty that AGI was coming in 202X for a specific X, then that would be decision-relevant for certain decisions we'd face. But a shift of a few years for the 10% mark seems less decision relevant]
2
Greg_Colbourn
I think it's super decision-relevant if the shift leads you to 10%(+) in 2023 or 2024. Basically I think we can no longer rely on having enough time for alignment research to bear fruit, so we should be shifting the bulk of resources toward directly buying more time (i.e. pushing for a global moratorium on AGI).
2
Linch
Do you have specific examples of mistakes you think we're making, eg (with permission from the applicants) grants we didn't make that we would have made if we had shorter 10% timelines, or grants that we made that we shouldn't have?
4
Greg_Colbourn
I don't know specifics on who has applied to LTFF, but I think you should be funding orgs and people like these: (Maybe there is a bottleneck on applications too.)

If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?

5
Linch
Nah, at least in my own evaluation I don't think Open Phil evaluations take a large role in my evaluation qua evaluation. That said, LTFF has historically[1] been pretty constrained on grantmaker time, so if we think OP evaluation can save us time, obviously that's good. A few exceptions I can think of:

  • I think OP is reasonably good at avoiding types-of-downside-risks-that-I-model-OP-as-caring-about (eg reputational harm), so I tend to spend less time vetting grants for that downside risk vector when OP has already funded them.
  • For grants in technical areas I think OP has experience in (eg biosecurity), if a project has already been funded by OP (or sometimes rejected), I might ask OP for a quick explanation of their evaluation. Often they know key object-level facts that I don't.
  • In the past, OP has given grants to us. I think OP didn't want to both fund orgs and fund us to then fund those orgs, so we reduced evaluation of orgs (not individuals) that OP had already funded. I think switching over from an "OP gives grants to LTFF" model to an "OP matches external donations to us" model hopefully means this is no longer an issue.

Another factor going forwards is that we'll be trying to increase epistemic independence and decrease our reliance on OP even further, so I expect to try to actively reduce how much OP judgments influence my thinking.

  1. ^

    And probably currently as well, though at this very moment funding is a larger concern/constraint. We did make some guest fund manager hires recently, so hopefully we're less time-bottlenecked now. But I won't be too surprised if grantmaker time becomes a constraint again after this current round of fundraising is over.

How should applicants think about grant proposals that are rejected? I find that newer members of the community, especially, can be heavily discouraged by rejections; is there anything you would want to communicate to them?

7
Linch
I don't know how many points I can really cleanly communicate to such a heterogeneous group, and I'm really worried about anything I say in this context being misunderstood or reified in unhelpful ways. But here goes nothing:

  • First of all, I don't know man, should you really listen to my opinion? I'm just one guy, who happened to have some resources/power/attention vested in me; I worry that people (especially the younger EAs) vastly overestimate how much my judgment is worth, relative to their own opinions and local context.
  • Thank you for applying, and for wanting to do the right thing. I genuinely appreciate everybody who applies, whether for a small project or a large one, in the hopes that their work can make the world a better place. It's emotionally hard and risky, and I have a lot of appreciation for the very small number of people who tried to take a step toward making the world better.
  • These decisions are really hard, and we're likely to screw up. Morality is hard, and longtermism by its very nature means worse feedback loops than normal. I'm sure you're familiar with how selections/rejections can often be extremely noisy in other domains (colleges, jobs, etc). There aren't many reasons to think we'll do better, and some key reasons to think we'd do worse. We tried our best to make the best funding decisions we could, given limited resources, limited grantmaker time, and limited attention and cognitive capabilities. It's very likely that we have and will continue to consistently fuck up.
    • This probably means that if you continue to be excited about your project in the absence of LTFF funding, it makes sense to continue to pursue it, either on your own time or while seeking other funding.
  • Funding is a constraint again, at least for now. So earning-to-give might make sense. The wonderful thing about earning-to-give is that money is fungible; anybody can contribute, and probabilistically our grantees and would-be grantees are likely to be people with amon
... (read more)

What disagreements do the LTFF fund managers tend to have with each other about what's worth funding?

5
Linch
I'm answering both this question and Neel Nanda's question in the same comment. As usual, other fund managers are welcome to disagree. :P A few cases that come to mind:

  • When a grant appears to have both high upside and high downside risks (eg red/yellow flags in the applicant, wants to work in a naturally sensitive space, etc).
    • Fund managers often have disagreements with each other on how to weigh upside and downside risks.
  • Sometimes research projects that are exciting according to one (or a few) fund managers are object-level useless for saving the world according to other fund managers.
    • Sometimes a particular fund manager champions a project and inside-view believes it has world-saving potential when other fund managers disagree; sometimes nobody inside-view believes it has world-saving potential, but the project has outside-view indications of being valuable (eg the grantee has done useful work in the past, or has endorsements from people who have), and different fund managers weigh the outside-view evidence more or less strongly.
  • Grants with unusually high stipend asks.
    • Sometimes a better-than-average grant application will ask for a stipend (or other expense) that's unusually high by our usual standards.
    • We have internal disagreements both on the object level of whether approving such grants is a good idea and also on which set of policies or values we ought to use to set salaries ("naive EV" vs "fairness among grantees" vs having an "equal sacrifice" perspective between us and the grantee vs "not wanting to worsen power dynamics" vs "wanting to respect local norms in other fields" etc).
    • For example, academic stipends for graduate students in technical fields are often much lower than salaries in the corporate world. A deliberation process might look like:
      • A. Our normal policy of "paying 70% of counterfactual" would suggest a very high stipend
      • B. But this will be very "out of line" with prevailing
... (read more)
2
Daniel_Eth
To add to what Linch said: anecdotally, it seems like there are more disagreements when the grant's path to impact is less direct (as opposed to, say, AI technical research), such as with certain types of governance work, outreach, or forecasting.

What projects to reduce existential risk would you be excited to see someone work on (provided they were capable enough) that don't already exist?

6
Linch
One thing I'd be interested in seeing is more applications from people outside of the Anglosphere and Western Europe, both for intellectual-diversity reasons and for fairly naive reasons: lower cost of living means we can fund more projects, technical talent in those countries might be less tapped, etc. Sometimes people ask me why we haven't funded many projects by people from developing countries, and (at least in my view) the short answer is that we haven't received that many relevant applications.
4
Daniel_Eth
Personally, I'd like to see more work being done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who would potentially work on alignment but who for one reason or another either get rubbed the wrong way by EA/rationality or just don't vibe with it. And I think we're missing out on a lot of these people's contributions. To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ramps to alignment; I just don't want them to be the ~only on-ramps to alignment. [I realize I didn't answer your question literally, since there are some people working on this, but I figured you'd appreciate an answer to an adjacent question.]

Can grantees return money if their plans change, eg they get hired during a period of upskilling? If so, how often does this happen?

4
Linch
Yep, grantees are definitely allowed to do so and it sometimes happens!  I'll let someone who knows the numbers better answer with stats. 

How do you internally estimate how you compare against OP/SFF/Habryka's new thing/etc.?

So I wish the EA funding ecosystem was a lot more competent than we currently are. Like if we were good consequentialists, we ought to have detailed internal estimates of the value of various grants and grantmakers, models for under which assumptions one group or another is better, detailed estimates for marginal utility, careful retroactive evaluations, etc. 

But we aren't very competent. So here's some lower-rigor takes:

  • My current guess is that of the reasonably large longtermist grantmakers, valued solely in terms of expected longtermist impact per dollar, our marginal grants are at or above the quality of all other grantmakers', for any given time period.
  • Compared to Open Phil longtermism, before ~2021 LTFF was just pretty clearly more funding constrained. I expect this means more triaging for good grants (though iiuc the pool of applications was also worse back then; however I expect OP longtermism to face similar constraints). 
  • In ~2021 and 2022 (when I joined) LTFF was to some degree trying to adopt something like a "shared longtermist bar" across funders, so in practice we were trying to peg our bar to be like Open Phil's.
    • So during that time I'm not sure there's much difference, naivel
... (read more)
1
NunoSempere
Awesome reply, thanks

My sense is that many of the people working on this fund are doing this part time. Is this the case? Why do that rather than hiring a few people to work full time?

1
Lauro Langosco
Yes, everyone apart from Caleb is part-time. My understanding is that LTFF is looking to make more full-time hires (most importantly, a fund chair to replace Asya).
5
Linch
I'm currently spending ~95% of my work time on EA Funds stuff (and am paid to do so), so effectively full-time. We haven't decided how long I'll stay on, but I want to keep working on EA Funds at least until it's in a more stable position (or, less optimistically, until we make a call to wind it down). But this is a recent change; historically, Caleb was the only full-time person.

Any thoughts on Meta Questions about Metaphilosophy from a grant maker perspective? For example have you seen any promising grant proposals related to metaphilosophy or ensuring philosophical competence of AI / future civilization, that you rejected due to funding constraints or other reasons?

4
Linch
(Speaking for myself) It seems pretty interesting. If I understand your position correctly, I'm also worried about developing and using AGI before we're a philosophically competent civilization, though my own framing is more like "man it'd be kind of sad if we lost most of the value of the cosmos because we sent von Neumann probes before knowing what to load the probes with."  I'm confused about how it's possible to know whether someone is making substantive progress on metaphilosophy; I'd be curious if you have pointers.  As a practical matter, I don't recall any applications related to metaphilosophy coming across my desk, or voting on metaphilosophy grants that other people investigated. The closest I can think of are applicants for a few different esoteric applications of decision theory. I'll let others at the fund speak about their experiences.
2
Wei Dai
I guess it's the same as any other philosophical topic: either use your own philosophical reasoning/judgement to decide how good the person's ideas/arguments are, and/or defer to other people's judgements. The fact that there is currently no methodology for doing this that is less subjective and informal is a major reason for me to be interested in metaphilosophy: if we solve metaphilosophy, that will hopefully give us a better methodology for judging all philosophical ideas, assuming the correct solution to metaphilosophy isn't philosophical anti-realism (i.e., philosophical questions don't have right or wrong answers) or something like that.

How do you think about applications to start projects/initiatives that would compete with existing projects? 

2
Linch
From my perspective, they seem great! If there is an existing project in a niche, this usually means that the niche is worth working on. And of course it seems unlikely that any of the existing ways of doing things are close to optimal, so more experimentation is often worthwhile! That said, here are 3 caveats I can think of:

  • If you are working in a space that's already well-trodden, I expect that you're already familiar with the space and can explain why your project is different (if it is different). For example, if you're working on adversarial robustness for AI safety, then you should be very aware that this is a subject that's well-studied both in and outside of EA (eg in academia). So from my perspective, applicants not being aware of prior work is concerning, as is people being aware of prior work but not having a case for why their project is different/better.
    • If your project isn't aiming to be different/better, that's also okay! For example, your theory of change might be "a total of 2 FTE-years have been spent on this research area; I think humanity should spend at least 10+ years on it to mine for more insights, and I'm personally unusually excited about this area."
    • But if that's the case, you should say so explicitly.
  • I'm more hesitant to fund projects entering a space with natural monopolies. For example, if your theory of change is "persuade the Californian government to set standards for mandatory reporting of a certain class of AI catastrophic failures by talking to policymakers[1]", this is likely not something that several different groups can realistically pursue in parallel without stepping on each other's toes.
  • I'm wary of new projects that try to carve out a large space for themselves in their branding and communications, especially when there isn't a good reason to do so. I'm worried about this both in cases where there are already other players in a similar niche, and where there aren't. For example, I think "80,000 Hours" is a better name
... (read more)

How many evaluators typically rate each grant application?

3
Linch
Right now, ~2-3 
3
Daniel_Eth
[personal observations, could be off] I want to add that the number tends to be higher for grants that are closer to the funding threshold or where the grant is a "bigger deal" to get right (eg larger, more potential for both upside and downside) than for those that are more obvious yes/no or where getting the decision wrong seems lower cost.

What are some past LTFF grants that you disagree with?

2
Daniel_Eth
In my personal opinion, the LTFF has historically funded too many bio-related grants and hasn't sufficiently triaged in favor of AI-related work.
2
calebp
Hmm, I think most of these grants were made when EA had much more money (pre-FTX crash), which made funding bio work much more reasonable then than it is right now, by my lights. I think on the current margin, we probably should fund stellar bio work. Also, I want to note that talking negatively about specific applications might be seen as "punching down" or make applying to the LTFF feel higher-risk than an applicant could have reasonably expected, so fund managers may be unwilling to give concrete answers here.
2
Daniel_Eth
I think that's true, but I also notice that I tend to vote lower on bio-related grants than do others on the fund, so I suspect there's still somewhat of a strategic difference of opinion between me and the fund average on that point.
2
Linch
Yeah I tend to have higher uncertainty/a flatter prior about the EV of different things compared to many folks in a similar position; it's also possible I haven't sufficiently calibrated to the new funding environment.

Is there a place to donate to the operations / running of LTFF or the funds in general?

4
Linch
Not a specific place yet! In the past we've asked specific large donors to cover our costs (both paid grantmaker time and operational expenses); going forwards, we'd like to move towards a model where all donors pay a small percentage, but this is not yet enacted. In the meantime, you can make a donation to EA Funds and email us to say you want the donation earmarked for operational expenses. :)

Given the rapid changes to the world that we're expecting to happen in the next few decades, how important do you feel it is to spend money sooner rather than later?

Do you think there is a possibility of money becoming obsolete, which would make spending it now make much more sense than sitting on it and not being able to use it?

This could apply to money in general (with AI concerns), or to any particular currency or store of value.

5
Daniel_Eth
Speaking personally, I think there is a possibility of money becoming obsolete, but I also think there's a possibility of money mattering more, as (for instance) AI might allow for an easier ability to turn money into valuable labor. In my mind, it's hard to know how this all shakes out on net. I think there are reasons for expecting the value of spending to be approximately logarithmic with total spending for many domains, and spending on research seems to fit this general pattern pretty well, so I suspect that it's prudent to generally plan to spread spending around a fair bit over the years. I also just want to note that I wouldn't expect this whole question to affect behavior of the LTFF much, as we decide what grants to fund, but we don't make plans to save/invest money for future years anyways (though, of course, it could affect behavior of potential donors to the LTFF).
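The "approximately logarithmic" claim above can be sketched numerically. This is purely illustrative (the log form and all numbers are assumptions, not an LTFF model): with concave, log-like returns, a fixed budget generates more total value when spread across years than when spent all at once.

```python
import math

budget = 10.0  # hypothetical total budget in $M

def value(spend: float) -> float:
    # Value of spending in a given year, modeled (for illustration) as log(1 + s)
    return math.log(1 + spend)

all_at_once = value(budget)                        # spend everything in year 1
spread = sum(value(budget / 5) for _ in range(5))  # spread evenly over 5 years

print(f"All at once:         {all_at_once:.2f}")  # log(11) ≈ 2.40
print(f"Spread over 5 years: {spread:.2f}")       # 5*log(3) ≈ 5.49
```

Of course this toy model ignores discounting, changing opportunities, and the obsolescence scenarios discussed above; it only shows why concave returns push toward spreading spending out.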

On LessWrong, jacquesthibs asks:

If someone wants to become a grantmaker (perhaps with an AI risk focus) for an organization like LTFF, what do you think they should be doing to increase their odds of success?

3
Linch
On LessWrong, Lauro said: To add to that, I'd expect practice with communication and reasoning transparency and having a broad (not just deep) understanding of other work in your cause area to be quite helpful. Also, to the extent that this is trainable, it's probably good to model yourself as training to become a high-integrity and reasonably uncompromising person now, because of course integrity failures "on the job" are very costly. My thoughts on who could make a good LTFF fund chair might also be relevant.

In light of this (worries about contributing to AI capabilities and safetywashing) and/or general considerations around short timelines, have you considered funding work directly aimed and slowing down AI, as opposed to the traditional focus on AI Alignment work? E.g. advocacy work focused on getting a global moratorium on AGI development in place (examples). I think this is by far the highest impact thing we could be funding as a community (as there just isn't enough time for Alignment research to bear fruit otherwise), and would be very grateful if a fun... (read more)

4
Lawrence Chan
Newbie fund manager here, but: I strongly agree that governance work along these lines is very important; in fact, I'm currently working on governance full time instead of technical alignment research. Needless to say, I would be interested in funding work that aims to buy time for alignment research. For example, I did indeed fund this kind of AI governance work in the Lightspeed Grants S-process. But since LTFF doesn't currently do much, if any, active solicitation of grants, we're ultimately bottlenecked by the applications we receive.
8
calebp
Fwiw I’m pretty unsure of the sign on governance interventions like the above, both at the implementation level and the strategic level. I’d guess that I am more concerned about overhangs than most LTFF members, whilst thinking that the slow-down plans that don’t create compute overhangs are pretty intractable. I don’t think my views are common on the LTFF, though I’ve only discussed them substantially with one other member (Thomas Larsen).
2
Greg_Colbourn
One way of dealing with overhangs is a taboo going along with the moratorium and regulation (we aren't constantly needing to shut down underground human cloning labs). This is assuming that any sensible moratorium will last as long as is necessary - i.e. until there is a global consensus on the safety of running more powerful models (FLI's 6 month suggestion was really just a "foot in the door").
2
Greg_Colbourn
Thank you. This is encouraging. Hopefully there will be more applications soon.

Do you know how/where people usually find out about the LTFF (to apply for funding and to donate)? Are some referral/discovery pathways particularly successful? 

4
calebp
From a brief look at our "How applicant heard about us" question data, I think the breakdown over the top 200 or so applications we have received is something like:

  • EA community (EA Forum, local groups, events, personal connections): 50-60%
  • AI safety/EA programs (SERI-MATS, GovAI, etc.): 10-15%
  • Direct LTFF outreach: 5-10%
  • Recommendations from experienced members: 5-10%
  • LessWrong, 80K Hours: 5-10%
  • Career advising services: <5%
  • Previous applicants/grantees: <5%
  • Online searches: <5%

The EA community seems to be the dominant source, accounting for around half or more of referrals. Focused AI safety/EA programs and direct LTFF outreach collectively account for 15-25%. The remaining sources are minor, each likely representing less than 10%. But this is an approximate estimate given the limitations of the data. The overall picture is that most people hear about LTFF through being part of the broader community.

On LessWrong, jacquesthibs asks:

Are there any plans to fundraise from high net-worth individuals, companies or governments? If so, does LTFF have the capacity/expertise for this? And what would be the plan beyond donations through the donation link you shared in the post?

3
Linch
We do have some plans to fundraise from high net-worth individuals, including doing very basic nonprofit things like offering to chat with some of our biggest past donors, as well as more ambitious targets like actively sourcing and reaching out to HNWs who have (eg) expressed concerns about AGI x-risk/GCRs but have never gotten around to actually donating to any AI x-safety projects. I don't know if we have the expertise for this; to some degree this is an empirical question.

We have no current plans to raise money from companies, governments, or (non-OP) large foundations. I haven't thought about this much at all, but my current weakly-held stance is that a longtermist grantmaking organization is just a pretty odd project for governments and foundations to regrant to. I'd be more optimistic about fundraising and grantwriting efforts from organizations which are larger and have an easier-to-explain direct impact case: ARC, Redwood, FAR, CHAI, MIRI(?), etc.

I think raising money from companies is relatively much more tractable. But before we were to go down that route, I'd need to think a bit more about effects on moral licensing, safetywashing, etc. I don't want to (eg) receive money from Microsoft or OpenAI now on the grounds that it's better for us to have money to spend on safety than for them to spend such $s on capabilities, and then in a few years regret the decision because the nebulous costs of being tied to AI companies[1] ended up being much higher than I initially modeled.

  1. ^

    One advantage of our current ignorance re: donors is that fund managers basically can't be explicitly or subtly pressured to Goodhart on donor preferences, simply because we don't actually know what donor preferences are (and in some cases don't even know who the donors are).

How does your infrastructure look like? In particular, how much are you relying on Salesforce?

6
calebp
We use Paperform for the application form, and Airtable and Google Docs for evaluation infrastructure (making decisions on what we want to fund), along with many Airtable and Zapier automations. EV then uses some combination of Salesforce, Xero, etc. to conduct due diligence, make the payments to grant recipients, and so on. My impression is that we are pretty Salesforce-reliant on the grant admin side, and moving away from that platform would be very costly. We are not Salesforce-reliant at all on the evaluation side. We don't have any internal tooling for making BOTECs; people tend to use whatever system they like for this. I have been using Squiggle recently and quite like it, though it's still fairly high-friction and slow for me for some reason.
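For readers unfamiliar with BOTECs (back-of-the-envelope calculations): tools like Squiggle express estimates as probability distributions rather than point values. As a rough illustration of the idea — with entirely hypothetical numbers, not an actual LTFF calculation — here is a minimal Monte Carlo version in plain Python:

```python
import math
import random

def lognormal_from_90ci(low, high, rng):
    """Sample a lognormal distribution specified by a 90% credible interval.

    The 5th and 95th percentiles of a lognormal sit 1.645 standard
    deviations from the mean in log-space.
    """
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return rng.lognormvariate(mu, sigma)

rng = random.Random(0)

# Hypothetical BOTEC: people meaningfully influenced per small grant.
# Both inputs below are made-up placeholder ranges.
samples = sorted(
    lognormal_from_90ci(50, 500, rng)   # people reached: 90% CI [50, 500]
    * rng.uniform(0.01, 0.05)           # fraction meaningfully influenced
    for _ in range(100_000)
)

mean = sum(samples) / len(samples)
p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"mean {mean:.1f}, 90% CI [{p5:.1f}, {p95:.1f}]")
```

The point of working with distributions rather than point estimates is that the output interval makes the uncertainty explicit, which matters when grant decisions hinge on tail outcomes.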

What kind of criteria or plans do you look for in people who are junior in the AI governance field and looking for independent research grants? Is this a kind of application you would want to see more of?

3
Linch
Past experience with fairly independent research and access to high-quality mentors (so they are less likely to become directionless and/or depressed) are positives for me.
2
Lauro Langosco
Speaking for myself: it depends a lot on whether the proposal or the person seems promising. I'd be excited about funding promising-seeming projects, but I also don't see a ton of low-hanging fruit when it comes to AI gov research.

Can applicants update their application after submitting?

This was an extremely useful feature of Lightspeed Grants, because the strength of my application significantly improved every couple of weeks.

If it’s not a built-in feature, can applicants link to a google doc?

Thank you for answering our questions!

5
Linch
There's no built-in feature, but you can email us or link to Google Docs. As a practical matter, it's much more common for applications to be updated because funding needs changed or the applicant decided to pursue a different project (whether or not the new project needs funding) than because the applicant now looks significantly stronger. You can also reapply after being rejected if your application is now substantially more competitive. I'm hoping that LTFF will become much more efficient going forward, so that applicants feel less practical need to update their applications mid-evaluation. But this is aspirational; in the meantime I can totally see the value in doing this.

I’ll phrase this as a question to not be off-vibe: Would you like to create accounts with AI Safety Impact Markets so that you’ll receive a regular digest of the latest AI safety projects that are fundraising on our platform? 

That would save them time since they don't have to apply to you separately. If their project descriptions leave open any questions you have, you can ask them in the Q&A section. You can also post critiques there, which may be helpful for the project developers and other donors.

Conversely, you can also send any rejected projects our way, especially if you think they’re net-positive but just don’t meet your funding bar.

2
Linch
Thanks for the offer! I think we won't have the capacity (or tbh, money) to really work on soliciting new grants in the next few weeks, but feel free to ping Caleb or me again in, say, a month from now!
2
Dawn Drescher
Will do, thanks!

How small and short can a grant be? Is it possible for a grant to start out small, and then gradually get bigger and source more people if the research area turns out to be significantly more valuable than it initially appeared? If there are very few trustworthy math/quant/AI people in my city, could you help me source some hours from reliable AI safety people in the Bay Area if the research area clearly ends up being worth their time?

2
Linch
In general, yes, it can be arbitrarily short and small. In practice, EV, who does our operations, has said that they prefer we don't make grants <$2,000 (? can't remember the exact number), because the operational overhead per grant might be too high to justify the benefits.

In relation to EA-related content (photography, YouTube videos, documentaries, podcasts, TikTok accounts), what types of projects would you like to see more of?

3
Linch
I don't have strong opinions here on form; naively, I'd prefer some combination of longform work (so the entire message gets across without sacrificing nuance), popularity, and experimentation value. In terms of content, I suspect there's still value left in detailed and nuanced explanations of various aspects of the alignment problem, as well as distillation of the best current work on partial progress towards solutions (including by some of our grantees!).

In general I expect this type of communication to be rather tail-heavy, so the specific person and their fit with the specific project matter heavily. Ideally, I'd want someone who:

* a) has experience (and preferably success) with their target type of communications,
* b) has or can easily acquire a fairly deep understanding of the relevant technical subjects (at all levels of abstraction),
* c) actually likes the work, and
* d) has some form of higher-than-usual-for-the-field integrity (so they won't, e.g., Goodhart on getting more people to sign up for 80k by giving unnuanced but emotionally gripping pitches).

Note that I haven't been following the latest state of the art in either pitches or distillations, so it's possible that ideas I think are good are already quite saturated.

Is there any way for me to print out and submit a grant proposal in paper, non-digital form, also without mailing it? E.g., I send an intermediary to meet one of your intermediaries at some Berkeley EA event or something, and they hand over an envelope containing several identical paper copies of the grant proposal. No need for any conversation, fuss, or awkwardness; the papers can be disposed of afterwards, and normal communication would take place if the grant is accepted. I know it sounds weird, but I'm pretty confident that this mitigates risks of a specific class.

2
calebp
I'd be interested in hearing what specific class of risks this mitigates. I'd be open to doing something like this, but my guess is that the plausible upside won't be high enough to justify the operational overhead. If a project is very sensitive, fund managers have been happy to discuss things in person with applicants (e.g. at EA/rationality events), but we don't have a systematic way to make this happen rn, and it's needed infrequently enough that I don't plan to set one up.

Infohazard policy/commitment? I'd like to make sure that the person who reads the grant takes AI safety seriously, and much more seriously than other x-risks; to me that's the main and only limiting factor. (I don't worry about taking credit for others' ideas, profiting off of knowledge, or sharing info with others as long as the sharing is done in a way that takes AI safety seriously — only that the reader is aligned with AI safety.) I'm worried that my AI-related grant proposal will distract large numbers of people from AI safety, and I think that someone who also prioritizes AI safety would, like me, act to prevent that (consistently enough for the benefits of the research to outweigh the risks).

2
Linch
I think we (especially the permanent fund managers; some of the guest fund managers are very new) are reasonably good at discretion with infohazards. But ultimately we have neither the processes nor the software in place to prevent social or technical breaches with reasonably high confidence. If you are very worried about the infohazard risks of your proposal, I'm not entirely sure what to do, and suspect we'd be a bad place to host such an evaluation. Depending on the situation, it's plausible one of us could advise you on who else to reach out to, likely a funder at Open Philanthropy. This link might also be helpful.
1
Lauro Langosco
FWIW I fit that description, in the sense that I think AI x-risk is higher-probability than other x-risks. I imagine some or most others at LTFF would as well.
2
Linch
I would guess it's more likely than not that this belief is universal at the fund, tbh (e.g., nobody objected to the recent decision to triage ~all of our currently limited funding to alignment grants).