
elifland

www.elilifland.com/

Bio

You can give me anonymous feedback here. I often change my mind and don't necessarily endorse past writings.

Comments

Centre for the Governance of AI does alignment research and policy research. It appears to focus primarily on the former, which, as I've discussed, I'm not as optimistic about. (And I don't like policy research as much as policy advocacy.)

I'm confused: is the claim here that GovAI does more technical alignment research than policy research?

Would you be interested in making quantitative predictions on the revenue of OpenAI/Anthropic in upcoming years, and/or on when various benchmarks like these (and OSWorld, released since that series was created) will be saturated, and/or on when various Preparedness/ASL levels will be triggered?

Want to discuss bot-building with other competitors? We’ve set up a Discord channel just for this series. Join it here.

 

I get "Invite Invalid"

How did you decide to target Cognition? 

IMO it makes much more sense to target AI developers who are training foundation models with huge amounts of compute. My understanding is that Cognition isn't training foundation models and is more of a "wrapper", in the sense that it builds on top of others' foundation models to apply scaffolding and/or fine-tuning with <~1% of the foundation model training compute. Correct me if I'm wrong.

Gesturing at some of the reasons I think that wrappers should be deprioritized:

  1. Much of the risk from scheming AIs routes through internal AI R&D via internal foundation models
  2. Over time, I'd guess that wrapper companies working on AI R&D-relevant tasks, like Cognition, will either get acquired or fade into irrelevancy, since there will be pressure to build AI R&D agents internally (though maybe this campaign is still useful if Cognition gets acquired?)
  3. Accelerating LM agent scaffolding has unclear sign for safety

Maybe the answer is that Cognition was way better than foundation model developers on other dimensions, in which case, fair enough.

Thanks for organizing this! Tentatively excited about work in this domain.

I do think that generating models/rationales is part of forecasting as it is commonly understood (including in EA circles), and certainly don't agree that forecasting by definition means that little effort was put into it!
Maybe the right place to draw the line between forecasting rationales and "just general research" is asking "is the model/rationale for the most part tightly linked to the numerical forecast?" If yes, it's forecasting; if not, it's something else.

 

Thanks for clarifying! Would you consider OpenPhil worldview investigations reports such as Scheming AIs, Is power-seeking AI an existential risk, Bio Anchors, and Davidson's takeoff model forecasting? It seems to me that they are forecasting in a relevant sense and (for all except maybe Scheming AIs?) in the sense you describe, with the rationale tightly linked to a numerical forecast, but they wouldn't fit under the OP forecasting program area (correct me if I'm wrong).

Maybe it's not worth spending too much time on these terminological disputes; perhaps the relevant question for the community is what the scope of your grantmaking program is. If indeed the months-to-year-long reports above wouldn't be covered, then it seems to me that the amount of effort spent is a relevant dimension of what counts as "research with a forecast attached" vs. "forecasting as it's generally understood in EA circles and would be covered under your program". So it might be worth clarifying the boundaries there. If you would indeed consider reports like the worldview investigations ones to fall under your program, then never mind, but it would be good to clarify, as I'd guess most people would not expect that.

Thanks for writing this up, and I'm excited about FutureSearch! I agree with most of this, but I'm not sure framing it as more in-depth forecasting is the most natural, given how people generally use the word forecasting in EA circles (i.e., associated with Tetlock-style superforecasting, often aggregation of very part-time forecasters' views, etc.). It might imo be more natural to think of it as a need for in-depth research, perhaps with a forecasting flavor. Here's part of a comment I left on a draft.

However, I kind of think the framing of the essay is wrong [ETA: I might hedge "wrong" a bit if writing on EAF :p] in that it categorizes a thing as "forecasting" that I think is more naturally categorized as "research", to avoid confusion. See point (2)(a)(ii) at https://www.foxy-scout.com/forecasting-interventions/ ; basically I think calling "forecasting" anything where you slap a number on the end is confusing, because basically every intellectual task/decision can be framed as forecasting.

It feels like this essay is overall arguing that AI safety macrostrategy research is more important than AI safety superforecasting (and the superforecasting is what EAs mean when they say "forecasting"). I don't think the distinction being pointed to here is necessarily whether you put a number at the end of your research project (though I think that's usually useful as well), but rather the difference between deep research projects and Tetlock-style superforecasting.

I don't think they are necessarily independent btw; they might be complementary (see https://www.foxy-scout.com/forecasting-interventions/ (6)(b)(ii) ), but I agree with you that the research is generally more important to focus on at the current margin.

[...] Like, it seems more intuitive to call https://arxiv.org/abs/2311.08379 a research project rather than a forecasting project, even though one of the conclusions is a forecast (because, as you say, the vast majority of the value of that research doesn't come from the number at the end).

Thanks Ozzie for chatting! A few notes reflecting on places I think my arguments in the conversation were weak:

  1. It's unclear what short timelines would mean for AI-specific forecasting. If AI timelines are short, it means you shouldn't forecast non-AI things much, but it's unclear what it means for forecasting AI stuff. There's less time for effects to compound, but you have more info and proximity to the most important decisions. It does discount non-AI forecasting a lot, though, and some flavors of AI forecasting.
  2. I also feel weird about the comparison I made between forecasting and waiting for things to happen in the world. There might be something to it, but I think it is valuable to force yourself to think deeply about what will happen, to help form better models of the world, in order to better interpret new events as they happen.

Just chatted with @Ozzie Gooen about this and will hopefully release the audio soon. I probably overstated a few things / gave a false impression of confidence in a few places in the parent comment (e.g., my tone was probably a little too harsh on non-AI-specific projects); hopefully the audio convo will give a more nuanced sense of my views. I'm also very interested in criticisms of my views and in others sharing competing viewpoints.

Also want to emphasize the clarifications from my reply to Ozzie:

  1. While I think it's valuable to share thoughts about the value of different types of work candidly, I am very appreciative of both people working on forecasting projects and grantmakers in the space for their work trying to make the world a better place (and am friendly with many of them). As I maybe should have made more obvious, I am myself affiliated with Samotsvety Forecasting, and Sage which has done several forecasting projects (and am for the most part more pessimistic about forecasting than others in these groups/orgs). And I'm also doing AI forecasting research atm, though not the type that would be covered under the grantmaking program.
  2. I'm not trying to claim with significant confidence that this program shouldn't exist. I am trying to share my current views on the value of previous forecasting grants and the areas that seem most promising to me going forward. I'm also open to changing my mind on lots of this!

Thanks Ozzie for sharing your thoughts!

A few things I want to clarify up front:

  1. While I think it's valuable to share thoughts about the value of different types of work candidly, I am very appreciative of both people working on forecasting projects and grantmakers in the space for their work trying to make the world a better place (and am friendly with many of them). As I maybe should have made more obvious, I am myself affiliated with Samotsvety Forecasting, and Sage which has done several forecasting projects. And I'm also doing AI forecasting research atm, though not the type that would be covered under the grantmaking program.
  2. I'm not trying to claim with significant confidence that this program shouldn't exist. I am trying to share my current views on the value of previous forecasting grants and the areas that seem most promising to me going forward. I'm also open to changing my mind on lots of this!

Thoughts on some of your bullet points:

2. I think that for further funding in this field to be exciting, funders should really work on designing/developing this field to emphasize the very best parts. The current median doesn't seem great to me, but I think the potential has promise, and think that smart funding can really triple-down on the good stuff. I think it's sort of unfair to compare forecasting funding (2024) to AI Safety funding (2024), as the latter has had much more time to become mature. This includes having better ideas for impact and attracting better people. I think that if funders just "funded the median projects", then I'd expect the field to wind up in a similar place to where it is now - but if funders can really optimize, then I'd expect them to be taking a decent-EV risk. (Decent chance of failure, but some chance at us having a much more exciting field in 3-10 years.)

I was trying to compare previous OP forecasting funding to previous AI Safety funding. It's not clear to me how different these were; sure, OP didn't have a forecasting program, but AI safety was also very short-staffed. And re: the field maturing, idk, Tetlock has been doing work on this for a long time; my impression is that AI safety also had very little effort going into it until the mid-to-late 2010s. I agree that funding of potentially promising exploratory approaches is good, though.

3. I'd prefer funders focus on "increasing wisdom and intelligence" or "epistemic infrastructure" than on "forecasting specifically". I think that the focus on forecasting is over-limiting. That said, I could see an argument to starting from a forecasting angle, as other interventions in "wisdom and intelligence / epistemic infrastructure" are more speculative.

Seems reasonable. I did like that post!

4. If I were deploying $50M here, I'd probably start out by heavily prioritizing prioritization work itself - work to better understand this area and what is exciting within it. (I explain more of this in the wisdom/intelligence post above). I generally think that there's been way too little good investigation and prioritization work in this area.

Perhaps, but I think you gain a ton of info from actually trying to do stuff and iterating. I think prioritization work can sometimes seem more intuitively great than it ends up being, relative to the iteration strategy.

6. I'd like to flag that I think that Metaculus/Manifold/Samotsvety/etc forecasting has been valuable for EA decision-making. I'd hate to give this up or de-prioritize this sort of strategy.

I would love for this to be true! I'm open to changing my mind based on a compelling analysis.

7. I don't particularly trust EA decision-making right now. It's not that I think I could personally do better, but rather that we are making decisions about really big things, and I think we have a lot of reason for humility. When choosing between "trying to better figure out how to think and what to do" vs. "trying to maximize the global intervention that we currently think is highest-EV," I'm nervous about us ignoring the former and going all-in on the latter. That said, some of the crux might be that I'm less certain about our current marginal AI Safety interventions than I think Eli is.

There might be some difference in perceptions of the direct EV of marginal AI Safety interventions. There might also be differences in beliefs in the value of (a) prioritization research vs. (b) trying things out and iterating, as described above (perhaps we disagree on absolute value of both (a) and (b)).

8. Personally, around forecasting, I'm most excited about ambitious, software-heavy proposals. I imagine that AI will be a major part of any compelling story here.

Seems reasonable, though I'd guess we have different views on which ambitious, AI-related, software-heavy projects are most promising.

9. I'd also quickly flag that around AI Safety - I agree that in some ways AI safety is very promising right now. There seems to have been a ton of great talent brought in recently, so there are some excellent people (at very least) to give funding to. I think it's very unfortunate how small the technical AI safety grantmaking team is at OP. Personally I'd hope that this team could quickly get to 5-30 full time equivalents. However, I don't think this needs to come at the expense of (much) forecasting/epistemics grantmaking capacity. 

I think you might be understating how fungible OpenPhil's efforts are between AI safety (particularly the governance team) and forecasting. Happy to chat in DM if you disagree. Otherwise a reasonable point, though you'd ofc still have to do the math to make sure the forecasting program is worth it.

(edit: actually maybe the disagreement is still in the relative value of the work, depending on what you mean by "much" grantmaking capacity)

10. I think you can think of a lot of "EA epistemic/evaluation/forecasting work" as "internal tools/research for EA". As such, I'd expect that it could make a lot of sense for us to allocate ~5-30% of our resources to it. Maybe 20% of that would be on the "R&D" to this part - perhaps more if you think this part is unusually exciting due to AI advancements. I personally am very interested in this latter part, but recognize it's a fraction of a fraction of the full EA resources. 

It seems unclear what should count as internal research for EA; e.g., are you counting the OP worldview investigations team / AI strategy research in general? And re: AI advancements, they both improve the promise of AI for forecasting/epistemics work and shorten timelines, which points toward direct AI safety technical/gov work.
