Akash

3948 karma · Joined Oct 2020

Bio

AI safety governance/strategy research & field-building.

Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.

Comments (105)

Akash · 4mo

Congrats to Zach! I realize this is mostly meant as a "quick update/celebratory post", but I feel like there's a missing mood that I want to convey in this comment. Note that my thoughts mostly come from an AI safety perspective, so they may be less relevant for folks who focus on other cause areas.

My impression is that EA is currently facing an unprecedented amount of PR backlash, as well as some solid internal criticism from core EAs who are now distancing themselves from EA. I suspect this will continue into 2024. Some examples:

  • EA has acquired several external enemies as a result of the OpenAI coup. I suspect that investors/accelerationists will be looking for ways to (further) damage EA's reputation.
  • EA is acquiring external enemies as a result of its political engagements. There have been a few news articles recently criticizing EA-affiliated or EA-influenced fellowship programs and think-tanks.
  • EA is acquiring an increasing number of internal critics. Informally, I feel like many people I know (myself included) have become increasingly dissatisfied with the "modern EA movement" and "mainstream EA institutions". Examples of common criticisms include "low integrity/low openness", "low willingness to critique powerful EA institutions", "low willingness to take actions in the world that advocate directly/openly for beliefs", "coziness with AI labs", "general slowness/inaction bias", and "lack of willingness to support groups pushing for concrete policies to curb the AI race." (I'll acknowledge that some of these are more controversial than others and could reflect genuine worldview differences, though even so, my impression is that they're meaningfully contributing to a schism in ways that go beyond typical worldview differences).

I'd be curious to know how CEA is reacting to this. The answer might be "well, we don't really focus much on AI safety, so we don't really see this as our thing to respond to." The answer might be "we think these criticisms are unfair/low-quality, so we're going to ignore them." Or the answer might be "we take X criticism super seriously and are planning to do Y about it."

Regardless, I suspect that this is an especially important and challenging time to be the CEO of CEA. I hope Zach (and others at CEA) are able to navigate the increasing public scrutiny & internal scrutiny of EA that I suspect will continue into 2024.

Do you know anything about the strategic vision that Zach has for CEA? Or is this just meant to be a positive endorsement of Zach's character/judgment? 

(Both are useful; just want to make sure that the distinction between them is clear). 

I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe).

If you want to engage further, here are some things I'd be excited to hear from you:

  • What are a few specific comms/advocacy opportunities you're excited about//have funded?
  • What are a few specific comms/advocacy opportunities you view as net negative//have actively decided not to fund?
  • What are a few examples of hypothetical comms/advocacy opportunities you've been excited about?
  • What do you think about, e.g., Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups that are engaging in AI comms or advocacy?

I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics.

In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.

Answer by Akash · Dec 16, 2023

I expect that your search for a "unified resource" will be unsatisfying. I think people disagree enough on their threat models/expectations that there is no real "EA perspective".

Some things you could consider doing:

  • Have a dialogue with 1-2 key people you disagree with
  • Pick one perspective (e.g., Paul's worldview, Eliezer's worldview) and write about where you disagree with it.
  • Write up a "Matthew's worldview" doc that focuses more on explaining what you expect to happen and isn't necessarily meant as a "counterargument" piece. 

Among the questions you list, I'm most interested in these:

  • How bad human disempowerment would likely be from a utilitarian perspective
  • Whether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humans
  • How likely AIs are to kill every single human if they are unaligned with humans
  • How society is likely to respond to AI risks, and whether they'll sleepwalk into a catastrophe

Thanks for this overview, Trevor. I expect it'll be helpful. I also agree with your recommendations for people to consider working at standard-setting organizations and other relevant EU offices.

One perspective that I see missing from this post is what I'll call the advocacy/comms/politics perspective. Some examples of this with the EU AI Act:

  • Foundation models were going to be included in the EU AI Act, until France and Germany (with lobbying pressure from Mistral and Aleph Alpha) changed their position.
  • This initiated a political/comms battle between those who wanted to exclude foundation models (led by France and Germany) and those who wanted to keep them in (led by Spain).
  • This political fight rallied lots of notable figures, including folks like Gary Marcus and Max Tegmark, who pushed publicly and privately to keep foundation models in the act.
  • There were open letters, op-eds, and certainly many private attempts at advocacy.
  • There were attempts to influence public opinion, pieces that accused key lobbyists of lying, and a lot of discourse on Twitter.

It's difficult to know the impact of any given public comms campaign, but it seems quite plausible to me that many readers would have more marginal impact by focusing on advocacy/comms than focusing on research/policy development.

More broadly, I worry that many segments of the AI governance/policy community might be neglecting to think seriously about what ambitious comms/advocacy could look like in the space.

I'll note that I might be particularly primed to bring this up now that you work for Open Philanthropy. I think many folks (rightfully) critique Open Phil for being too wary of advocacy, campaigns, lobbying, and other policymaker-focused activities. I'm guessing that Open Phil has played an important role in shaping both the financial and cultural incentives that (in my view) lead to an overinvestment in research and an underinvestment in policy/advocacy/comms.

(I'll acknowledge these critiques are pretty high-level and I don't claim that this comment provides compelling evidence for them. Also, you only recently joined Open Phil, so I'm of course not trying to suggest that you created this culture, though I guess now that you work there you might have some opportunities to change it).

I'll now briefly try to do a Very Hard thing, which is to "put myself in Trevor's shoes and ask what I actually want him to do." One concrete recommendation I have is something like "try to spend at least 5 minutes thinking about ways in which you or others around you might be embedded in a culture that has blind spots to some of the comms/advocacy stuff." Another is "make a list of the people you read or talked to when writing this post. Then ask whether there were any other people/orgs you could've reached out to, particularly those that focus more on comms+advocacy." (Also, to be clear, you might do both of these things and conclude "yea, actually I think my approach was very solid and I just had Good Reasons for writing the post the way I did.")

I'll stop here since this comment is getting long, but I'd be happy to chat further about this stuff. Thanks again for writing the post and kudos to OP for any of the work they supported/will support that ends up increasing P(good EU AI Act goes through & gets implemented). 

I'm excited to see the EAIF share more about their reasoning and priorities. Thank you for doing this!

I'm going to give a few quick takes; happy to chat further about any of these. TLDR: I recommend (1) getting rid of the "principles-first" phrase and (2) issuing more calls for proposals focused on the specific projects you want to see (regardless of whether or not they fit neatly into an umbrella term like "principles-first").

  • After skimming the post for 5 minutes, I couldn't find a clear/succinct definition of what "principles-first" actually means. I think it means something like "focus more on epistemics and core reasoning" and "focus less on specific cause areas". But then some of the example projects Caleb is excited about are basically just "get people together to think about a specific cause area, but not one of the mainstream ones; one of the more neglected ones."
  • I find the "principles-first" frame a bit icky at first glance. Something about it feels... idk... just weird and preachy or something. Ok, what's actually going on there?
    • Maybe part of it is that it seems to imply that people who end up focusing on specific cause areas are not "principles-first" people, or like in the extreme case they're not "good EAs". And then it paints a picture for me where a "good EA" is one who spends a bunch of time doing "deep reasoning", instead of doing cause-area-specific work. Logically, it's pretty clear to me that this isn't what the posters are trying to say, but I feel like that's part of where the system 1 "ick" feeling is coming from.
  • I worry that the term "principles-first EA" might lead to a bunch of weird status things and a bunch of unhelpful debates. For me, the frame naturally invokes questions like "what principles?" and "who gets to decide what those principles are?" and all sorts of "what does it truly mean to be an EA?" kinds of questions. Maybe the posters think that, on the margin, more people should be asking these questions. But I think that should be argued for separately– if EAIF adopts this phrase as their guiding phrase, I suspect a lot of people will end up thinking "I need to understand what EAIF thinks the principles of EA are and then do those things".
  • Personally, I don't think the EAIF needs to have some sort of "overarching term" that summarizes what it is prioritizing. I think it's quite common for grantmaking organizations to just say "hey, here's a call for proposals with some examples of things we're excited about."
  • Personally, I'm very excited about the projects that Caleb listed in the appendix. Some of these don't really seem to me to fall neatly under the "principles-first" label (a bunch of them just seem like "let's do deconfusion work or make progress in specific areas that are important and highly neglected").
  • Historically, my impression is that EAIF hasn't really done many calls for proposals relating to specific topics. It has been more like "hey anyone with any sort of meta idea can apply." I'm getting the sense from this post that Caleb wants EAIF to have a clearer focus. Personally, I would encourage EAIF to do more "calls for proposals" focused on specific projects that they want to see happen in the world. As an example, EAIF could say something like "we are interested in seeing proposals about acausal trade and ethics of digital minds. Here are some examples of things you could do."
    • I think there are a lot of "generally smart and agentic people" around who don't really know what to do, and some guidance from grantmakers along the lines of "here are some projects that we want to see people apply to" could considerably lower the amount of agency/activation energy/confidence/inside-viewness that such people need.
    • On the flip side, we'd want to avoid a world in which people basically just blindly defer to grantmakers. I don't expect calls for proposals to contribute much to that, and I also suspect there's a longer conversation to be had about how to avoid these negative cultural externalities.

Personally, I still think there is a lot of uncertainty around how governments will act. There are at least some promising signs (e.g., UK AI Safety Summit) that governments could intervene to end or substantially limit the race toward AGI. Relatedly, I think there's a lot to be done in terms of communicating AI risks to the public & policymakers, drafting concrete policy proposals, and forming coalitions to get meaningful regulation through. 

Some folks also have hope that internal governance (lab governance) could still be useful. I am not as optimistic here, but I don't want to rule it out entirely. 

There's also some chance that we end up getting more concrete demonstrations of risks. I do not think we should wait for these, and I think there's a sizable chance we do not get them in time, but I think "have good plans ready to go in case we get a sudden uptick in political will & global understanding of AI risks" is still important. 

I would be interested in seeing your takes about why building runway might be more cost-effective than donating.

Separately, if you decide not to go with 10% because you want to think about what is actually best for you, I suggest you give yourself a deadline. Like, suppose you currently think that donating 10% would be better than the status quo. I suggest doing something like "if I have not figured out a better solution by Jan 1 2024, I will just do the community-endorsed default of 10%."

I think this protects against some sort of indefinite procrastination. (Obviously less relevant if you never indefinitely procrastinate on things like this, but my sense is that most people do at least sometimes).
