This is a special post for quick takes by Heramb Podar. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

At this point, we need an 80k page on "What to do after leaving Open AI"

  1. Don't start another AI safety lab

Did something happen?

Appreciate that. I woke up this morning after finally quitting (the new "ChatGPT 5 is your boss" pilot was too much!), about to register "AI4Gud.com" and get in the race, but I have reconsidered based on this excellent advice.

We as a movement do a terrible job of communicating just how hard it can be to get a job in AI safety. We also cause people to anchor on and rely too heavily on EA resources, which sets unrealistic expectations and isn't fair to them.

Any suggestions?

So, some high-level suggestions based on my interactions with other people are:

  1. Being more explicit about this in 80K hours calls and talking about the funding bar (potentially by connecting people with grantmakers or intro'ing them to successful candidates who do independent stuff). Maybe organisations could explicitly state this in their fellowship/intern/job applications: "Only 10 out of 300 applicants got selected last year" so that people don't over-rely on a few applications.
  2. There is a very obvious point that community builders can only do so much, because their general job is to point resources out and set initial things rolling. I think community builders being vocal about this from an early point is important. This could look like, "Hey, I only know as much as you do now that you have read AGI SF and Superintelligence." Community builders could also try connecting with slightly more senior people and doing intros on a selective basis (e.g., I know a few good community builders who go out of their way at an EAGx to score convos with such people).
  3. I think metrics for 80K and CBs need to be more heavily weighted (if they aren't already) towards "X went on to do an internship and publish a paper" and away from "this guy read Superintelligence and did a fun hackathon". The latter also creates weird sub-incentives for community members to score brownie points with CBs and make a lot of movement with little impactful progress.
  4. Creating your own opportunities seems rarely talked about in EA circles: there is a lot of talk about finding opportunities and overwhelming newcomers with EA org webpages, which, coupled with neglectedness, causes them to overestimate the opportunities available. Maybe there could be a guide for this, or some sort of group/support for it?
  5. For early-career folks, maybe there could be some sort of peer buddy system where people who are a little further down the road get matched with newcomers to collaborate/talk. A lot of these conversations require safe spaces, building trust, and talking about really sensitive issues (like finances, runway planning, and critical feedback on applications). I have been lucky to build such a circle within EA, but I recognize that's only because of certain opportunities I got early on, along with being comfortable reaching out to people, which not everyone is.
  6. We need to identify more proactive people who already have a track record of social impact/being driven by certain kinds of research, instead of just high-potential people: these are probably the only people who will actually convert to returns for the movement (very crudely speaking). This is even more true in non-EA hubs, where good connections aren't just one local meetup away as they are in NYC or Oxford. I think there is a higher attrition rate of high-potential people in LMICs, at least partly due to this.

I agree, and I think part of the problem is giving a false impression of what kind of training or experience is most useful. You mention this in the over-reliance on EA resources, which I think is a major problem. It's especially an issue when someone applies for AI safety jobs outside of the EA sphere, because they're competing with people with a much wider range of experience.

I've always felt there should be a 'useful non-EA courses/resources' guide.

I think I'm in some ways confused about this. It's true that the hiring situation is hard, but my priors say that this is likely to change fast[1] and that the downside risk for many people is probably low: especially on the technical side, time spent upskilling for AI safety is probably not time completely wasted for the industry at large.[2]

Are there any particular things you think we could do better? One could be simply being less quick to suggest AI safety as a career path to people who might be at risk of financial hardship as a result. Career guides in general do seem very oriented towards people who can afford to spend months unemployed, upskilling or job hunting.[3]

  1. ^

    Both as a result of higher funding and of people founding a lot of orgs whenever there is both excess talent and a funding overhang.

  2. ^

    Especially ML-wise. But this is probably less true (if at all) for people upskilling in AI Safety policy and governance, strategy and fieldbuilding, etc.

  3. ^

    Something something rich western countries

my priors say that this is likely to change fast

I predict scaling up organizations will be slow and painful rather than easy and fast. I predict organizations will have a hard time scaling fast while staying productive, though they may manage one or the other.

This seems mostly independent of funding.

(I don't particularly disagree with the rest of the comment.)

I think this would be more the result of new orgs rather than bigger orgs? Like, I would argue that we currently don't have anything near the optimal number of orgs dedicated to training programs, and as funding increases, we will probably get a lot more of them.

I'd also predict something similar about founding well-functioning orgs.

Doing things is hard.

It's interesting how we also have scope neglect of key historical events:

Death toll: Siege of Athens (86 BC) ≈ people killed in Hiroshima <<< Battle of Stalingrad

A lot of policy research seems to be written with an agenda in mind, to shape the narrative. This defeats the point of policy research, which is supposed to inform stakeholders, not actively convince or nudge them.

This might cause polarization on some topics and is, in itself, probably eroding the legitimacy of the space.

I have seen similar concerning parallels in the non-profit space, where some third-sector actors endorse or do things which they see as good but which destroy trust in the whole space.

This gives me scary unilateralist's curse vibes.

I feel like being exposed early on to longer-form GovAI-type reports has made me set the bar high for writing my thoughts out in short form, which really sucks from an output standpoint.

People in EA end up optimizing for EA credentials so they can virtue-signal to grantmakers, but grantmakers would probably prefer people to scope out non-EA opportunities, because that lets us introduce new people to the concerns we have.

For policy recommendations, put forth things that actually build on or move the status quo.

For example, recommending a "National Youth Council" without a mandate can be a uniquely bad idea: instead of ignoring your (usually inactive) youth org, policymakers will now ignore the Council of (usually inactive) youth orgs, all while you (the actually proactive person) walk away with the false notion of a job well done.

I find the Biden chip export controls a step in the right direction, and they also made me update my world model towards compute governance being an impactful lever. However, I am concerned that our goals aren't aligned with theirs; US policymakers' incentive right now is to curb China's tech growth (plus trade-war reasons), not to pause AI.

This optimization for different incentives is probably going to create some split between US policymakers and AI safety folks as time goes on.

It also makes China more likely to treat this as a tech race, which sets up interesting competitive race dynamics between the US and China that I don't see talked about enough.

After talking and working for some time with non-EA organisations in the AI policy space, I believe we need to give more credence to the here-and-now of AI safety policy as well, to get the attention of policymakers and get our foot in the door. That also gives us space to collaborate with other think tanks and organisations outside the x-risk space that are proactive and committed to AI policy. Right now, a lot of those people see x-risks as fringe and radical (and these are people who are supposed to be on our side).

Governments tend to move slowly, with due process, and in small increments (think, "We are going to first maybe do some risk monitoring, and only then auditing"). Policymakers are only visionaries with horizons until the end of their terms (no surprise). Usually, broad strokes in policy require precedents of a similar size to be feasible within a policymaker's agenda and the Overton window.

Every group that comes to a policy meeting thinks that their agenda item is the most pressing because, by definition, most of the time, contacting and getting meetings with policymakers means that you are proactive and have done your homework.

I want to see more EAs respond to Public Voice Opportunities, for instance- something I rarely hear on the EA forum or via EA channels/material. 

This is a really good point, and perhaps the number one mistake I see in this area. People also forget that policy changes have colossal impacts on very complex human systems: the bigger the change, the bigger the impact. A small step is a lot easier for end users to stomach than a large one.

I often advise thinking about the cost and effort difference between "I have to re-wallpaper one wall" and "I need to tear my house down to the foundations and rebuild it".

That said, I think a lot of it is because actual policy work is super hard to break into and gain experience in. There needs to be more training available to people, particularly early-career researchers.

The problem with AI safety policy is that if we don't specify and attempt to answer the technical concerns, someone else will, and they will safety-wash the concerns away.

CSOs need to understand what they themselves mean when they say "explainable" and "algorithmic transparency."

It's important to think about the policy space in terms of the meta-level incentives/factors that might get in the way of having an impact, such as making AI safer.

One I heard today was that policy people thrive in moments of regulatory uncertainty, while this is bad for companies.

I see way too many people confusing movement with progress in the policy space. 

There can be a lot of drafts becoming bills that still leave significant room for regulatory capture in the specifics, which will be decided later on. Take risk levels, for instance, which are subjective: lots of legal leeway for companies to exploit.

The real danger isn't just from AI getting better; it's from it getting good enough that humans start over-relying on it and offloading tasks to it.

Remember that Petrov had automatic detection systems, too; he just independently came to the conclusion not to fire nukes back.

Communicating by keeping human rights at the centre of AI Policy discussion is extremely underappreciated.
E.g., the UN Human Rights chief in 2021 called for a moratorium on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place.

Respect for human rights is a well-established central norm; leverage it.

Everyone writing policy papers or doing technical work seems to keep generative AI at the back of their mind when framing their work or impact.


This narrow focus on gen AI might well be net-negative for us: we unknowingly or unintentionally ignore ripple effects of the gen AI boom in other fields (like robotics companies getting more funding, leading to more capabilities, which leads to new types of risks).


And guess who benefits if we do end up getting good evals/standards in place for gen AI? It seems to me companies/investors are the clear winners, because we have to go back to the drawing board and advocate for the same kind of standards for robotics or a different kind of AI use case, all while the development/capability cycles keep maturing.

We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.

I don't think we have a good answer to what happens after we do auditing of an AI model and find something wrong.


Given that our current understanding of AI's internal workings is at least a generation behind, it's not exactly like we can isolate which mechanism is causing certain behaviours. (Would really appreciate any input here; I see very little to no discussion of this in governance papers. It's almost as if policy folks are oblivious to the technical hurdles that await working groups.)

At some point, one has to ask, "Am I in the cause area because I am an EA, or am I in EA because I am in the cause area?"

With open-source models being released and on-ramps to downstream innovation lowering, the safety challenges may not be a single threshold but rather an ongoing, iterative cat-and-mouse game.

This just underscores the importance of people in the policy/safety field thinking far ahead.

It's frankly quite concerning that technical specifications are usually only worked on by working groups after high-level qualitative goals have been set by policymakers; this seems to open a can of worms of differing interpretations and safety-washing.

Updated away from this generally; there is a balance.
A good example of why I updated away is 28:27 in the video at:

Agreeing on building safe, trustworthy, and human-centric AI is akin to making an open call for DIY definitions across different regulatory environments.
