
I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but, I prefer that discussion to be public.

 

I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement services should do the same.

(I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear)

I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work, that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into)

There plausibly should be some kind of path to get back into good standing with the AI Risk community, although it feels difficult to imagine how to navigate that, given how adversarial OpenAI's use of NDAs was, and how difficult that makes it to trust future commitments. 

The things that seem most significant to me:

  • They promised the superalignment team 20% of their compute-at-the-time (which AFAICT wasn't even a large fraction of their compute over the coming years), but didn't provide anywhere close to that, and then disbanded the team when Leike left.
  • Their widespread use of non-disparagement agreements, with non-disclosure clauses, which generally makes it hard to form accurate impressions about what's going on at the organization. 
  • Helen Toner's description of how Sam Altman wasn't forthright with the board. (i.e. "The board was not informed about ChatGPT in advance and learned about ChatGPT on Twitter. Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member, giving false information about the company’s formal safety processes on multiple occasions. And relating to her research paper, that Altman in the paper’s wake started lying to other board members in order to push Toner off the board.")
  • Hearing from multiple ex-OpenAI employees that OpenAI safety culture did not seem on track to handle AGI. Some of these are public (Leike, Kokotajlo), others were in private. 

This is before getting into more open-ended arguments like "it sure looks to me like OpenAI substantially contributed to the world's current AI racing" and "we should generally have a quite high bar for believing that the people running a for-profit entity building transformative AI are doing good, instead of causing vast harm, or at best, being a successful for-profit company that doesn't especially warrant help from EAs."

I am generally wary of AI labs (e.g. Anthropic and DeepMind), and think EAs should be less optimistic about working at large AI orgs, even in safety roles. But I think OpenAI has demonstrably messed up, badly enough, publicly enough, and in enough ways that it feels particularly wrong to me for EA orgs to continue to give them free marketing and resources. 

I'm mentioning 80k specifically because their job board seemed like the largest funnel of EA talent, and because it seemed better to pick a specific org than a vague "EA should collectively do something" (see: EA should taboo "EA should"). I do think other orgs that advise people on jobs or give platforms to organizations (e.g. the organization fair at EA Global) should also delist OpenAI.

My overall take is something like: it is probably good to maintain some kind of intellectual/diplomatic/trade relationships with OpenAI, but bad to continue giving them free extra resources, or treat them as an org with good EA or AI safety standing. 

It might make sense for some individuals to work at OpenAI, but doing so in a useful way seems very high skill, and high-context – not something to funnel people towards in a low-context job board.

I also want to clarify: I'm not against 80k continuing to list articles like Working at an AI Lab, which are more about how to make the decisions, and go into a fair bit of nuance. I disagree with that article, but it seems more like "trying to lay out considerations in a helpful way" than "just straightforwardly funneling people into positions at a company." (I do think that article seems out of date and worth revising in light of new information.  I think "OpenAI seems inclined towards safety" now seems demonstrably false, or at least less true in the ways that matter. And this should update you on how true it is for the other labs, or how likely it is to remain true)

FAQ / Appendix

Some considerations and counterarguments which I've thought about, arranged as a hypothetical FAQ.

Q: It seems that, like it or not, OpenAI is a place transformative AI research is likely to happen, and having good people work there is important. 

Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?

I do agree it might be necessary to work with OpenAI, even if they are reckless and negligent. I'd like to live in the world where "don't work with companies causing great harm" was a straightforward rule to follow. But we might live in a messy, complex world where some good people may need to work with harmful companies anyway. 

But: we've now had two waves of alignment people leave OpenAI. The second wave has multiple people explicitly saying things like "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI."

As for the first wave, my guess is they were mostly under non-disclosure/non-disparagement agreements, so we can't take their lack of criticism as much evidence.

It looks to me, from the outside, like OpenAI is just not really structured or encultured in a way that makes it that viable for someone on the inside to help improve things much. I don't think it makes sense to continue trying to improve OpenAI's plans, at least until OpenAI has some kind of credible plan (backed up by actions) of actually seriously working on existential safety.

I think it might make sense for some individuals to go work at OpenAI anyway, who have an explicit plan for how to interface with the organizational culture. But I think this is a very high-context, high-skill job. (e.g. skills like "keeping your eye on the AI safety ball", "interfacing well with OpenAI staff/leadership while holding onto your own moral/strategic compass", "knowing how to prioritize research that differentially helps with existential safety, rather than mostly amounting to near-term capabilities work.") 

I don't think this is the sort of thing you should just funnel people into on a jobs board.

I think it makes a lot more sense to say "look, you had your opportunity to be taken on faith here, you failed. It is now OpenAI's job to credibly demonstrate that it is worthwhile for good people to join there trying to help, rather than for them to take that on faith."

Q: What about jobs like "security research engineer"? 

That seems straightforwardly good for OpenAI to have competent people for, and probably doesn't require a good "Safety Culture" to pay off?

The argument for this seems somewhat plausible. I still personally think it makes sense to fully delist OpenAI positions unless they've made major changes to the org (see below).  

I'm operating here from a cynical, conflict-theory-esque stance. I think OpenAI has exploited the EA community, and it makes sense to engage with them from that stance: to say, collectively, "knock it off," and switch to a default of applying pressure. If OpenAI wants to find good security people, that should be their job, not EA organizations'.

But, I don't have a really slam dunk argument that this is the right stance to take. For now, I list it as my opinion, but acknowledge there are other worldviews where it's less clear what to do.

Q: What about offering OpenAI a path back to "good standing"?

It seems plausibly important to me to offer some kind of roadmap back to good standing. I do kinda think regulating OpenAI from the outside isn't likely to be sufficient, because it's too hard to specify what actually matters for existential AI safety.

So, it feels important to me not to fully burn bridges. 

But it seems pretty hard to offer any particular roadmap. We've now seen three separate instances of OpenAI leadership breaking commitments and being manipulative, so we're long past the point where "mere words" would reassure me.

Things that would reassure me are costly actions, ones that would be extremely unlikely in worlds where OpenAI would (intentionally or not) lure more people in and then still turn out to, nope, just be taking advantage of them for safety-washing / regulatory capture reasons.

Such actions seem pretty unlikely by now. Most of the examples I can think to spell out seem too likely to be gameable (e.g. if OpenAI were to announce a new Superalignment-equivalent team, or commitments to participate in eval regulations, I would guess they would only do the minimum necessary to look good, rather than a real version of the thing).

An example that'd feel pretty compelling: if Sam Altman actually, for real, left the company, that would definitely have me re-evaluating my sense of the company. (This seems like a non-starter, but I'm listing it for completeness.) 

I wouldn't put much stock in a Sam Altman apology. If Sam is still around, the most I'd hope for is some kind of realistic, real-talk, arms-length negotiation where it's common knowledge that we can't really trust each other but maybe we can make specific deals.

I'd update somewhat if Greg Brockman and other senior leadership (i.e. people who seem to actually have the respect of the capabilities and product teams), or maybe new board members, made clear statements indicating:

  • they understand how OpenAI messed up (in terms of not keeping commitments, and the manipulativeness of the non-disclosure/non-disparagement agreements)
  • they take actions that hold Sam (and maybe themselves, in some cases) accountable
  • they take existential risk seriously on a technical level, have real cruxes for what would change their current scaling strategy, and integrate this into org-wide decisionmaking

This wouldn't make me think "oh, everything's fine now," but it would be enough of an update that I'd need to evaluate what they actually said/did and form some new models.

Q: What if we left up job postings, but with an explicit disclaimer linking to a post saying why people should be skeptical?

This idea just occurred to me as I got to the end of the post. Overall, I think this doesn't make sense given the current state of OpenAI, but thinking about it opens up some flexibility in my mind about what might make sense, in worlds where we get some kind of costly signals or changes in leadership from OpenAI.

(My actual current guess is this sort of disclaimer makes sense for Anthropic and/or DeepMind jobs. This feels like a whole separate post though)


My actual range of guesses here is more cynical than this post conveys; I've focused here on things that seemed easy to legibly argue for. 

I'm not sure who has decisionmaking power at 80k, or most other relevant orgs. I expect many people to feel like I'm still bending over backwards to accommodate an org we should have lost all faith in. I don't have faith in OpenAI, but I do still worry about escalation spirals and polarization of discourse. 

When dealing with a potentially manipulative adversary, I think it's important to have backbone and boundaries and actual willingness to treat the situation adversarially. But also, it's important to leave room to update or negotiate.

But I wanted to end by explicitly flagging the hypothesis that OpenAI is best modeled as a normal profit-maximizing org, that it basically co-opted EA into being a lukewarm ally it could exploit, and that it would have made sense to treat OpenAI more adversarially from the start (or at least to be more "ready to pivot towards treating them adversarially").

I don't know that that's the right frame, but I think the recent revelations should be an update towards that frame.

Comments

Hi, I run the 80,000 Hours job board, thanks for writing this out! 

I agree that OpenAI has demonstrated a significant level of manipulativeness, and I've lost confidence in them prioritizing existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.

For OpenAI in particular, we’ve tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work – a small percentage of jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that are very directly on safety or security work. I still expect these roles to be good opportunities to do important work. Two live examples:

[…]

Nod, thanks for the reply.

I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.

The things I'd argue here:

  • Safetywashing is actually pretty bad, for the world's epistemics and for EA and AI safety's collective epistemics. I think it also warps the epistemics of the people taking the job, so while they might be getting some career experience... they're also likely getting a distorted view of what AI safety is, and becoming worse researchers than they would otherwise. 
  • As previously stated – it's not that I don't think anyone should take these jobs, but I think the sort of person who should take them is someone who has a higher degree of context and skill than I expect the 80k job board to filter for. 
  • Even if you disagree with those points, you should have some kind of crux for what would distinguish an "impactful AI safety job" from a fake, safety-washed role. It should at least be possible for OpenAI to make a role so clearly fake that you notice and stop listing it.
  • If you're set on continuing to list Ope
[…]
Conor Barnes
Re: On whether OpenAI could make a role that feels insufficiently truly safety-focused: there have been and continue to be OpenAI safety-ish roles that we don’t list because we lack confidence they’re safety-focused. For the alignment role in question, I think the team description given at the top of the post gives important context for the role’s responsibilities: "OpenAI’s Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to directly supervise them."

With the above in mind, the role responsibilities seem fine to me. I think this is all pretty tricky, but in general, I’ve been moving toward looking at this in terms of the teams:

  • Alignment Science: Per the above team description, I’m excited for people to work there – though, concerning the question of what evidence would shift me, this would change if the research they release doesn’t match the team description.
  • Preparedness: I continue to think it’s good for people to work on this team, as per the description: “This team … is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.”
  • Safety Systems: I think roles here depend on what they address. I think the problems listed in their team description include problems I definitely want people working on (detecting unknown classes of harm, red-teaming to discover novel failure cases, sharing learning across industry, etc.), but it’s possible that we should be more restrictive in which roles we list from this team.

I don’t feel confident giving a probability here, but I do think there’s a crux here around me not expecting the above team descriptions to be straightforward lies. It’s possible that the teams will have limited resources to achieve their goals, and with the Safety Systems team in particular, I think there’s an extra risk of safety work blending into product work. However, my impres

Thanks.

Fwiw while writing the above, I did also think "hmm, I should also have some cruxes for 'what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.

It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but, I do think this position doesn't make sense:

  • we already have evidence of them failing to uphold commitments they've made in clear-cut ways. (e.g. I'd count their superalignment compute promises as basically a straightforward lie, and if not a "lie," it at least clearly demonstrates that their written words don't count for much. This seems straightforwardly relevant to the specific topic of "what does a given job at OpenAI entail?", in addition to being evidence about their overall relationship with existential safety)
  • we've similarly seen OpenAI change its stated policies, such as removing restrictions on military use. Or, initially being a nonprofit and converting into "for-profit managed by non-profit" (where the "managed by nonprofit board" part turned out to be pretty ineffectual) (not sure if I endorse this, mulling over Hab
[…]
Habryka
As I've discussed in the comments on a related post, I don't think OpenAI meaningfully changed any of its stated policies with regards to military usage. I don't think OpenAI really ever promised anyone they wouldn't work with militaries, and framing this as violating a past promise weakens the ability to hold them accountable for promises they actually made.

What OpenAI did was to allow more users to use their product. It's similar to LessWrong allowing crawlers or jurisdictions that we previously blocked to now access the site. I certainly wouldn't consider myself to have violated some promise by allowing crawlers or companies to access LessWrong that I had previously blocked (or for a closer analogy, let's say we were currently blocking AI companies from crawling LW for training purposes, and I then change my mind and do allow them to do that; I would not consider myself to have broken any kind of promise or policy).
Raemon
Mmm, nod. I will look into the actual history here more, but, sounds plausible. (edited the previous comment a bit for now)

The arguments you give all sound like reasons OpenAI safety positions could be beneficial. But I find them completely swamped by all the evidence that they won't be, especially given how much evidence OpenAI has hidden via NDAs.

But let's assume we're in a world where certain people could do meaningful safety work at OpenAI. What are the chances those people need 80k to tell them about it? OpenAI is the biggest, most publicized AI company in the world; if Alice only finds out about OpenAI jobs via 80k, that's prima facie evidence she won't make a contribution to safety. 

What could the listing do? Maybe Bob has heard of OAI but is on the fence about applying. An 80k job posting might push him over the edge to applying or accepting. The main way I see that happening is via a halo effect from 80k. The mere existence of the posting implies that the job is aligned with EA/80k's values. 

I don't think there's a way to remove that implication with any amount of disclaimers. The job is still on the board. If anything disclaimers make the best case scenarios seem even better, because why else would you host such a dangerous position?

So let me ask: what do you see as the upside to highlighting OAI safety jobs on the job board? Not of the job itself, but the posting. Who is it that would do good work in that role, and the 80k job board posting is instrumental in them entering it?

Conor Barnes
Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.
Raemon
Following up my other comment: to try to be a bit more helpful rather than just complaining and arguing, when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded. (It's not my main crux, but "frontier" felt both like a more up-to-date term for what OpenAI does, and also feels more specifically like it's making a claim about the product than generally awarding status to the company the way "leading" does.)
Raemon
Thanks. This still seems pretty insufficient to me, but, it's at least an improvement and I appreciate you making some changes here.

I think that given the 80k brand (which is about helping people to have a positive impact with their career), it's very hard for you to have a jobs board which isn't kinda taken by many readers as endorsement of the orgs. Disclaimers help a bit, but it's hard for them to address the core issue — because for many of the orgs you list, you basically do endorse the org (AFAICT).

I also think it's a pretty different experience for employees to turn up somewhere and think they can do good by engaging in a good faith way to help the org do whatever it's doing, and for employees to not think that but think it's a good job to take anyway.

My take is that you would therefore be better splitting your job board into two sections:

  • In one section, only include roles at orgs where you basically feel happy standing behind them, and think it's straightforwardly good for people to go there and help the orgs be better
    • You can be conservative about inclusion here — and explain in the FAQ that non-inclusion in this list doesn't mean that it wouldn't be good to straightforwardly help the org, just that this isn't transparent enough to 80k to make the recommendation
  • In another more expansive section, you cou
[…]

I am generally very wary of trying to treat your audience as unsophisticated this way. I think 80k taking on the job of recommending the most impactful jobs, according to the best of their judgement, using the full nuance and complexity of their models, is much clearer and straightforward than a recommendation which is something like "the most impactful jobs, except when we don't like being associated with something, or where the case for it is a bit more complicated than our other jobs, or where our funders asked us to not include it, etc.". 

I do think that doing this well requires the ability to sometimes say harsh things about an organization. I think communicating accurately about job recommendations will inevitably require being able to say "we think working at this organization might be really miserable and might involve substantial threats, adversarial relationships, and you might cause substantial harm if you are not careful, but we still think it's overall still a good choice if you take that into account". And I think those judgements need to be made on an organization-by-organization level (and can't easily be captured by generic statements in the context of the associated career guide). 

I don't think you should treat your audience as unsophisticated. But I do think you should acknowledge that you will have casual readers who will form impressions from a quick browse, and think it's worth doing something to minimise the extent to which they come away misinformed.

Separately, there is a level of blunt which you might wisely avoid being in public. Your primary audience is not your only audience. If you basically recommend that people treat a company as a hostile environment, then the company may reasonably treat the recommender as hostile, so now you need to recommend that they hide the fact they listened to you (or reveal it with a warning that this may make the environment even more hostile) ... I think it's very reasonable to just skip this whole dynamic.

Habryka
Yeah, I agree with this. I like the idea of having different kinds of sections, and I am strongly in favor of making things be true at an intuitive glance as well as on closer reading (I like something in the vicinity of "The Onion Test" here).

I feel like this dynamic is just fine? I definitely don't think you should recommend that they hide the fact they listened to you; that seems very deceptive. I think you tell people your honest opinion, and then if the other side retaliates, you take it. I definitely don't think 80k should send people to work at organizations as some kind of secret agent, and I think responding by protecting OpenAI's reputation by not disclosing crucial information about the role feels like straightforwardly giving in to an unjustified threat.

Hmm, at some level I'm vibing with everything you're saying, but I still don't think I agree with your conclusion. Trying to figure out what's going on there.

Maybe it's something like: I think the norms prevailing in society say that in this kind of situation you should be a bit courteous in public. That doesn't mean being dishonest, but it does mean shading the views you express towards generosity, and sometimes gesturing at rather than flat expressing complaints.

With these norms, if you're blunt, you encourage people to read you as saying something worse than is true, or to read you as having an inability to act courteously. Neither of which are messages I'd be keen to send.

And I sort of think these norms are good, because they're softly de-escalatory in terms of verbal spats or ill feeling. When people feel attacked it's easy for them to be a little irrational and vilify the other side. If everyone is blunt publicly I think this can escalate minor spats into major fights.

Maybe it's something like: I think the norms prevailing in society say that in this kind of situation you should be a bit courteous in public. That doesn't mean being dishonest, but it does mean shading the views you express towards generosity, and sometimes gesturing at rather than flat expressing complaints.

I don't really think these are the prevailing norms, especially not with regard to an adversary who has leveraged illegal threats of destroying millions of dollars of value to prevent negative information from getting out. 

Separately from whether these are the norms, I think the EA community plays a role in society where being honest and accurate about our takes on other people is important. There were a lot of people who took what the EA community said about SBF and FTX seriously, and this caused enormous harm. In many ways the EA community (and 80k in particular) is playing the role of a rating agency, and as a rating agency you need to be able to express negative ratings, otherwise you fail at your core competency. 

As such, even if there are some norms in society about withholding negative information here, I think the EA and AI-safety communities in particular cannot hold themselves to these norms within the domains of their core competencies and responsibilities.

I don't regard the norms as being about withholding negative information, but about trying to err towards presenting friendly frames while sharing what's pertinent, or something?

Honestly I'm not sure how much we really disagree here. I guess we'd have to concretely discuss wording for an org. In the case of OpenAI, I imagine it being appropriate to include some disclaimer like:

OpenAI is a frontier AI company. It has repeatedly expressed an interest in safety and has multiple safety teams. However, some people leaving the company have expressed concern that it is not on track to handle AGI safely, and that it wasn't giving its safety teams resources they had been promised. Moreover, it has a track record of putting inappropriate pressure on people leaving the company to sign non-disparagement agreements. [With links]

I largely agree with the rating-agency frame.

I don't regard the norms as being about withholding negative information, but about trying to err towards presenting friendly frames while sharing what's pertinent, or something?

I agree with some definitions of "friendly" here, and disagree with others. I think there is an attractor here towards Orwellian language that is intentionally ambiguous about what it's trying to say, in order to seem friendly or non-threatening (because in some sense it is), and that kind of "friendly" seems pretty bad to me.

I think the paragraph you have would strike me as somewhat too Orwellian, though it's not too far off from what I would say. Something closer to what seems appropriate to me: 

OpenAI is a frontier AI company, and as such it's responsible for substantial harm by assisting in the development of dangerous AI systems, which we consider among the biggest risks to humanity's future. In contrast to most of the jobs in our job board, we consider working at OpenAI more similar to working at a large tobacco company, hoping to reduce the harm that the tobacco company causes, or leveraging this specific tobacco company's expertise with tobacco to produce more competitive and less harmful variat

[…]

So it may be that we just have some different object-level views here. I don't think I could stand behind the first paragraph of what you've written there. Here's a rewrite that would be palatable to me:

OpenAI is a frontier AI company, aiming to develop artificial general intelligence (AGI). We consider poor navigation of the development of AGI to be among the biggest risks to humanity's future. It is complicated to know how best to respond to this. Many thoughtful people think it would be good to pause AI development; others think that it is good to accelerate progress in the US. We think both of these positions are probably mistaken, although we wouldn't be shocked to be wrong. Overall we think that if we were able to slow down across the board that would probably be good, and that steps to improve our understanding of the technology relative to absolute progress with the technology are probably good. In contrast to most of the jobs in our job board, therefore, it is not obviously good to help OpenAI with its mission. It may be more appropriate to consider working at OpenAI as more similar to working at a large tobacco company, hoping to reduce the harm that the tobacco company c

[…]

... That paragraph doesn't distinguish at all between OpenAI and, say, Anthropic. Surely you want to include some details specific to the OpenAI situation? (Or do your object-level views really not distinguish between them?)

Owen Cotton-Barratt
I was just disagreeing with Habryka's first paragraph. I'd definitely want to keep content along the lines of his third paragraph (which is pretty similar to what I initially drafted).
Habryka
Yeah, this paragraph seems reasonable (I disagree, but like, that's fine, it seems like a defensible position).
Raemon
Yeah, same. (Although this focuses entirely on their harm as an AI organization, and not their manipulative practices.) I think it leaves open the question of what the above-the-fold summary actually is (which'd be some kind of short tag).

Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations

Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both podcast, and website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].


  1. In particular, they present as an authoritative source on strategic matters concerning job selection while not doing the necessary homework to actually claim such status, and they relegate clarifications (if they ever add them) to articles, and parts of articles, that empirically nobody reads and that I've found hard to find. ↩︎

  2. Probably second to their horrendous SBF interview. ↩︎

I think this is a good policy and broadly agree with your position.

It's a bit awkward to mention, but since you've said that you've delisted other roles at OpenAI and that OpenAI has acted badly before, I think you should consider explicitly saying on the OpenAI job board cards that you don't necessarily endorse other roles at OpenAI, and that you suspect some other roles may be harmful.

I'm a little worried about people seeing OpenAI listed on the board and inferring that the 80k recommendation somewhat transfers to other roles at OpenAI (which, imo is a reasonable heuristic for most companies listed on the board - but fails in this specific case).

I think this halo effect could be reduced by making small UI changes:

  • Removing the OpenAI logo
  • Replacing the "OpenAI" name in the search results with "Harmful frontier AI Lab" or similar
  • Starting with a disclaimer on why this specific job might be good despite the overall org being bad

I would be all for a cleanup of 80k material to remove mentions of OpenAI as a place to improve the world.

calebp
The first two bullets don't seem like small UI changes to me; the second, in particular, seems too adversarial imo.

fwiw I don't think replacing the OpenAI logo or name makes much sense.

I do think it's pretty important to actively communicate that even the safety roles shouldn't be taken at face value. 

yanni kyriacos
I agree with your second point Caleb, which is also why I think 80k need to stop having OpenAI (or similar) employees on their podcast. Why? Because employer brand Halo Effects are real and significant.
calebp
Fwiw, I don't think that being on the 80k podcast is much of an endorsement of the work that people are doing. I think the signal is much more like "we think this person is impressive and interesting", which is consistent with other "interview podcasts" (and I suspect that it's especially true of podcasts that are popular amongst 80k listeners). I also think having OpenAI employees discuss their views publicly with smart and altruistic people like Rob is generally pretty great, and I would personally be excited for 80k to have more OpenAI employees (particularly if they are willing to talk about why they do/don't think AIS is important and talk about their AI worldview). Having a line at the start of the podcast making it clear that they don't necessarily endorse the org the guest works for would mitigate most concerns - though I don't think it's particularly necessary.

I would agree with this if 80k didn’t make it so easy for the podcast episodes to become PR vehicles for the companies: some time back 80k changed their policy and now they send all questions to interviewees in advance, and let them remove any answers they didn’t like upon reflection. Both of these make it very straightforward for the companies’ PR teams to influence what gets said in an 80k podcast episode, and remove any confidence that you’re getting an accurate representation of the researcher’s views, rather than what the PR team has approved them to say.

These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this! 

I think given that these jobs involved being pressured via extensive legal blackmail into signing secret non-disparagement agreements that forced people to never criticize OpenAI, at great psychological stress and at substantial cost to many outsiders who were trying to assess OpenAI, I don't agree with this assessment. 

Safety people have been substantially harmed by working at OpenAI, and safety work at OpenAI can have substantial negative externalities.

Remmelt
This misses aspects of what used to be 80k's position:

"In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That's the core reason why we list these roles on our job board." – Benjamin Hilton, February 2024

"Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely. So you get the credential of saying you've worked in a leading lab, and you'll also gain lots of dynamic, impressive connections." – Benjamin Hilton, June 2023 (still on website)

80k was also listing some non-safety related jobs:
– From my email on May 2023:
– From my comment on February 2024:
Rebecca
Where do they say the handpicked line?

Insofar as you are recommending the jobs but not endorsing the organization, I think it would be good to be fairly explicit about this in the job listing. The current short description of OpenAI seems pretty positive to me:

OpenAI is a leading AI research and product company, with teams working on alignment, policy, and security. You can read more about considerations around working at a leading AI company in our career review on the topic. They are also currently the subject of news stories relating to their safety work. 

I think this should say something like "We recommend jobs at OpenAI because we think these specific positions may be high impact. We would not necessarily recommend working at other jobs at OpenAI (especially jobs which increase AI capabilities)."

I also don't know what to make of the sentence "They are also currently the subject of news stories relating to their safety work." Is this an allusion to the recent exodus of many safety people from OpenAI? If so, I think it's misleading and gives far too positive an impression. 

Relatedly, I think that the "Should you work at a leading AI company?" article shouldn't start with a pros and cons list which sort of buries the fact that you might contribute to building extremely dangerous AI. 

I think "Risk of contributing to the development of harmful AI systems" should at least be at the top of the cons list. But overall this sort of reminds me of my favorite graphic from 80k:

?? It's the second bullet point in the cons list, and reemphasized in the third bullet?

If you're saying "obviously this is the key determinant of whether you should work at a leading AI company so there shouldn't even be a pros / cons table", then obviously 80K disagrees given they recommend some such roles (and many other people (e.g. me) also disagree so this isn't 80K ignoring expert consensus). In that case I think you should try to convince 80K on the object level rather than applying political pressure.

This thread feels like a fine place for people to express their opinion as a stakeholder.

Like, I don't even know how to engage with 80k staff on this on the object level, and seems like the first thing to do is to just express my opinion (and like, they can then choose to respond with argument).

Conor Barnes
(Copied from reply to Raemon) Yeah, I think this needs updating to something more concrete. We put it up while ‘everything was happening’ but I’ve neglected to change it, which is my mistake and will probably prioritize fixing over the next few days.

Hey Conor!

Regarding

we don’t conceptualize the board as endorsing organisations.

And

 contribute to solving our top problems or build career capital to do so

It seems like EAs expect the 80k job board to suggest high impact roles, and this has been a misunderstanding for a long time (consider looking at that post if you haven't). The disclaimers were always there, but EAs (including myself) still regularly looked at the 80k job board as a concrete path to impact.

I don't have time for a long comment, just wanted to say I think this matters.

I don't read those two quotes as in tension? The job board isn't endorsing organizations, it's endorsing roles. An organization can be highly net harmful while the right person joining to work on the right thing can be highly positive.

I also think "endorsement" is a bit too strong: the bar for listing a job shouldn't be "anyone reading this who takes this job will have significant positive impact" but instead more like "under some combinations of values and world models that the job board runners think are plausible, this job is plausibly one of the highest impact opportunities for the right person".

My own intuition on what to do in this situation is to stop trying to change your reputation using disclaimers.

There's a lot of value in having a job board with high impact job recommendations. One of the challenging parts is getting a critical mass of people looking at your job board, and you already have that.

Rebecca
What are the relevant disclaimers here? Conor is saying 80k does think that alignment roles at OpenAI are impactful. Your article mentions the career development tag, but the roles under discussion don't have that tag, right?
Yonatan Cale
1. If Conor thinks these roles are impactful, then I'm happy we agree on listing impactful roles. (The discussion on whether alignment roles are impactful is separate from what I was trying to say in my comment.)
2. If the career development tag is used (and is clear to typical people using the job board) then, again, seems good to me.
Rebecca
I’m still confused about what the misunderstanding is
Rebecca
Echoing Raemon, it’s still a value judgement about an organisation to say that 80k believes that a given role is one where, as you say, “they can contribute to solving our top problems or build career capital to do so”. You are saying that you have sufficient confidence that the organisation is run well enough that someone with little context of internal politics and pressures that can’t be communicated via a job board can come in and do that job impactfully. But such a person would be very surprised to learn that previous people in their role or similar ones at the company have not been able to do their job due to internal politics, lies, obfuscation etc, and that they may not be able to do even the basics of their job (see the broken promise of dedicated compute supply). It’s difficult to even build career capital as a technical researcher when you’re not given the resources to do your job and instead find yourself having to upskill in alliance building and interpersonal psychology.
Raemon
I have slightly complex thoughts about the "is 80k endorsing OpenAI?" question. I'm generally on the side of "let people make individual statements without treating it as a blanket endorsement." In practice, I think the job postings will be read as an endorsement by many (most?) people. But I think the overall policy of "social-pressure people to stop making statements that could be read as endorsements" is net harmful. I think you should at least be acknowledging the implication-of-endorsement as a cost you are paying.

I'm a bit confused about how to think about it here, because I do think listing people on the job site, with the sorts of phrasing you use, feels more like some sort of standard corporate political move than a purely epistemic move. I do want to distinguish the question of "how does this job-ad funnel social status around?" from "does this job-ad communicate clearly?". I think it's still bad to force people to only speak words that can't be inaccurately read into, but I think this is an important enough area to put extra effort in.

An accurate job posting, IMO, would say "OpenAI-in-particular has demonstrated that they do not follow through on safety promises, and we've seen people leave due to not feeling effectual." I think you maybe both disagree with that object-level fact (if so, I think you are wrong, and this is important), as well as, well, that'd be a hell of a weird job ad.

Part of why I am arguing here is I think it looks, from the outside, like 80k is playing a slightly confused mix of relating to orgs politically and making epistemic recommendations. I kind of expect at this point you to leave the job ad up, and maybe change the disclaimer slightly in a way that leaves some sort of plausibly-deniable veneer.

Alignment concerns aside, I think a job board shouldn't host companies that have taken already-earned compensation hostage, especially without noting this fact. That's a primary thing about good employers: they don't retroactively steal stock they already gave you.


 

Jeff Kaufman
I agree that was pretty terrible behavior, but there are lots of anti-employee things an organization could do which are orthogonal to whether the work is impactful (especially if you know this going in, which OpenAI employees previously didn't, but we're talking about new ones here). There are lots of hard lines that seem like they would make sense, but I'm not in favor of them: at some point there will be a job worth listing where it really is very impactful despite serious downsides. For example, I think good employers pay you enough for a reasonably comfortable life, but if, say, some key government role is extremely poorly paid, it may still make sense to take it if you have savings you're willing to spend down to support yourself. Or, I think graduate school is often pretty bad for people, where PIs have far more power than corporate-world bosses, but while you should certainly think hard about this before going to grad school, it's not determinative.
Elizabeth
No argument from me that it's sometimes worth it to take low paying or miserable jobs. But low pay isn't a surprise fact you learn years into working for a company, it's written right on the tin[1]. The issue for me isn't that OpenAI paid undermarket rates, it's that it lied about material facts of the job. You could put up a warning that OpenAI equity is ephemeral, but the bigger issue is that OpenAI can't be trusted to hold to any deal.   1. ^ The power PIs hold can be a surprise, and I'm disappointed 80k's article on PhDs doesn't cover that issue. 
Jeff Kaufman
I agree that's a big issue and it's definitely a mark against it, but I don't think that should firmly rule out working there or listing it as a place EAs might consider working.
Elizabeth
I don't think the dishonesty entirely rules out working at OpenAI. Whether or not OpenAI safety positions should be on the 80k job board depends on the exact mission of the job board. I have my models, but let me ask you: who is it you think will have their plans changed for the better by seeing OpenAI safety positions[1] on 80k's board? 1. ^ I'm excluding IS positions from this question because it seems possible someone skilled in IS would not think to apply to OpenAI. I don't see how anyone qualified  for OpenAI safety positions could need 80k to inform them the positions exist. 
Jeff Kaufman
I don't object to dropping OpenAI safety positions from the 80k job board on the grounds that the people who would be highly impactful in those roles don't need the job board to learn about them, especially when combined with the other factors we've been discussing. In this subthread I'm pushing back on your broader "I think a job board shouldn't host companies that have taken already-earned compensation hostage".
Elizabeth
I still think the question of "who is the job board aimed at?" is relevant here, and would like to hear your answer.
Jeff Kaufman
As I tried to communicate in my previous comment, I'm not convinced there is anyone who "will have their plans changed for the better by seeing OpenAI safety positions on 80k's board", and am not arguing for including them on the board.

EDIT: after a bit of offline messaging I realize I misunderstood Elizabeth; I thought the parent comment was pushing me to answer the question posed in the great-grandcomment, but actually it was accepting my request to bring this up a level of generality and not be specific to OpenAI. Sorry!

I think the board should generally list jobs that, under some combinations of values and world models that the job board runners think are plausible, are plausibly one of the highest impact opportunities for the right person. I think in cases like working in OpenAI's safety roles, where anyone who is the "right person" almost certainly already knows about the role, there's not much value in listing it but also not much harm.

I think this mostly comes down to a disagreement over how sophisticated we think job board participants are, and I'd change my view on this if it turned out that a lot of people reading the board are new-to-EA folks who don't pay much attention to disclaimers and interpret listing a role as saying "someone who takes this role will have a large positive impact in expectation". If there did turn out to be a lot of people in that category, I'd recommend splitting the board into a visible-by-default section with jobs where, conditional on getting the role, you'll have high positive impact in expectation (I'd biasedly put the NAO's current openings in this category) and a you-need-to-click-show-more section with jobs where you need to think carefully about whether the combination of you and the role is a good one.

I think an assumption 80k makes is something like "well if our audience thinks incredibly deeply about the Safety problem and what it would be like to work at a lab and the pressures they could be under while there, then we're no longer accountable for how this could go wrong. After all, we provided vast amounts of information on why and how people should do their own research before making such a decision"

The problem is, that is not how most people make decisions. No matter how much rational thinking is promoted, we're first and foremost emotional creatures that care about things like status. So, if 80k decides to have a podcast with the Superalignment team lead, then they're effectively promoting the work of OpenAI. That will make people want to work for OpenAI. This is an inescapable part of the Halo effect.

Lastly, 80k is explicitly targeting very young people who, no offense, probably don't have the life experience to imagine themselves in a workplace where they have to resist incredible pressures to not conform, such as not sharing interpretability insights with capabilities teams.

The whole exercise smacks of naivety, and I'm very confident we'll look back and see it as an incredibly obvious mistake in hindsight.

Thanks for raising this. I'm kind of on board with 80k's current strategy, but I think it's useful to have a public discussion like this nevertheless.

To the extent the proposed move is a political / diplomatic move, how much leverage do we actually have here? Does OpenAI nontrivially value their jobs being listed in our job boards? Are they still concerned about having a good relationship with us at all at this point? If I were running the job board, I'd probably imagine no-one much at OpenAI really losing sleep over my decision either way, so I'd tend to do just whatever seemed best to me in terms of the direct consequences.

Raemon
I do basically agree we don't have bargaining power, and that they most likely don't care about having a good relationship with us. The reason for the diplomatic "line of retreat" in the OP is more because:

  • it's hard to be sure how adversarial a situation you're in, and it just seems like generally good practice to be clear on what would change your mind (in case you have overestimated the adversarialness)
  • it's helpful for showing others, who might not share exactly my worldview, that I'm "playing fairly."

I'm not sure about "direct consequences" being quite the right frame. I agree the particular consequence-route of "OpenAI changes their policies because of our pushback" isn't plausible enough to be worth worrying about, but I think indirect consequences on our collective epistemics are pretty important.

I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful.


Fourteen months ago, I emailed 80k staff with concerns about how they were promoting AGI lab positions on their job board. 

The exchange:

  • I offered specific reasons and action points.
  • 80k staff replied by referring to their website articles about why their position on promoting jobs at OpenAI and Anthropic was broadly justified (plus they removed one job listing). 
  • Then I pointed out what those
... (read more)

Raemon -- I strongly agree, and I don't think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.

OpenAI, Deepmind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being. 

Therefore, instead of 80k Hours advertising jobs at such compani... (read more)

I do want to acknowledge: 

I refer to Jan Leike's and Daniel Kokotajlo's comments about why they left, and reference other people leaving the company.

I do think this is important evidence.

I want to acknowledge I wouldn't actually bet that Jan and Daniel would endorse everyone else leaving OpenAI, and would only weakly bet that they'd endorse taking down the current 80k ads as written.

I am grateful to them for having spoken up publicly, but I know that a reason people hesitate to speak publicly about this sort of thing is that it's easy for soundbites to get taken and run away with by people arguing for positions stronger than you endorse, and I don't want them to regret that.

I know at least one person with less negative (but mixed) feelings who left OpenAI for somewhat different reasons, and a couple more people who still work at OpenAI whom I respect in at least some domains.

(I haven't chatted with either of them about this recently)

I wonder if it would be worthwhile for a bunch of AI Safety societies at elite universities to make some kind of public commitment in this vein. This probably has more weight/influence than 80,000 Hours; however, it would be more valuable if we were trying to influence them, but it's less valuable since we probably don't have any plausibly satisfiable asks so long as Sam is there.

If OpenAI doesn't hire an EA they will just hire someone else. I'm not sure if you tackle this point directly (sorry if I missed it) but doesn't it straightforwardly seem better to have someone safety-conscious in these roles rather than someone who isn't safety-conscious? 

To reiterate, it's not like if we remove these roles from the job board that they will less likely be filled. They would still definitely be filled, just by someone less safety-conscious in expectation. And I'm not sure the person who would get the role would be "less talented" in e... (read more)

I attempted to address this in the "Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?" FAQ section.

I think the evidence we have from OpenAI is that it isn't very helpful to "be a safety conscious person there." (i.e. combo of people leaving who did not find it tractable to be helpful there, and NDAs making it hard to reason about, and IMO better to default assume bad things rather than good things given the NDAs)

I think it's especially not helpful if you're a low-context person, who reads an OpenAI job board posting, and isn't going in with a specific plan to operate in an adversarial environment. 

If the job posting literally said "to be clear, OpenAI has a pretty bad track record and seems to be an actively misleading environment, take this job if you are prepared to deal with that", that'd be a different story. (But, that's also a pretty weird job ad, and OpenAI would be rightly skeptical of people coming from that funnel. I think taking jobs at OpenAI that are net helpful to the world requires a mix of a very strong moral and epistemic backbone, and nonetheless still able to make good fa... (read more)

JackM
It's insanely hard to have an outsized impact in this world. Of course it's hard to change things from inside OpenAI, but that doesn't mean we shouldn't try. If we succeed it could mean everything. You're probably going to have lower expected value pretty much anywhere else IMO, even if it does seem intractable to change things at OpenAI. Surely this isn't the typical EA though?
Raemon
I think job ads in particular are a filter for "being more typical." I expect the people who have a chance of doing a good job to be well connected to previous people who worked at OpenAI, with some experience under their belt navigating organizational social scenes while holding onto their own epistemics. I expect such a person to basically not need to see the job ad.
JackM
You're referring to job boards generally but we're talking about the 80K job board which is no typical job board. I would expect someone who will do a good job to be someone going in wanting to stop OpenAI destroying the world. That seems to be someone who would read the 80K Hours job board. 80K is all about preserving the future. They of course also have to be good at navigating organizational social scenes while holding onto their own epistemics which in my opinion are skills commonly found in the EA community!
Raemon
I think EAs vary wildly. I think most EAs do not have those skills – I think it is a very difficult skill. Merely caring about the world is not enough.

I think most EAs do not, by default, prioritize epistemics that highly, unless they came in through the rationalist scene, and even then, I think holding onto your epistemics while navigating social pressure is a very difficult skill that even rationalists who specialize in it tend to fail at.

(Getting into details here is tricky because it involves judgment calls about individuals, in social situations that are selected for being murky and controversial, but, no, I do not think the median EA or even the 99th percentile EA is going to be competent enough at this for it to be worthwhile for them to join OpenAI. I think ~99.5th percentile is the point where it seems even worth talking about, and I don't think those people get most of their job leads through the job board.)
JackM
The person who gets the role is obviously going to be highly intelligent, probably socially adept, and highly-qualified with experience working in AI etc. etc. OpenAI wouldn't hire someone who wasn't. The question is do you want this person also to care about safety. If so I would think advertising on the EA job board would increase the chance of this. If you think EAs or people who look at the 80K Hours job board are for some reason less good epistemically than others then you will have to explain why because I believe the opposite.

Executive summary: 80,000 Hours and similar EA organizations should remove OpenAI job listings from their platforms due to OpenAI's demonstrated recklessness, manipulativeness, and failure to prioritize existential AI safety.

Key points:

  1. OpenAI has broken promises, used non-disclosure agreements manipulatively, and shown poor safety culture, warranting removal from EA job boards.
  2. While some individuals may still choose to work at OpenAI, EA orgs should not systematically funnel talent there.
  3. OpenAI's path back to good standing with the AI risk community is unc
... (read more)