Raemon

I recall previously hearing there might be a final round of potential amendments in response to things Gavin Newsom requests. Was/is that accurate?

(several years late, whoops!)

Yeah, my intent here was more "be careful deciding to scale your company to the point you need a lot of middle managers, if you have a nuanced goal", rather than "try to scale your company without middle managers."

In the context of an EA jobs list it seems like both are pretty bad. (there's the "job list" part, and the "EA" part)

Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.

Can you say a bit more about "and (2) worse in private than in public"?

Mmm, nod. I will look into the actual history here more, but, sounds plausible. (edited the previous comment a bit for now)

Following up my other comment:

To try to be a bit more helpful rather than just complaining and arguing: when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded.

OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We recommend specific opportunities at OpenAI that we think may be high impact. We recommend applicants pay attention to the details of individual roles at OpenAI, and form their own judgment about whether the role is net positive. We do not necessarily recommend working at other positions at OpenAI.

You can read considerations around working at a frontier AI company in our career review on the topic.

(it's not my main crux, but "frontier" felt like both a more up-to-date term for what OpenAI does, and more specifically a claim about the product, rather than generally awarding status to the company the way "leading" does)

Thanks.

Fwiw while writing the above, I did also think "hmm, I should also have some cruxes for what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.

It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but, I do think this position doesn't make sense:

  • we already have evidence of them failing to uphold commitments they've made in clear-cut ways (e.g. I'd count their superalignment compute promises as basically a straightforward lie, and even if not a "lie," it at least clearly demonstrates that their written words don't count for much. This seems straightforwardly relevant to the specific question of "what does a given job at OpenAI entail?", in addition to being evidence about their overall relationship with existential safety)
  • we've similarly seen OpenAI change its stated policies, such as removing restrictions on military use, or initially being a nonprofit and converting into "for-profit managed by nonprofit" (where the "managed by the nonprofit board" part turned out to be pretty ineffectual) (not sure if I endorse this; mulling over Habryka's comment)

Surely this at least updates you downward on how trustworthy their statements are? How many times do they have to say things that turn out not to be true before you stop taking them at face value? And why is that more times than they already have?

Separate from straightforward lies, and/or altering of policy to the point where any statements they make seem very unreliable, there are plenty of degrees of freedom in "what counts as alignment." They are already defining alignment in a way that is pretty much synonymous with short-term capabilities. I think the plan of "iterate on 'alignment' with nearterm systems as best you can, to learn and prepare" is not necessarily a crazy plan. There are people I respect who endorse it, who previously defended it as an OpenAI approach, although notably most of those people have now left OpenAI (sometimes still working on similar plans at other orgs).

But, it's very hard to tell the difference from the outside between:

  • "iterating on nearterm systems, contributing to AI race dynamics in the process, in a way that has a decent chance of teaching you skills that will be relevant for aligning superintelligences"
  • "iterating on nearterm systems, in a way that you think/hope will teach you skills for navigating superintelligence... but, you're wrong about how much you're learning, and whether it's net positive"
  • "iterating on nearterm systems, and calling it alignment because it makes for better PR, but not even really believing that it's particularly necessary to navigate superintelligence"

When recommending jobs at organizations that are potentially causing great harm, I think 80k has a responsibility to actually form good opinions on whether the job makes sense, independent of what the organization says it's about.

You don't just need to model whether OpenAI is intentionally lying; you also need to model whether they are phrasing things ambiguously, and whether they are self-deceiving about whether these roles are legitimate alignment work, or valuable enough work to outweigh the risks. And you need to model that they might just be wrong and incompetent at long-term alignment development (or: "insufficiently competent to outweigh risks and downsides"), even if their hearts were in the right place.

I am very worried that this isn't already something you have explicit models about.

Thanks. This still seems pretty insufficient to me, but, it's at least an improvement and I appreciate you making some changes here.
