

I don't think it's odd at all. As the Bloomberg article notes, this was in response to the martyrdom of Saint Altman, when everyone thought the evil Effective Altruists were stealing/smashing OA for no reason and destroying humanity's future (as well as the fortunes of many of the individuals involved, to a degree few of them bothered to disclose) and/or turning it over to the Chinese commies. An internal memo decrying 'Effective Altruism' was far from the most extreme response at the time; but I doubt Gomez would write it today, if only because so much news has come out since then and it no longer looks like such a simple morality play. (For example, a lot of SV people were shocked to learn he had been fired from YC by Paul Graham et al for the same reason. That was a very closely held secret.)

Good report overall on tacit knowledge & biowarfare. This is relevant to the discussion of LLM risks: the Aum Shinrikyo chemist could make a lot of progress by reading papers and figuring out his problems as he went, but the bacteriologist couldn't diagnose his issues with what seems to have been a viable plan to weaponize & mass-produce anthrax, where lack of feedback led it to fail. Which does sound like something that a superhumanly-knowledgeable (but not necessarily that intelligent) LLM could help a lot with, simply by pattern-matching and making lists of suggestions for things that are, to the human, 'unknown unknowns'.

If a crazy person wants to destroy the world with an AI-created bioweapon

Or, more concretely, nuclear weapons. Leaving aside regular full-scale nuclear war (which is censored from the graph for obvious reasons), this sort of graph will never show you something like Edward Teller's "backyard bomb", or a salted bomb. (Or any of the many other nuclear weapon concepts which never got developed, or were curtailed very early in deployment like neutron bombs, for historically-contingent reasons.)

There is, as far as I am aware, no serious scientific doubt that they are technically feasible: that multi-gigaton bombs could be built, or that salted bombs in relatively small quantities would render the earth uninhabitable to a substantial degree, for what are also modest expenditures as a percentage of GDP. It is just that there is no practical use for these weapons by normal, non-insane people. There is no use in setting an entire continent on fire, or in the long-term radioactive poisoning of the same earth on which you presumably intend to live afterwards.

But you would be greatly mistaken if you concluded from historical data that these were impossible because there is nothing in the observed distribution anywhere close to those fatality rates.

(You can't even make an argument from an Outside View of the sort that 'there have been billions of humans and none have done this yet', because nuclear bombs are still so historically new, and only a few nuclear powers were even in a position to consider whether to pursue these weapons or not - you don't have k = billions, you have k < 10, maybe. And the fact that several of those pursued weapons like neutron bombs as far as they did, and that we know about so many concepts, is not encouraging.)
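To make the sample-size point concrete (the numbers here are my own illustration, not in the original comment), Laplace's rule of succession shows how differently "no one has done this yet" cashes out depending on whether the reference class is billions of humans or a handful of nuclear powers:

```latex
% Rule of succession: after n independent trials with 0 occurrences,
% the posterior probability of an occurrence on the next trial is
P(\text{occurrence} \mid 0 \text{ of } n) \;=\; \frac{0+1}{n+2} \;=\; \frac{1}{n+2}

% If the right reference class really were 'billions of humans':
n = 10^9 \;\implies\; P \approx 10^{-9} \quad \text{(strong reassurance)}

% But with fewer than 10 nuclear powers ever in a position to decide:
n = 8 \;\implies\; P = \frac{1}{10} = 10\% \quad \text{(little reassurance at all)}
```

With k < 10 relevant trials, the historical record simply cannot push the probability low enough to be comforting.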

Hm, maybe it was common knowledge in some areas? I just always took him for being concerned. There's not really any contradiction between being excited about your short-term work and worried about long-term risks. Fooling yourself about your current idea is an important skill for a researcher. (You ever hear the joke about Geoff Hinton? He suddenly solves how the brain works, at long last, and euphorically tells his daughter; she replies: "Oh Dad - not again!")

Ilya has always been a doomer AFAICT, he was just loyal to Altman personally, who recruited him to OA. (I can tell you that when I spent a few hours chatting with him in... 2017 or something? a very long time ago, anyway - I don't remember him dismissing the dangers or being pollyannaish.) 'Superalignment' didn't come out of nowhere or surprise anyone about Ilya being in charge. Elon was... not loyal to Altman but appeared content to largely leave oversight of OA to Altman until he had one of his characteristic mood changes, got frustrated and tried to take over. In any case, he surely counts as a doomer by the time Zilis is being added to the board as his proxy. D'Angelo likewise seems to have consistently, in his few public quotes, been concerned about the danger.

A lot of people have indeed moved towards the 'doomer' pole but much of that has been timelines: AI doom in 2060 looks and feels a lot different from AI doom in 2027.

  1. I haven't seen any coverage of the double structure or Anthropic exit which suggests that Amodei helped think up or write the double structure. Certainly, the language they use around the Anthropic public benefit corporation indicates they all think, at least post-exit, that the OA double structure was a terrible idea (eg. see the end of this article).

  2. You don't know that. They seem to have often had near majorities, rather than being a token 1 or 2 board members.

    By most standards, Karnofsky and Sutskever are 'doomers', and Zilis is likely a 'doomer' too, as that is the whole premise of Neuralink and she was a Musk representative (which is why she was pushed out after Musk turned on OA publicly and began active hostilities like breaking contracts with OA). Hoffman's views are hard to characterize, but he doesn't seem to clearly come down as an anti-doomer or to be an Altman loyalist. (Which would be a good enough reason for Altman to push him out; and for a charismatic leader, neutralizing a co-founder is always useful, for the same reason no one would sell life insurance to an Old Bolshevik in Stalinist Russia.)

    If I look at the best timeline of the board composition I've seen thus far, at a number of times post-2018 there was a 'near majority' or even an outright majority. For example, on 2020-12-31 there is either a tie or an outright majority for either side, depending on how you count Sutskever & Hoffman (Sutskever?/Zilis/Karnofsky/D'Angelo/McCauley vs Hoffman? vs Altman/Brockman); and with the 2021-12-31 list, the Altman faction needs to pick up every possible vote just to match the existing 5-member 'EA' faction (Zilis/Karnofsky/D'Angelo/McCauley/Toner vs Hurd?/Sutskever?/Hoffman? vs Brockman/Altman) - although this has to be wrong, because the board maxes out at 7 according to the bylaws, so it's unclear how exactly the plausible majorities evolved over time.


EDIT: this is going a bit viral, and it seems like many of the readers have missed key parts of the reporting. I wrote this as a reply to Wei Dai and a high-level summary for people who were already familiar with the details; I didn't write this for people who were unfamiliar, and I'm not going to reference every single claim in it, as I have generally referenced them in my prior comments/tweets and explained the details & inferences there. If you are unaware of aspects like 'Altman was trying to get Toner fired' or pushing out Hoffman or how Slack was involved in Sutskever's flip or why Sutskever flip-flopped back, still think Q* matters, haven't noticed the emphasis put on the promised independent report, haven't read the old NYer Altman profile or Labenz's redteam experience etc., it may be helpful to catch up by looking at other sources; my comments have been primarily on LW since I'm not a heavy EAF user, plus my usual excerpts.

Or even "EA had a pretty weak hand throughout and played it as well as can be reasonably expected"?

It was a pretty weak hand. There is this pervasive attitude that Sam Altman could have been dispensed with easily by the OA Board if only it had been more competent - a strange implicit assumption that Altman is some johnny-come-lately whom the Board screwed up by hiring. Commenters seem to ignore the long history here: if anything, it was he who screwed up by hiring the Board!

Altman co-founded OA. He was the face in initial coverage and 1 of 2 board members (with Musk). He was a major funder of it. Even Elon Musk's main funding of OA was routed through an Altman vehicle. He kicked out Musk when Musk decided he needed to be in charge of OA. Open Philanthropy (OP) only had that board seat and made a donation because Altman invited them to, and he could personally have covered the $30m or whatever OP donated for the seat; and no one cared or noticed when OP let the arrangement lapse after the initial 3 years. (I had to contact OP to confirm this when someone doubted that the seat was no longer controlled by OP.) He thought up, drafted, and oversaw the entire for-profit thing in the first place, including all provisions related to board control. He voted for all the board members, filling it back up from when it was just him (& Greg Brockman at one point, IIRC). He then oversaw and drafted all of the contracts with MS and others, while running the for-profit and eschewing equity in it. He designed the board to be able to fire the CEO because, to quote him, "the board should be able to fire me". He interviewed every person OA hired, and used his networks to recruit for OA. And so on and so forth.

Credit where credit is due - Altman may not have believed the scaling hypothesis like Dario Amodei, may not have invented PPO like John Schulman, may not have worked on DL from the start like Ilya Sutskever, may not have created GPT like Alec Radford, and may not have written & optimized any code like Brockman - but the 2023 OA organization is fundamentally his work.

The question isn't, "how could EAers* have ever let Altman take over OA and possibly kick them out", but entirely the opposite: "how did EAers ever get any control of OA, such that they could even possibly kick out Altman?" Why was this even a thing given that OA was, to such an extent, an Altman creation?

The answer is: "because he gave it to them." Altman freely and voluntarily handed it over to them.

So you have an answer right there as to why the Board was willing to assume Altman's good faith for so long, despite everyone clamoring to explain how (in hindsight) it was so obvious that the Board should always have been at war with Altman, regarding him as an evil schemer out to get them. But that's an insane way for them to think! Why would he undermine the Board or try to take it over, when he was the Board at one point, and when he made and designed it in the first place? Why would he be money-hungry when he refused all the equity that he could so easily have taken - equity which, in fact, various partner organizations wanted him to have in order to ensure he had 'skin in the game'? Why would he go out of his way to make the double non-profit with such onerous & unprecedented terms for any investors - terms which caused a lot of difficulties in getting investment, and which Microsoft had to think seriously about - if he just didn't genuinely care or believe any of it? Why any of this?

(None of that was a requirement, or even that useful to the OA for-profit. Other double systems like Mozilla or Hershey don't have such terms; they're just normal corporations with a lot of shares owned by a non-profit, is all. The OA for-profit could've been the same way. Certainly, if all of this was for PR reasons or some insidious decade-long scheme of Altman's to 'greenwash' OA, it was a spectacular failure - nothing has occasioned more confusion and bad PR for OA than the double structure or capped-profit. See, for example, my shortly-before-the-firing Twitter argument with well-known AI researcher Delip Rao, who repeatedly stated & doubled down on the claim that saying the OA non-profit legally owned the OA for-profit was not just factually wrong but misinformation. He helpfully linked to a page about political misinformation & propaganda campaigns online, in case I had any doubt about what the term 'misinformation' meant.)

What happened is, broadly: 'Altman made the OA non/for-profits and gifted most of it to EA with the best of intentions, but then it went so well & was going to make so much money that he had giver's remorse, changed his mind, and tried to quietly take it back; but he had to do it by hook or by crook, because the legal terms said clearly "no takesie backsies"'. Altman was all for EA and AI safety and an all-powerful nonprofit board being able to fire him, and was sincere about all that, until OA & the scaling hypothesis succeeded beyond his wildest dreams†, and he discovered it was inconvenient for him and convinced himself that the noble mission now required him to be in absolute control, never mind what restraints on himself he set up years ago - he now understands how well-intentioned but misguided he was and how he should have trusted himself more. (Insert Garfield meme here.)

No wonder the board found it hard to believe! No wonder it took so long to realize Altman had flipped on them, and it seemed Sutskever needed Slack screenshots showing Altman blatantly lying to them about Toner before he finally, reluctantly, flipped. The Altman you need to distrust & assume bad faith of & need to be paranoid about stealing your power is also usually an Altman who never gave you any power in the first place! I'm still kinda baffled by it, personally.

He concealed this change of heart from everyone, including the board, gradually began trying to unwind it, overplayed his hand at one point - and here we are.

So, what could the EA faction of the board have done? ...Not much, really. They only ever had the power that Altman gave them in the first place.

* I don't really agree with this framing of Sutskever/Toner/McCauley/D'Angelo as "EA", but for the sake of argument, I'll go with this labeling.
† Please try to cast your mind back to when Altman et al would be planning all this in 2018-2019, with OA rapidly running out of cash after the mercurial Musk's unexpected-but-inevitable betrayal, its DRL projects like OA5 being remarkable research successes but commercially worthless, and just some interesting results like GPT-1 and then GPT-2-small coming out of their unsupervised-learning backwater, from Alec Radford tinkering around with RNNs and then these new 'Transformer' things. The idea that OA might somehow be worth over ninety billion dollars - yes, that's 'billion' with a 'b' - in scarcely 3 years would have been insane, absolutely insane; not a single person in the AI world would have taken you seriously if you had suggested it, and if you emailed any of them asking how plausible that was, they would have added an email filter to send your future emails to the trash bin. It is very easy to be sincerely full of the best intentions and discuss how to structure your double corporation to deal with windfalls like growing to a $1000b market cap when no one really expects that to happen, certainly not in the immediate future... Likewise, no one is sitting around going, 'well wait, we required the board to not own equity, but if the company is worth even a fraction of our long-term targets, and it's recruiting with stock options like usual, then each employee is going to have, like, $10m or even $100m of pseudo-equity in the OA for-profit. That seems... problematic. Do we need to do something about it?'

By the time Musk (and Altman et al) was starting OA, it was in response to Page buying Hassabis. So there is no real contradiction here between being spurred by Page's attitude and treating Hassabis as the specific enemy. It's not like Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).

The discussion of the Abbey, er, I mean, 'castle', has been amusing for showing how much people are willing to sound off on topics from a single obviously-untrustworthy photograph. Have you ever seen a photograph of the interior, or a layout? No, you merely see the single aerial real-estate-brochure shot, taken with a telephoto zoom lens and framed as flatteringly as possible to include stuff that isn't even the Abbey - the turreted 'castle' you see in the photo up above isn't part of the Abbey at all, because that's an active church, All Saints Church!* (Really, apply some critical thinking here: you think some manor house one can buy will just have a bunch of visible graves in it...?)

Let me ask something: how many of the people debating the Abbey on this page have been there? I don't see anyone directly addressing the core claim of 'luxury', so I will.

I was there for an AI workshop earlier this year in Spring and stayed for 2 or 3 days, so let me tell you about the 'luxury' of the 'EA castle': it's a big, empty, cold, stone box, with an awkward layout. (People kept getting lost trying to find the bathroom or a specific room.) Most of the furnishings were gone. Much of the layout you can see in Google Maps was nonfunctional, and several wings were off-limits or defunct, so in practice it was maybe a quarter of the size you'd expect from the Google Maps overview. There were clearly extensive needs for repair and remodeling of a lot of ancient construction, and most of the gardens are abandoned as too expensive to maintain. It is, as a real estate agent might say, very 'historical' and a 'good fixer-upper'.

The kitchen area is pretty nice, but the part of the Abbey I admired most, from the standpoint of 'luxury', was the view of the neighboring farmer's field. (It was extraordinarily green and verdant and picturesque, truly exemplifying "green and pleasant land". I tried to take some photos on my phone, but they never capture the wetness & coloring.)

Otherwise, the best efforts of the hardworking staff at the workshop notwithstanding - and I'm trying not to make this sound like an insult - I would rate the level of 'luxury' as roughly 'student hostel' level. (Which is fully acceptable to me, but anyone expecting 'luxury' or the elite lifestyle of the Western nomenklatura is going to be disappointed. Windsor or Balmoral or a 5-star hotel, this is not.) Indeed, I'm not sure how the place could be in much rougher shape while still being an acceptable setting for a conference. (Once you're down to a big mattress in an empty room, it's hard to go down further without, like, removing electricity and indoor plumbing.)

The virtue of the Abbey is that it can be relatively easily reached from London/the rest of the world by simple public transit routes that even a first-time foreigner can navigate successfully, and that it can house a decent number of people without paying extortionate Oxford hotel rates or forcing people to waste hours a day going back & forth to their own lodgings, creating lots of overhead in coordination. ("Oh, you should talk to Jack about that! Oh, he just called an Uber to his hotel. Never mind.")

Buying it seems entirely reasonable to me, assuming adequate utilization in terms of hosting events. (Which may or may not be the case, but no one here or elsewhere is even attempting to make that argument.) Nor do I see why the absence of any sort of public discussion would be so important or so scandalous, because this is a subject on which 'public discussion' would be pointless - random Internet commenters don't have a better idea of the FHI/EA event calendar or the constraints of Oxford hotel booking than the people who were making the decisions here.

* I don't know if technically it sits on the Abbey parcel or what, because England has lots of weird situations like that, but EA and EA visitors are obviously not getting any good or 'luxury' out of an active church regardless of its de jure status (we made no use of it in any way I saw), and including it in the image is misleading in an ordinary realtor sort of way.
