
Closed Limelike Curves


Comments

Hi, thanks for this! As someone who's very interested in social choice and mechanism design, I'll make more suggestions on the submissions form later. Social choice and mechanism design are the branches of economics that ask "How can we extend decision theory to society as a whole, to make rational social decisions?" and "How do we do that if people can lie?", respectively.

Here's one very important recommendation I will make explicitly here, though: TALK TO PEOPLE IN MECHANISM DESIGN AND SOCIAL CHOICE, OR EVERYTHING WILL EXPLODE. MESSING UP EVEN MINOR DETAILS CAN MAKE EVERYTHING WAY WORSE.

If you don't believe me, here's an example: how you handle truncated ballots and unranked candidates in the Borda count can take it from "top-tier voting rule" to "complete disaster". With Borda's original truncation rule (candidates not listed on a ballot get 0 points), the Borda count is pretty good! But if you require a complete ranking, i.e. every voter has to list all the candidates going from best to worst, strategic voting wrecks the rule. The optimal strategy is to rank your favorite first, bury their strongest rivals at the very bottom of your ballot, and fill the middle ranks with the weakest candidates you can find. If everyone realizes this, the winner is effectively chosen at random, and can even end up being a candidate who everyone agrees is the absolute worst option.
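Here's a minimal sketch of the truncation rule as I read it (my own illustration, not part of the original comment): a ballot may rank only some of the candidates, the top of a ballot that lists n candidates gets n - 1 points, and anything unlisted gets 0 from that ballot. The exact point assignment for partial ballots is my assumption.

```python
# Illustrative sketch of one reading of Borda's truncation rule:
# ballots may be partial, and unlisted candidates get 0 points.

def borda_scores(ballots, candidates):
    """ballots: list of rankings (best first), possibly truncated."""
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            # The top of a ballot listing n candidates earns n - 1 points,
            # the next earns n - 2, ..., the last listed earns 0.
            scores[candidate] += (n - 1) - position
    return scores

# Voters can rank as few candidates as they like, so nobody is forced
# to slot rival front-runners into a compulsory full ranking.
ballots = [["A"], ["A", "B"], ["B", "C", "A"]]
print(borda_scores(ballots, ["A", "B", "C"]))  # {'A': 1, 'B': 2, 'C': 1}
```

Because nobody is forced to place rival front-runners anywhere on their ballot, the burying strategy that wrecks the compulsory-full-ranking version loses most of its force.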

This sounds like a job for someone in mechanism design.

In my experience, they're mostly just impulsive and have already written their bottom line ("I want to work on AI projects that look cool to me"), and after that they come up with excuses to justify this to themselves.

Anyone around here happen to know any investigative reporters or journalists? I've happened to hit on a case of an influential nonprofit CEO engaging in unethical behavior, but I don't have the kind of time or background I'd need to investigate this thoroughly.

I had substantial discussions with people on this, even prior to Sam Altman's firing; every time I mentioned concerns about Sam Altman's personal integrity, people dismissed them as paranoia.

In OpenAI's earliest days, the EA community provided critical funds and support that allowed it to be established, despite several warning signs having already appeared regarding Sam Altman's previous behavior at Y Combinator and Loopt.

I think this is unlike the SBF situation: here there is a need for some soul-searching of the form "how did the EA community let this happen?", whereas there was very little need for that in the case of SBF.

Like I said, you investigate someone before giving them money, not after receiving money from them. The answer to SBF is just "we never investigated him because we never needed to; the people who should have investigated him were his investors".

With Sam Altman, there's a serious question we need to answer here. Why did EAs choose to sink a substantial amount of capital and talent into a company run by a person with such low integrity?

The usefulness of the "bad people" label is exactly my point here. The fact of the matter is some people are bad, no matter what excuses they come up with. For example, Adolf Hitler was clearly a bad person, regardless of his belief that he was the single greatest and most ethical human being who had ever lived. The argument that all people have an equally strong moral compass is not tenable.

More than that, when I say "Sam Altman is a bad person", I don't mean "Sam Altman's internal monologue is just him thinking over and over again 'I want to destroy the world'". I mean "Sam Altman's internal monologue is really good at coming up with excuses for unethical behavior".

Like:

I'm not concerned that Dario Amodei will consciously think to himself: "I'll go ahead and press this astronomically net-negative button over here because it will make me more powerful". But he can easily end up pressing such a button anyway.

I would like to state, for the record, that if Sam Altman pushes a "50% chance of making humans extinct" button, this makes him a bad person, no matter what he's thinking to himself. Personally I would just not press that button.

If I had to guess, the EA community is probably a bit worse at this than most communities because of A) bad social skills and B) high trust.

This seems like a good tradeoff in general. I don't think we should be putting more emphasis on smooth-talking CEOs—which is what got us into the OpenAI mess in the first place. 

But at some point, defending Sam Altman is just charlie_brown_football.jpg

In the conversations I had with them, they very clearly understood the charges against him and what he'd done. The issue was they were completely unable to pass judgment on him as a person.

This is a good trait 95% of the time. Most people are too quick to pass judgment. This is especially true because 95% of people pass judgment based on vibes like "Bob seems weird and creepy" instead of concrete actions like "Bob has been fired from 3 of his last 4 jobs for theft".

However, the fact of the matter is some people are bad. For example, Adolf Hitler was clearly a bad person. Bob probably isn't very honest. Sam Altman's behavior is mostly motivated by a desire for money and power. This is true even if Sam Altman has somehow tricked himself into thinking his actions are good. Regardless of his internal monologue he's still acting to maximize his money and power.

EAs often have trouble going "Yup, that's a bad person" when they see someone who's very blatantly a bad person.

"Trust but verify" is Reagan's famous line on this.

Most EAs would agree with "90% of people are basically trying to do the right thing". But most of them have a very difficult time acting as though there's a 10% chance anyone they're talking to is an asshole. You shouldn't be expecting people to be assholes, but you should be considering the 10% chance they are and updating that probability based on evidence. Maya Angelou wrote "If someone shows you who they are, believe them the first time".

As a Bayesian who recognizes the importance of not updating too quickly away from your prior, I'd like to amend this to "If someone shows you who they are, believe them the 2nd or 3rd time they release a model that substantially increases the probability we're all going to die".
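As a toy illustration of that kind of updating (the numbers here are mine and purely illustrative), here's a single application of Bayes' rule starting from a 10% prior:

```python
# Illustrative sketch: updating a 10% prior that someone is acting in
# bad faith after one piece of evidence, using Bayes' rule.

def posterior(prior, p_evidence_if_bad, p_evidence_if_good):
    """P(bad | evidence) via Bayes' rule."""
    p_evidence = p_evidence_if_bad * prior + p_evidence_if_good * (1 - prior)
    return p_evidence_if_bad * prior / p_evidence

p = 0.10  # prior: ~10% chance this person is acting in bad faith
# Assume (my number) the evidence, e.g. a credible report of past
# dishonesty, is 6x more likely if they really are acting in bad faith.
p = posterior(p, p_evidence_if_bad=0.6, p_evidence_if_good=0.1)
print(round(p, 2))  # ~0.4 after a single observation
```

The point isn't the specific numbers; it's that one or two credible observations can move a 10% prior a long way, without requiring you to assume the worst of everyone up front.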

The empirical track record is that the top 3 AI research labs (Anthropic, DeepMind, and OpenAI) were all started by people worried that AI would be unsafe, who then went on to design and implement a bunch of unsafe AIs.
