
chinscratch

617 karma · Joined Sep 2023

Posts: 1


Comments: 42

Habryka seems to think there was significant underreaction to shady info: https://forum.effectivealtruism.org/posts/b83Zkz4amoaQC5Hpd/time-article-discussion-effective-altruist-leaders-were?commentId=nGxkHbrikGeTxrLjZ

I think you have to balance cost of false negatives against cost of false positives.

"To be clear, what I'm saying is that SBF would just flat out win, and really easily too; I wouldn't expect a war. The people who had criticized him would be driven out of EA on various grounds; I wouldn't expect EA as a whole to end up fighting SBF; I would expect SBF would probably end up with more control over EA than he had in real life, because he'd be able to purge his critics on various grounds."

What would it take for EA to become the kind of movement where SBF would've lost?

"I don't think that's enough; you'd need to not only fund some investigators anonymously, you'd also need to (a) have good control over selecting the investigators, and (b) ban anybody from paying or influencing investigators non-anonymously, which seems unenforceable. (Also, in real life, I think the investigators would eventually have just assumed that they were being paid by SBF or by Dustin Moskovitz.)"

I agree that the ideal proposal would have answers here. However, this is also starting to sound like a proof that there's no such thing as a clean judicial system, quality investigative journalism, honest scientific research into commercial products like drugs, etc. Remember, it's looking like SBF is going to rot in jail despite all of the money he gave to politicians. The US judicial system is far from perfect, but let's not let the perfect be the enemy of the good.

If EA just isn't capable of trustworthy institutions for some reason, maybe there's some clever way to outsource to an entity with a good track record? Denmark, Finland, and Norway seem to do quite well in international rankings, based on a quick Google. Perhaps OpenAI should've incorporated in Denmark?

"(1) The fraud wouldn't have become publicly known under this norm, so I don't think this actually helps."

If EA disavowed SBF, he wouldn't have been able to use EA to launder his reputation.

"(2) I don't think it would be correct for EA to react strongly in response to the rumors about SBF -- there are similar rumors or conflicts around a very substantial number of famous people, e.g. Zuckerberg vs. the Winklevoss Twins."

In this case it would've been correct, because the rumors were pointing at something real. We know that with the benefit of hindsight. One has to weigh false positives against false negatives.

I'm not saying rumors alone are enough for a disavowal, I'm saying rumors can be enough to trigger investigation.

"(3) Most importantly, how we get from 'see something? say something!' to 'the billionaire sending money to everybody, who has a professional PR firm, somehow ends up losing out' is just a gigantic question mark here. To me, the outcome here is that SBF now has a mandate to drive anybody he can dig up or manufacture dirt on out of EA. (I seem to recall that the sources of the rumors about him went to another failed crypto hedge fund that got sued; I can't find a source, but even if that didn't actually happen, it would be easy for him to make that happen to Lantern Ventures.) (Similarly, I expect that such an 'EA investigative journalist' would have probably been directly paid by SBF, had one existed.)"

I think a war between SBF and EA would have been good for FTX users -- the sooner things come to a head, the fewer depositors lose all their assets. It also would've been good for EA in the long run, since it would be more clear to the public that fraud isn't what we're about.

Your point about conflict of interest for investigative journalists is a good one. Maybe we should fund them anonymously so they don't know which side their bread is buttered on. Maybe the ideal person is a freelancer who's confident they can find other gigs if their relationship with EA breaks down.

I feel like we should also be discussing FTX here. My model of the Lightcone folks is something like:

  1. They kinda knew SBF was sketchy.

  2. They didn't do anything because of diffusion of responsibility (and maybe also fear of reputation warring).

  3. FTX fraud was uncovered.

  4. They resolved to not let diffusion of responsibility/fear of reputation warring stop them from sharing sketchiness info in the future.

If you grant that the Community Health Team is too weak to police the community (they didn't catch SBF), and also that a stronger institution may never emerge (the FTX incident was insufficient to trigger the creation of a stronger institution, so it's hard to imagine what event would be sufficient), there's the question of what "stopgap norms" to have in place until a stronger institution hypothetically emerges.

Even if you think Lightcone misfired here -- if you add FTX to your dataset too, then the "see something? say something!" norm starts looking better overall.

With regard to explicit agreements: One could also argue from the other direction. No one in EA explicitly agreed to safeguard the reputation of other EAs. You say: "If individuals want to give a company a bad review, they can do so publicly online or privately to whomever they want." Do the ethics of "giving Nonlinear a bad review" change depending on whether the person writing the bad review is a person in the EA community or outside of it? Depending on whether the bad review is written on the EA Forum vs some other website?

Suppose someone raised their hand and offered to work as an investigative journalist funded by and for the EA community. It seems fairly absurd to tell e.g. an investigative journalist from ProPublica that they're only allowed to cover subjects who explicitly agreed to be covered. Why would such a hypothetical EA-funded investigative journalist be any different?

The best argument I can think of against such an EA investigative journalist is that it seems unfair to pick on people who are putting so much time and money towards doing good. However, insofar as EAs involve themselves in public issues, public scrutiny will often be warranted. I think the best policy would be: the journalist's job is to cover people both inside and outside the EA community, who are working in areas of public and EA interest. They aspire to neutrality in their coverage, so the valence of their stories isn't affected by a person's EA affiliation.

We should also discuss what "stopgap norms" to have in place until something actually happens, because if FTX is any guide, nothing will ever happen. (Perhaps the simplest stopgap norm is: if Ben Pace is concerned about Nonlinear, he should hire a pro investigative journalist on the spot to look into it. This looks like a straightforward arbitrage anyway: Ben says he values his time at $800K/year, which works out to roughly $400/hour on a 2,000-hour work year, so a professional's fee would likely be cheaper than the time he'd otherwise spend investigating himself.)

IIRC, Truman said something at the United Nations like "we need to keep the world free from war", right after having fought one of the largest wars in history (WW2). Doesn't seem that weird to me.

So you endorse "always cooperate" over "tit-for-tat" in the Prisoner's Dilemma?

Seems to me there are 2 consistent positions here:

  • The thing is bad, in which case the person who did it first is worse. (They were the first to defect.)

  • The thing is OK, in which case the person who did it second did nothing wrong.

I don't think it's particularly blameworthy to both (a) participate in a defect/defect equilibrium, and (b) try to coordinate a move away from it.
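
To make the comparison concrete, here's a minimal sketch in Python of why "tit-for-tat" is usually preferred to "always cooperate" (assuming the conventional payoff values of 5 for defecting on a cooperator, 3 for mutual cooperation, 1 for mutual defection, and 0 for the exploited cooperator; the strategy functions are my own illustration, not anything from the thread):

    # Iterated Prisoner's Dilemma with conventional payoffs:
    # T=5 (defect on a cooperator), R=3 (mutual cooperation),
    # P=1 (mutual defection), S=0 (cooperate with a defector).
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def play(strat_a, strat_b, rounds=10):
        """Run an iterated game; each strategy sees the opponent's previous move."""
        score_a = score_b = 0
        last_a = last_b = None
        for _ in range(rounds):
            move_a, move_b = strat_a(last_b), strat_b(last_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            last_a, last_b = move_a, move_b
        return score_a, score_b

    always_cooperate = lambda opp_last: 'C'
    always_defect = lambda opp_last: 'D'
    tit_for_tat = lambda opp_last: 'C' if opp_last is None else opp_last

    print(play(always_cooperate, always_defect))  # (0, 50): endlessly exploited
    print(play(tit_for_tat, always_defect))       # (9, 14): loses only round one
    print(play(tit_for_tat, tit_for_tat))         # (30, 30): mutual cooperation

Against a defector, "always cooperate" pays the sucker's payoff every round, while "tit-for-tat" loses only the first round -- yet two tit-for-tat players still get full mutual cooperation.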

EDIT: A couple other points

  1. I know the payoff structure here might not be an actual Prisoner's Dilemma, but I think my point still stands.

  2. David's consistent use of "doing X" seems important here. If someone does X (e.g. blows the whistle on unethical practices), and someone else does Y in response (e.g. fires the person who blew the whistle), that's a different situation.

In my experience, observing someone getting dogpiled and getting dogpiled yourself feel very different. Most internet users have seen others get dogpiled hundreds of times, but may never have been dogpiled themselves.

Even if you have been dogpiled yourself, there's a separate skill in remembering what it felt like when you were dogpiled, while observing someone else getting dogpiled. For example, every time I got dogpiled myself, I think I would've greatly appreciated it if someone had reached out to me via PM and said "yo, are you doing OK?" But it has never occurred to me to do this when observing someone else getting dogpiled -- I just think to myself "hm, seems like a pretty clear case of unfair dogpiling" and close the tab.

In any case, I've found getting dogpiled myself to be surprisingly stressful, relative to the experience of observing it -- and I usually think of myself as fairly willing to be unpopular. (For example, I once attended a large protest as the only counter-protester, on my own initiative.)

It's very easy to say in the abstract: "If I was getting dogpiled, I would just focus on the facts. I would be very self-aware and sensitive, I wouldn't dismiss anyone, I wouldn't say anything bad about my accusers (even if I had serious negative information about them), I wouldn't remind people about scout mindset or anything like that." I think it takes an unusual person to maintain that sort of equanimity when it feels like all of their friends are abandoning them and their career is falling apart. It's not something most of us have practice with. And I hesitate to draw strong inferences about someone's character from their behavior in this situation.

[Note: I'm using the term "dogpiled" because unlike terms like "cancelled", "called out", "scapegoated", "brought to justice", "mobbed", "harassed", etc. it doesn't have any valence WRT whether the person/group is guilty or innocent, and my point is orthogonal to that.]

"Another possibility is that Sam came to see EA as an incredibly flawed movement, to the point where he wanted EAs like Toner off his board, and just hasn't elaborated the details of his view publicly. See these tweets from 2022 for example."

I think Sam is corrupted by self-interest and that's the primary explanation here, but I actually agree that EA is pretty flawed. (Better than the competition, but still pretty flawed.) As a specific issue OpenAI might have with EA, I notice that EA seems significantly more interested in condemning OpenAI publicly than critiquing the technical details of their alignment plans. It seems like EAs historically either want to suck up to OpenAI or condemn them, without a lot of detailed technical engagement in between.

I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):

the reason for starting OpenAI was to create a counterweight to Google and DeepMind, which at the time had two-thirds of all AI talent and basically infinite money and compute. And there was no counterweight. It was a unipolar world. And Larry Page and I used to be very close friends, and I would stay at his house, and I would talk to Larry into the late hours of the night about AI safety. And it became apparent to me that Larry [Page] did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciest for being pro-humanity, as in a racist, but for species. So I’m like, “Wait a second, what side are you on, Larry?” And then I’m like, okay, listen, this guy’s calling me a speciest. He doesn’t care about AI safety. We’ve got to have some counterpoint here because this seems like we could be, this is no good.

I'm posting here because I remember reading a claim that Elon started OpenAI after getting bad vibes from Demis Hassabis. But he claims that his actual motivation was that Larry Page is an extinctionist. That seems like a better reason.

"I feel that we haven't taken seriously the lessons from SBF given what happened at OpenAI."

How specifically? Seems to me you could easily argue that SBF should make us more skeptical of charismatic leaders like Sam Altman.
