MichaelDickens

7093 karma
mdickens.me

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).

I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff over there.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (945)

> Are you assuming the deployment of ASI will be analogous to an omnipotent civilisation with values completely disconnected from humans suddenly showing up on Earth?

Something like that, yeah.

> However, that would be very much at odds with historical gradual technological development shaped by human values.

ASI would have a level of autonomy and goal-directedness unlike any previous technology. The case for caring about AI risk doesn't work if you take too much of an outside view; you have to reason about what properties ASI would actually have.

> Donating at the end of the year.

I used to donate mid-year for the reasons you gave. For the last couple of years I've donated at the end of the year instead, because the EA Forum was running a donation election in early December: I wanted to publish my "where I'm donating" post shortly before the election, and I don't like to donate until after I've published that post. But perhaps syncing with the donation election is less important, and I should publish and donate mid-year instead?

A concern with donating to political candidates is that every candidate who puts a major focus on AI safety is a Democrat; funding their campaigns risks further polarizing AI safety along partisan lines. I'm pretty uncertain about how big a deal this is.

Eliezer comes to mind as a positive example:

  • He's pretty explicit about explaining why he believes what he believes.
  • He gives the true reason for his beliefs, not the reason he expects to be most persuasive, even if the true reason is "weird".

> I’ve been a fool trying to influence people who are on the AI industry’s money and glory payroll. I’m going to take my own advice now, write you off, and focus on the moral majority who wants to protect the world.

I've donated $30,000 to PauseAI. Some of your past posts played a role in that, such as "The Case for AI Safety Advocacy to the Public" and "Pausing AI is the only safe approach to digital sentience". I don't think writing off people like me is a good idea.

> You should all be ashamed of your complicity in bringing about potentially world-ending technology.

I am literally donating to PauseAI. I don't think you are being fair. I fully agree that some EAs are directly increasing x-risk by working on AI development, and they should stop doing that. I don't think it's fair to paint all of us with that brush.

Wouldn't this sort of reasoning also say that FTX was justified in committing fraud if they could donate users' money to global health charities? They metaphorically conscripted their users to fight against a great problem. People in the developed world failed to coordinate to fund tractable global health interventions, and FTX attempted to fix this coordination problem by defrauding them.

(I don't think that's an accurate description of what FTX did, but it doesn't matter for the purposes of this analogy.)

I agree. The resolution is that ordinarily-unethical behavior done during wartime is still unethical. (At least in the majority of cases; I don't want to claim there are never exceptions.)

Agreed that extreme power concentration is an important problem, and this is a solid writeup.

Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to ensure it's safe], an option that is notably absent from the article's list. I wrote more here about my views, and about how I wish people would stop ignoring this option. It's bad that the 80K article did not consider what is, IMO, the best idea.

Didn't Sam tell several straightforward lies? For example, he claimed that FTX had enough assets to fully cover all users' account balances, which it didn't, and that Alameda never borrowed users' deposits, which it did.

Good note. Also worth keeping in mind the base rate of companies going under. FTX committing massive fraud was weird; but a young, fast-growing, unprofitable company blowing up was decidedly predictable, and IMO the EA community was banking too hard on FTX money being real.

Plus the planning fallacy, i.e., if someone says they want to do something by some date, then it'll probably happen later than that.

My off-the-cuff guess is

  • 30% chance Anthropic IPOs by end of 2028
  • 20% chance Anthropic IPOs in 2029 or later
  • 50% chance Anthropic never IPOs—because they go under for normal business-y reasons, or they build AGI first, or we're all dead, or whatever