Dawn Drescher

Cofounder @ AI Safety GiveWiki
2551 karma · Joined · Working (6-15 years) · 8303 Bassersdorf, Switzerland
givewiki.org

Bio

I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)

I have a conversation menu and a Calendly for you to pick from! 

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.

How others can help me

GoodX needs: advisors/collaborators for marketing, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.

Please check out my Conversation Menu!

Sequences (2)

Impact Markets
Researchers Answering Questions

Comments (575)

I love this research! Thank you so much for doing it!

My gut reaction to the results is that it's odd that humans are so high up in terms of their capacity for welfare. Just as an uninformative prior, I would've expected us to be somewhere in the middle. Less confidently, I would've expected a similar number of orders of magnitude deviation from the human baseline in either direction, within reason. E.g. +/- ~.5 OOM.

Plus, we are humans, so there's a risk that we're biased in our favor. It could simply be a bias stemming from our ability to empathize with other humans. But it could also be the case that there are countless more markers of sentience that humans don't have (but many other sentient animals do) that we are prone to overlook.

Have you investigated what the sources of this effect might be? There might be any number of biases at work, as I mentioned, but perhaps our lives have become so comfy most of the time that we perceive slight problems (e.g., a disapproving gaze) very strongly, so that when something really bad happens, it feels enormously bad?

(I've in the past explicitly assumed that most beings with a few (million) neurons have a roughly human capacity for welfare – not because I thought that was likely but because I couldn't tell in which direction it was off. Do you maybe already have a defense of the results for people like me?)

In any case, I'll probably just adopt your results into my thinking now. I don't expect them to change my priorities much given all the other factors.

Thank you again! <3

Only half a person per sandal I think!

Even scandal-prone individuals can't survive in a vacuum. (You may be thinking of sandals, not scandals?)

We have sympathies towards both movements, and consider ourselves to take the middle path. We race forward and accelerate as quickly as possible while mentioning safety.

Mentioning safety is a waste of resources that you could direct toward attaching propulsion to asteroids to get them here faster.

In fact, asteroids will inevitably hit earth sooner or later, and if they kill humanity, clearly they are superior to humanity. The true masters of our future lightcone are the asteroids. That which can be destroyed by asteroids ought to be destroyed by asteroids.

True progress is in speeding the inevitable. Resistance is futile.

This post is also a great info hazard. It risks causing impostors with sub-146 IQs (2009 LW survey) to feel adequate!

That's a good point. Time discounting (a “time premium” for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to still improve their welfare, but if there is any nonzero chance, the EV will be overwhelming!

But I also see the value in longtermism because if these Boltzmann brains had positive welfare, it'll be even more phenomenally positive from the vantage points of our descendants millions of years from now!

There is a Swiss canton Appenzell Innerrhoden (AI). Maybe we can hide there and trick the AI into thinking it already invaded it?

Scandals don't just happen in a vacuum. You need to create the right conditions for them. So I suggest:

  1. We spread concern about the riskiness of all altruistic action so that conscientious people (who are often not sufficiently scandal-prone) self-select out of powerful positions and open them up to people with more scandal potential.
  2. We encourage more scathing ad-hom attacks on leadership so that those who take any criticism to heart self-select out of leadership roles.
  3. We make these positions more attractive to scandal-prone people by abandoning cost-effectiveness analyses and instead basing strategy and grantmaking on vibes and relationships.
  4. We further improve the cushiness of these positions by centralizing power and funding around them to thwart criticism and prevent Hayekian diversity and experimentation.
  5. We build stronger relationships with powerful, unscrupulous people and companies by, e.g., helping them with their hiring.
  6. We emphasize in-person networking and move the most valuable networks to some of the most expensive spots in the world. That way access to the network comes with even greater dependency on centralized funding, making it easier to control.

[Meta: I'm not claiming anyone is doing these things on purpose! It would be nice, though, if more people were trying to counter these risk factors for scandals and generally bad epistemics.]

Great summary!

You probably base “Even though this use of funds was unintentional and sounds extremely sketchy, FTX's general counsel testified that FTX's terms of service did not prohibit it” on:

The government didn’t want to focus you on that. Why? Again, the only witness who said he had read the terms of service was Can Sun, the general counsel who had helped to draft it. Even though he was very careful in what he told you, he admitted that nowhere do the terms of service contain language that prevents FTX from loaning customer fiat deposits to Alameda or anyone else.

Can Sun didn't think so. (Unless I misunderstand something.) He said that the margin lending program did allow such lending, but it only held a few hundred million USD, nowhere near enough to explain Alameda's borrowing. He didn't think that FTX or Alameda could've borrowed capital outside the margin lending program because it was owned by the customers.

So I think what the defense lawyer is trying to do here is to argue that the ToS did not explicitly prohibit such borrowing, but he omits that the borrowing was still implicitly prohibited, just as it is generally prohibited to borrow other people's funds without their permission.
