Background in cognitive science. I run a workshop that teaches methods for managing strong disagreements (open to non-EA people too). Also community building.
Interested in cyborgism and AI safety via debate.
https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4
I often get tremendous amounts of help from people who know how to program and are enthusiastic about helping out over an evening.
"Goes well for humans" (i.e for a very long time) worlds are mostly worlds where AGI is fully theoretically and empirically aligned with a CEV-shaped alignment target, which for me logically requires animal welfare. (I also currently believe those worlds to be implausible because no company seems focused on this)
I struggle to imagine any deliberative or reflective-preference-oriented process that does not give the right answer to the animal welfare question. If such a process doesn't care about non-human animals, then either animals are not sentient, or CEV is misaligned with human interests and some humans will die because they don't check the right boxes (and sentience isn't one of the boxes), or morality is weird and it's actually fine to torture sentient beings (possible but implausible).
There are other worlds where "goes well for humans" means corrigible and aligned to some unaltered human values. In those worlds, I expect animals to take a hit in the short term, and possibly in the very long term if the principal does not care about animal suffering. I also expect humanity to do other things that are morally wrong without suspecting them to be wrong, and to die counterfactually much sooner.
Hopefully Americans will take this as a strong signal that their administration fully supports mass surveillance and autonomous weapons. I could scarcely think of a clearer one.
Edit: After reading different sides of the story, I'm actually less certain about that. But given this reaction, I'm still updating towards more probability that "unfettered access" meant "occasionally unlawful use" rather than "lawful use, but there's drama between us". I would certainly not have lent it more probability if the administration hadn't reacted so abruptly.
New edit: After reading more, I'm pretty convinced that both issues (or any action that I'd personally consider as falling into either category) are pretty much legal in the US.
Strong upvoted.
I find this refreshing and a return to what EA is about, and would definitely point to it as "the sort of experience I expect an EA org to allow for". I also appreciate the moral character you're showing. I want more testimonies like this on the forum to help people get [back] in touch with this spirit.
I sometimes feel like I had partially lost my initial ambition, and your post corrected for some value drift. Thank you.
Sorry, I understand this is a bit confusing.
I was hesitant to spell it out, because I'm afraid of building a strawman:
My interpretation is that some people have an issue with non-self-oriented wishes or desires, because they can feel like virtue-signalling or guilt-tripping. Expressing things such as "I really want a world without malaria" can be interpreted as condoning the use of suffering as a negotiation tool.
I.e.:
Step 1: People are suffering from malaria.
Step 2: This prompts me to fight malaria.
Step 3: Someone concludes that suffering causes me to help them.
Step 4: They inflict suffering on themselves.
Step 5: This prompts me to help them regardless.
Step 6: The world is now made up of people who self-inflict suffering as a way to manipulate others, which sucks.
I'm not sure this is an accurate reconstruction, but it's the best I can do.
I'd rather not encourage arguing with this version of the argument, since I'm not a genuine proponent.
Be helpful, considerate, generous, and genuine in your belief that (for example) malaria is bad and a world without malaria is a world you want to see.
Admittedly bitter take: you'd be surprised to learn this is far from a consensus view in some EA circles. I got surprising reactions when using this example.
There seems to have been a surge of interest in AI risk and safety, culminating on August 14th and far surpassing all previous levels of interest.
I'm not sure what caused this. [Update: the EU Code of Practice?] Google Trends lists seemingly topic-specific drivers (the "chain-of-thought as a fragile opportunity" paper from several AI companies, an apparent interest in AI safety in China), while one could intuitively point to events that happened over the summer (several papers, the suicide case, etc.).
Marcus, Austin, thank you so much! This is exactly the sort of tool Effective Giving Initiatives sorely lack whenever they're asked about AI safety (so far the answer has been "well, we spoke to an evaluator and they supported that org"). @Romain Barbe🔸 hopefully that'll inspire you!
On my side, I'd be happy to compare this to the cost-effectiveness of reaching out to established YouTubers and encouraging them to talk about a specific topic. I'd guess it can turn out more cost-effective, per intervention, than a full-blown channel. I'm unwilling to discuss it at length, but France has some pretty impressive examples.
I'll give it a try!
Open question: would it be useful to frame this as "impact insurance" for people in impactful careers?
As in:
- I expect this goal to be good for the world (say, ~100 WELLBYs in expectation)
- If I don't achieve this goal, then I definitely owe something that's at least comparably good for the world (say, ~$200 to PureEarth, since I've chosen WELLBYs)
(or maybe at least half as good)
I think using it this way could help people who genuinely hesitate between impactful work and impactful donations.