What’s a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems, and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately, as if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, the lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic, positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
I think the problem is that I just don't have a grand vision of the future I am trying to contribute to.
For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and then make it happen. I'm much more optimistic about marginal improvements that mitigate specific problems (e.g. eradicating smallpox didn't cure all illness).
How much we can help doesn't depend on how awful or how great the world is: we can save the drowning child whether there are a billion more children who are drowning or a billion more who are thriving. To the drowning child, the drowning is just as real, and so is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) rather than "EA-specific" things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.
I think the sort of world that could be achieved by massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy's CEO, lays out an amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don't want to give something up to fund charities, as donating would require; but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversations that start in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who actually hold these views, ensuring authenticity.
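To make the idea a bit more concrete, here is a minimal sketch of how one such persona chatbot could be wired up, assuming the OpenAI Python SDK. The persona text, model name, and overall structure are illustrative placeholders I made up, not part of the idea itself.

```python
# Minimal sketch of a persona chatbot for exploring opposing political views.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY in the
# environment; the persona and model name are placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = """You are 'Dan', a 54-year-old farmer.
Explain, calmly and in the first person, how your life experiences led you to
your political views. Share your backstory when asked, answer questions
honestly, and never insult the user."""

def ask_persona(user_message: str) -> str:
    # One turn of conversation: persona as system prompt, user's question after it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_persona("Why do you vote the way you do?"))
```

A real app would of course need persona libraries written together with people who hold the views, moderation, and memory across turns; this only shows the core loop.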
People already use chatbots, and they will become much better. I imagine they will eventually also incorporate audio and video, so it will feel like talking to a real person: very engaging. I want that technology to be used for good.
EAG and covid [edit: solved, I'm not attending the EAG (I'm still testing positive as of Saturday)]
I have many meetings planned for EAG London, which starts tomorrow, but I'm currently testing very faintly positive for covid. I feel totally fine. I'm looking for a bit of advice on what to do. I only care about doing what's best for altruistic impact. Some of my meetings are important for my current project, and trying to reschedule them online would delay and complicate some things a little bit. I will also need to use my laptop during meetings to take notes. I first tested positive on Monday evening, and since then all my tests have been very faintly positive. No symptoms. I guess my options are roughly:
In terms of advice from the EA Global team: we don't have a strict policy on covid, and you can use your best judgement. You may wish to test/mask.
I (Iz) would personally ask that you inform your 1:1 meeting partners and that you aren't unmasked inside whilst still testing positive.
Thanks,
Iz
Most Wild Animal Welfare (WAW) researchers I talked to thought that we are unlikely to find WAW interventions that would be directly competitive with farmed animal welfare interventions in terms of direct short-term cost-effectiveness. After spending some months trying to find such interventions myself, I tentatively agree. In this text, I will try to explain why.
I spent some months trying to find a WAW intervention that is:
The first step in the process was listing all potential interventions. Even though many people contributed to it, I found this list to be underwhelming (unfortunately, I don't think I can share the list without asking for permission from everyone who contributed to it). I feel that coming up with plausible interventions for farmed animals is much easier.
I'd be grateful if some people could fill out this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're wondering why you might want to do it: you'll be helping me to estimate plausible weights for the different categories of pain used by the Welfare Footprint Project. This will help me to summarise their conclusions into easily digestible statements like "a switch from battery cages to cage-free reduces the suffering of hens by at least 60%" and with some cost-effectiveness estimates. Thanks ❤️
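For what it's worth, here is a rough sketch of how such weights could turn time-in-pain estimates into a single percentage, using the Welfare Footprint Project's four pain categories. All the hours and weights below are made-up placeholders for illustration, not WFP estimates or survey results.

```python
# Illustrative (made-up) numbers showing how pain-category weights could turn
# "time in pain" estimates into a single percentage reduction.

hours_in_pain = {
    # category: (battery cage, cage-free) hours per hen -- hypothetical figures
    "annoying":     (4000, 2500),
    "hurtful":      (1200, 500),
    "disabling":    (300, 80),
    "excruciating": (1.0, 0.5),
}

# Hypothetical weights: how many "annoying-equivalent" hours one hour of each
# category is worth. Estimating weights like these is the point of the survey.
weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10000}

def weighted_suffering(scenario: int) -> float:
    # scenario 0 = battery cage, 1 = cage-free
    return sum(weights[c] * hours_in_pain[c][scenario] for c in weights)

cage, cage_free = weighted_suffering(0), weighted_suffering(1)
reduction = 1 - cage_free / cage
print(f"Estimated reduction in suffering per hen: {reduction:.0%}")
```

With different weights the headline percentage can move a lot, which is exactly why I want survey responses rather than just my own guesses.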
Research grants with outcome-based payouts
If I 1) had savings that cover over a year of my living expenses, 2) wasn't already employed at an EA think tank, and 3) wanted to do EA research independently, I would probably apply to EA Funds to do research on unspecified topics (if they would allow me to do that). I would ask them to give me the funds not now, but after the research period is over (let's say 6 months). At the end of the research period, I would produce a text that shows instances where I think I had impact and includes reasoning about why what I did may have had impact. Note that this could include not just published articles, but also comments or in-person communications with trusted advocates that changed how a certain organization does something, reviews of others' work, Wikipedia article edits, etc. The amount of funds I would receive would depend on the EA Funds manager's opinion on how good or impactful my work was (or how good a chance it had of being impactful). I imagine that there would be pre-agreed sums of money the manager could choose from. E.g.:
Q: Has anyone estimated the risk of catching covid at EAG London this year? Is it more like 5%, 20%, 50%, or 80%? I still haven't decided whether to go (the only argument for not going being covid), and knowing the risk would make the decision a lot easier. Travelling is not a concern since I live in London, not that far from the venue.
Hi Saulius, I've done 3 very basic estimates here:
https://docs.google.com/spreadsheets/d/1C6lU4klgisqG150-yR_jZjt253sVrgp2umIbgkUbKbU/edit#gid=0
To get, e.g., more than 20% probability, it seems like you'd have to make some very bad assumptions (weirdly high base rates of covid amongst presumptive attendees, combined with incompetence or malice when it comes to testing). It seems more like a 1-5% risk.
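For intuition, a back-of-envelope version of this kind of estimate might look like the sketch below. The prevalence, contact count, and per-contact transmission probability are assumptions I made up for illustration, not numbers from the spreadsheet.

```python
# Back-of-envelope risk of catching covid at a multi-day conference.
# All inputs are illustrative assumptions, not figures from the linked sheet.

prevalence = 0.02              # assumed share of attendees who are infectious
close_contacts = 30            # assumed number of close/prolonged contacts
p_transmit_per_contact = 0.05  # assumed chance one infectious contact infects you

# Probability that a random contact is infectious AND transmits to you.
p_per_contact = prevalence * p_transmit_per_contact

# Probability of catching covid from at least one of the contacts.
p_catch = 1 - (1 - p_per_contact) ** close_contacts
print(f"Estimated risk of catching covid: {p_catch:.1%}")  # ~3% with these inputs
```

Changing any one of these assumptions by a factor of a few moves the answer, but getting above 20% requires fairly extreme inputs, which is the point made above.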
I sometimes meet people who claim to be vegetarians (they don't eat meat but consume milk and eggs) out of a desire to help animals. If appropriate, I show them the http://ethical.diet/ website and explain that the production of eggs likely requires more suffering per calorie than most commonly consumed meat products. Hence, if they care about animals, avoiding eggs should be a priority. If they say that this is too many food products to give up, I suggest that perhaps instead of eating eggs, they could occasionally consume some beef (although that is bad for the environment). I think that the production of beef requires less suffering per calorie, even though I'm unsure how to compare suffering between different animals. In general, I'm skeptical about dietary change advocacy, but my intuition is that talking about this with vegetarians in situations where it feels appropriate is worth the effort. But I'm uncertain, and either way, I don't think this is very important.
A tip for writing EA forum posts with footnotes
First, press on your nickname in the top right corner, go to Edit Settings, and make sure that the checkbox Activate Markdown Editor is checked. Then write your post in Google Docs and use the Google Docs to Markdown add-on to convert it to markdown. If you then paste the resulting markdown into the EA Forum editor and save it, you will see your text with footnotes. It might also have some unnecessary text that you should delete.
Tables and images
If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/) and write code like `![image description](https://i.imgur.com/your-image.png)` in your markdown. Of course, the image address should be changed to your image's address. Currently, the only way to add tables is to take a screenshot of a table and add an image of it.
As I understand it, there will be a new EA Forum editor some time soon and all of this will no longer be needed, but for now this is how I make my EA Forum posts.
Why don’t we fund movies and documentaries that explore EA topics?
It seems to me that the way society thinks about the future is largely shaped by movies and documentaries. Why don't we create movies that shape these views in a way that's more realistic and useful? E.g., I haven't read the discussion on whether Terminator is or isn't a good comparison for AI risks, but it's almost certainly not a perfect comparison. Why don't we create a better one that we could point people to? Something that would explore many important points. Now that EA has more money…
If I were to read one of the EA-related books (e.g. Doing Good Better, The Most Good You Can Do, The Life You Can Save, The Precipice, Superintelligence, etc.), I would consider writing or improving a summary of the book on Wikipedia while reading it, in a way that conveys the main points well. It could help you digest the book better and help others understand the ideas a bit. You could do it in English, and maybe in some other language too. To see whether it's worth putting in the effort, you can check out the Wikipedia pageview statistics of the books I mentioned.
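If you want to check pageviews programmatically rather than through the web tools, something like the sketch below might work. It assumes the Wikimedia Pageviews REST API with the endpoint format as I remember it (worth double-checking against the official API docs), and the article title is just an example that has to match the exact Wikipedia page name.

```python
# Rough sketch: summing monthly pageviews for a Wikipedia article via the
# Wikimedia Pageviews REST API. The endpoint format is my best recollection
# and should be verified against the API documentation before relying on it.
import requests

def total_views(article: str, start: str = "20240101", end: str = "20241231") -> int:
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"en.wikipedia/all-access/all-agents/{article}/monthly/{start}/{end}"
    )
    resp = requests.get(url, headers={"User-Agent": "book-pageviews-sketch/0.1"})
    resp.raise_for_status()
    return sum(item["views"] for item in resp.json()["items"])

if __name__ == "__main__":
    # Example title; it must match the exact Wikipedia article name.
    print("Doing_Good_Better:", total_views("Doing_Good_Better"))
```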
Shower thought, probably not new: some EAs think that expanding the moral circle to include digital minds should be a priority. But the more agents care about the suffering of digital minds, the more likely it is that some agent that doesn't care about it will use creating vast amounts of digital suffering as a threat to make other agents do something. To make the threat more credible, in at least some cases it may follow through, although I don't know what the most rational strategy here is. Could this be a dominant consideration that makes the expected value of this kind of moral circle expansion negative?
This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and achieve a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible.
Perhaps the reason we don't see this happening is that nobody really wants to influence vegans alone. There aren't many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals. There isn't much that an agent could gain from a threat.
I imagine the same might be true of digital minds. If it's anything like the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way, and so there wouldn't be much of a reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or maybe within one powerful company), a competitor or opponent might then have a real use for threatening the digital-mind welfarists. Such an unequal distribution of digital-mind welfarists seems quite unlikely, though.
At any rate, this might be a relevant consideration for other types of moral circle expansion, too.
Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn't happen for multiple reasons.
Interestingly, there is at least one instance where this apparently has happened. (It's possible it was just a joke, though.) There was even a law review article about the incident.