
I have a question about possible donation opportunities in AI. From my understanding, AI research is not underfunded in general, and AI safety research is mostly focused on the long-term risks of AI. In that light, I am very curious what you think about the following.

I received a question from someone who is worried about the short-term risks coming from AI. His arguments are along the following lines: we currently observe serious destabilization of society and democracy caused by social media algorithms. Over the past months a lot has been written about this, e.g. that it contributes to a further rise of populist parties. These parties are often against additional climate change measures, against effective global cooperation on other pressing problems, and more aggressive on international security. In this way, polarization driven by social media algorithms could increase potential short-term X-risks like climate change, nuclear war, and even biorisks and AI.

Could you answer the following questions?

  • Do you think that these short-term risks of AI are somewhat neglected within the EA community?
  • Are there any concrete charities we deem effective at countering these AI risks, e.g. by making citizens more resilient to misinformation?
  • What do we think about the widely hailed Center for Humane Technology?

Thank you all for your responses!


1 Answer

My off-the-cuff answers:
  • Yes, the EA community neglects these things in the sense that it prioritizes other things. However, I think it is right to do so. It's definitely a very important, tractable, and neglected issue, but not as important or neglected as AI alignment, for example. I am not super confident in this judgment and would be happy to see more discussion/analysis. In fact, I'm currently drafting a post on a related topic (persuasion tools).
  • I don't know, but I'd be interested to see research into this question. I've heard of a few charities and activist groups working on this stuff but don't have a good sense of how effective they are.
  • I don't know much about them; I saw their film The Social Dilemma and liked it.

Thanks! I would love to see more opinions on your first argument: 

  • Do we believe that there is no significant increase in X-risk (no scale)?
  • Do we believe there is nothing we can do about it (not solvable)?
  • Do we believe there are already many well-funded parties working on this issue (not neglected)?
kokotajlod
I can't speak for anyone else, but for me:

  • Short-term AI risks like you mention definitely increase X-risk, because they make it harder to solve AI risk (and other x-risks too, though I think those are less probable).
  • I currently think there are things we can do about it, but they seem difficult: figuring out what regulations would be good and then successfully getting them passed, probably against opposition, and definitely against competition from other interest groups with other issues.
  • It's certainly a neglected issue compared to many hot-button political topics. I would love to see more attention paid to it and more smart people working on it. I just think it's probably not more neglected than AI risk reduction.

Basically, I think this stuff is currently at the stage of "there should be a couple of EAs seriously investigating this, to see how probable and large the danger is and to brainstorm tractable solutions." If you want to be such an EA, I encourage you to do so, and I would be happy to read and give comments on drafts, video chat to discuss, etc. If no one else were doing it, I might even do it myself. (Like I said, I am working on a post about persuasion tools, motivated by the feeling that someone should be talking about this...) I think such an investigation would probably only confirm my current opinions (yup, we should focus on AI risk reduction directly rather than on raising the sanity waterline by reducing short-term risk), but there's a decent chance it would change my mind and make me recommend that more people switch from AI risk work to this.
Jan-Willem
Thanks, great response kokotajlod. Does anyone know whether there are already other EAs seriously investigating this, to see how probable and large the danger is and to brainstorm tractable solutions? At the moment I am quite busy with community building work for EA Netherlands, but I would love to be part of a smaller group discussing this. I am relatively new to this forum; what would be the best way to find collaborators for this?
kokotajlod
Here are some people you could reach out to:

  • Stefan Schubert (IIRC he is skeptical of this sort of thing, so maybe he'll be a good addition to the conversation)
  • Mojmir Stehlik (he's been thinking about polarization)
  • David Althaus (he's been thinking about forecasting platforms as a potentially tractable and scalable intervention to raise the sanity waterline)

There are probably a bunch of other people worth talking to, but these are the ones I know of off the top of my head.
Jan-Willem
Great, thanks! Did you already listen to https://80000hours.org/podcast/episodes/tristan-harris-changing-incentives-social-media/? It's a new 80k episode, partially dedicated to this topic.
kokotajlod
Not yet, thanks for introducing it to me!
3 Comments

A couple of resources that may be of interest here:

- The work of Aviv Ovadya of the Thoughtful Technology Project; I don't think he's an EA (he may be, but it hasn't come up in my discussions with him): https://aviv.me/

- CSER's recent report with the Alan Turing Institute and DSTL, which isn't specific to AI and social media algorithms but addresses these among other issues in crisis response:
"Tackling threats to informed decisionmaking in democratic societies"
https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

- Recommendations for reducing malicious use of machine learning in synthetic media (Thoughtful Technology Project's Aviv Ovadya and CFI's Jess Whittlestone)
https://arxiv.org/pdf/1907.11274.pdf

- And a short review of some recent research on online targeting harms by CFI researchers

https://www.repository.cam.ac.uk/bitstream/handle/1810/296167/CDEI%20Submission%20on%20Targeting%202019.pdf?sequence=1&isAllowed=y

@Sean_o_h, just seeing this now while searching for my name on the forum, actually to find a talk I did for an EA community! Thanks for the shoutout.

For context, while I've not been super active community-wise, and I don't find identities, EA or otherwise, particularly useful to my work, I definitely fit, e.g., all the EA definitions as outlined by CEA, use ITN, etc.
