TL;DR: we’re conducting a survey about attitudes towards AI risks, aimed at people in the EA and rationality communities. We're interested in responses whether or not you're involved in AI safety. Google Forms link here.

Esben Kran and I previously published an older version of this survey, which we've since revised based on feedback. If you’ve completed a previous version, you don’t need to fill it out again. See also Esben's question on LessWrong.

Motivation

  • Recently, there has been some discussion about how to present arguments about AI safety and existential risks more generally
  • One disagreement is about how receptive people are to existing arguments, which may depend on personal knowledge/background, how the arguments are presented, etc.
  • We hope to take a first step towards a more empirical approach by gathering information about existing opinions and using it to inform outreach
    • While other surveys exist, ours focuses more on perceptions within the EA and rationality communities (not just among researchers), and on AI risk arguments in particular
    • We also think of this as a cheap test for similar projects in the future

The Survey

  • We expect this to take 5-10 min to complete, and hope to receive around 100 responses in total
  • Link to the survey
  • We're hoping to receive responses whether or not you're interested in AI safety

Expected Output

  • Through the survey, we hope to:
    • Get a better understanding of how personal background and particular arguments contribute to perception of AI safety as a field, and to use this as a rough guide for AI safety outreach
    • Test the feasibility of similar projects
  • We intend to publish the results and our analysis on LessWrong and the EA Forum
  • Note that this is still quite experimental - we welcome questions and feedback!
    • While we have done some user tests of the survey, we fully expect there to be things we missed or that are ambiguous
    • If you’ve already filled out the survey or given us feedback, thank you!

Comments



When coming up with a similar project,* I thought the first step should be to conduct exploratory interviews with EAs that would reveal their hypotheses about the psychological factors that may go into one's decision to take AI safety seriously. My guess would be that ideological orientation would explain the most variance.

*which I most likely won't realize (98 %) 
Edit: My project has been accepted for the CHERI summer research program, so I'll keep you posted!

That's a very interesting project. I'd be very curious to see the finished product. That has become a frequently discussed aspect of AI safety. One member of my panel is a significant advocate of the importance of AI risk issues, and another is quite skeptical and reacts quite negatively to any discussion that approaches the A*I word ("quite" may be a weak way of putting it).

But concerning policy communication, I think those are important issues to understand and pinpoint. The variance is certainly strange. 

Side note: as a first-time poster, I realized while looking at your project that I failed to include a TL;DR and a summary of the expected output in mine. I'll try to edit them in, or include them in my next post, I suppose.
