From David Moss, Tom Ash, and the rest of the EA Survey team
 
Preparations for the 2015 Effective Altruism Survey are underway!
 
The 2014 Survey provided interesting information on topics such as how much the EAs who took our survey were donating and where, which causes they support, and which diets they do or don't follow, among many other things. It also made possible projects such as EA Profiles, the EA Donation Registry and the Map of EAs. And it let us put many people in touch with local groups they didn't know about, and establish presences in over 60 new cities and countries so far.

The survey is a community project run through .impact, and we are soliciting suggestions and requests for questions that you would like to see included on this year's survey.

For example, do you want to know more about EAs' ethical and meta-ethical beliefs? Whether they regard EA actions as an obligation or an opportunity? How they respond to the trolley problem? What kind of jobs they have? In response to community suggestions we are considering adding some of these questions. We're also considering removing other personal questions that we've already gathered recent data on, such as gender, diet and religion.

If there is anything you might like to know more about, or questions you would like asked, please suggest them here!
 
If you have suggestions about the survey other than about which questions to include (e.g. about distribution), there will also be opportunities to discuss these: we'll soon add a second post soliciting broader suggestions. And the ultimate place to discuss anything about the survey is, as always, a .impact meeting - in particular the survey deep dive that will be held on Sunday 24 May at 9pm UTC (2pm Pacific, 5pm Eastern, 10pm London). A Google Hangouts link to join will be posted in the Facebook event at that time. It'll be a chance to talk directly with the survey team and help work things out.

We're also considering removing other personal questions that we've already gathered recent data on, such as gender, diet and religion.

Really don't think you should cut these. We're trying to diversify the movement, and the survey seems like one of the only comprehensive ways to see if it's working. W/r/t the diet question, I think that's incredibly valuable and useful information for EAA. I'd be interested to also know how many EAs changed their diets after becoming EAs.

could always put it at the end with an opt-out for that section?

Good point, I suppose putting them at the end decreases or removes some potential problems with including these questions, such as the risk that they lead people to drop out. Every single question should be optional, just like last year.

Also, EA is young and growing so fast. A lot can change in a year.

We could ask people what they attribute their diet change to. Same with donation changes. (Disclaimer: Clearly I don't claim self-reports are fully reliable, but there is usually some signal in them.)

W/r/t the diet question, I think that's incredibly valuable and useful information for EAA.

That's likely true, but I presume the reason for considering removing it was that people would feel uncomfortable/judged/guilty and so stop completing the rest of the survey, which would be a large cost. Last year I was happy or even keen for people to know all my other answers, but didn't want to be judged on failing to be vegetarian.

People might also feel judged if asked how much they donate, but asking this is part of assessing what part of EA principles they're adopting, and how effective EA is at actually driving behavior change in self-identified EAs, which is sort of the point of the survey.

That's a pretty convincing point, at least to me personally. I think that - just as we give a separate opt-in/out for sharing past or planned donations on the EA Donation Registry, whether or not people choose to share their answers in general via an EA Profile - we might want to consider having diet not be publicly shared. That's assuming there's no great benefit to having a commitment device and inspirational/motivational registry for diet, or that EA Profiles/the EA Survey aren't the right place for one.

EA is kinda inherently judgey in this way.

This seems more true for those who take an obligation-oriented perspective than an opportunity-oriented perspective.

Personally, I am concerned with animal suffering but I'm not a vegetarian. I agree with Katja Grace: "I am personally not a vegetarian because I don’t think it is an effective way to be altruistic." I also agree with Chris Hallquist (who is vegan) that vegan activism seems like a relatively bad way to help animals in the long run. (It's hard to measure how vegan activism might polarize people away from caring about animals, which would make passing a law more difficult.) And that's not even accounting for the fact that, like Nick Beckstead, I think the far future is of overwhelming importance and it's dubious to me that my avoiding animal meals now will have any significant positive effect on it.

I think you should have more forced choices and fewer write-ins, to make data analysis easier. It seems that write-in boxes turn everyone into a special snowflake.

This is very true. Write-in boxes are the enemy of the person who does the data analysis.

Yearly salary range (helpful for getting sponsorships for future EA events if the average yearly salary turns out to be high)

The question “whether they regard EA actions as an obligation or an opportunity” could be split into which view they prefer now, and which got them interested in EA in the first place.

Here are some suggestions from the Facebook thread which you could upvote or downvote. Alas this could mean losing precious precious karma, but I guess there's no other way to get these votes here (?) so I think I'll live with that ;)

how confident are you that you'll be an EA in 5 years? 10?

Some question that gets at whether they find talking to other EAs unpleasant b/c we're too verbally aggressive, vs. would prefer more forthrightness

'Have you found conversations with effective altruists persuasive?' (y/n/haven't talked/mixed) 'Why so?' (free text)

This would make it especially valuable to get people on the fringes of EA (who've been exposed but not wholly 'signed up') to take the survey. I remember it was open to them and anyone else last year.

have you taken a different job than the one you would have taken, for EA reasons?

how many people in your life know how much you give?

(if no local EA chapter) if there were a local chapter, would you attend?

I'd upvote this one, as I'd use the results for my work creating new EA presences.

how much time do you spend thinking about where to donate?

If it's beyond a certain threshold, a few questions getting their subjective cost/QALY (or equivalent) estimates for AMF, Deworm the World, SCI, GiveDirectly and their best alternative bet in free text might be interesting. I have the feeling that people's subjective estimates are quite variable.

Forced choice with no 'other' option: which topic primarily made you discover EA: philosophy and ethics, charity, or rationality?

Seems like making this a forced choice might mean you lose people from completing the survey.

Yep, I presume the person suggesting this question (Josh Jacobson) only meant that there should be no 'other' option. Partly for the reason you state, every single question should be optional, just like last year.

Some IQ proxy question

Let's not do this.

Why not? LW and SSC do this without issue, and IQ is a very important variable for many things. What's the point of doing a survey if not to understand your population?

LW and SSC do this without issue

I've taken those surveys for years, and it's true that they've often contained questions that would get this (EA) community's jimmies severely rustled, without any problems or complaints or concern trolling. At least, that's my impression, as someone more familiar with the rationalist community than the EA one.

The best IQ proxy questions are demographic variables anyway (age, years of education, and occupation), which predict about 50% of the variance in full-scale IQ - see papers shared here: http://jmp.sh/b/V717o7yuqvQutQYTHIMh

It wouldn't be hard to plug data we're going to get anyway into Crawford's regression equation - the only extra work would be mapping occupations to the standardized occupational classification system. Reporting it could be bad PR, but it wouldn't hurt for anyone who's interested to take a look.
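To make the idea concrete, here's a minimal sketch of what plugging demographics into a regression equation of that kind could look like. The intercept, coefficients and occupation coding below are illustrative placeholders, not Crawford's published values - a real analysis would substitute the figures from the papers linked above:

```python
# Hypothetical sketch: predicting full-scale IQ from demographic
# variables via a linear regression equation. All numbers are made-up
# placeholders, NOT the published coefficients.

# Illustrative ordinal coding of occupations (a stand-in for a real
# standardized occupational classification).
OCCUPATION_SCORE = {
    "professional": 1,
    "managerial": 2,
    "skilled": 3,
    "semi-skilled": 4,
    "unskilled": 5,
}

def predict_iq(age, years_education, occupation):
    """Return a predicted full-scale IQ (placeholder coefficients)."""
    intercept, b_age, b_edu, b_occ = 85.0, 0.1, 2.0, -3.0  # hypothetical
    return (intercept
            + b_age * age
            + b_edu * years_education
            + b_occ * OCCUPATION_SCORE[occupation])

# Example: a 30-year-old professional with 17 years of education.
print(round(predict_iq(30, 17, "professional"), 1))  # 119.0
```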

Not convinced that we want to measure IQ, but I think the whole point of doing it would be to see if EAs are on the whole a lot smarter than would be predicted by demographic variables, like LessWrong seems to be. However, LessWrong's annual process of measuring their IQ and then arguing about whether or not it's accurate is a bit of a fiasco, and probably not one that we want to engage in.

I haven't read any of LW's debates on this, so I'm not sure why one would be interested in whether the relationship between demographics and intelligence in EAs is weaker than usual, or what that would imply about EA. Mainly, I'd like to know by what routes people with predicted-to-be-average intelligence and average educational backgrounds are coming to EA, so I hope age, years of education, and occupation will be included so that the option exists of using the estimation techniques referred to above.

Having said that - intelligence research is politically toxic, and I'd also worry that people could spread bad ideas about how to use IQ estimates (e.g., general bragging rights, or "the smartest EAs focus on X, so we should pay more attention to X"), so I wouldn't argue for including anything related to IQ estimation in publicly-announced results.

Mainly, I'd like to know by what routes people with predicted-to-be-average intelligence and average educational backgrounds are coming to EA, so I hope age, years of education, and occupation will be included so that the option exists of using the estimation techniques referred to above.

Last time we asked about age, last year's income and highest level of education completed. Pending the community feedback we were planning to keep these, and add a free-text box for 'current occupation or career'. Does that all cover it OK? Is asking for years in education better, and if so why? Is it comparable across countries? Is it years of post-secondary education?

Having said that - intelligence research is politically toxic, and I'd also worry that people could spread bad ideas about how to use IQ estimates (e.g., general bragging rights, or "the smartest EAs focus on X, so we should pay more attention to X"), so I wouldn't argue for including anything related to IQ estimation in publicly-announced results.

I personally agree, though the survey team as a whole will be influenced by the community view (which hasn't had a strong consensus in favour of asking about IQ, either last time or - so far - this time).

I doubt it would be done without issue here and I doubt the information would be useful for any purposes. But I'm willing to consider otherwise.

Agree, would have downvoted if I could, but have upvoted you instead!

How many older siblings

Do you believe in acting now or investing to act better later?

This question would be more valuable when it makes clear that it asks for an assessment for the specific respondent at that time. Something like “Do you believe that for you at the moment it is better to act now or invest to act better later?” Then the answers could be faceted by age, student status, or other applicable demographic data (if the power is sufficient).

Some may also consider external factors like the availability of vaccines or progress of prioritization research, but for most the personal factors will probably weigh heavier in this decision, and we wouldn’t be able to distinguish that afterwards.
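As a rough sketch of what that faceting could look like on the resulting data (all column names and values here are hypothetical, not from the actual survey):

```python
import pandas as pd

# Hypothetical responses; every column name and value is illustrative.
df = pd.DataFrame({
    "act_vs_invest": ["act now", "invest", "act now", "invest", "invest"],
    "is_student":    [True, True, False, False, True],
    "age_band":      ["18-24", "18-24", "25-34", "35-44", "25-34"],
})

# Facet the act-now vs. invest-to-act-later answers by student status
# and age band.
print(pd.crosstab([df["is_student"], df["age_band"]], df["act_vs_invest"]))
```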

Simple mammogram-style question
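(For reference, a worked version of the classic question this refers to, assuming the standard textbook figures - a 1% base rate of cancer, an 80% true-positive rate and a 9.6% false-positive rate - asking for the probability of cancer given a positive mammogram:)

```latex
P(\text{cancer} \mid +)
  = \frac{P(+ \mid \text{cancer})\,P(\text{cancer})}
         {P(+ \mid \text{cancer})\,P(\text{cancer})
          + P(+ \mid \text{no cancer})\,P(\text{no cancer})}
  = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99}
  \approx 7.8\%
```

Most people's intuitive answer is far higher, which is what makes it a useful probe of base-rate neglect.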

I think a question measuring reflectiveness could also be interesting, a la the Cognitive Reflection Test.

Could you expand on that?

I think this is referring to a common probability question, e.g., example 3 here.
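(For reference, the best-known CRT item is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball - how much does the ball cost? The intuitive answer is 10 cents; working it through:)

```latex
b + (b + 1.00) = 1.10
  \;\Rightarrow\; 2b = 0.10
  \;\Rightarrow\; b = \$0.05
```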

Did you fulfil your plan or pledge for last year? (answer to always be anonymous)

Do you think EA should be a broad church or a committed core?

False dichotomy. Perhaps 2 questions here, 1 about diversity, 1 about strength of commitment. Example wording (Likert responses?): "How much do you think EA should focus on strengthening existing strategies for improving the world compared to broadening into new ones?" "How much do you think EA movement building should focus on increasing the commitment and coordination of current EAs as opposed to recruitment and outreach?" and perhaps "In recruiting and reaching out, who should be the primary target?" (example answers: people that are most likely to identify with EA, people with the most to offer in terms of time and resources, people from different walks of life that can reveal EA's blindspots)

What colour is this dress?

A test question to see how much you have to correct for people’s unwillingness to let you lose comment karma, right? ;‑)

Heh, I just copied and pasted people's suggestions from Facebook. This was one of the most upvoted ones!

Haha! Probably all people who wanted to indicate that they got it, not that they thought it’s a valuable survey question. On the other hand it would make the survey funnier, especially without the photo, which may increase people’s motivation to finish it.

How about a question that goes something like "If you donated less than 10% of your income, why not?" (or "If you didn't donate, why not?") with answers like "I'm a student", "I'm not earning to give", "Tough year financially", "Procrastination", "Saving to donate later", "Financing my startup", "10% is too much of an ask", etc.

I do think this would be really valuable to find out - I didn't upvote only because asking this is a tricky diplomatic issue that can put people off, and I don't think the survey is the best place for it for those reasons.

What do people think of asking or not asking about demographic details that are sometimes sensitive, or associated with sensitive issues? Two examples would be religion and gender; as I said, we're thinking of cutting these, having already gathered data on them last year anyway. Another would be race, which we didn't ask about last year because no one could come up with clear benefits that were sufficient to justify it. It wasn't clear what use it'd be to find out that there are at least 200 Asian EAs.

In the Facebook thread Alex Rattee says: "I think that religion questions are interesting- from personal experience as a committed Christian involved in Christian social justice circles there are a lot of people readily primed for being very generous who should be v. up for a good amount of the EA logic. Would be good to track the growth/lack of it in such communities"

I replied: "We're open to following the community view on that Alex. Do you or others think people might find the question offputtingly personal? How many?"

He said: "So I can only speak for evangelical Christianity really but that grouping definitely wouldn't find it offputting, they/we're out to spread the word about Jesus so typically we relish opportunities to let people know, maybe there are nice mutual arrangements to be made, where Christians and EAs agree to listen to each other's pitch for 20 mins... on a serious note though I think targeting evangelical Christianity would seem to me to be a good route for some EAs to be going down".

Doesn't seem too personal to me (and, generally speaking, it seems a good idea)

David Barry: "Rather than total donations for the year and list of charities donated to, the amount donated to each of those charities. (I also made this request in a forum thread a while back.)"

Me: "David, that's the format we've moved the EA Donation Registry to (the 2014 survey data there will switch to that format too once someone finishes converting it - limited resources have slowed this down). I'm curious whether anyone thinks that format in the survey would slow them down or be too onerous, leading to dropout?"
