This is a special post for quick takes by keller_scholl 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Bad Things Are Bad: A Short List of Common Views Among EAs

  1. No, we should not sterilize people against their will.
  2. No, we should not murder AI researchers. Murder is generally bad. Martyrs are generally effective. Executing complicated plans is generally more difficult than you think, particularly if failure means getting arrested and massive amounts of bad publicity.
  3. Sex and power are very complicated. If you have a power relationship, consider whether you should also have a sexual one. Consider very carefully whether you have a power relationship: many forms of power relationship are invisible, or at least transparent, to the person with power. Common forms of power include age, money, social connections, professional connections, and almost anything that correlates with money (race, gender, etc.). Some of these will be more important than others. If you're concerned about something, talk to a friend who's on the other side of that divide from you. If you don't have any, maybe just don't.
  4. And yes, also, don't assault people.
  5. Sometimes deregulation is harmful. "More capitalism" is not the solution to every problem.
  6. Very few people working on wild animal suffering think that we should go and deliberately destroy the biosphere today.
  7. Racism continues to be an incredibly negative force in the world. Anti-black racism seems pretty clearly the most harmful form of racism for the minority of the world that lives outside Asia.[1]
  8. Much of the world is inadequate and in need of fixing. That EAs have not prioritized something does not mean that it is fine: it means we're busy.
  9. The enumeration in the list, of certain bad things, being construed to deny or disparage other things also being bad, would be bad.

Hope that clears everything up. I expect with 90% confidence that over 90% of EAs would agree with every item on this list.

  1. ^

    Inside, I don't know enough to say with confidence. Could be caste discrimination, could be ongoing oppression of non-Han, could be something I'm not thinking of. I'm not making a claim about the globe as a whole because I haven't run the numbers, and different EAs will have different values and approaches to how to weight history, cultures, etc. I just refuse to fall into the standard America/Euro-centric framework.

I might change 2 to:

No, we should not murder AI researchers. Working together requires the ability to trust one another. Hurting AI researchers would damage that trust and likely not slow down the work. Likewise, if you are the sort of person who thinks you should do this, you are likely uniquely unsuited to coming up with drastic solutions. Have you tried a research agenda? If not, why did you start at murder? We are all more likely to survive if we can credibly commit not to defect. Please get help.

I’m concerned that less than 90% of the AI safety community would agree. I have heard some disturbing anecdotes.

Some post-EAG thoughts on journalists

For context, CEA accepted at EAG Bay Area 2023 a journalist who has at times written critically of EA and individual EAs, and who is very much not a community member. I am deliberately not naming the journalist, because they haven't done anything wrong and I'm still trying to work out my own thoughts.

On one hand, "journalists who write nice things get to go to the events, journalists who write mean things get excluded" is at best ethically problematic. It's very very very normal: political campaigns do it, industry events do it, individuals do it. "Access journalism" is the norm more than it is the exception. But that doesn't mean that we should. One solution is to be very very careful about maintaining the differentiation between "community member" and "critical or not". Dylan Matthews is straightforwardly an EA and has reported critically on a past EAG: if he was excluded for this I would be deeply concerned.

On the other hand, I think that, when hosting an EA event, an EA organization has certain obligations to the people at that event. One of them is protecting their safety and privacy. EAs who are journalists can, I think, generally be relied upon to be fair and to respect the privacy of individuals. That is not a trust I extend to journalists who are not community members: the linked example is particularly egregious, but tabloid reporting happens.

EAG is a gathering of community members. People go to advance their goals: see friends, network, be networked at, give advice, get advice, learn interesting things, and more. In a healthy movement, I think that EAGs should be a professional obligation, good for the individual, or fun for the individual. It doesn't have to be all of them, but it shouldn't harm them on any axis.

Someone might be out about being bi at an after-party with friends, but not want to see that detail being confirmed by a fact-checker for a national paper. This doesn't seem particularly unusual. They would be right to trust community members, but might not realize that there could be journalists at the after-party. Non-community journalists will not necessarily share norms about privacy or have particularly strong incentives to follow any norms that do exist.

On the gripping hand, it feels more than a little hypocritical to complain about the low quality of criticism of EA and also complain when a journalist wants to attend an EA event to get to know the movement better.

One thing I'm confident of is that I wish that this had been more clearly disclosed. "This year we are excited to welcome X, who will be providing a critical view on EA" is good enough to at least warn people that someone whose bio says that they are interested in 

how the wealthiest people in society spend their money or live their lives

(emphasis mine)

is attending.

I'm still trying to sort out the rest of my views here. Happy to take feedback. It's very possible that I'm missing some information about this.

P.S.

I have been told by someone at CEA that all attending journalists have agreed that everything at EAG is off the record by default. I don't consider this to be an adequate mitigating factor for accepting non-community journalists and not mentioning this to attendees or speakers.

P.P.S.

And no, I'm not using a pseudonym for this. I think that that is a bad and damaging trend on the Forum, and I don't, actually, believe that anyone at CEA will retaliate against me for posting this.

Hi Keller — appreciate the thoughts here! I wanted to quickly note that we did actually give attendees a heads up about this in our attendee guide, and we've done similarly in most of our other recent conference attendee guides. 

Though I generally don't expect attendees to read this all the way through, we did share it multiple times, and I'm not sure whether it would have made sense to email attendees about the journalist section specifically (if I were going to reiterate something, it probably wouldn't be this).

If someone attends the event as a journalist, why not have their lanyard show that they are a journalist? This seems like a very easy thing to do, and something like this is probably pretty standard at large events that are not fully public(?). This would solve some of the issues, as people would know who they are talking to (and e.g. organisers of private afterparties could just not let journalists in if they don't want them at their party).

I do agree there's a wide spectrum of what "disclosing this" looks like, and it's entirely possible that you disclosed it enough, or even more than enough (for example, if we conclude it didn't need to be disclosed at all, then you did more than necessary). Like Keller, I don't really have a view on this. But it's also entirely possible that the level of disclosure you did do was pretty inadequate (again, I'm genuinely not sure), given that it's on page 9 of a guide I imagine most people don't read (I didn't). But I imagine you agree with this.

our attendee guide

I feel like the relevant thing isn't mentioning the possibility that a journalist might be there; if I were to read this, I think I'd assume it meant EA journalists (or at least EA-adjacent / fellow-traveller ones) and hence largely ignore it.

I think it was implied by this statement, but it's a fair point that we could make this more explicit: "In interviews, don’t speak on behalf of the entire EA community, or anyone else in the EA community" (since if you were talking to an EA journalist, I think this wouldn't really apply).

Fair enough. Though I didn't pick that up on first read, I think you're right that it is implied. I think my true rejection here is about invitations, not disclosure.

Larks

On the gripping hand, it feels more than a little hypocritical to complain about the low quality of criticism of EA and also complain when a journalist wants to attend an EA event to get to know the movement better.

It seems plausible to me that the presence of hostile journalists might net reduce the quality of criticism by making people feel rationally inhibited from talking frankly.

I appreciate you raising this despite not having an actual view on the topic and I appreciate you being clear that this is a complex topic that's hard to form a view on.

I think I had a lot of freewheeling conversations at EAG, and I don't think I thought enough about the fact that journalists I don't trust by default might be able to overhear and comment on those conversations. Thinking through this may have a somewhat chilling effect on how I interact at future EAGs, which I find unfortunate.

That being said, I totally agree with you that excluding these journalists may also be unfair or otherwise based on bad norms, and it's a pretty thorny trade-off. Like you, I don't really fully understand this.

Thanks for bringing this to broader attention. I think I am opposed to this (admitting the journalist), mostly for the reasons you state.

It's unfortunate if EA feels it has to block critical journalists from marquee events. Being open and transparent is a hugely valuable part of the EA movement, and even a mixed positive/negative press article is likely to be net good for the movement, especially at the moment with all the negative press going on.

If a journalist is bent on writing criticism, they may well write it regardless of whether we let them into an event. The counterfactual of not accepting them could make things worse long term too, if they double down and even use the rejection as a reason to criticise more severely.

One great thing to do could be to spend a lot of time making friends with and understanding the journalists at the event. Make them feel welcome and connected to the people there, so perhaps they will be more empathetic and fair when they write their piece?

Or am I just being naive about journalists...

Someone might be out about being bi at an after-party with friends, but not want to see that detail being confirmed by a fact-checker for a national paper. This doesn't seem particularly unusual.

This isn't the only thing that could go wrong, but it's a straightforward example. Perhaps they don't want their full name blatantly linked to their online account. There are lots of reasons that people might want privacy. Unless your life is at risk, I would not assume that you have privacy from a journalist who isn't a personal friend unless they have an explicit commitment. I trust journalists who are also community members to not take harmful advantage of access.

Something that is sometimes not obvious to people not used to dealing with journalists is that off-the-record sometimes means "I can't officially tell you this, so please find another source who can corroborate it". It's not remotely the same thing as an expectation of privacy and good sense that one would have with a friend.

Do you mean EAGx Berkeley 2022 or EA Global: Bay Area 2023?

Bay Area 2023. Will edit.
