This is a special post for quick takes by billz. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Cross-posted thread.

Some other people including Asya have floated the idea of having a "despair day" where people question their core assumptions of their current work. I like this a lot, and also like encouraging more of this mindset in EA. (I'm not speaking for her, just for myself).

Oftentimes I'm having a 30-minute one-on-one with someone, and I don't know where they want me to be on the spectrum from "encouraging their ambitions" to "ruthless honesty about whether it sounds like a good idea."

This is sad because I think the latter is more helpful, but it's also riskier. So I often choose a less risky point on the spectrum, like asking hard questions but not saying how I feel.

It's very helpful when people say things like "tell me how I might be screwing this up," as it tells me where on the spectrum to be.

I worry that because so many EA orgs are nonprofits, it's hard for people to have good feedback loops on how useful their orgs are. It's hard to know how hard it is for others to get funding, and how much of the funding is because the people are good vs. the idea is good.

I think grantmakers try to give this feedback, and it's useful. But I think it's a lot worse than having users that one is talking to very frequently (e.g. daily instead of every 6 months).

So I want to encourage anyone who's into more direct feedback to ask for it, both from me and from others. Some of my favorite conversations with EAs are when they've asked: "no really, tell me why you don't think I'm working on the right thing."

I made a Twitter! Copy/pasted thread.

Lots of young EAs want to found companies. I like encouraging people to be ambitious, and this can be really good. Oftentimes the reasoning seems somewhat confused though.

1. People say it’s for personal growth, but don’t have great models of how startups are good for growth. Starting a 3-5 person organization that never does very big things in the world isn’t good for growth. Joining as employee 10 at a top company that grows to 100 is great.

I first came across that idea in a Dustin Moskovitz talk ~7 years ago (second half of this video: http://youtu.be/CBYhVcO4WgI). It worked out well for me in deciding to join a company as it grew from 30 to 300 instead of trying to do my own thing.

2. For some reason it’s a meme in EA that everyone should either do AI safety or community building. I think a lot of young people look up to other community builders and want to replicate what they’re doing, which looks like running 3-5 person community building orgs.

I’m excited for a bunch of that work. But I also think there are a bunch of high impact and high growth projects people could work on if they were more open to a wider array of projects.

When I joined Aurora, I went from intern to project lead for a high-priority team of eight within six months. You have to be willing to put in the legwork, but then people will happily hand you high-growth opportunities, because there aren’t enough people for all the problems.

Also, helping grow a top AI or bio org is likely great for community building. As the EA Forum post “Update from Open Philanthropy’s Longtermist EA Movement-Building team” put it: “It also suggested to me that high-quality object-level work can be as effective at achieving ‘meta’ goals as meta work for a variety of reasons.”

I think people tend to be too focused on “founding a company” and not focused enough on the people they work with. Much of the impact comes from the top ~10 companies in a 5 year period. Is the company you’re at plausibly one of those?

Tweet thread for "What is operations and why EA needs great people doing it."

For "what is operations," Holden’s post on aptitudes https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists#_Organization_building__running__and_boosting__aptitudes_1_… is the best thing I’ve read on this: the “organization building, running, and boosting” aptitude, which includes management, recruiting, legal, HR, finance, events, etc.

I’m always confused when an EA says they want to do community building but isn’t interested in operations work (this happens regularly). Starting a new community building org has lots of overlap with operations (50-70%?) — operations just means running orgs and getting things done.

I think the real objections underlying people's beliefs here are:

1. Feeling that operations is low status.

2. Being told we need lots of community building.

3. Not knowing what operations is.

4. Worrying that operations is not a good pathway for personal development.

 

I'll come back to 1). For 2), my guess is that there was some over-correction happening, and hopefully things will swing back in the other direction, with e.g. 80k updating their list of priority paths to include operations.

For 3), hopefully this thread helps some. 

For 4), I think people significantly underestimate how hard and useful it is to get better at operations, and also how valuable operations work is to top orgs.

People work for years in various operations roles and become much better at pulling off large projects. There's a lot of demand from top EA orgs for mid-to-senior ops roles right now.

There are a lot of badasses doing operations work. A few examples: James Bregan, Malo Bourgon, Cate Hall, past-Tara https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/

As one random data point, a while back I looked up the backgrounds of ops people I knew at a handful of top EA orgs, and found that about half of the ~15 had engineering backgrounds (e.g. a degree or professional experience as a software engineer).

As I mentioned here https://forum.effectivealtruism.org/posts/ejaC35E5qyKEkAWn2/early-career-ea-s-should-consider-joining-fast-growing… , I think joining early at a fast-growing org is a great way to build skills, and this includes operations roles.

 

Ok so back to 1) (ops being low status) -- I think this is already in the process of changing. Changing it is largely a matter of sharing the answers to 2-4, sharing the list of EA ops badasses, and having people at top orgs repeatedly say that ops is important.

I've been tempted at times to stop using the word operations and find some sexier word that we can use instead, but that doesn't feel like the right way to change things. I think we should just make it clear that operations is hard, useful, and high status.

Lots of people who "do community building" are "doing operations," imo.

Lots of people who do operations now will do very interesting things in the future, like starting companies, managing large teams, etc.

Fun project idea: a GPT-3 app that, whenever you finish a Google Doc draft, compliments the contents, building a positive feedback loop to encourage writing. (From a convo with Eli Rose and others a while back.)
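A minimal sketch of what this could look like, assuming a generic language-model completion function (the real version would also need a Google Docs trigger, not shown). All names here — `build_prompt`, `compliment_draft`, the injected `complete` callable — are illustrative, not from the original idea:

```python
# Hedged sketch of the "draft compliments" idea, not a real product:
# when a draft is finished, send its text to a language model and
# surface a few encouraging comments.

def build_prompt(draft_text: str, n_compliments: int = 3) -> str:
    """Assemble the instruction that would be sent to the model."""
    return (
        f"Here is a finished draft:\n\n{draft_text}\n\n"
        f"Give {n_compliments} specific, genuine compliments about the "
        "writing, to encourage the author to keep drafting."
    )

def compliment_draft(draft_text: str, complete) -> str:
    # `complete` is any prompt -> text callable (e.g. a thin wrapper
    # around an LLM API); injected so the sketch stays self-contained.
    return complete(build_prompt(draft_text))

# Usage with a stand-in completion function:
fake_llm = lambda prompt: "1. Clear thesis. 2. Friendly tone. 3. Good pacing."
print(compliment_draft("My essay about feedback loops.", fake_llm))
```

Injecting the completion function keeps the sketch testable without an API key; swapping in a real client is a one-line change.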

(cross-posted from twitter).

I bet Twitter would be good if all my friends used it and only to talk about interesting things. I’m imagining some slack-Twitter integration where my coworking space slack came with a private Twitter network that everyone was on. Feels possible currently, just annoying to do.

Also I want it to be halfway between Twitter and EA forum short form. More focus on interesting ideas instead of memes (memes still welcome).

It’d also be great if there was auto cross-posting between EA forum short-form and Twitter.
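The cross-posting wish above could be a small polling loop. This is a hedged sketch: `fetch_shortforms` and `post_tweet` are hypothetical stand-ins for real API clients (e.g. the Forum's API and Twitter's posting endpoint), injected so the sketch stays self-contained:

```python
# Hedged sketch: poll a forum shortform feed and mirror new posts to
# Twitter. The two callables are placeholders for real API clients.

def crosspost_new(fetch_shortforms, post_tweet, already_posted: set) -> list:
    """Mirror any shortform not yet cross-posted; return the mirrored texts."""
    sent = []
    for post in fetch_shortforms():
        if post["id"] in already_posted:
            continue  # skip anything we've mirrored before
        text = post["body"][:280]  # naive truncation to Twitter's limit
        post_tweet(text)
        already_posted.add(post["id"])
        sent.append(text)
    return sent

# Usage with stand-ins: a static "feed" and a list collecting "tweets".
feed = lambda: [{"id": "a1", "body": "Quick take about ops."}]
tweets, seen = [], set()
crosspost_new(feed, tweets.append, seen)  # mirrors the post
crosspost_new(feed, tweets.append, seen)  # second run: nothing new
print(tweets)
```

The `already_posted` set is the only state needed to make repeated polling idempotent; a real version would persist it and handle threading for posts over 280 characters.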
