
I'm posting this in preparation for Draft Amnesty Week (Feb 24 - March 2), but it's also (hopefully) valuable outside of that context. The last time I posted this question, there were some great responses.

When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing and ideas can be voted on separately. 

If you see an answer here describing a post you think has already been written, please lend a hand and link it here. 

A few suggestions for possible answers:

  • A question you would like someone to answer: “How, historically, did AI safety become an EA cause area?”
  • A type of experience you would like to hear about: “I’d love to hear about the experience of moving from consulting into biosecurity policy. Does anyone know anyone like this who might want to write about their experience?”
  • A gap in an argument that you'd like someone to fill.

If you have loads of ideas, consider writing an entire "posts I would like someone to write" post.

Why put this up before Draft Amnesty Week?

If you see a post idea here that you think you might be positioned to answer, Draft Amnesty Week (Feb 24 - March 2) might be a great time to post it. During Draft Amnesty Week, your posts don't have to be thoroughly thought through, or even fully drafted. Bullet points and missing sections are allowed so that you can have a lower bar for posting. More details.


I would like someone to write a post about almost every topic asked about in the Meta Coordination Forum Survey, e.g.

  • What should the growth rate of EA be?
  • How quickly should we spend EA resources?
  • How valuable is recruiting a highly engaged EA to the community?
  • How much do we value highly engaged EAs relative to a larger number of less engaged people hearing about EA?
  • How should we (decide how to) allocate resources across cause areas?
  • How valuable is a junior/senior staff hire at an EA org (relative to the counterfactual second best hire)?
  • What skills / audiences should we prioritise targeting?

I'm primarily thinking about core EA decision-makers writing up their reasoning, but I think it would also be valuable for general community members to do this.

Prima facie, it's surprising that more isn't written publicly about core EA strategic questions.

Similar to Ollie's answer, I don't think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I'd love to see someone carefully think through what those opportunities might be.

Obvious point, but I assume that [having a bunch of resources, mainly money] is a pretty safe bet for these worlds. 

AI progress could/should bring much better ideas of what to do with said resources/money as it happens. 

Karthik Tadepalli
Yeah, I was referring more to whether it can bring new ways of spending money to improve the world. There will be new market failures to solve, new sorts of technology that society could gain from accelerating, and new ways to get traction on old problems.

Pertinent to this idea for a post I’m stuck on:

What follows from conditionalizing the various big anthropic arguments on one another? Like, assuming you think the basic logic behind the simulation hypothesis, grabby aliens, Boltzmann brains, and many worlds all works, how do these interact with one another? Does one of them “win”? Do some of them hold conditional on one another but fail conditional on others? Do ones more compatible with one another have some probabilistic dominance (like, this is true if we start by assuming it, but also might be true if these others are true)? Essentially, I think this confusion is pertinent enough to my opinions on these styles of arguments in general that I’m satisfied just writing about the confusion for my post idea, but I feel unprepared to actually do the difficult, dirty work of pulling expected conclusions about the world from this consideration, and I would love it if someone much cleverer than me tried to actually take the challenge on.

I would be really interested in a post that outlined 1-3 different scenarios for post-AGI x-risk based on increasingly strict assumptions. So the first one would assume that misaligned superintelligent AI would almost instantly emerge from AGI, and describe the x-risks associated with that. Then the assumptions become stricter and stricter, e.g. that AGI would only be able to improve itself slowly, that we would be able to align it to our goals, etc.

I think this could be a valuable post to link people to, as a lot of debates around whether AI poses an x-risk seem to hinge on accepting or rejecting potential scenarios, but they're usually unproductive because everyone has different assumptions about what AI will be capable of.

So with this post, to argue that AI x-risk isn't a real concern, for each AI development scenario (with increasingly strict assumptions) you would have to either:

  1. reject at least one of the listed assumptions (e.g. argue that computer chips are a limit on exponential intelligence increases)
  2. or argue that all proposed existential risks in that scenario are so impossible that even an AI wouldn't be able to make any of them work.

If you can't do either of those, you accept AI is an x-risk. If you can, you move on to the next scenario with stricter assumptions. Eventually you find the assumptions you agree with, and have to reject all proposed x-risks in that scenario to say that AI x-risk isn't real. 

The post might also help with planning for different scenarios if it's more detailed than I'm anticipating. 

Maybe too much for a Draft Amnesty Week, but I'd be excited for someone / some people to think about how we'd prioritise R&D efforts if/when R&D is ~automated by very powerful narrow or general AI. "EA for the post-AGI world" or something.

I wonder if the ITN framework can offer an additional perspective to the one outlined by Dario in Machines of Loving Grace. He uses Alzheimer’s as an example of a problem he thinks could be solved soon, but is that one of the most pressing problems that becomes very tractable post-AGI? How does that trade off against e.g. increasing life expectancy by a few years for everyone? (Dario doesn't claim Alzheimer’s is the most pressing problem, and I would also be very happy if we could win the fight against Alzheimer’s).

I'd love to read a deep-dive into a non-PEPFAR USAID program. This Future Perfect article mentioned a few. But it doesn't even have to be an especially great program; there are probably plenty of examples that don't come near the 100-fold improvement over the average charity (or the marginal government expenditure) but are still very respectable nonetheless.

There's in general a bit of a knowledge gap in EA on the subject of more typical good-doing endeavors. Everyone knows about PlayPumps and malaria nets, but what about all the stuff in between? This likely biases our understanding of non-fungible good-doing.

The Groups team did a 3-minute brainstorm about this during our weekly meeting! In no particular order:

Community Building

  • What (mass) (media) campaigns can encourage EA growth?
  • A uni with both an AIS and an EA group coexisting writes about how that works
  • Experience of staying with others in your university group at an EAG(x)
  • What is EA CB strategy in light of AI progress
  • Mistakes you made as a university group organiser
  • When to stop investing in community building
    • A post that explores the lag time required for CB investments to pay off, applied both to cause-neutral EA and cause-specific AIS

Other
 

  • Reasons for longer timelines
  • A good post on scope insensitivity that explains what it is (it doesn’t exist)
  • Overview of corporate campaigns - strengths and weaknesses
  • Updated post on ITN we can use in fellowships
  • Exploration on whether AI progress should cause us to value protests more (or in general what tactics should be considered)
  • Aggregation of Will MacAskill comments on EA in the age of AI
  • AIS early career recommendations for non-STEM people
  • On being ineffective to be effective

I wrote about mistakes I made as a uni group organiser here, inspired by this list!

I’m not sure if this hits what you mean by ‘being ineffective to be effective’, but you may be interested in Paul Graham’s ‘Bus ticket theory of genius’.

I'd like to see

  1. an overview of simple AI safety concepts and their easily explainable real-life demonstrations
    1. For instance, to explain sycophancy, I tend to mention the one random finding from this paper that hallucinations are more frequent if a model deems the user uneducated
  2. more empirical posts on near-term destabilization (concentration of power, super-persuasion bots, epistemic collapse)

Maybe an inherently drafty idea, but I would love it if someone wrote a post on the feasibility of homemade bivalvegan cat food. I remember there was a cause area profile post a while ago talking about making cheaper vegan cat food, but I'm also hoping to see whether there's something practical and cheap right now. Bivalves seem like the obvious candidate: less morally risky than other animal products, probably enjoyable for cats or able to be made into something enjoyable, and containing the necessary nutrients. I don't know any of that for sure, or whether there are other things you could add to the food or supplement on the side that would make a cat diet like this feasible, and I would love it if someone wrote up a practical report on this for current or prospective cat owners.

I had this idea a while ago and meant to see if I could collaborate with someone on the research, but at this point, barring major changes, I would rather just see someone else do it well and efficiently. Fentanyl test strips are a useful way to avoid overdoses in theory, and for some drugs can be helpful for this, but in practice the market for opioids is so flooded with adulterated products that they aren't that useful, because opioid addicts will still use drugs with fentanyl in them if it's all that's available. Changes in policy and technology might help with this, and obviously the best solution is for opioid addicts to detox on something like suboxone and then abstain, but a sort of speculative harm-reduction idea occurred to me at some point that seems actionable now with no change in the technological or political situation.

Presumably these test strips have a concentration threshold below which they can't detect fentanyl, so it might be possible to dilute some of the drug enough that, if the concentration of fentanyl is above a given level, it will set off the test, and if it's below that level, it won't. There are some complications with this that friends have mentioned to me (fentanyl has a bit of a clumping tendency, for instance), but I think it would be great if someone figured out a practical guide for how to use test strips to determine the over/under concentration of a given batch of opioids, so that active users can adjust their dosage to try to avoid overdoses. Maybe someone could even make and promote an app based on the idea.
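To make the over/under idea concrete, here is a minimal sketch of the dilution arithmetic in Python. The detection limit, sample mass, and cutoff fraction are hypothetical placeholder numbers, and the function is purely illustrative; real strips vary, fentanyl distributes unevenly in a batch, and none of this is validated harm-reduction guidance.

```python
# Illustrative sketch only: placeholder numbers, not validated harm-reduction guidance.
# Idea: a strip reads positive iff the dissolved fentanyl concentration exceeds its
# detection limit, so the amount of water added per unit of sample mass sets the
# fentanyl mass fraction at which the result flips from negative to positive.

def water_volume_for_cutoff(sample_mass_mg: float,
                            cutoff_fraction: float,
                            strip_limit_ug_per_ml: float) -> float:
    """Water volume (mL) so the strip reads positive only if the batch's
    fentanyl mass fraction is at least `cutoff_fraction`.

    Positive strip  <=>  cutoff_fraction * sample_mass (ug) / volume (mL) >= strip limit,
    so volume = cutoff_fraction * sample_mass_ug / strip_limit_ug_per_ml.
    """
    sample_mass_ug = sample_mass_mg * 1000  # mg -> ug
    return cutoff_fraction * sample_mass_ug / strip_limit_ug_per_ml


if __name__ == "__main__":
    # Hypothetical example: 10 mg sample, flag batches that are >= 1% fentanyl by mass,
    # assuming a strip that detects fentanyl above 0.2 ug/mL in solution.
    volume_ml = water_volume_for_cutoff(sample_mass_mg=10,
                                        cutoff_fraction=0.01,
                                        strip_limit_ug_per_ml=0.2)
    print(f"Dissolve the 10 mg sample in roughly {volume_ml:.0f} mL of water")  # ~500 mL
```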

I would like to see a strong argument for the risk of "replaceability" as a significant factor in potentially curtailing someone's counterfactual impact in what might otherwise be a high-impact job. The central idea is that the “second choice” applicant, after the person who was chosen, might have done just as well, or nearly as well, as the “first choice” applicant, making the counterfactual impact of the first small. I would want an analysis of the cascading impact argument: that you “free up” the second-choice applicant to do other impactful work, who then “frees up” someone else, etc., and that this stream of “freeing up value” mostly addresses the “replaceability” concern.

I second this, mostly because I have doubts about the 80,000 Hours cause area. I love their podcast, but I suspect they get a bit shielded from criticism, in a way other cause areas aren't, by virtue of being such a core EA organization. A more extensive and critical inquiry into "replaceability" would be welcome, whatever the conclusion.

More stuff about systems change! (complexity theory, phase shifts, etc.)

Being metacrisis-aware and criticizing the whole "single cause area specialization" approach, because many of the big problems are interwoven
