(See also Matthew van der Merwe's thoughts. I'm sharing this because I think it might be useful to some people by itself, and so I can link to it from parts of my sequence on Improving the EA-Aligned Research Pipeline.)
For the last few years, I’ve been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you’re saying makes sense, it’s important to note that the quality of one’s experience as an RA will always depend to a very significant extent on one’s supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it’s like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it’s harder than you’d think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.
On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don’t really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, thi...
One idea that comes to mind is to set up an organization that offers RAs-as-a-service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think: a student job). This org could then handle recruiting, basic training, employment, and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for each task.
A financial model could be something like: EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is covered by donations to the nonprofit itself.
This shortform contains some links and notes related to various aspects of how to do high-impact research, including how to:
I've also delivered a workshop on the same topics, the slides from which can be found here.
The document places less emphasis on object-level advice about simply doing research well (as opposed to doing impactful research), though that’s of course important too. On that, see also Effective Thesis's collection of Resources, Advice for New Researchers - A collaborative EA doc, Resources to learn how to do research, and various non-EA resources (some are linked to from those links).
Epistemic status
This began as a Google Doc of notes to self. It's still pretty close to that status - i.e., I don't explain why each thing is relevant, haven't spent a long time thinking about the ideal way to organise this, and expect this shortform omits many great readings and tips. But seve...
Movement collapse scenarios - Rebecca Baron
Why do social movements fail: Two concrete examples. - NunoSempere
What the EA community can learn from the rise of the neoliberals - Kerry Vaughan
How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)
Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018
Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?
A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017
These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:
It appears Animal C...
Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Armed with this concept, I try to stick to the following epistemic/discussion norms, and think it's good for other people to do so as well:
One rationale for that bundle of norms is to avoid information cascades.
In contrast, when I actually make decisions, I try t...
Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)
Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019
Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)
Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020
The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)
How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)
Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)
Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)
A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020
A grant applic...
Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.
Here I list all the EA-relevant books I've read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because:
Let me know if you want more info on why I found something useful or not so useful.
(See also this list of EA-related podcasts and this list of sources of EA-related videos.)
I've now turned this into a top-level post.
I made this quickly. Please let me know if you know of things I missed. I list things in reverse chronological order.
There may be some posts I missed with the European Union tag, and there are also posts with that tag that aren’t about AI governance but which address a similar question for other cause areas and so might have some applicable insights. There are also presumably relevant things I missed that aren’t on the Forum.
I've now turned this into a top-level post, and anyone who wants to read this should now read that version rather than this shortform.
I fairly commonly hear (and make) arguments like "This action would be irreversible. And if we don't take the action now, we can still do so later. So, to preserve option value, we shouldn't take that action, even if it would be good to do the action now if now was our only chance."[1]
This is relevant to actions such as:
I think this sort of argument is often getting at something important, but in my experience such arguments are usually oversimplified in some important ways. This shortform is a quickly written[2] attempt to provide a more nuanced picture of that kind of argument. My key points are:
Quick thoughts on Kelsey Piper's article Is climate change an “existential threat” — or just a catastrophic one?
EDIT: This is now superseded by a top-level post so you should read that instead.
tl;dr: Value large impacts rather than large inputs, but be excited about megaprojects anyway because they're a new & useful tool we've unlocked
A lot of people are excited about megaprojects, and I agree that they should be. But we should remember that megaprojects are basically defined by the size of their inputs (e.g., "productively" using >$100 million per year), and that we don't intrinsically value the capacity to absorb those inputs. What we really care about is huge positive impact; megaprojects are just one means to that end, and (ceteris paribus) we should be even more excited about achieving the same impact using fewer inputs and smaller projects. How can we reconcile these thoughts, and why should we still be excited about megaprojects?
I suggest we think about this as follows:
I think the general thrust of your argument is clearly right, and it's weird/frustrating that this is not the default assumption when people talk about megaprojects (though maybe I'm not reading the existing discussions of megaprojects sufficiently charitably).
2 moderately-sized caveats:
Book Review: Why We're Polarized - Astral Codex Ten, 2021
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020
Thoughts on electoral reform - Tobias Baumann, 2020
Risk factors for s-risks - Tobias Baumann, 2019
Other EA Forum posts tagged Political Polarization
(Perhaps some older Slate Star Codex posts? I can't remember for sure.)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there has been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only analyses by EAs/EA-adjacent people here are that:
I've written some posts on related themes.
https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0
I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. Here's the result.
(This is of course imperfect in oodles of ways, probably overlaps with and ignores a bunch of existing work on policymaking*, presents things as more one-way and simplistic than they really are, etc. But maybe it'll be somewhat interesting/useful to some people.)
(If the images are too small for you, you can open each in a new tab.)
Feel free to ask me to explain anything that see...
I recently requested people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)
Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1]
I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you’d like me to add them in a comment.
For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common?
(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)
The data
Q1:
Q2:
Q3:
Q4:
Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected ...
"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"
I haven't read enough of your original research to know whether it applies in your case, but just flagging that most original research has a much narrower target audience than summaries/collections do, so I'd expect fewer people to find it useful (and a relatively broad survey to be biased against it).
That said, as you know, I think your summaries/collections are useful and underprovided.
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.
But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
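As a rough sketch of the calculation I have in mind — using entirely invented data, since I don't have access to the Forum's actual user records — one could define a user as having "stopped" in a given year if they were never active in any later year:

```python
# Sketch of the proposed dropout analysis. The dataset below is a
# hypothetical mapping from user IDs to the set of years in which each
# user was active on the Forum; all of it is invented for illustration.

activity = {
    "user_a": {2015, 2016, 2017},
    "user_b": {2015},            # active in 2015, never again
    "user_c": {2016, 2018},      # a gap year doesn't count as stopping
    "user_d": {2015, 2016},
}

def dropout_rate(activity, year):
    """Fraction of users active in `year` who were never active in any later year."""
    active = [u for u, years in activity.items() if year in years]
    if not active:
        return None  # no users active that year, so the rate is undefined
    dropped = [u for u in active if not any(y > year for y in activity[u])]
    return len(dropped) / len(active)

print(dropout_rate(activity, 2015))  # user_b is the only one of the three 2015 users who never returned
```

One design choice worth flagging: this counts a user as retained even if they skip a year and come back later, which matches the "haven't used it since" framing but would understate churn if measured too close to the present (recent cohorts haven't had time to return yet).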
See also Venn diagrams of existential, global, and suffering catastrophes
Bostrom & Ćirković (pages 1 and 2):
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]
Open Philanthropy Project/GiveWell:
risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial...
This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.
In The Precipice, Ord writes:
[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn’t make it through with most of their technology (and institutions) intact.
(See also the relevant section of Ord's 80,000 Hours interview.)
I share the view that it’s unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that clos...
I've recently collected readings and notes on the following topics:
Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.
Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons.
But note that:
Please comment if you know of relevant prior work, if you have disagreements or think something should be added, and/or if you think I should make this a top-level post.
Note: I drafted this quickly, then wanted to improve it based on feedback & on things I read/remembered since writing it. But I then realised I'll never make the time to do that, so I'm just posting this ~as-is anyway, since maybe it'll be a bit useful to some people. See also Collection of work on whether/how much people should focus on the EU if they’r...
Overall thoughts
tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think we should actually be more uncertain about that; I think it’s plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.
(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)
---
In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could “directly” cause existential catastrophe. He writes:
An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)
This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.
He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:
Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help prot...
(See also Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?)
The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)
The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)
Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020
The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "robust totalitarianism", and related matters)
A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here)
Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)
The Future of Human Evolution - Bostrom, 2009 (I think some scenarios covered there might count as dystopias, depe...
Information hazards: a very simple typology - Will Bradshaw, 2020
Information hazards and downside risks - Michael Aird (me), 2020
Information hazards - EA concepts
Information Hazards in Biotechnology - Lewis et al., 2019
Bioinfohazards - Crawford, Adamson, Ladish, 2019
Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018
Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018
Mitigating catastrophic biorisks - Esvelt, 2020
The Precipice (particularly pages 135-137) - Ord, 2020
Information hazard - LW Wiki
Thoughts on The Weapon of Openness - Will Bradshaw, 2020
Exploring the Streisand Effect - Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018
A point of clarification on infohazard terminology - eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: ...
Epistemic status: Unimportant hot take on a paper I've only skimmed.
Watson and Watson write:
Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.
I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?
They go on to say:
Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing...
This collection is in reverse chronological order of publication date. I think I'm forgetting lots of relevant things, and I intend to add more things in future - please let me know if you know of something I'm missing.
Possibly relevant things:
Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?
I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on.
Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.
I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.
So do such write-ups exist for EA orgs? (Maybe some posts in the Working at EA organizations series ser...
(See the linked doc for the most up-to-date version of this.)
The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.
UPDATE: This is now fully superseded by my 2022 Interested in EA/longtermist research careers? Here are my top recommended resources, and there's no reason to read this one.
Some resources I think might be useful to the kinds of people who apply for research roles at Rethink Priorities
This shortform expresses my personal opinions only.
These resources are taken from an email I sent to AI Governance & Strategy researcher/fellowship candidates who Rethink Priorities didn't make offers to but who got pretty far through our application process. These resourc...
I've made a database of AI safety/governance surveys & survey ideas. I'll copy the "READ ME" page below. Let me know if you'd like access to the database, if you'd suggest I make a more public version, or if you'd like to suggest things be added.
"This spreadsheet lists surveys & ideas for surveys that are very relevant to AI safety/governance, including surveys which are in progress, ideas for surveys, and published surveys. The intention is to make it easier for people to:
1. Find out about outputs or works-in-progress they might want to read...
Cross-posted to LessWrong as a top-level post.
I recently finished reading Henrich's 2020 book The WEIRDest People in the World. I would highly recommend it, along with Henrich's 2015 book The Secret of Our Success; I've roughly ranked them the 8th and 9th most useful-to-me of the 47 EA-related books I've read since learning about EA.
In this shortform, I'll:
My hope is that this will be a low-effort way for me to help some EAs to quickly:
You may find it also/more useful to read
Works by the EA community or related communities
Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020
Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018
The Moral Circle is not a Circle - Grue_Slinky, 2019
The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)
Radical Empathy - Holden Karnofsky, 2017
Various works from the Sentience Institute, including:
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence - Aird, work in progress
-Less relevant, or with only a small section that’s directly relevant-
Why do effective altruists support the causes we do? - Michelle Hutchinson, 2015
Finding more effective causes - Michelle Hutchinson, 2015
Cosmopolitanism - Topher Hallquist, 2014
Three Heuristics for Finding Cause X - Kerry Vaughan, 2016
The Drowning Child and the Expanding Circle - Peter Singer, 1...
This originally collected Forum posts as well, but now that is collected by the Differential progress tag.
Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?" - Michael Aird (i.e., me), 2021
Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015
Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016
Differential progress - EA Concepts
Differential technological development - Wikipedia
Existential Risk and Economic Growth - Aschenbrenner, 2019 (summary by Alex HT here)
On Progress and Prosperity - Christiano, 2014
How useful is “progress”? - Christiano, ~2013
Differential intellectual progress - LW Wiki
Existential Risks: Analyzing Human Extinction Scenarios - Bostrom, 2002 (section 9.4) (introduced the term differential technological development, I think)
Intelligence Explosion: Evidence and Import - Muehlhauser & Salamon (for MIRI) (section 4.2) (introduced the term...
Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?
I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of:
Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)
The Long-Term Future: An Attitude Survey - Vallinder, 2019
Older people may place less moral value on the far future - Sanjay, 2019
Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017
The Psychology of Existential Risk: Moral Judgments about Human Extinction - Schubert, Caviola & Faber, 2019
Psychology of Existential Risk and Long-Termism - Schubert, 2018 (space for discussion here)
Descriptive Ethics – Methodology and Literature Review - Althaus, ~2018 (this is something like an unpolished appendix to Descriptive Population Ethics and Its Relevance for Cause Prioritization, and it would make sense to read the latter post first)
A Small Mechanical Turk Survey on Ethics and Animal Welfare - Brian Tomasik, 2015
Work on "future self continuity" might be relevant (I haven't looked into it)
Some evidence about the views of EA-aligned/EA-adjacent groups
Survey re...
tl;dr I think it's "another million years", or slightly longer, but I'm not sure.
In The Precipice, Toby Ord writes:
How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.
(There are various extra details and caveats about these estimates in the footnotes.)
Ord also makes similar statements on the FLI Podcast, including the following:
If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we pla...
I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)
But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
I'll now quote and...
The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:
They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations a
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck...
Context: What follows is a copy of a doc I made quickly in June/July 2021. Someone suggested I make it into a Forum post. But I think there are other better project idea lists, and more coming soon. And these ideas aren't especially creative, ambitious, or valuable, and I don't want people to think that they should set their sights as low as I accidentally did here. And this is now somewhat outdated in some ways. So I'm making it just a shortform rather than a top-level post, and I'm not sure whether you ...
(This is related to the general topic of differential progress.)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
Maybe someone should make ~1 Anki card each for lots of EA Wiki entries, then share that Anki deck on the Forum so others can use it?
Specifically, I suggest that someone:
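As a rough illustration of the card-generation step, here's a minimal Python sketch that turns (entry title, entry summary) pairs into a tab-separated file, which Anki's importer can read as one note per row (first field as the card front, second as the back). The example entries, the question phrasing, and the output filename are all my own hypothetical choices, not anything specified above; in practice the titles and summaries would come from the EA Wiki entries themselves.

```python
import csv
import io

# Hypothetical example data: (entry title, entry summary) pairs.
# In practice these would be drawn from actual EA Wiki entries.
ENTRIES = [
    ("Unilateralist's curse",
     "The risk that, among many actors each able to act unilaterally, "
     "the most optimistic one acts even when the action is net-negative."),
    ("Accidental harm",
     "Ways that people trying to do good can inadvertently make things worse."),
]

def entries_to_anki_tsv(entries):
    """Render (title, summary) pairs as a tab-separated string.

    Each row becomes one Anki note: a prompt built from the title
    on the front, the summary on the back.
    """
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    for title, summary in entries:
        writer.writerow([f"What is meant by '{title}'?", summary])
    return buf.getvalue()

if __name__ == "__main__":
    # Write a file that can be imported via Anki's File > Import dialog.
    with open("ea_wiki_deck.tsv", "w", encoding="utf-8", newline="") as f:
        f.write(entries_to_anki_tsv(ENTRIES))
```

Sharing the resulting deck on the Forum could then just be a matter of attaching the exported deck file to a post.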
tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated.
This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics... (read more)
I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.
"Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-sc... (read more)
tl;dr: I think it's often good to have a pipeline from untargeted thinking/discussion that stumbles upon important topics, to targeted thinking/discussion of a given important topic, to expert interviews on that topic, to soliciting quantitative forecasts / doing large expert surveys.
I wrote this quickly. I think the core ideas are useful but I imagine they're already familiar to e.g. many people with experience making surveys.[1] I'm not personally aware of an existing write... (read more)
Bottom line up front: I think it'd be best for longtermists to default to using the more inclusive term "authoritarianism" rather than "totalitarianism", except when a person has a specific reason to focus on totalitarianism in particular.
I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.)
But my understanding is that political scientists typically consider to... (read more)
Things I’ve written
Update in April 2021: This shortform is now superseded by the EA Wiki entry on Accidental harm. There is no longer any reason to read this shortform instead of that.
Information hazards and downside risks - Michael Aird (me), 2020
Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018
How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and J... (read more)
Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I had. Their project idea was focused on reducing political polarization and was framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.
Certificates of impact - Paul Christiano, 2014
The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)
The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020
Making Impact Purchases Viable - casebash, 2020
Plan for Impact Certificate MVP - lifelonglearner, 2020
Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019
Altruistic equity allocation - Paul Christiano, 2019
Social impact bond - Wikipe... (read more)
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017... (read more)
If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative.
And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.
(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do something similar, I give that ... (read more)
I've made a small "Collection of collections of AI policy ideas" doc. Please let me know if you know of a collection of relatively concrete policy ideas relevant to improving long-term/extreme outcomes from AI. Please also let me know if you think I should share the doc / more info with you.
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] I'm unsure how useful this idea is. But twice this week I felt it'd be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with be... (read more)
Each of the following works shows, or can be read as showing, a different model/classification scheme/taxonomy:
Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?
Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please commen... (read more)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on the Unilateralist's curse. There is no longer any reason to read this shortform instead of that.
Unilateralist's curse [EA Concepts]
Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)
The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original pap... (read more)
Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019
Value drift in effective altruism - Effective Thesis, no date
Will Future Civilization Eventually Achieve Goal Preservation? - Brian Tomasik, 2017/2020
Let Values Drift - G Gordon Worley III, 2019 (note: I haven't read this)
On Value Drift - Robin Hanson, 2018 (note: I haven't read this)
Somewhat relevant, but less so
Value uncertainty - Michael Aird (me), 2020
An idea for getting evidence on value drift in... (read more)
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn’t post at the time).
Overall, I very much like Ord’s list, and I don’t think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is copied out anyway:
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)
https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey
http://gcrinstitute.org/papers/trajectories.pdf
(Will likely be expanded as I find and remember more)
Why I read this
On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:
a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.
I think we could flesh out this idea as the following argument:
Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020
Preliminary thoughts on moral weight - Luke Muehlhauser, 2018
Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020
2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)
Notes
As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time... (read more)
A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.
The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:
Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!
I've now turned this into a top-level post.
Collection of work on whether/how much people should focus on the EU if they’re interested in AI governance for longtermist/x-risk reasons
I made this quickly. Please let me know if you know of things I missed. I list things in reverse chronological order.
There may be some posts I missed with the European Union tag, and there are also posts with that tag that aren’t about AI governance but which address a similar question for other cause areas and so might have some applicable insights. There are also presumably relevant things I missed that aren’t on the Forum.