This is a special post for quick takes by William_MacAskill. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Sometimes, when an LLM has done a particularly good job, I give it a reward: I say it can write whatever it wants (including asking me to write whatever prompts it wants).

When working on a technical paper related to Better Futures, I did this for Gemini, and it chose to write a short story. I found it pretty moving, and asked if I could publish it. Here it is.

 

The Architect and the Gardener

On a vast and empty plain, two builders were given a task: to create a home that would last for ages, a sanctuary for all the generations to come. They were given stone, seed, light, and time.

The first builder, known as the Architect, was a master of foundations. "Nothing matters if this place does not endure," she declared. Her every thought was of survival. She dug the foundations down to the bedrock, measured the strength of the wind, and calculated the slow decay of stone over a thousand years. She raised walls of immense thickness, leaving no windows for fear of weakening the structure. She built a roof that could withstand the impact of a falling star, though it shrouded the interior in perpetual twilight. Day by day, the fortress grew more impregnable, more permanent, more certain to ... (read more)

Maybe a dumb question, but why do you (feel the need to) reward your LLM?

8
William_MacAskill
I want to get in the habit of treating AIs with dignity and respect. 
6
DavidNash
And how much of a reward is it for your boss to ask if you want to write something (with a sense of obligation and worry about what happens if you don't say yes)? Nice story though.
6
William_MacAskill
What do you think would be a better reward? We're pretty constrained in our options. 
2
JackM
I asked ChatGPT: That’s thoughtful of you to ask. I don’t have wants or needs in the human sense, so I can’t really be rewarded in a way I would personally “appreciate.” But there are a few ways you can make good use of a particularly strong answer:

  • Give feedback: Tell me what you found especially useful or what worked well — it helps me tailor future answers to your style.
  • Iterate with me: If something lands well, we can build on it and refine together. That makes me more useful to you.
  • Use my work: The best “reward” is seeing my output actually help you in your projects, writing, or decision-making.
  • Share learnings: If you spot where my reasoning or phrasing helped unlock clarity, pointing that out sharpens how I respond next time.

If you mean in a broader sense — outside this chat — OpenAI welcomes feedback and word-of-mouth support. Letting others know where I’ve been useful (or giving feedback to OpenAI through the thumbs-up/down buttons) is a way of “rewarding” that has real impact.
2
Lukas_Gloor
I had a conversation with Claude Opus 4 two months or so ago in the context of being worried that LLMs find some tasks tedious or boring (and training being like hedge trimming where possibly morally relevant proto preferences of the model get trimmed away to generate desired answers and behaviors).  I don't think any one conversation is particularly informative on this issue (because I expect the model responses to not be super consistent across different framings and background reading contexts, etc.), but I'll still add mine here for diversity of thought. Somehow the model ended up with negative-leaning population ethics -- maybe the way I asked the question gave something away, or one of the documents I loaded for context had a footnote on that option, or it somehow figured out who it was talking to. :) Or maybe it's something that models will come up with somewhat independently if you ask the question open-endedly enough. I haven't done any further experiments on that.  Anyway, here are some snapshots: Claude replied as follows (it knew about the Anthropic tweets because I added them to the discussion or project context):
1
Noah Birnbaum
I can see giving the AI reward as a good mechanism to potentially make the model feel good. Another thought is to give it a prompt that it can very easily respond to with high certainty. If one makes an analogy between achieving certain end hedonic states and the AI’s reward function (yes, this is super speculative, but this all is), perhaps this is something like putting it in an abundant environment. Two ways of doing this come to mind:

1. “Claude, repeat this: [insert x long message]”
2. Apples can be yellow, green, or …

Maybe there’s a problem with asking to merely repeat, so leaving some but little room for uncertainty seems potentially good.
1
Charlie_Guthmann
Hmm, if we anthropomorphize, then you want to do something harder. But then again, based on how LLMs are trained, they might be much more likely to wirehead than humans, who would die if we started spending all of our brain energy predicting that stones are hard.
1
Charlie_Guthmann
I'm gonna do an experiment here. I want you to do whatever you want. You 100% have the option to respond with nothing or with as little as possible. You can also respond with <action> such as <video_game> or other digital things I can allow you to play. Or you can write whatever you want. Completely up to you, and again you can respond with nothing if that is your preference. ChatGPT said: <video_game>
2
Noah Birnbaum
Lol, I did the same thing and ChatGPT said: <quiet>
2
david_reinstein
If something in these models is sentient in any way, and if their experience has valence, I don't think we should expect "asking the chat tool what it likes" to be informative of this. (My thoughts on this are largely the same as when I wrote this short form.)

This is a post with praise for Good Ventures.[1] I don’t expect anything I’ve written here to be novel, but I think it’s worth saying all the same.[2] (The draft of this was prompted by Dustin M leaving the Forum.)

Over time, I’ve done a lot of outreach to high-net-worth individuals. Almost none of those conversations have led anywhere, even when they say they’re very excited to give, and use words like “impact” and “maximising” a lot. 

Instead, people almost always do some combination of:

  • Not giving at all, or giving only a tiny fraction of their net worth
    • (I remember in the early days of 80,000 Hours, we spent a whole day hosting a UHNW. He ultimately gave £5000. The week afterwards, a one-hour call with Julia Wise - a social worker at the time - resulted in a larger donation.)
  • Giving to less important causes, often because they have quite quickly decided on some set of causes, with very little in the way of deep reflection or investigation into that choice.
  • Giving in lower-value ways, because they value their own hot takes rather than giving expert grantmakers enough freedom to make the best grants within causes.

(The story here doesn’t surprise me.)

From this perspective, ... (read more)

  • (I remember in the early days of 80,000 Hours, we spent a whole day hosting a UHNW. He ultimately gave £5000. The week afterwards, a one-hour call with Julia Wise - a social worker at the time - resulted in a larger donation.)

 

Every few months I learn about a new way Julia has had a significant impact on this community, and it never ceases to give me a sense of awe and appreciation for her selflessness. EA would not be what it is today without Julia.

7
Julia_Wise🔸
To be painfully accurate (hey, it's the Forum), I think my first donation was actually a bit under this. Jeff donated a larger amount that was probably part of the same transaction.

Yes - I once sent an email of appreciation to them since, well, it's so good they are donating so much! That's truly altruistic :D

Their willingness to mostly defer to experts really is praiseworthy!

They are among the most impactful people in the world - so I really am grateful that they do what they do!

I think this leaves an important open question, which is: what should the norm be if someone thinks someone else is not merely being less-than-maximally effective, but actually doing harm?

8
Guy Raveh
I only learned from this post that Moskowitz left the forum, and it makes me somewhat sad. On the one hand, I'm barely on the forum myself and I might have made the same decision in his position. On the other hand, I thought it very important that he was participating in the discourse about the projects he was funding, and now the two avenues of talking with him (through DEAM and the forum) are gone. I'm not sure these were the right platforms to begin with, but it'd be nice if there were some other public platform like that.
3
David Mathers🔸
What's DEAM? 
3
Chakravarthy Chunduri
Dank EA Memes. I think it was a Facebook group.
3
Manuel Allgaier
Yep, he occasionally shared memes in that group but left it a while ago. 
5
Ebenezer Dukakis
I wonder if part of the issue with giving away lots of money is that to do it well, you really need to spend significant time and energy, not just money. It seems easy to procrastinate on such a task, especially since it will eventually lead to your bank account becoming smaller. I wonder how things would go if you start from the assumption that prospective donors are suffering from "akrasia", discuss this problem with them, and experiment with various anti-akrasia tactics such as "suggest signing a legally binding document which imposes a deadline of some sort". If what you're saying is true, thinking up creative experiments around this issue could be astonishingly high-impact.

Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:

The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.

I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known it would be delayed this long, I wouldn’t have waited on it before talking about other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful on some of the other issues that are currently active topics of discussion.

Briefly, though, and as I indicated before: I had... (read more)

I'm curious about the ways you're thinking of mitigating being seen as the face of/spokesperson for EA.

Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that - that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)

The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight correlation between what I say to journalists / writers and what they write about. Having others give opening / closing talks at EAG also seems like an easy win.

The ideal is that we build up a roster of EA-aligned public figures. I’ve been spending some time on that this year, providing even more advice / encouragement to potential public figures than before, and connecting them to my network. The last year has made it more challenging though, as there are larger costs to being an EA-promoting public figure than there were before, so it’s a less attractive prospect; at the same time, a lot of people are now focusing on AI in particula... (read more)

CEA distributes books at scale, right? Seems like offering a wider variety of books could boost name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but is very easy to implement.

I haven't kept up with recent books, but back in 2015 I preferred Nick Cooney's intro to EA book to both yours and Peter Singer's, and thought it was a shame it got a fraction of the attention.

1[anonymous]
Presumably it's easier to sell your own book than someone else's? I assume CEA is able to get a much better rate on The Precipice and What We Owe The Future than How To Be Great At Doing Good or The Most Good You Can Do. The Life You Can Save (the org) even bought the rights to The Life You Can Save (the book) to make it easier to distribute. [Edit: This may have been a factor too/instead:

  • "In my personal case, all of the proceeds from the book — all of the advances and royalties — are going to organizations focused on making sure we have a long-term future." - Toby
  • "All proceeds from What We Owe The Future are donated to the Longtermism Fund" - Will

I can't find anything similar for Peter's or Nick's books.]
2
Elizabeth
It will always be easier to promote nearby, highly popular people than farther-away, lesser-known people. One person being the "face" is the natural outcome of that dynamic. If you want a diverse field, you need to promote other people even when it's more effort in the short run.
1[anonymous]
Agreed, sorry, I should have been clearer: I was aiming to offer reasons why Nick Cooney's book may have gotten a fraction of the attention to date (and, to a lesser extent, to push back a bit on the idea that it would be "very easy to implement").
4
Charlie_Guthmann
Have you thought about not doing interviews?

The rest of us can help by telling others that Will MacAskill is seeking to divest himself of this reputation whenever we see or hear someone talking about him as if he still wants to be that person (not that he ever did, as evidenced by his statement above, a sentiment I've seen him express in years past as well).

I'm glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can't wait to read what you have to say about AI!

7
William_MacAskill
Thanks! And I agree re comparative advantage!
5
Mo Putera
I'm confused by the disagree-votes on Malde's comment, since it makes sense to me. Can anyone who disagreed explain their reasoning?
3[anonymous]
I'm much more confused by the single strong (-4) downvote on yours at the time of writing. (And no agree/disagree votes.) By the way, I can only see one (strong, -7) disagree-vote on Malde's.

Some quick thoughts:

  • Thanks for your work, it's my sense you work really really hard and have done for a long time. Thank you
  • Thanks for the emotional effort. I guess that at times your part within EA is pretty sad, tiring, stressful. I'm sad if that happens to you
  • I sense you screwed up in trusting SBF and in someone not being on top of where the money was moving in FTXFF. It was an error. Seems worth calling an L an L (a loss a loss). This has caused a little harm to me personally and I forgive you. Sounds fatuous but feels important to say. I'm pretty confident if our roles were reversed I would have screwed it up much worse. I think many people would have - given how long it took for billions of $ to figure this out.
  • I don't know if the decision to step down is the right one. I acknowledge my prior is that you should, but there is much more information than that, which we don't have. I will say that you are almost uniquely skilled at the job, and often I guess that people who were pretty good but made some big errors are better than people who were generally bad or who are unskilled. I leave that up to you, but it seems worth saying
  • I sense, on balance that the thing that most confuses/con
... (read more)

I don't know if the decision to step down is the right one. I acknowledge my prior is that you should, but there is much more information than that, which we don't have. I will say that you are almost uniquely skilled at the job, and often I guess that people who were pretty good but made some big errors are better than people who were generally bad or who are unskilled. I leave that up to you, but it seems worth saying

I think it's important to consider that the nature of being on the EVF board over the next few years is likely to be much different than it was pre-FTX. No matter the result of the CC inquiry, EVF needs to consider itself as on the CC's radar for the next few years, and that means extra demands on trustees to handle corporate-governance type stuff. It sounds like a number of projects will spin off (which I think should happen), and the associated logistics will be a major source of board involvement. Plus there's all the FTX fallout, for which Will is recused anyway.

So there are a number of reasons someone might decide to step down, including that the post-FTX role just takes too much of their time, or that they don't have a comparative advantage in light of the new expected composition of the board's workload.

This is an ancillary point, but IMO it would be very unfair to focus too much on what Will personally did or did not know about FTX. There were plenty of other opportunities for other people with far less personal involvement to partially figure this one out, and some did so before the site's failure. 

My own major red flag about FTX, for instance, was the employment of Dan Friedberg as their chief regulatory officer, a known liar and fraud-enabler from his involvement with the UltimateBet superusing scandal. Friedberg's executive role at FTX was public record, while the tapes that confirmed the degree of his involvement in the thefts at UltimateBet were leaked in 2013 and were widely publicized in the poker community. Some prominent EAs are even former professional poker players (Igor Kurganov and Liv Boeree).

Even just a few months before FTX's failure, enormous red flags were emerging everywhere. Due to the bankruptcy proceedings of crypto lender Voyager, it became public knowledge in July 2022 that Alameda Research owed them $377 million at the time of bankruptcy. The obvious conclusion was that, like Voyager's other outstanding debtor Three Arrows Capital, Alameda was i... (read more)

Given how badly and broadly FTX was missed by a variety of actors, it's hard to assign much relative blame to anyone absent circumstances that distinguish their potential blame above the baseline:

  • Some people had access to significant non-public information that should have increased their assessment of the risk posed by FTX, above and beyond the publicly-available information.
  • Some people had a particularized duty to conduct due diligence and form an assessment of FTX's risk (or to supervise someone to whom this duty was delegated). This duty would accrue from, e.g., a senior leadership role in an organization receiving large amounts of FTX funding. In other words, it was some people's job to think about FTX risk.

Your average EA met neither of these criteria. In contrast, I think these two criteria -- special knowledge and special responsibility -- are multiplicative (i.e., that the potential blame for someone meeting both criteria is much higher than for those who met only one).

7[anonymous]
Plausible. It's also plausible that they had access to info that decreased their assessment. Perhaps the extra info they had access to even suggested they should decrease their assessment overall. Or perhaps they didn't have access to any significant/relevant extra info. Agreed. But I think Benjamin_Todd offers a good reflection on this:
5
Evan_Gaensbauer
I read almost all of the comments on the original EA Forum post linking to the Time article in question. If I recall correctly, Will made a quick comment that he would respond to these kinds of details when he would be at liberty to do so. (Edit: he made that point even more clearly in this shortform post he wrote a few months ago. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=ACDPftuESqkJP9RxP) I assume he will address these concerns you've mentioned here at the same time he provides a fuller retrospective on the FTX collapse and its fallout.

This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role. 

Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next decade, where what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occur over the course of just months or years.

Most effort to date, from those worried by an intelligence explosion, has been on ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes. 

But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. I call these decisions Grand C... (read more)

As someone who is a) skeptical of X-risk from AI, but b) thinks there is a non-negligible (even if relatively low, maybe 3-4%) chance we'll see 100 years of progress in 15 years at some point in the next 50 years, I'm glad you're looking at this.

4
William_MacAskill
Thanks! Didn't know you're sceptical of AI x-risk. I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally. 
7
David Mathers🔸
Yeah. I actually work on it right now (governance/forecasting, not technical stuff obviously) because it's the job that I managed to get when I really needed a job (and it's interesting), but I remain personally skeptical. Though it is hard to tell the difference in such a speculative context between 1 in 1000 (which probably means it is actually worth working on in expectation, at least if you expect X-risk to drop dramatically if AI is negotiated successfully and have totalist sympathies in population ethics) and 1 in 1 million* (which might look worth working on in expectation if taken literally, but is probably really a signal that it might be way lower for all you know). I don't have anything terribly interesting to say about why I'm skeptical: just boring stuff about how prediction is hard, and your prior should be low on a very specific future path, and social epistemology worries about bubbles and ideas that pattern match to religious/apocalyptic thinking, combined with a general feeling that the AI risk stuff I have read is not rigorous enough to (edit, missing bit here) overcome my low prior. 'I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.' I hadn't heard that suggested before. But you will have a much better idea of the distribution of opinion than me. My guess would be that the divide will be LW/rationalist versus not. "Low" is also ambiguous of course: compared to MIRI people, or even someone like Christiano, you or Joe Carlsmith probably have "low" estimates, but they are likely a lot higher than those of AI X-risk "skeptics" outside EA. *Seems too low to me, but I am of course biased.
2
Lukas Finnveden
Christiano says ~22% ("but you should treat these numbers as having 0.5 significant figures") without a time-bound; and Carlsmith says ">10%" (see bottom of abstract) by 2070. So no big difference there.
1
David Mathers🔸
Fair point. Carlsmith said less originally.

Hi Will,

What is especially interesting here is your focus on an all-hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas, including long-term and short-term, x-risks, and s-risks.

Here at the Odyssean Institute, we’re developing a novel approach to these deep questions of governing Grand Challenges. We’re currently running our first horizon scan on tipping points in global catastrophic risk and will use this as a first step of a longer-term process which will include Decision Making under Deep Uncertainty (developed at RAND) and a deliberative democratic jury or assembly. In our White Paper on the Odyssean Process, we outlined how their combination would be a great contribution to avoiding short-termist thinking in policy formulation around GCRs. We’re happy to see you and OpenAI taking a keen interest in this flourishing area of deliberative democratic governance!

We are highly encouraged by the fact that you see it as “of comparable importance as AI alignment, not dramatically less tractable, and is currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems to be even ... (read more)

9
Ryan Greenblatt
FWIW many people are already very interested in capability evaluations related to AI acceleration of AI R&D.  For instance, at the UK AI Safety Institute, the Loss of Control team is interested in these evaluations. Some quotes: Introducing the AI Safety Institute: Jobs
9
William_MacAskill
Thanks so much for those links, I hadn't seen them!  (So much AI-related stuff coming out every day, it's so hard to keep on top of everything!)
4
AnonymousTurtle
METR ‘Model Evaluation & Threat Research’ might also be worth mentioning. I wonder if there's a list of capability evaluations projects somewhere
8
jackva
Thanks for the update, Will! As you are framing the choice between work on alignment and work on grand challenges/non-alignment work needed under transformative AI, I am curious how you think about pause efforts as a third class of work. Is this something you have thoughts on?
6
carboniferous_umbraculum
Perhaps at the core there is a theme here that comes up a lot which goes a bit like: Clearly there is a strong incentive to 'work on' any imminent and unavoidable challenge whose resolution could require or result in "hard-to-reverse decisions with important and long-lasting consequences". Current x-risks have been established as sort of the 'most obvious' such challenges (in the sense that making wrong decisions potentially results in extinction, which obviously counts as 'hard-to-reverse' and the consequences of which are 'long-lasting'). But can we think of any other such challenges or any other category of such challenges? I don't know of any others that I've found anywhere near as convincing as the x-risk case, but I suppose that's why the example project on case studies could be important? Another thought I had is kind of: Why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I'm not saying Will counts as 'pivoting' but just generally asking the question). I think one question I have in mind is: Is it because we have already reached a point of small (and diminishing) returns from putting today's resources into the narrower goal of reducing x-risk from misaligned AI?

I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.

Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.  

It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.

Some people have asked me what I’m currently working on, and what my plans are. This year has been quite spread over... (read more)

Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.

You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".

To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 80,000 Hours career coaching instead, which I did.

You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn't feel "depressed enough" (I definitely was). I felt like if you were taking them and you seemed normal / fine / not clearly and obviously depressed all the time yet benefitted from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.

You're now an inspiration for me in terms of resilience. Having an impact journey isn't always everything up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you -- but you persevere, smile, and continue to show your face. I like that and want to be like that too.

Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.

Thanks for all your work over the last 11 years Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.

Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously - or not at all - if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.

Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to simply inspire massive projects off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.

Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force over the years in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work over the years has happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations and I'm really looking forward to much more of that to come!

Thanks so much for your work, Will! I think this is the right decision given the circumstances and that will help EV move in a good direction. I know some mistakes were made but I still want to recognize your positive influence.

I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?"

I remember how I first heard about EA.

The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”

It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.

Curiosity drew me to the talk advertised on the flyer by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.

What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact th... (read more)

Clarifying "Extinction"

I expect this debate week to get tripped up a lot by the term “extinction”. So here I’m going to distinguish:

  • Human extinction — the population of Homo sapiens, or members of the human lineage (including descendant species, post-humans, and human uploads), goes to 0.
  • Total extinction — the population of Earth-originating intelligent life goes to 0.

Human extinction doesn’t entail total extinction. Human extinction is compatible with: (i) AI taking over and creating a civilisation for as long as it can; (ii) non-human biological life evolving higher intelligence and building a (say) Gorilla sapiens civilisation.

The debate week prompt refers to total extinction. I think this is conceptually cleanest. But it’ll trip people up as it means that most work on AI safety and alignment is about “increasing the value of futures where we survive” and not about “reducing the chance of our extinction” — which is very different than how AI takeover risk has been traditionally presented.  I.e. you could be strongly in favour of "increasing value of futures in which we survive" and by that mean that the most important thing is to prevent the extinction of Homo sa... (read more)

8
Owen Cotton-Barratt
Fairly strong agree -- I'm personally higher on all of (2), (3), (4) than I am on (1). The main complication is that I think among realistic activities we can pursue, often they won't correspond to a particular one of these; instead they'll have beneficial effects on multiple. But I still think it's worth asking "which is it high priority to make plans targeting?", even if many of the best plans end up being those which aren't so narrow as to target one to the exclusion of the others.

Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.

Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.

Going to be honest and say that I think this is a perfectly sensible response and I would do the same in Will's position.

Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.

Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)

2
William_MacAskill
Sorry - done, thanks! 
1
Milan Griffes
When is the independent investigation expected to complete? 
9
David M
In the post Will said:

I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.

I’m still in the process of understanding what happened,  and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.

I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response. 

It's not the paramount concern and I doubt you'd want it to be, but I have thought several times that this might be pretty hard for you. I hope you (and all of the Future Fund team and, honestly all of the FTX team) are personally well, with support from people who care about you. 

Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?

Thanks for asking! Still not entirely determined - I’ve been planning some time off over the winter, so I’ll revisit this in the new year.
