This is a special post for quick takes by Aaron Boddy🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

How bad is it to exploit bees?

I agree that taking action to improve the welfare of farmed bees is positive.

With other farmed animals, such as chickens, pigs, and cows, a significant goal is ultimately to bring fewer of those animals into existence in order to reduce overall suffering.

But is that also the case for bee farming? Or do we instead want to increase the number of bees we farm, because we need more commercial pollination services for a greater good? And if so, even if we didn't intervene in bee welfare in any way, would we still be aiming to increase the number of farmed bees from a consequentialist point of view?

Is it possible to calculate the net utility (positive or negative) from bringing one suffering bee into existence?

I really like how you're using your shortform to ask these small, well-formed, interesting questions!

(I don't have anything useful to say here, I just wanted to give this my 👍.)

Is it possible to calculate the net utility (positive or negative) from bringing one suffering bee into existence?

I doubt it, but if so it would make a great unit of measurement.

How bad is Amazon?

There are a lot of reasons people don't like Amazon: it exploits its workers, it fights tax laws, it has a significant environmental impact, etc.

But is Amazon net-negative from a consequentialist point of view, or is there a net-positive impact of Amazon? My rough thinking is:

  • Jeff Bezos has projects such as Blue Origin which might be positive for longtermism.
  • He recently donated $10 billion to fight climate change through the Bezos Earth Fund (and this may continue?).
  • He has been interested in some other short-term philanthropy in the past. His ex-wife (who now has a lot of his money) has also signed the Giving Pledge (though Bezos himself hasn't).

I think this argument is easier to make with someone like Elon Musk. There may be reasons people personally dislike him, but I think it's relatively easy to argue that, because of OpenAI, SpaceX, and Tesla, he is likely to have a significant net-positive impact on the world, particularly over the long term.

I'm not really sure what I plan to do with this information. I'm not sure an "EA supports buying from Amazon" stance is particularly useful or accurate. It's just something that's played in the back of my mind a lot when I hear people badmouth Amazon.

I think you've left out the most important point: the net positive effect of Amazon having generated trillions of dollars of value for its customers, suppliers, and employees.

  • Customers gain from having a streamlined reliable online ordering experience, with fast delivery times, large body of reviews, and friendly dispute resolution policies
  • Suppliers gain access to the huge market of said customers, as well as the infrastructure to deliver products and collect payment
  • Employees are offered a job opportunity that they may freely choose to leave

This doesn't even touch on the huge social value of the websites built on top of their cloud. It's perhaps hard to appreciate without a background in tech, but briefly: before AWS (Amazon Web Services) and its competitors, every company had to build and manage its own servers, i.e., huge, hot physical machines that require dedicated IT staff to oversee and that break when too many people visit your website.

Zvi has a line that goes something like "The world's best charity is Amazon".

This is great, thanks! I hadn't considered this. I found the Zvi post you're referring to, if anyone else is interested.

Do you know if there has been any work to try to quantify this added value from Amazon? (For example, in Meatonomics, David Robinson Simon discusses the hidden costs of meat: a $4 Big Mac really costs society $11, with the extra $7 absorbed by society.) Is there any potential to calculate something similar for Amazon? E.g., every $1 someone spends on Amazon typically saves the consumer/society $X.

I'm not an economist, and I know that it's very difficult to calculate the value added by technology, and that this value would likely vary by product. I'm just wondering whether something like that could be possible while I'm trying to explore this idea.
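To make the shape of the question concrete, here is a minimal sketch of the two calculations being compared: the Meatonomics-style externalized-cost accounting from the comment above, and a per-dollar consumer-surplus figure for the hypothetical "$X" Amazon case. The Big Mac numbers come from the comment; the Amazon numbers are entirely made up for illustration, since no real estimate is given here.

```python
def externalized_cost(retail_price, true_social_cost):
    """Cost absorbed by society rather than the buyer (the Meatonomics framing)."""
    return true_social_cost - retail_price

def surplus_per_dollar(value_to_consumer, price_paid):
    """Consumer surplus captured per dollar spent (the '$X' in the question)."""
    return (value_to_consumer - price_paid) / price_paid

# Meatonomics example from the comment: a $4 Big Mac with an $11 true social cost,
# so $7 of cost is externalized onto society.
big_mac_externality = externalized_cost(4, 11)

# Hypothetical Amazon case (made-up numbers): suppose an item priced at $20 would
# cost the buyer $25 in money and time to obtain elsewhere; then each $1 spent
# yields $0.25 of consumer surplus.
amazon_surplus = surplus_per_dollar(25, 20)

print(big_mac_externality, amazon_surplus)  # 7 and 0.25
```

The hard part, of course, is not this arithmetic but estimating the inputs (true social cost, counterfactual value to the consumer), which is where the economics gets difficult.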

Yeah, I'm not currently that excited about Bezos as a philanthropist, but the near-term impact of Amazon in the countries it operates in has been hugely positive, especially for low-income people.

I agree with most of the benefits, but I think the "employees may freely choose to leave" part may be somewhat contentious. People need money to survive, and one argument often brought forward is that Amazon has driven a lot of smaller businesses out of the market, so employees may not have many alternative places to work anymore.
