Quick takes


I'd like to have conversations with people who work in or are knowledgeable about energy and security, whether that's energy grids, nuclear power plants, solar panels, etc. I'm exploring a startup idea to harden the world's critical infrastructure against powerful AI. (I am also building a system to make formal verification more deployable at scale, so that it may reduce loss-of-control and misuse scenarios.)

I've given workshops on using AIs for productivity/research to various research organizations like MATS. I'm happy to offer a bit of my ti... (read more)

Been thinking about morality recently. Here are my current thoughts; take them with a grain of salt because they aren't battle-tested yet.

There are some strong arguments for utilitarianism, but regardless of what is correct theoretically, in practice utilitarianism doesn't work well without some kind of deontological constraints.

Continuing with attempting to develop a pragmatic morality, it then becomes clear that virtue ethics is important too, because (a) rules are rigid compared to judgement, and (b) decisions aren't independent but also affect how you'll act in the fu... (read more)

I quite liked this article by Martha Nussbaum: Virtue Ethics is a Misleading Category. She points out that both the classical utilitarians and Kant talked extensively about virtues. On the other hand, there's great variation among those who call themselves 'virtue ethicists', such that it's not clear if virtue ethics is really a thing.

But the point I want to make is: a good utilitarian has to acknowledge the role of virtue, and I think a lot of modern utilitarians have forgotten this. We want to use utility-calculation to guide our actions, but humans can'... (read more)

I wanted to make this poll to see how the community views the speed/x-risk tradeoff. I'm personally 99% x-risk and 1% speed, so I would hard agree. My prediction is most people will agree, maybe a 70/30 split, but I'm curious to see.

Showing 3 of 12 replies
John Salter
60% disagree

I would be willing to delay technological innovation by up to 100 years to significantly reduce existential risk

I think the question is too imprecisely phrased to be answered precisely. When would the delay start? Over what time period would it be felt? (e.g. a 100% delay for 100 years is very different from a 1% delay over 10,000 years, even though both amount to 100 years of lost progress in total)

I'm thus giving a directional answer, assuming we're talking about whether seeking to dramatically reduce technological progress in exchange for safety is a feasible way to make the world a better place. I don't thi... (read more)

Craig Green 🔸
You are rightly grasping that we disagree, but I don't think you are understanding my view (and to be clear, reasonable people can disagree about this). My wife and I are debating whether we will have more children or not. Having another child is desirable to us, so much so that she's willing to undergo the relatively risky process of childbirth to have another one. However, failing to have another child is significantly less bad than losing one of our existing children, IMO. I'd even say that failing to have 100 more children is significantly less bad than losing one of our existing children.

The reason why is that the child who never existed is not sentient and so does not experience any deprivation. They do not suffer. And my suffering of that abstract loss is not nearly as bad as the suffering I would experience losing a living child whom I know. Now, you may disagree with that and mourn all the lost utility, and that is a reasonable perspective, but it's not mine. As you can see, this is a deeper philosophical difference, not some sort of misunderstanding about expected utility or something like that.

FYI, about this sentence: "X risks aren't especially bad because of all the utility lost ... they're bad because after they happen there's never any utility again." I don't really see a difference between these two statements.
Michael St Jules 🔸
I agree with Craig here. I've written, in this sequence, about problems with most conceptions of utility people use, and I describe alternatives that I think better match what Craig is saying.

"On the Promotion of Safe and Socially Beneficial Artificial Intelligence" by @SethBaum from 2016

The recent "forecasting is overrated" post got me thinking:

Solution Seeking a Problem

When talking about forecasting, people often ask questions like “How can we leverage forecasting into better decisions?” This is the wrong way to go about solving problems.

Intuitively, that seems correct, and I've relied on the expression "when you have a hammer, everything looks like a nail." This got me thinking: is it necessarily the wrong way, or is this a truism?

If I have a legitimately useful and powerful tool, isn't it indeed valuable to look around for problems... (read more)

This just came to mind: the reason that it's the wrong way to go about solving problems is that you want to solve the largest problems (well, per resource) and not just solve any random problem. Like, there is a problem that my shoes are currently untied, and I don't want to bend down or spend 10 seconds to tie them, but it's not very important.

So if you want to solve the most important problems, you should start with the problem and then work backwards to what solutions you might wish existed. I think the mere fact that people often talk about forecasting as the solution they are seeking to apply, whether that be Sentinel or whoever, is evidence that things are going wrong.

Marcus Abramovitch 🔸
Actually, the set of things you want to apply electricity to is far smaller than the set of things you don't want to. For example, if your baby is crying, please don't use electricity. The problem side should do the searching, since they have the shape and exact know-how of the problem.
david_reinstein
They do and it’s a powerful point. But on the other hand they may be very much unaware of the nature of available tools and solutions. So I think there should probably be some searching — and listening — in both directions. If it’s done in good faith.

I have been disappointed by the support some EAs have expressed for recent activist actions at Ridglan Farms. I share others’ outrage at the outcome of the state animal cruelty investigation, which found serious animal cruelty law violations but led to a settlement that still permits Ridglan to sell beagles through July and to continue in-house experimentation. But I personally think the tactics used in the recent open rescues, including property damage and forced entry to remove animals, violate reasonable moral bounds on what actions are p... (read more)

Showing 3 of 19 replies
MHR🔸
Thanks for engaging with the post! You made a lot of different points, so I'll do my best to separate them out and consider them one-by-one:

(1)
  • I'm not making an argument for quietism. Saying that we have an obligation to follow the law is compatible with having obligations (even extraordinarily strong ones) to use non-illegal means to combat injustice (e.g. by advocating for changes to laws).
  • It's a genuinely interesting point that many of our laws are inherited traditions, rather than the direct product of the democratic process. However, I don't think that's a strong argument in this specific case. The US has had true universal suffrage for more than 60 years, and in that time Congress and state legislative bodies have passed many laws related to the treatment of animals and the criminality of trespassing. Under any reasonable interpretation of democratic legitimacy, a democratically-elected legislative body specifically dealing with an issue and choosing to pass laws that accept the underlying common law principles and add specific penalties, related rules, etc., should confer it.
  • I don't disagree that a reasonable contractualist would think that there are cases where it would be justified to break an unjust law. The core question is whether the required conditions hold in this case. Democratic legitimacy is one important part of that, since reasonable contractualists generally would give some weight to whether laws resulted from a just process. A point I didn't make in the OP, but I think is relevant here, is that even if you disagree about the democratic legitimacy argument, I think the specific nature of the lawbreaking here falls outside many notions of justifiable civil disobedience. That's because the Ridglan rescues involved breaking a law to achieve a non-symbolic end (rescuing the dogs), not merely symbolically challenging a law by breaking it.

(2)
  • I think you're moving between a couple different notions of universalizability here. It's

State laws are path-dependent, and very often rely on common law principles and concepts uncritically applied. That does not equate to democratic legitimacy for every codified version of property and criminal law.

I think we have fundamentally incompatible views on the appropriate frame to apply to balancing questions—I am not at all a utilitarian, and I don’t think you should be either. But I’ll set that aside. 

You again seem to conflate lawbreaking with immorality. Please don’t do that. Rosa Parks broke the law. So did the Ridglan rescuer... (read more)

MHR🔸
I see what you're getting at here. But if we agree that the externalities of crime aren't internalized, then I think we're just back in the position of the original post: you think the act utilitarian calculus checks you; I'm both skeptical that it does and think that there are non-act-utilitarian reasons why we ought to avoid lawbreaking.

You should volunteer at your first EAG! (Especially if you are a student or early career)

  • If you don't have a network in EA, EAGs can be overwhelming. Volunteering gives you a ready-made, organic network.
  • Volunteering is pretty chill - a lot of the shifts aren’t that hard.
  • At your first EAG, it’s unlikely that you are using your time so efficiently that a few hours of volunteering would cut into the value of your conference.

I am attending my first EAG and volunteering as well. Hoping to learn and build meaningful networks.

Mitchell Laughlin🔸
I volunteered at my first EAGx (EAGx Australia 2023) and support this sentiment.
Toby Tremlett🔹
And also, AFAIK if you volunteer your ticket is free :)

Deleted

[This comment is no longer endorsed by its author]

I no longer endorse this post, which argued that honey is basically fine to eat ethically, to the point that I chose to delete it entirely.

Pat Myron 🔸
Protein/dairy tradeoffs/substitutions make more sense: honey/syrup/agave seem less necessary. For example, waffles, pancakes, French toast, etc. still taste good without much of those, and honey/syrup/agave all seem too sugary to be healthy. Since they seem less necessary, your reasoning makes more sense to me as a case against honey alternatives than as a case for honey.

We recently published an interview with Matthew Coleman - another entry in our Career Journeys series. Matthew is the Executive Director of Giving Multiplier, a platform that encourages donations to highly effective charities through donation matching. Before this, he completed a PhD in psychology, researching the psychology of altruism.

The interview covers quite a lot of ground, but a few of the things we talked about include:

  • The gap between what a career looks like from the outside and what it's actually like day-to-day.
  • Advice for people w... (read more)

Lighting has been getting ridiculously cheaper. And for the most part we seem not to be taking advantage of that positive externality: reducing crime through better lighting. This has been battle-tested as one of the more effective public-safety interventions; see Chalfin, Hansen, Lerner & Parker (2022), an RCT in NYC public housing that found ~36% reductions in nighttime outdoor index crimes from added street lighting. Many, many major cities still haven't copied this at the right levels!
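As a rough illustration of the back-of-envelope math here (a minimal sketch: the ~36% effect size is from the study, but the baseline crime count and lighting cost below are hypothetical placeholders, not figures from the paper):

```python
# Back-of-envelope: cost per crime averted from added street lighting.
# Effect size (~36%) is from Chalfin, Hansen, Lerner & Parker (2022);
# the baseline crime count and annualized cost are hypothetical placeholders.

effect_size = 0.36                 # relative reduction in nighttime outdoor index crimes
baseline_crimes_per_year = 500     # hypothetical: crimes per year in a treated area
lighting_cost_per_year = 300_000   # hypothetical: annualized cost of added lighting (USD)

crimes_averted = effect_size * baseline_crimes_per_year
cost_per_crime_averted = lighting_cost_per_year / crimes_averted

print(f"Crimes averted per year: {crimes_averted:.0f}")           # 180
print(f"Cost per crime averted: ${cost_per_crime_averted:,.0f}")  # $1,667
```

Whether this pencils out obviously depends on the real local baseline and installation costs, but it shows how cheap lighting only needs a modest crime base to look attractive.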

But we're also getting substantial negative externalities from bright light... (read more)

Maybe my biggest medium-term worry about transformative AI, other than the takeover stuff, is a constellation of concerns I sometimes abbreviate to "political economy." Right now a large fraction of humans in democracies can live and support their families as a direct result of voluntarily exchanging their labor. It'd take active acts of violence to break from this (pretty good, all things considered) status quo. As a peacetime norm, this is unusually good relative to the history of human civilization. 

At some point in the future (in the "good" future... (read more)

Showing 3 of 5 replies
Linch
My current first-pass answer is that:

  1. Windfall shares. Some fraction of AI stocks should be given one-time to every human alive.
     1. This still requires some form of largesse/threat, but one-time largesse feels less scary to me than continuously needing to uphold the norm.
     2. And it's not exactly largesse while people (especially outside of AI companies) still have real power; more like a structured negotiation.
     3. For reasons of political-economy realities, probably with more given towards rich countries and/or countries that are closer to developing AGI.
        1. I'm imagining maybe ratios like 10:1.
     4. Not sure about the exact amount of shares, but it should be way more than enough to support everybody indefinitely at significantly above modern Western standards, excepting positional goods.
     5. After the initial transfer, this completely solves the largesse and political economy problems. The "dignity" problem of having your consumption no longer tied to your labor is still there, but I'm less worried about this (seems more like a framing problem).
     6. Children can still be a problem. My guess is that normal inheritance stuff is enough, though in edge cases maybe we say that you aren't allowed to disown your children completely from your windfall shares.
        1. If people live forever, maybe we have a rule that reproduction means a minimum fraction of your shares automatically goes to your children. I dunno.
  2. Charter. Later on, some version of this is also written directly into the charters of the AIs, so at minimum something like 0.1-10% of their values ought to care about something like all of current humanity's preferences.
     1. Assuming alignment is solved, superintelligence is now (0.1-10%) on the side of all humanity.
  3. (Probably optional) Some form of protection against manipulation/theft/expropriation.
     1. If there's a transition period where AIs are good enough to do most work in the economy and generate a lot of wealth and/or disemploy most

Thanks Linch <3

Linch
Claude gives some references to prior work. Maybe the most interesting is Anton Korinek:

Thinking of drafting a post on war crimes, trying to answer the following puzzles:

  1. Why do we have a notion of war crimes at all, given how bad war itself is?
  2. Why are some things war crimes and not others?
  3. Why do precursor notions to war crimes appear, independently, in essentially every culture that has fought wars at scale?
  4. Given that essentially every culture has also broken these norms, sometimes spectacularly, why does the norm always come back, and often come back stronger?

Common answers to these questions seem profoundly misguided. The naive answer, that... (read more)

Showing 3 of 14 replies

You might like this post I wrote earlier about the bargaining-theory puzzle of war; I engaged pretty significantly with the academic literature on the subject, particularly James Fearon. On the other hand, Fearon himself mostly reasoned from first principles rather than conducting a careful historical assessment, so in that regard it might fit your interests less.

The post never got very popular but a few people who read it carefully really enjoyed it. One of the better compliments I've gotten on my writing is when somebody said they were ... (read more)

Mo Putera
Your experience reminded me of how Holden Karnofsky described his career so far:
Linch
Thanks, though tbc Holden's way better at it than I am! 

A quick reminder that applications for EA Global: London 2026 close this Sunday (May 10)!

We already have more applications than last year, and this looks set to be our biggest EAG yet (again)! If you've been meaning to apply but haven't gotten around to it, this is your sign.

The admissions bar is more accessible than people often assume. If you're working on or seriously exploring a high-impact problem, you should apply.

This is the EAG I've been most excited to put together yet. I'd love to see you all there.

📍 InterContinental London, The O2 · 29-31 May 2... (read more)

[comment deleted]
Showing 3 of 4 replies

The best thing I’ve read on it so far is this article by Kelsey Piper

Mihkel Viires 🔹
Despite the real risk from hantavirus being low, it is getting a lot of media coverage right now. I think this is actually good. A lot of people had already forgotten about the pandemic we had not that long ago and moved on to worrying about other problems currently dominating the news cycle. Hopefully this serves as a (small) reminder that pandemic preparedness/biosecurity really does matter.
Ian Turner
At the risk of being too curmudgeonly, I'd say the main take is to stay away from the news cycle.

I was just watching the Dwarkesh/David Reich podcast; fascinating stuff. Looking back at how I was taught taxonomy and anthropological history, I find it frustrating. Note that I don't know much about (evolutionary) biology, genetics, or the frontier of genetic-history research, so this is my layman's attempt to explain why the subject has generally been puzzling to me as explained by people who probably don't understand it either; I'm not trying to propose that I understand something David Reich doesn't.

My main gripe is that we are taught evolutio... (read more)

FYI, next week we will be highlighting the first batch of articles from In Development, @Lauren Gilbert's new global development magazine.

Lauren and most of the authors will be on the Forum to answer your questions throughout the week. More info to come on Monday, but I figured I'd mention in case anyone wanted to read the articles in advance (they are here, and all authors apart from Paul Niehaus will be around to answer questions). 

I'm looking forward to the discussion. 

Earning to give is lonely and requires repeated decisions. This is bad.

If you're earning to give, you are lucky if you have one EtG team-mate. The people you talk to every day do not have moral intuitions similar to yours, and your actions seem weird to them.

If you do direct work, the psychological default every day is to wake up and do work. You are surrounded by people who think the work is important, and whose moral values at least rhyme with your own.

If you earn to give, most days you do not give (you're probably paid bi-weekly, and transaction costs d... (read more)

Showing 3 of 10 replies

I'm EtG and would love to connect with others. My DMs are open! A bit about me: I'm a SWE based in Europe, and my preferred cause area is animal welfare.

D_M_x
We have a regular EtG meetup in London. You might be interested in setting up something similar where you live, perhaps branching off a preexisting Effective Giving/Giving What We Can group?
dan.pandori
Oh totally. I'm lucky to be in the Bay Area where EA is a thing at all.

Somewhat meta point on epistemic modesty, calling it out here because it is a pattern that has deeply frustrated me about EA/rationalism for as long as I have known these communities:
(Making a quick take rather than commenting due to an app.operation_not_allowed error; I'm responding to @Linch's quick take on war crimes.)
I guess these are just EA/rationalist norms, but an approach that glosses major positions as so quickly dismissible strikes me as insufficiently epistemically modest. I would expect such a treatment to fail to properly consider alternativ... (read more)

Showing 3 of 4 replies

I feel like I've heard this position a lot before, and I have some sympathy for it, but I feel like it implicitly overlooks a lot of what I find valuable about writing EA Forum comments, and it sets an overly high bar.

When one writes academic papers, one is expected to cite relevant previous work. Citation is an important mechanism for tracing the evidence for claims and for assigning credit. Even in academic spheres, I think this is perhaps taken pathologically far (to the point where it probably sometimes is unduly burdensome and vaguely implie... (read more)

Ben Stewart
I'm not trying to be unkind, and I apologise if I was. I'll take this down if you ask here or via DM. I overreacted to what is a quick take because I think it was emblematic of a bad pattern, but that is unfair and disproportionate of me.

My main thing here is to push for better intermediate thinking. The standard EA/rat approach is so often based on dismissing mainstream or non-EA views and then acting like their individual opinion is clearly superior, often reinventing current or past views that have had lots of non-EA examination. I want EA thinking to be better, and a lot of the time it would be improved by people reading more before opining, and not thinking the views of EA are so special.

We just have very different experiences then.

Do you mean critique someone on epistemic immodesty grounds? This is probably true, but can you point me to the examples you have in mind? (I may indeed be doing this too much, and seeing the examples would help.)
Linch
Thanks for the much kinder response and the serious engagement! :) Please don't take your comment down; it's good to have this discussion in the open. (Also, apologies for the long comment; brain not working really well, so less succinct than I want to be.)

I want to defend my own approach here, and won't speak for the "standard EA/rat approach" except insomuch as my thinking is constitutive of that approach (as the old joke goes, "you're not in traffic, you are traffic"). Generally when I try to learn information about the world, what I go for is to seek facts and models that are:

  • interesting (i.e., novel to me)
  • true
  • useful

The best way to do this typically involves some combination of Google searches, original thinking, reading papers, conversations, reading, toy models, and (since ~2025) talking to AIs[1]. Since college, I've honed an ability to form views very quickly that I can defend, and believe I'm reasonably calibrated on. I think this is sometimes surprising to people, but it shouldn't be. The first data point tells you a lot[2].

Similarly, my bar for publishing my thoughts, ignoring opportunity cost, is also fairly low. The primary thing I'm interested in from a content perspective is some combination of novel/true/useful to my readers. Novel to whom? I have an implicit model of who my readers are, and I try to calibrate accordingly. I want to write things that are new to a large fraction of my readers. I think you might have more of an academia-derived model where it's very important to only share thoughts that are novel to humanity. I think this is less good of a norm. If I can write a better intro to stealth than is widely understood/disseminated, I think this is a useful service even if no individual point there is original.

Similarly, I think it's less important in non-academic contexts to attribute the originators of an idea or an analysis. I don't think it's useless, I just think it's less important. But if I'm thinking about a pro

This is too tangential from the forecasting discussion to justify being a comment there so I'm putting it here:

Forecasting makes no sense as a cause area, because cause areas are problems: something like "people lack resources/basic healthcare/etc." or "we might be building superintelligent AI and we have no idea what we're doing". Forecasting is more like a tool. People use forecasting to address AI, global poverty, and all sorts of more general problems, including ones that aren't major EA focuses.

For instance, we could treat vaccines as a cause area. All ... (read more)

The recent work on SAEBER, which applies sparse autoencoders (SAEs) to the screening of DNA synthesis printers, marks a big step towards effective function-based screening.

This allows printers to be monitored much as a lab technician uses gel electrophoresis to separate a messy mixture into clear, readable bands with a specialized gel: SAEs do the computational equivalent, taking the muddied activation results of a neural network and projecting them onto a higher-dimensional space until the individual viral motifs can... (read more)
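For readers unfamiliar with the mechanics, here is a minimal toy sketch of the sparse-autoencoder idea being described. This is an illustrative stand-in, not SAEBER's actual architecture; the dimensions and threshold are made up:

```python
import torch
import torch.nn as nn

class ToySparseAutoencoder(nn.Module):
    """Toy SAE: projects dense model activations into a wider, sparsely
    activating feature space (the computational 'gel' that separates
    superposed features), then reconstructs the original activations.
    Illustrative only; not SAEBER's actual architecture."""

    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # project up to a wider space
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct the input

    def forward(self, activations: torch.Tensor):
        # ReLU zeroes most features, so each surviving feature can come to
        # represent a single interpretable motif. (Training would add an L1
        # sparsity penalty on `features` plus a reconstruction loss.)
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# Hypothetical usage: flag an order if features associated with viral
# function fire strongly on the screening model's activations.
sae = ToySparseAutoencoder()
acts = torch.randn(1, 512)               # stand-in for real model activations
features, _ = sae(acts)
flagged = (features > 1.0).any().item()  # made-up threshold for illustration
```

The design point is that the wide, sparse feature space makes individual learned features far more monosemantic than raw activations, which is what makes function-based flagging plausible.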
