This is a special post for quick takes by Benny Smith. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Steelmanning is typically described as responding to the “strongest” version of an argument you can think of.
Recently, I heard someone describe it in a slightly different way: as responding to the argument that you “agree with the most.”
I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument is for a claim. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.
This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I’ve been in a few conversations in which someone says they are “steelmanning” some claim X, but says it in a tone of voice that communicates two things:
The speaker thinks that X is crazy.
The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.
It’s probably fine to have this tone of voice if you’re talking about flat-earthers or young-earth creationists, and are only “steelmanning” X as a silly intellectual exercise. But if you’re in a serious discussion, framing “steelmanning” as being about the argument you “agree with the most” rather than the “strongest” argument might help signal that you take the other side seriously.
Anyone have thoughts on this? Has this been discussed before?
Yeah. I think Bensinger has some posts about how steelmanning may be corrosive, and how ideological Turing tests (ITTs) are the social technology actually targeted at this problem (but that unfortunately people think steelmanning is better than it is).
I think this condescension issue you discuss is actually a fatal criticism of steelmanning, and that the ITT is how to explore cooperation with alien (to you) minds. The difference is basically that steelmanning draws you toward "make up a guy": it's kinda fun to think "what if there was a guy who thought xyz" and then work backwards to a plausible chain of facts/logic. It's not necessarily related to actually existing minds, so this activity is technically asocial! The ITT has some of the same good properties but shifts the emphasis to building bridges with a mind that in actuality, not just plausibly, exists.
I admit that there are numerous values of "outgroup" such that I think my simulation of them in my brain is way more sophisticated and justified than the actually existing proponents I've ever met. This is not good. I might be more charitable and enthusiastic about cooperation under pluralism if I had more of an ITT emphasis.
Discussions of the long-term future often leave me worrying that there is a tension between democratic decision-making and protecting the interests of all moral patients (e.g. animals). I imagine two possible outcomes:
Mainstream political coalitions make the decisions in their usual haphazard manner.
RISK: vast numbers of moral patients are ignored.
A small political cadre gains power and ensures that all moral patients are represented in decision-making.
Neither of these is what we should want.
CLAIM: The most straightforward way to dissolve this tradeoff is to get the mainstream coalitions to care about all sentient beings before they make irreversible decisions.
How?
A major push to change public opinion on animal welfare. Conventional wisdom in EA is to prioritize corporate campaigns over veg outreach for cost-effectiveness reasons. The tradeoff I've described here is a point in favor of large-scale outreach.
I don't just mean 10x your grandpa's vegan leafleting. A megaproject-scale campaign would be an entirely different phenomenon.
A Long Reflection. Give society time to come to its senses on nonhuman sentience.
Of course, the importance of changing public opinion depends a lot on how hingey you think the future is, and tractability depends on how close you think we are to the hinge. But in general, I think this is an underrated point for moral circle expansion.
I wrote this quickly and am on the fence about turning it into a longer-form post.
Thanks for this! For what it's worth I think this is an important and under-explored area and would be really interested in seeing a longer-form post version.
Animal and AI Consciousness in The Economist
https://www.economist.com/science-and-technology/2023/06/28/thousands-of-species-of-animals-likely-have-consciousness
A succinct discussion of the current state of our understanding of consciousness. I love seeing things like this in mainstream media.
Interestingly, there's also a reference to AI risk at the end:
As to conscious AI, Yoshua Bengio of the University of Montreal, a pioneer of the modern deep-learning approach to AI, told the meeting he believes it might be possible to achieve consciousness in a machine using the global-workspace approach. He explained the advantages this might bring, including being able to generalise results with fewer data than the present generation of enormous models require. His fear, though, is that someone will build a self-preservation instinct into a conscious AI, which could result in its running out of control.
This paragraph probably leaves readers with two misconceptions:
The wording implies that Bengio's main worry is deliberate coding of a self-preservation instinct, whereas the more prevalent concern is instrumental convergence: self-preservation tends to emerge as a useful subgoal even when no one builds it in.
Readers may conclude that consciousness is required for AI to be dangerous, which is not the case.
It also would have been nice for the article to mention the ethical implications for how we treat nonhuman minds, but that's usually too much to ask for.
Perhaps someone with better credentials than me could write them a letter.
There’s no Wikipedia article on Leah Garcés. This problem seems pretty tractable.