We've had a lot of votes on the banner! If you'd like to explain why you voted the way you did, what your cruxes[1] are, and what would change your mind, comment in this thread.
You can also mention if you'd be open to having a dialogue with another Forum user who disagrees with you. If someone comments and offers to dialogue with you, you can set up a time to write a dialogue together (perhaps via Forum DMs).
To find out more about the event, and how to contribute, read the announcement post.
- ^
Beliefs or assumptions which determine your overall opinion, but which are better targets for argument, i.e. ones you would more easily change your mind about. For example, one of mine is "philosophy of mind doesn't make progress".
Ok--at Toby's encouragement, here are my thoughts:
This is a very old point, but to my mind, at least from a utilitarian perspective, the main reason it's worth working on promoting AI welfare is the risk of foregone upside. I.e. without actively studying what constitutes AI welfare and advocating for producing it, we seem likely to have a future that's very comfortable for ourselves and our descendants--fully automated luxury space communism, if you like--but which contains a very small proportion of the value that could have been realized by creating lots of happy artificial minds. So concern for creating AI welfare seems likely to be the most important way in which utilitarian and human-common-sense moral recommendations differ.
It seems to me that the amount of value we could create if we really optimized for total AI welfare is probably greater than the amount of disvalue we'll create if we just use AI tools and allow for suffering machines by accident, since in the latter case the suffering would be a byproduct, not something anyone optimizes for.
But AI welfare work (especially if this includes moral advocacy) just for the sake of avoiding this downside also seems valuable enough to be worth a lot of effort on its own, even if suffering AI tools are a long way off. The animal analogy seems relevant: it's hard to replace factory farming once people have started eating a lot of meat, but in India, where Hinduism has discouraged meat consumption for a long time, less meat is consumed and so factory farming is evidently less widespread.
So in combination, I expect AI welfare work of some kind or another is probably very important. I have almost no idea what the best interventions would be or how cost-effective they would be, so I have no opinion on exactly how much work should go into them. I expect no one really knows at this point. But at face value the topic seems important enough to warrant at least doing exploratory work until we have a better sense of what can be done and how cost-effective it could be, only stopping in the (I think unlikely) event that we can say with some confidence that the best AI welfare work to be done is worse than the best work that can be done in other areas.
When telling stories like your first paragraph, I wish people either said "almost all of the galaxies we reach are tiled with some flavor of computronium and here's how AI welfare work affected the flavor" or "it is not the case that almost all of the galaxies we reach are tiled with some flavor of computronium and here's why."
The universe will very likely be tiled with some flavor of computronium is a crucial consideration, I think.
To my mind, the first point applies to whatever resources are used throughout the future, whether it’s just the earth or some larger part of the universe.
I agree that the number/importance of welfare subjects in the future is a crucial consideration for how much to do longtermist as opposed to other work. But when comparing longtermist interventions—say, splitting a budget between lowering the risk of the world ending and proportionally increasing the fraction of resources devoted to creating happy artificial minds—it would seem to me that the “size of the future” typically multiplies the value of both interventions equally, and so doesn’t matter.
(Not an AI welfare/safety expert by any stretch, just adding my two cents here! Also, my interest was very much piqued by the banner, and I loved hovering over the footnote! I've thought about digital sentience before, but this banner and this week really put me into a "hmm..." state.)
My view leans towards "moderately disagree." (I fluctuated between this, neutral, and slightly agree.) For context, if the question were about AI safety, I'd say "highly agree." Thoughts behind my current position:
Why I'd prioritize it less:
Why I'd still prioritize it:
Overall, I agree that resources and talent should be allocated to AI welfare because it's prudent and can prevent future suffering. However, I moderately disagree with it being an EA priority due to its current speculative nature and because I think AI safety should take precedence. I think AI safety and solving the alignment problem should be a priority, especially in these next few years, and I hold some confidence that this would also help prevent digital suffering.
Other thoughts:
My reasons for not making it a priority:
By contrast to existential risk, which we need to get right now or lose the opportunity (and all other opportunities) forever, I don't see a corresponding loss of option value here. Perhaps it's worth thinking about how to ensure we preserve the will to solve the issue through whatever upheaval comes next. But I think that's much easier than actually trying to solve it right now.
edit: I think the first consideration isn't nearly as strong for poverty / health interventions and animal interventions: it feels more like we already know some good things to do there so I'm on board with starting now, especially in cases where we think their effects will compound over time.
Do you have a sense of what you think the right amount to spend is?
I think spending zero dollars (and hours) isn't obviously a mistake, but I'd be willing to listen to someone who wanted to advocate for some specific intervention to be funded.
It seems really valuable to have experts at the time the discussion happens.
If you agree, then it seems worth training people now, for the future moment when we discuss it.
We can do that in the future too?
Training probably takes around 3 years to spin up and maybe another 3 years to complete. When did we start deciding to train people in AI safety, versus when were there enough trained people?
Seems plausible to me that the AI welfare discussion happens before we might currently be ready.
But again you're suggesting a time-limited window in which the AI welfare discussion happens, and if we don't intervene in that window it'll be too late. I just don't see a reason to expect that. I imagine that after the first AI welfare discussion, there will be others. While obviously it's best to fix problems as soon as possible, I don't think it's worth diverting resources from problems with a much clearer reason to believe we need to act now.
I think that being there at the start of a discussion is a great way to shift it. Look at AI safety (for good and ill)
For me, a key question is "How much is 5%?".
Here is a table I found.
So it seems like right now 5% is somewhere in the same range as Animal Welfare and EA Meta funding.
I guess that seems a bit high, given that animals exist and AIs don't.
I think a key benefit of early AI safety work was training AI safety folks to be around when needed. Having resources at a crucial moment isn't solely about money; it's about having the resource that is useful in that moment. A similar thing to do might be to train philosophers and government staffers and activists who are well versed in the AI welfare arguments and who can act if need be.
Not clear to me that that requires 5% of EA funding though.
this is super helpful! would be cool if we can see %s given to insect sentience or other smaller sub cause areas like that. does anyone have access to that?
I'd guess less than 0.5% (90% confidence).
I think the burden of proof lies with those advocating for AI welfare as an EA priority.
So far, I haven't read compelling arguments to change my default.
What's your thought on this:
Can you expand on this? Do you think that a model loaded onto a GPU could be conscious?
And do you think bacteria might be conscious?
I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores performing calculations. What is missing in the case of a GPU?
I think bacteria are unlikely to be conscious due to a lack of processing power.
Something unknown.
Do you think it's plausible that a GPU rendering graphics is conscious? Or do you think that a GPU can only be conscious when it runs a model that mimics human behavior?
Potential counterargument: microbial intelligence.
I agree, and I hope we get some strong arguments from those in favor. I would imagine there is already a bunch of stuff written, given the recent kerfuffle over Open Phil defunding it.
I have doubts about whether digital or electronic information flows will yield valenced sentience, although I don't rule it out.
But I have much stronger doubts about whether we can ever know what makes these 'digital minds' happy or sad, or even what goes in what direction.
Despite being a panpsychist, I rate it fairly low. I don't see a future where we solve AI safety but there are still a lot of suffering AIs. And if we fail on safety, then it won't matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.
My thoughts on the issue: