This is where we need a broad perspective.
Long-term, we solve the problem of meat-eating with artificial protein, which also solves many other problems.
Medium-term, we work to end factory-farming, which needlessly increases the suffering of animals. (I don't want to get deep into this because there are many experts here and I'm not one of them, but it is arguably the case that an animal bred for food that gets to live a decent life in a field is better off than if it had never been born because nobody needed to eat it. In the case of factory-farming, however, such an argument seems totally untenable.)
Short-term, we accept that we live in an imperfect world and that most people value saving human lives, even at the cost of animal lives. So we work to save human lives, improve health and improve quality of life, and instead of losing sleep over the calculation of the net impact on animals, we support the amazing organisations that are working to end factory-farming (like Farmkind) and to develop alternative protein (like GFI).
It's valuable to discuss questions like this, and I absolutely do not claim to have a definitive answer - all I say is that when I think about this, that's how I rationalise it.
Hi,
I'm not sure if you've had any interactions with the "EU Technical Policy Fellowship" led by Training for Good. You can find a lot of information online, and I could put you in contact with the trainers/organisers if that would be helpful.
They take 12 people (out of about 300 applicants) through an intense 8-week program about how to influence EU policy towards better AI Safety Governance. I was lucky enough to be a fellow earlier this year. Many of the fellows then do a 6-month internship at an AI-focused think tank or civil society organisation.
IMHO this group may be of interest to some of the fellows and/or they may be interested in volunteering to support some of the activities. I'm not sure, as the focus of the fellowship is very much on getting people into the bodies that you do not want to duplicate.
They may also just have a good network of others who may be interested - again, possibly you already have access to the same network (Brussels isn't so big!)
There may also be potential to work with the new AI Office. I'm sure they are totally understaffed and over-worked at the moment - however, it sounds like you're planning to do some things that they would support, so maybe they would see enabling this organisation as an effective way to meet some of their needs.
On this subject, it was nice to see Nick Kristof in the New York Times write on a related theme, comparing how we treat and respect dogs and pigs.
"Dogs Are the Best! But They Highlight Our Hypocrisy." - The New York Times (nytimes.com)
I agree fully with the sentiment, but IMHO as a logical argument it fails, as so many arguments do, not in the details but in making a flawed assumption at the start.
You write: "Clearly, in such a case, even though it would cost significant money, you’d be obligated to jump into the pond to save the child."
But this is simply not true.
For two reasons.
First, the scenario you describe isn't realistic. None of us wear $5000 suits. For someone who does wear a $5000 suit, you're probably right. But for most of us, the mental picture of "I don't want to ruin my clothes" does not translate into "I am not willing to give up $5000." I'm not sure what the equivalent realistic scenario is.
Second, in real cases of people drowning, choking or needing to be resuscitated, many people struggle even to overcome their own timidity and act in public. We see people stabbed and murdered in public places while bystanders do not intervene. I do not see compelling evidence that most people feel morally compelled to take major personal risks or make sacrifices to save a stranger's life. To give a very tangible example, how many people feel obligated to donate a kidney while they're alive to save the life of a stranger? It is something many of us could do, but almost nobody does. I know a kidney is probably worth more than $5000, but it's closer in order of magnitude to that than to ruined clothes.
Absolutely, it would be a better world for all of us if people did feel obliged to help strangers to the tune of $5000, but we don't live in that world ... yet.
The drowning child analogy is a great way to help people to understand why they should donate to charities like AMF, why they should take the pledge.
But if you present it as a rigorous proof, then it must meet the standards of rigorous proof in order to convince people to change their minds.
Additionally, my sense is that presenting it as an obligation rather than as a free, generous act is not helpful. You risk taking the pleasure and satisfaction out of giving for many people and replacing them with guilt. This might convince some people, but it might just cause others to resist and become defensive. There is so much evidence of this: there are immensely compelling reasons to do things that cost us nothing (e.g. voting against Trump), and still they do not change most people's behaviour. I think we humans have developed very thick skins and do not let ourselves be forced into doing things by logical reasoning if we don't want to be.
Good analysis of this from PauseAI:
I don't want to presume to paraphrase their analysis into one phrase, but if I were forced to, it would seem to be that there was a lot of pressure on Governor Newsom from powerful AI companies and interests, who also threatened to ruin the bill's sponsor Scott Wiener.
Still a pity that he couldn't resist the pressure.
It's kind of pathetic, but this is the reality of politics today. With their money, they really can either make or break a politician, and we voters are not smart enough to avoid being taken in by their negative advertising and dirt-digging.
It's clear that we need a much stronger movement on this. The other reason he was able to veto this bill is that the vast majority of people do not agree that AI poses a major / existential risk, and so they do not insist on the urgent action we need.