I don't really know... I suspect some kind of first-order utility calculus, one that tallies up the number of agents helped per dollar weighted by species, makes animal welfare look better by a large margin. But in terms of moving the world further along the "good trajectory", for some reason the idea of eliminating serious preventable diseases in humans feels like a more obvious next step along that path?
A priori, what is the motivation for elevating the very specific "biological requirement" hypothesis to the level of particular consideration? Why is it more plausible than similarly prosaic claims like "consciousness requires systems operating between 30 and 50 degrees Celsius" or "consciousness requires information to propagate through a system over timescales between 1 millisecond and 1000 milliseconds" or "consciousness requires a substrate located less than 10,000km from the center of the earth"?
It seems a little weird to me that most of the replies to this post are jumping to the practicalities/logistics of how we should/shouldn't implement official, explicit, community-wide bans on these risky behaviours.
I totally agree with OP that all the things listed above generally cause more harm than good. Most people in other cultures/communities would agree that they're the kind of thing which should be avoided, and most other people succeed in avoiding them without creating any explicit institution responsible for drawing a specific line between correct/incorrect behavior or implementing overt enforcement mechanisms.
If many in the community don't like these kinds of behaviours, we can all contribute to preventing them by judging things on a case-by-case basis and gently but firmly letting our peers know when we disapprove of their choices. If enough people softly disapprove of things like drug use, or messy webs of romantic entanglement, this can go a long way towards reducing their prevalence. No need to draw bright lines in the sand or enshrine these norms in writing as exact rules.
Sorry, I might not have made my point clearly enough. By remaining anonymous, the OP has shielded themselves from any public judgement or reputational damage. That seems hypocritical to me, given that the post they wrote is deliberately designed to bring about public judgement and affect the reputation of Nick Bostrom.
So I'm saying "if OP thinks it's okay to make a post which names Nick and invites us all to make judgements about him, they should also have the guts to name themselves"
I really don't think the crux is people who disagree with you being unwilling to acknowledge their unconscious motivations. I fully admit that sometimes I experience desires to do unsavory things such as
- Say something cruel to a person that annoys me
- Smack a child when they misbehave
- Cheat on my taxes
- Gossip about people in a negative way behind their backs
- Eat the last slice of pizza without offering it to anyone else
- Not stick to my GWWC pledge
- Leave my litter on the ground instead of carrying it to a bin
- Lie to a family member and say "I'm busy" when they ask me to help them with home repairs
- Be unfaithful to my spouse
- etc.
If you like, for the sake of argument let's even grant that for all the nice things I've ever done for others, ultimately I only did them because I was subconsciously trying to attract more mates (leaving aside the issue that if this was my goal, EA would be a terribly inefficient means by which to achieve it).
Even if we grant that that's how my subconscious motivations are operating, it still doesn't matter. It's still better for me to not go around hitting on women at EA events, and the EA movement is still better off if I'm incentivised not to do it.
Maybe all men have a part of ourselves which wants to live the life of Genghis Khan and torture our enemies and impregnate every attractive person we ever lay eyes on - but if that were true, that wouldn't imply it's ethical or rational to indulge that fantasy! And it definitely wouldn't imply that the EA project would be better off if we designed our cultural norms+taboos+signals of prestige in ways which encourage it.
The better I am at not giving in to these shitty base urges, and the more the culture around me supports and rewards me for not doing these degenerate things, the happier I will be in the long run and the more positive the impact I have on those around me will be.
The main reason I disagree is that to me it seems plainly obvious that it's far better for a community organiser's motivations to be related to earning respect/advancing their career/helping others, rather than their reason for participating in EA being so they can have more sex. This is because, if they're motivated by wanting to have more sex, then this predictably leads to more drama and more sexual harassment.
I also don't think you did enough to back up the inference "lots of people are motivated by sex, therefore we should try to harness this, instead of encouraging people to suppress these instincts in problematic contexts".
As a comparison, lots of people get excited by conflict and gossip too. That doesn't automatically mean we should be trying to harness, rather than suppress, those things.
I think an important consideration being overlooked is how competently a centralised project would actually be managed.
In one of your charts, you suggest that worlds where there is a single project will make progress faster due to "speedup from compute amalgamation". This is not necessarily true. It's very possible that different teams would make progress at very different rates even if each were given identical compute resources.
At a boots-on-the-ground level, the speed of progress an AI project makes will be influenced by thousands of tiny decisions about how to:
The list goes on!
Even seemingly minor decisions like coding standards, meeting structures and reporting processes might compound over time to create massive differences in research velocity. A poorly run organization with 10x the budget might make substantially less progress than a well-run one.
If there were only one major AI project underway, it would probably be managed less well than the best-run project selected from a diverse set of competing companies.
Unlike with the Manhattan Project: there are already sufficiently strong commercial incentives for private companies to focus on the problem, it's not yet clear exactly how the first AGI system will work, and capital markets today are more mature and capable of funding projects at much larger scales. My gut feeling is that if AI were fully consolidated tomorrow, this would be more likely to slow things down than speed them up.