I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord. Philosophy at Cambridge before that.
I also do a podcast about EA called Hear This Idea.
Do you think octopuses are conscious? I do — they seem smarter than chickens, for instance. But their most recent common ancestor with vertebrates was some kind of simple Precambrian worm with a very basic nervous system.
Either that most recent common ancestor was not phenomenally conscious in the sense we have in mind, in which case consciousness arose more than once in the tree of life; or else it was conscious, in which case consciousness would seem easy to reproduce (wire together ~1,000 nerve cells).
The main question of the debate week is: “On the margin, it is better to work on reducing the chance of our extinction than increasing the value of the future where we survive”.
Where “our” is defined in a footnote as “earth-originating intelligent life (i.e. we aren’t just talking about humans because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing)”.
I'm interested to hear from the participants how likely they think extinction of “earth-originating intelligent life” really is this century. Note this is not the same as asking what your p(doom) is, or what likelihood you assign to existential catastrophe this century.
My own take is that literal extinction of intelligent life, as defined, is (much) less than 1% likely to happen this century, and this upper-bounds the overall scale of the “literal extinction” problem (in ITN terms). I think this partly because the definition counts AI survival as non-extinction, and I struggle to think of AI-induced catastrophes that leave only charred ruins, without even AI survivors. Other potential causes of extinction, like asteroid impacts, seem unlikely on their own terms. As such, I also suspect that most work directed at existential risk is already not, in practice, targeting extinction as defined, though of course it is not explicitly focusing on “better futures” either; it is more like “avoiding potentially terrible global outcomes”.
(This became more a comment than a question… my question is: “thoughts?”)
Nice! Consolidating some comments I had on a draft of this piece, many of them fairly pedantic:
It's worth noting that the average answers to “How much financial compensation would you expect to need to receive to make you indifferent about that role not being filled?” were $272,222 (junior) and $1,450,000 (senior).
And so I think that just quoting the willingness-to-pay dollar amounts to hire the top over the second-preferred candidate can be a bit misleading here, because it's not obvious to everyone that WTP amounts in this context are typically much higher than salaries. If the salary is $70k, for instance, and the org's WTP to hire you over the second-preferred candidate is $50k, it would be a mistake to infer that you are perceived as 3.5 times more impactful (e.g. by reasoning that you are worth your $70k salary while the second candidate must only be worth $70k − $50k = $20k).
Another way of reading this is that the top hire is perceived, on average, as about 23% (junior) and about 46% (senior) more 'impactful' than the second-preferred hire in WTP terms. I think this is a more useful framing.
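To spell out the arithmetic behind that framing (a sketch only: I'm using the junior ‘role not filled at all’ average of $272,222 and, purely for illustration, a $50k WTP gap over the second-preferred candidate, as in the hypothetical above; the actual gaps vary across responses):

$$\frac{\$272{,}222}{\$272{,}222 - \$50{,}000} \approx 1.23$$

i.e. the top junior hire comes out as roughly 23% more ‘impactful’ than the second-preferred hire, rather than a large multiple.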
And then eyeballing the graphs, there is also a fair amount of variance in both sets of answers, with perceptions of top junior candidates' 'impactfulness' ranging from ~5–10% higher to ~100% higher than the second-best candidate's. That suggests it is worth at least asking about replaceability, if there is a sensitive way to bring it up!
I agree that people worry too much about replaceability overall, though.
Thanks! I'm not trying to resolve concerns around cluelessness in general, and I agree there are situations (many or even most of the really tough ‘cluelessness’ cases) where the whole ‘is this constructive?’ test isn't useful, since that can be part of what you're clueless about, or other factors might dominate.
Why do you think we ought to privilege the particular reason that you point to?
Well, I'm saying the ‘is this constructive’ test is a way to latch on to a certain kind of confidence, viz. the confidence that you are moving towards a better world. If others also take constructive actions towards similar outcomes, and/or in the fullness of time, you can be relatively confident you helped get to that better world.
This is not the same thing as saying your action was right, since there are locally harmful ways to move toward a better world. And so I don't have as much to say about when or how much to privilege this rule!
I just want to register the worry that the way you've operationalised “EA priority” might not line up with a natural reading of the question.
The footnote on “EA priority” says:
By “EA priority” I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.
This is a bit ambiguous (in particular, over what timescale), but if it means something like “over the next year” then that would mean finding ways to spend ≈$10 million on AI welfare by the end of 2025, which you might think is just practically very hard to do even if you think that more work on current margins is highly valuable. Similar things could have been said for e.g. pandemic prevention or AI governance in the early days!
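As a back-of-the-envelope check (my own inference from the numbers above, not something stated in the post): 5% of funding amounting to ≈$10 million implies a total pool of unrestricted, cause-neutral funding of roughly

$$\frac{\$10\text{m}}{0.05} \approx \$200\text{m}$$

per year.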
Where are you getting those numbers from? If by “subjective decades” you mean “decades of work by one smart human researcher”, then I don't think that's enough to secure its position as a singleton.
If you mean “decades of progress at the global tech frontier”, then that means imagining the first mover can fit ~100 million human research-years into a few hours shortly after (presumably) pulling away from the second mover in a software intelligence explosion, and I'm skeptical of that (for reasons I'm happy to elaborate on).