I'm a mathematician working mostly on technical AI safety and a bit on collective decision making, game theory, and formal ethics. I used to work on international coalition formation and on many topics related to climate change. Here's my professional profile.
My definition of value:
I need help with various aspects of my two main projects: a human-empowerment-based AI safety concept and an open-source collective decision app, http://www.vodle.it
I can help by ...
Maybe this is true in the EA branch of AI safety. In the wider community, e.g. as represented by those attending IASEAI in February, I believe this is not a correct assessment. Since I began working on AI safety, I have heard many cautious and uncertainty-aware statements along the lines that the things you claim people believe will almost certainly happen are, rather, merely likely enough to worry about deeply and to work on preventing. I also don't see that community having an AI-centric worldview: they seem to worry about many other cause areas as well, such as inequality, war, pandemics, and climate change.
The author uses "we" in several places, and perhaps not consistently: sometimes "we" seems to mean them and the readers, or them and the EA community, and sometimes it seems to mean "the US". Now you are also using an "us" without it being clear (at least to me) whom that refers to.
Who do you mean by 'the country with the community of people who have been thinking about this the longest', and what positive evidence do you have for the claim that other communities (e.g., certain national intelligence communities) haven't been thinking about it for at least as long?
"targeting NNs" sounds like work that takes a certain architecture (NNs) as a given rather than work that aims at actively designing a system.
To be more specific: under the proposed taxonomy, where would a project be sorted that designs agents using a Bayesian network as a world model and an aspiration-based probabilistic-programming algorithm for planning?
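To illustrate the kind of design I mean, here is a deliberately minimal Python sketch. Everything in it is hypothetical: the two-action "network" is a lookup table standing in for a real Bayesian network, and the planner is the simplest possible aspiration-based rule rather than a genuine probabilistic program.

```python
# Minimal, hypothetical sketch of an aspiration-based agent:
# a tiny two-node Bayesian network (action -> outcome) as world model,
# and a planner that satisfices an aspiration interval instead of maximizing.
import random

# P(outcome value | action); all numbers invented for illustration.
WORLD_MODEL = {
    "act_safe":  {0.0: 0.1, 5.0: 0.8, 10.0: 0.1},
    "act_risky": {0.0: 0.4, 5.0: 0.1, 20.0: 0.5},
}

def expected_value(action):
    """Expected outcome under the world model's conditional distribution."""
    return sum(value * prob for value, prob in WORLD_MODEL[action].items())

def aspiration_plan(lo, hi):
    """Pick any action whose expected outcome lies in the aspiration
    interval [lo, hi]; if none does, take the one closest to it."""
    feasible = [a for a in WORLD_MODEL if lo <= expected_value(a) <= hi]
    if feasible:
        return random.choice(feasible)  # satisfice: any feasible action will do
    return min(WORLD_MODEL,
               key=lambda a: min(abs(expected_value(a) - lo),
                                 abs(expected_value(a) - hi)))

print(aspiration_plan(4.0, 6.0))  # -> "act_safe" (expected value 5.0)
```

The point of the sketch is that neither component is a neural network, and the planner's objective is an interval to be met rather than a quantity to be maximized, so the project seems to fall outside both the "targeting NNs" and the "maximizing agents" buckets of the taxonomy.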
I wonder how to correctly conceptualize the idea of "a net-negative influence on civilization", given that the future is highly uncertain and that this very uncertainty is a major motivating factor.
E.g., assume that at some time point t1, a longtermist's proposed plan has higher expected long-term value than an alternative plan because the alternative plan takes a major risk. The longtermist's plan is realized, and at some later time point t2 someone points out that the alternative plan would have produced more value between t1 and t2 (tacitly assuming the risk would not have materialized between t1 and t2, precisely because the realized plan successfully avoided it).
Would that constitute an example of what these critics would call a "net-negative influence on civilization"? If so, it's just a fallacy. If not, then what comparison exactly is meant?
More generally: how can one plausibly construct a "counterfactual" world in view of large uncertainties? It seems the only valid comparison would be not between the one realization that actually emerged from a certain behavior and one (potentially overly optimistic) realization that might have emerged from an alternative behavior, but between whole ensembles of realizations; see the sketch below. The same goes for the effects of drug regulation, workplace laws, historic technology bans, etc.
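To make the ensemble point concrete, here is a back-of-the-envelope Monte Carlo sketch (all probabilities and payoffs are invented): comparing one lucky realization of the risky plan against the safe plan reproduces the critics' hindsight verdict, while comparing the two ensembles reverses it.

```python
# Hypothetical Monte Carlo sketch: compare plans by ensembles of outcomes,
# not by single realized trajectories. All numbers are invented.
import random
import statistics

def plan_safe():
    """The longtermist's plan: modest value, avoids the catastrophe."""
    return random.gauss(100.0, 10.0)

def plan_risky():
    """The alternative: more value, but a major risk may be realized."""
    if random.random() < 0.3:   # the risk materializes
        return 0.0
    return random.gauss(130.0, 10.0)

N = 100_000
safe_runs  = [plan_safe() for _ in range(N)]
risky_runs = [plan_risky() for _ in range(N)]

print(f"ensemble mean, safe : {statistics.mean(safe_runs):6.1f}")   # ~100
print(f"ensemble mean, risky: {statistics.mean(risky_runs):6.1f}")  # ~ 91
# Yet a single lucky risky realization (~130) "beats" the safe plan (~100):
# exactly the hindsight comparison that makes the safe plan look net-negative.
```

Under these made-up numbers the safe plan is better ex ante (ensemble mean ~100 vs ~91), even though in roughly 70% of worlds a hindsight observer at t2 would conclude the risky alternative "would have produced more value".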