OscarD🔸

Comments
Thanks for writing! If something like this doesn't already exist, perhaps someone should start an ops fellowship with a centralised hiring round, where the selected applicants are placed at different orgs to help with entry-level ops tasks. Perhaps one bottleneck here is that it is hard for research to be net negative (short of being infohazardous, or just wasting a mentor's time), whereas doing low-quality ops work could be pretty bad for an org. Maybe that is partly why orgs don't want to outsource ops work to junior interns? Not sure.

Although there are the Alan Turing Institute, Ada Lovelace Institute, Leverhulme Centre, Simon Institute, etc.

Nice! We did something similar last year; you could check how well our taxonomies align and where they differ. We also linked to various past taxonomies/overviews like this in that paper.

It seems like considering the Better Futures perspective gets us some of the way back to 'normie'/intuitive ethics, compared to traditional X-risks-trump-everything views. E.g. promoting peace and democracy and good governance and so forth seems important from a BF perspective, and more akin to what we would pre-theoretically think is good.

I'm not sure what to make of this - in a sense it is good to be less controversial and have more common ground with mainstream folks, but it is also perhaps suspicious.

Good point, those seem like important weaknesses of the view (and this is partly why I favour totalism). And good to know re Yew-Kwang Ng. Yes, it is a version of your joint-aggregation bounded view - my main point was that it seemed like scale-tipping was one of your main objections and this circumvents that, but yes there are other problems with it as you note!

Yeah, good point that memetic fitness != moral truth. I suppose one could hope that as long as some people are pursuing moral truth, then even if truth and fitness are uncorrelated, there will be some push towards truth, even though there is a lot of drift/noise from random ideas being fit.

The bad case is if truth and fitness are anticorrelated for some reason. My guess is that this is unlikely, though? Except insofar as the moral truth ends up being really convoluted and abstruse, in which case simpler ideas might be more fit. But even then, the memetically fitter simple ideas (e.g. total utilitarianism?) might be close approximations of some really messy truth.

The details and mechanisms make sense and are useful for making possibilities more vivid, but the very high-level argument seems strong to me and does much of the work:

  1. Social, value, and institutional change is significantly caused by technological, economic, and demographic change.
  2. After a period of rapid transformation through the intelligence and industrial explosions, and space colonisation, the pace of technological, economic, and demographic change will greatly slow.

C. The pace of social, value, and institutional change will greatly slow. Therefore, achieving a good state before the rate of change slows is very valuable.

(more minor points)

> If only a small number of people have power, then it becomes less likely that the correct moral views are represented among that small group, and therefore less likely that we get to a mostly-great future via trade and compromise.

I believe this is correct, but possibly for the wrong reason. If you just have a smaller group of people drawn randomly from the population, yes, there is a higher probability that no one will have the correct moral view. But there is also a higher probability that an unusually high fraction of people will have such a view. So a smaller bottleneck just increases the variance. This is bad in expectation, though, if you think the value of the future is a concave function of the fraction of world power wielded by people with the correct values, because of trade and compromise: i.e. if having 10% of power in good hands is less than ten times as good as having 1%, as I understand you believe, then increasing variance by concentrating power is bad. (And of course, there is the further effect of power-seekers having worse values on average.)
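
To make the variance point concrete, here is a minimal Monte Carlo sketch (my own toy illustration, not from the essay; the even power split and the concave value function v(f) = sqrt(f) are hypothetical assumptions):

```python
# Toy model: power is split evenly among n_holders people, each of whom
# independently has the correct values with probability p_correct; the
# value of the future is a concave function v of the fraction of power
# in good hands. All of these assumptions are illustrative.
import math
import random

def expected_value(n_holders, p_correct, v, trials=10_000, seed=0):
    """Estimate E[v(fraction of power held by people with correct values)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        correct = sum(rng.random() < p_correct for _ in range(n_holders))
        total += v(correct / n_holders)
    return total / trials

v = math.sqrt  # concave: 10% of power in good hands is < 10x as good as 1%

for n in (10, 100, 1000):
    print(n, round(expected_value(n, 0.05, v), 3))
# The mean fraction is 0.05 regardless of n, but smaller groups give a
# higher-variance fraction, and by Jensen's inequality that lowers the
# expected value under a concave v; as n grows the estimate approaches
# v(0.05) ≈ 0.224.
```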

> In particular, we could model the process of reflection as a series of independent Brownian motions in R^2, all starting at the same point at the same time. Then the expected distance of a view from the starting point, and the expected distance between two given views, both increase with the square root of time. The latter expectation is larger by a factor of sqrt(2).

The choice of two dimensions is unmotivated, so I don't trust the exact numbers, but the general effect seems right and would hold directionally even if people are doing a random walk through, e.g., 10 dimensions.
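
As a sanity check, here is a quick simulation (my own sketch; the time horizon and trial count are arbitrary). In fact the sqrt(2) factor is exact in any number of dimensions, since the difference of two independent Brownian motions is itself a Brownian motion with twice the variance:

```python
# Sample Brownian-motion endpoints directly: each coordinate of B_t is
# distributed N(0, t). Compare the mean distance from the starting point
# with the mean distance between two independent walks.
import math
import random

def mean_distances(dim, t=1.0, trials=50_000, seed=1):
    rng = random.Random(seed)
    sd = math.sqrt(t)
    d_origin = d_pair = 0.0
    for _ in range(trials):
        a = [rng.gauss(0, sd) for _ in range(dim)]
        b = [rng.gauss(0, sd) for _ in range(dim)]
        d_origin += math.dist(a, [0.0] * dim)
        d_pair += math.dist(a, b)
    return d_origin / trials, d_pair / trials

for dim in (2, 10):
    from_start, between = mean_distances(dim)
    print(dim, round(between / from_start, 3))  # ≈ sqrt(2) ≈ 1.414 for both
```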

Moral progress

Another partial explanation is that (putatively) as people get richer and happier and wiser and so forth, they have more time, interest, and mental/emotional capacity to think carefully about ethics and act accordingly. I'm not sure how much the psychological literature supports this, but e.g. even just ending the worst privations and abuses in childhood probably removes a lot of the left tail of morality, thus raising the average. So, if this is significantly right, as material and social progress continues we will get some moral progress for free as well. I note your point in 2.3.2 that there isn't much correlation between wealth and charitable giving currently, which does seem like evidence against my hypothesis. But richer people care more about 'post-material' issues in politics, and intuitively I still think there is some correlation between how well-off you are (broadly construed, not just wealth) and your interest in abstract ethics. But I agree this probably isn't enough by itself to get everyone to converge to the correct values.

Another thought on why we might continue to see moral progress: most people don't care about discovering moral truths or seeking out the Good de dicto, and will just go along with the bare minimum of ethics required by polite society. But some people do actively seek the Good, and they will influence the rest of the population by osmosis over many generations. Even a slow, steady tug in the right direction is enough to turn a massive oil tanker (collective human morality, in this analogy). But this crucially relies on some people seeking the Good and having access to it (upon reflection), which is not obvious. Note this is distinct from moral trade; it is more like moral persuasion.
