Ok, thanks, I think it's fair to call me on this (I realise the question of what Thiel actually thinks is not super interesting to me, compared to "does this critique contain inspiration for things to be aware of that I wasn't previously really tracking"; but I get that most people probably aren't orienting similarly, and I was kind of assuming that they were when I suggested this was why it was getting sympathy).
I do think, though, that there's a more nuanced point here than "trying too hard to do good can result in harm". It's more like "over-claiming about how to do good can result in harm". For a caricature to make the point cleanly: suppose EA really just promoted bednets, and basically told everyone that what it meant to be good was to give more money to bednets. I think it's easy to see how this gaining a lot of memetic influence (bednet cults, Big Bednet, etc.) could end up being destructive (even if bednets are great).
I think that EA is at least conceivably vulnerable to more subtle versions of the same mistake, and that this is worth being vigilant against. (Note this is only really a mistake that comes up for ideas that are so self-recommending that they lead to something like strategic movement-building around them.)
I think that the theology is largely a distraction from the reason this is attracting sympathy, which I'd guess to be more like:
is that you feel that moral statements are not as evidently subjective as, say, 'Vanilla ice-cream is the best flavor', but not as objective as, say, 'An electron has a negative charge', as living in some space of in-betweenness with respect to those two extremes
I think that's roughly right. I think that moral statements are unlikely to be more objective than "blue is a more natural concept than grue", but that there's a good chance they're about the same as that (and my gut take is that that's pretty far towards the electron end of the spectrum; but perhaps I'm confused).
I'd say again, an electron doesn't care what a human or any other creature thinks about its electric charge.
Yeah, but I think that e.g. facts about economics are in some sense contingent on the thinking of people, but are not contingent on what particular people think, and I think that something similar could be true of morality.
I, on the contrary, don't feel like there could be 'moral experts'.
The cleanest example I might give is that if I had a message from my near-future self saying "hey I've thought really hard about this issue and I really think X is right, sorry I don't have time to unpack all of that", I'd be pretty inclined to defer. I wonder if you feel differently?
I don't think that moral philosophers in our society are necessarily hitting the bar I would like for "moral expert". I also don't think that people who are genuinely experts in morality would necessarily act according to moral values. (I'm not sure that these points are very important.)
See my response to Manuel -- I don't think this is "proving moral realism", but I do think it would be pointing at something deeper and closer-to-objective than "happen to have the same opinions".
I'm not sure what exactly "true" means here.
Here are some senses in which it would make morality feel "more objective" rather than "more subjective":

- Locally, I think that often there will be some cluster of less controversial common values like "caring about the flourishing of society" which can be used to derive something like locally-objective conclusions about moral questions (like whether X is wrong).
- Globally, an operationalization of morality being objective might be something like "among civilizations of evolved beings in the multiverse, there's a decently big attractor state of moral norms that a lot of the civilizations eventually converge on".

I don't really believe there's anything more deeply metaphysical than that going on with morality[1], but I do think that there's a lot that's important in the above bullets, and that moral realist positions often feel vibewise "more correct" than antirealist positions (in terms of what they imply for real-world actions), even though the antirealist positions feel technically "more correct".

[1] I guess: there's also some possibility of getting more convergence for acausal reasons rather than just evolution towards efficiency. I do think this is real, but it mostly feels like a distraction here so I'll ignore it.
Ok, but just to be clear, that characterization of "affronted" is not the hypothesis I was offering (I don't want to say it wasn't a part of the downvoting, but I'd guess it was a minority of it).
I would personally kind of like it if people actively explored angles on things more. But man, there are so many things to read on AI these days that I do kind of understand when people haven't spent time considering things I regard as critical path (maybe I should complain more!), and I honestly find it hard to fault people too much for using "did it seem wrong near the start in a way that makes it harder to think" as a heuristic for how deeply to engage with material.
That makes sense!
(I'm curious how much you've invested in giving them detailed prompts about what information to assess in applying particular tags, or even more structured workflows, vs just taking smart models and seeing if they can one-shot it; but I don't really need to know any of this.)
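(To gesture at the distinction I have in mind, here's a purely illustrative sketch; the tag names, prompts, and the call_model stand-in are all made up rather than a guess at your actual setup.)

```python
# Illustrative sketch only: call_model is a placeholder for whatever LLM API is used,
# and the tags and prompts are invented for the example.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns the model's text response."""
    raise NotImplementedError

TAGS = ["AI governance", "Interpretability", "Forecasting"]  # hypothetical tag set

def tag_one_shot(post_text: str) -> str:
    # "See if a smart model can one-shot it": a single open-ended request.
    return call_model(
        f"Which of these tags apply to the post below? Tags: {TAGS}\n\n{post_text}"
    )

def tag_structured(post_text: str) -> list[str]:
    # A more structured workflow: for each tag, first ask which passages are
    # relevant evidence, then ask for a yes/no decision grounded in that evidence.
    applied = []
    for tag in TAGS:
        evidence = call_model(
            f"Quote the passages of this post (if any) most relevant to the tag '{tag}':\n\n{post_text}"
        )
        verdict = call_model(
            f"Given this evidence:\n{evidence}\n\nDoes the tag '{tag}' apply? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            applied.append(tag)
    return applied
```

The structured version obviously costs more calls per post; my curiosity is just whether that kind of scaffolding turned out to buy much accuracy over the one-shot version.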