Extremely minor and pedantic correction: Ōe Kenzaburō is male, not female: https://en.wikipedia.org/wiki/Kenzabur%C5%8D_%C5%8Ce (I don't think that makes any significant difference to the point you're making, I just hate letting mistakes rest uncorrected!)
'Rosenberg and the Churchlands are anti-realists about intentionality— they deny that our mental states can truly be "about" anything in the world.' Taken literally, this is insane. It means no one has ever thought about going out to the shops for some milk. If it's extended to language (and why wouldn't it be?), it means we can't say that science sometimes succeeds in representing the world reasonably well, since nothing represents anything. It is also very different from the view that mental states are real, but are behavioral dispositions rather than inner representations in the brain, since the latter view is perfectly compatible with known facts like "people sometimes want a beer". I'm also suspicious of what the word "truly" is doing in this sentence, if it's not redundant. What exactly is the difference between "our mental states can be about things in the world" and "our mental states can truly be about things in the world"?
I only glanced at one or two sections, but the "goal realism is anti-Darwinian" section seems possibly irrelevant to the argument to me. When you first introduce "goal realism", it seems to be the view that goals are actual internal things somehow "written down" in the brain/neural net/other physical mind, so that you could modify the bit of the system where the goal is written down and get different behaviour, rather than there really being nothing that is the representation of the AI's goals, because "goals" are just behavioral dispositions. But the view you're criticizing in the "goal realism is anti-Darwinian" section is the view that there is always a precise fact of the matter about what exactly is being represented at a particular point in time, rather than several different equally good candidates for what is represented. But I can think of representations as physically real vehicles (say, that some combination of neuron firings is the representation of flies/black dots that causes frogs to snap at them) without thinking it is completely determinate what (flies or black dots) is represented by those neuron firings. Determinacy of what a representation represents is not guaranteed just by the fact that a representation exists. EDIT: Also, is Olah-style interpretability work presuming "representation realism"? Does it provide evidence for it? Evidence for realism about goals specifically? If not, why not?
You are a good and smart commenter, but that is probably generally a sign that you could be doing something more valuable with your time than posting on here. In your case though, that might not actually be true, since you also represent a dissenting perspective that makes things a bit less of an echo chamber on topics like AI safety, and it's possible that does have some marginal influence on what orgs and individuals actually do.
I think that whilst utilitarian but not longtermist views might well justify full speed ahead, normal people are quite risk-averse and are not likely to react well to someone saying "let's take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than being a bit slower so that some people die and miss out". That's just a guess, though. (Maybe Altman's probability is actually way lower; mine would be, but I don't think a probability more than an order of magnitude lower than that fits with the sort of stuff about X-risk he's said in the past.)
It's worth saying also that we already have one commercial forecasting organisation, Good Judgment (I do a little bit of professional forecasting for them, though it's not my main job). It's not clear why we need another. (I don't know who GJ's clients actually are, though, and presumably I wouldn't be allowed to tell you even if I did. EDIT: Actually, in some cases I think client info became public and/or we were internally told who the clients were, but I have just forgotten who they were.)
Maybe Open Phil are doing this because they often find themselves trying to get good forecasts about stuff they care about in the course of trying to make the best grants they can in other areas, and after they had done that enough times, it seemed sensible to just formally declare that forecasting is something they fund. The theory here isn't "developing forecasting as an art is an EA cause because it will improve worldwide epistemics" or whatever, but rather "we, Open Phil, need good forecasts to get funding decisions about other stuff right".
It can be clear that there was no crisis before 1300 and clear that there was one in 1400 even if the boundaries are blurry.
Beethoven's 9th. (75% not joking.)