I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
Some of these technological developments were themselves a result of social coordination. For example, solar panels are extremely cheap now, but they used to be very expensive. Getting them to where they are now took decades of government-funded research and subsidies to get the industry up and running, generally motivated by environmental concerns.
It seems like there are many cases where technology is used to solve a problem, but we wouldn't actually have made the switch without regulation and coordinated action. Would you really attribute the banning of CFCs primarily to the existence of technological alternatives? It seems like you need both an alternative technology and social and political action.
they can just describe to ChatGPT or Claude what they want to know and ask the bot to search the EA Forum and other EA-related websites for info.
I feel like you've written a dozen posts at this point explaining why this isn't a good idea. LLMs are still very unreliable; the best way to find out what people in EA believe is to ask them.
With regards to the ranking of charities, I think it would be totally fine if there were 15 different rankings out there. It would allow people to get a feel for what people with different worldviews value and where they agree or disagree. I think this would be preferable to having just one "official" ranking, as there's no way to take massive worldview differences into account in a single list.
I think you might be engaging in a bit of Motte-and-Baileying here. Throughout this comment, you state MIRI's position as things like "it will be hard to make ASI safe", that AI will "win", and that it will be hard for an AI to be perfectly aligned with "human flourishing". Those statements seem pretty reasonable.
But the actual stance of MIRI, which you just released a book about, is that there is an extremely high chance that building powerful AI will result in everybody on planet Earth being killed. That's a much narrower and more specific claim. You can imagine a lot of scenarios where AI is unsafe, but not in a way that kills everyone. You can imagine cases where AI "wins" but decides to cut a deal with us. You can imagine cases where an AI doesn't care about human flourishing because it doesn't care about anything; it just ends up acting like a tool that we can direct as we please.
I'm aware that you have counterarguments for all of these cases (which I will probably disagree with). But those counterarguments will have to be rooted in the nuts-and-bolts details of how actual, physical AI works. And if you are trying to reason about future machines, you need to be able to make good predictions about their actual characteristics.
I think in this context, it's totally reasonable for people to look at your (in my opinion poor) track record of prediction and adjust their estimate of your effectiveness as an institution accordingly.
I'm a huge fan of epistemological humility, but it seems odd to invoke it for a topic where the societal effects have been exhaustively studied for decades. The measurable harms and comparatively small benefits are as well known as you could reasonably expect for a medical subject.
Your counterargument seems to be that there are unmeasured benefits, as revealed by the fact that people choose to smoke despite knowing the harm it does. But I don't think these are an epistemological mystery either: you can just ask people why they smoke and they'll tell you.
It seems like this is more of a difference in values than a question of epistemics: one might regard the freedom to choose self-destructive habits as an important principle worth defending.
I don't think this sort of anchoring is a useful thing to do. There is no logical reason for third-party presidency success and AGI success to be linked mathematically. It also seems like the third-party estimate is based on much greater empirical grounding.
You linked them because your vague impression of the likelihood of one was roughly equal to your vague impression of the likelihood of the other. If your impression of the third-party question changes, it shouldn't change your opinion of the other thing. You think that AGI is 5 times less likely than you previously thought because you got more precise odds about one guy winning the presidency ten years ago?
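To spell out why the update doesn't transfer, here's a toy sketch with made-up numbers (say both were pegged at 5%, and prediction markets later price the third-party win at 1%):

```latex
% Toy numbers only, not anyone's actual estimates.
% D = the new, more precise election odds.
P(\mathrm{AGI}) \approx P(\text{3rd-party win}) \approx 0.05
  \quad \text{(the original anchoring)}

P(\mathrm{AGI} \mid D)
  = P(\mathrm{AGI}) \cdot \frac{P(D \mid \mathrm{AGI})}{P(D)}
  = P(\mathrm{AGI}) \approx 0.05
  \quad \text{whenever } D \text{ is independent of AGI.}
```

Dropping to 1% would require P(D | AGI) ≠ P(D), i.e., that election odds carry information about AGI timelines, which nobody has argued.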
My (perhaps controversial) view is that forecasting AGI is in the realm of speculation where quantification like this is more likely to obscure understanding than to help it.
I believe Rice's theorem applies to a programmable calculator. Do you think it is impossible to prove that a programmable handheld calculator is "safe"? Do you think it is impossible to make a programmable calculator safe?
My point is that just because you can't formally, mathematically prove something doesn't mean it's not true.
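For reference, here is the theorem (the statement is standard; the gloss afterward is my own reading):

```latex
\textbf{Rice's theorem.} Let $S$ be any set of partial computable functions
with $\emptyset \neq S$ and $S$ not equal to the set of all partial computable
functions. Then the index set $\{\, e : \varphi_e \in S \,\}$ is undecidable,
where $\varphi_e$ denotes the $e$-th partial computable function.
```

Note the scope: it rules out a single algorithm that decides a nontrivial semantic property ("is safe", "halts on every input", ...) for all programs at once. It says nothing about proving, by direct analysis, that one particular program, like a calculator's firmware, has the property.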
I have an issue with this argument, although I don't have much expertise in this field.
You talk about the legality of a patient directly buying radiology results from an AI company, but this isn't a very plausible path to radiologists being replaced. People will still have to go to the hospital to get the actual radiology scans done.
The actual concern would be that hospitals have the scans performed by non-radiologists and outsource the interpretation to an AI radiology company. I can't really tell from your post whether this is illegal or not (if it is, what is the business model of these companies?). This process seems much closer to how automation will actually go in most fields, so it's relevant if it's not working for radiology.
And another point: one reason this stuff may be illegal is that it doesn't yet work well enough to be made legal. If that is even part of the reason, it can absolutely be counted as a point against the likelihood of AI automation.