This is based on what I've personally seen, but it seems reasonable to assume it generalizes.
There is a significant pool of burned-out knowledge workers, and one of the major causes is lack of value alignment, i.e. working for companies that only care about profits.
I think this cohort would be a good target for a campaign:
I am in a similar boat to you. I don't feel comfortable making EA part of my identity because I have some core philosophical disagreements.
However, EA has inspired me to the point of making some substantive life changes, and I participate in my local EA group. I try to do the things that seem convincing enough on their own merits, even though I do not necessarily agree with all the premises.
I believe there is value in participating in the whatever-ist party even if you are not comfortable calling yourself a whatever-ist, not out of concern for ideological purity, but because the label doesn't even feel true to you.
Question: how do we reconcile the fact that expected value is linear with preferences that may be nonlinear?
Example: people are typically willing to pay more than the expected value for a small chance of a big benefit (lotteries), or to remove a small chance of a big loss (insurance).
This example could be dismissed as a "mental bias" or "irrational". However, it is not obvious to me that linearity is a virtue, and even if it is, we are human and our subjective experience is not linear.
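One standard way to reconcile the two (not necessarily the only one) is expected utility theory: the expectation stays linear in probabilities, but the value placed on outcomes is allowed to be nonlinear. Below is a minimal sketch with made-up numbers, assuming a concave log utility of wealth; it shows that such an agent is already willing to pay more than the expected loss for insurance. All figures and names here are illustrative assumptions, not anyone's actual model.

```python
import math

# Illustrative sketch (hypothetical numbers): expected *utility* is linear in
# probabilities but nonlinear in money, which is enough to make a risk-averse
# agent pay more than the expected loss to insure against it.

wealth = 100_000   # current wealth (assumed)
loss = 50_000      # size of the potential loss (assumed)
p = 0.01           # probability of the loss (assumed)

def u(x):
    return math.log(x)  # concave utility => risk aversion

# Expected utility if the agent stays uninsured
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

# The maximum premium is the one that leaves the agent indifferent:
# u(wealth - premium) == eu_uninsured
max_premium = wealth - math.exp(eu_uninsured)

expected_loss = p * loss
print(f"expected loss:   {expected_loss:,.2f}")   # 500.00
print(f"maximum premium: {max_premium:,.2f}")     # roughly 690, i.e. above the expected loss
```

The same move does not by itself explain lottery buying (a small chance of a big gain), which is usually handled with extra machinery such as probability weighting, so the tension you point at is real and not fully dissolved by expected utility alone.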
My uninformed guess is that an automated system doesn't need to be superintelligent to create trouble; it only needs some specific abilities (depending on the kind of trouble).
For example, the machine doesn't need to be agentic if there is a human agent deciding to make bad things happen.
So I think this would be an important point to discuss, and maybe someone has done it already.
(Just speculating; I would like to hear other people's input.)
I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of risks from narrow AI. Here I mean x-risk/s-risk from AGI vs. x-risk/s-risk from narrow AI (possibly combined with malevolent actors or coordination failures).
I worry about prioritising AGI risk in outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.
Thank you, Toby.
I agree that to observe macroeconomic effects something has to happen at a broad scale, and my question was quite speculative.
On the other hand, regarding the Forum: I see that posts read like essays and are meant to be informative. I wonder what the right place is for things that might be interesting or valuable but don't fit the general vibe, for instance a simple question. Do they belong here? As quick takes?
Does the Forum have a policy on necroposting (commenting on or editing old material to "resurrect" it)? I didn't find one in the "how to use the forum" sequence.