A possible set of interventions here could focus on undermining the ads-based revenue model that sets the incentive structure for most of the internet. If your revenue comes from ads, your incentive is to keep your users around as long as possible (addiction) and to learn as much about them as possible for improved ad-targeting (privacy concerns). The Center for Humane Technology (https://www.humanetech.com) has argued that monthly subscription models might produce better incentives, because renewal requires users to feel the subscription has improved their lives over the past month (although in practice I'm not sure how much better this is; e.g. Netflix still has auto-play enabled by default, exploiting status-quo bias to keep users watching, even though it's not clear why Netflix should want this).
A specific intervention could be contributing to open-source ad-blocking or privacy-enhancing projects, like AdBlock, or advocating for their use. My main concern with this approach is that it's somewhat adversarial towards big tech, and I'd expect a lot of pushback if these projects started getting enough traction to actually shift incentives.
Hot Take: Securing AI Labs could actually make things worse
There's a consensus view that stronger security at leading AI labs would be a good thing. It's not at all clear to me that this is the case.
Consider the extremes:
In a maximally insecure world, where anyone can easily steal any model that gets trained, there's no profit or strategic/military advantage to be gained from doing the training, so nobody's incentivised to invest much in doing it. We'd only get AGI if some sufficiently well-resourced group believed it would be good for everyone to have an AGI, and was willing to fund its development as philanthropy.
In a maximally secure world, where stealing trained models is impossible, whichever company/country got to AGI first could essentially dominate everyone else. In this world there's huge incentive to invest and to race.
Of course, our world lies somewhere between these two. State actors almost certainly could steal models from any of the big 3, and potentially organised cybercriminals or rival companies could too, but most private individuals could not. Still, marginal steps towards a higher-security world seem to make investment and racing more appealing, since they shrink the number of actors able to steal the products of your investment and compete with you for profits/power.
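To make that marginal claim concrete, here's a deliberately crude toy model (the even value-split and all the numbers are my own illustrative assumptions, not anything the labs or their investors have said): suppose the value of possessing AGI gets split evenly among everyone who ends up with a copy, and look at how the developer's return changes as better security shrinks the pool of actors who can steal it.

```python
# Toy model: developer's return on an AGI investment vs. number of actors who can steal it.
# Assumptions (mine, purely illustrative): the total value V is split evenly among
# everyone holding a copy (the developer plus any thieves), and training costs C up front.

V = 1_000  # total value of possessing AGI (arbitrary units)
C = 200    # cost of training it

def developer_return(num_thieves: int) -> float:
    """Expected return to the developer if `num_thieves` other actors can steal the model."""
    return V / (1 + num_thieves) - C

for n in [20, 10, 5, 3, 1, 0]:  # increasing security = fewer actors able to steal
    print(f"{n:2d} actors can steal -> developer return: {developer_return(n):+.0f}")
```

On this picture the investment case flips from clearly negative to extremely attractive precisely as the number of potential thieves falls, which is the sense in which marginal security improvements fuel investment and racing.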
But I notice I am confused. The above reasoning predicts that nobody should be willing to make significant investments in developing AGI at current levels of cybersecurity, since if they succeeded their AGI would immediately be stolen by multiple governments (and possibly rival companies/cybercriminals), which would probably nullify any return on the investment. What I observe is OpenAI raising $40 billion in their last funding round, with the explicit goal of building AGI.
So now I have a question: given current levels of cybersecurity, why are investors willing to pour so much cash into building AGI?
...maybe it's the same reason various actors are willing to invest into building open-source models, which is also mysterious to me.
Excited to hear people's thoughts!
p.s. In a not-entirely-successful attempt to keep this short, I've totally ignored misuse risks. If you're mainly worried about misuse then the case for marginal security improvements is much stronger. That said, in obviously insecure worlds it may be clearer to AI companies that any dangerous capabilities they build will be stolen and misused, and therefore shouldn't be created in the first place.