AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I'd love to hear bold, unconventional, half-baked or well-developed ideas for improving AI safety. You can also share ideas you've heard from others.
Let’s throw out all the ideas—big and small—and see where we can take them together.
Feel free to share as many as you want! No idea is too wild, and this could be a great opportunity for collaborative development. We might just find the next breakthrough by exploring ideas we’ve been hesitant to share.
A quick request: Let’s keep this space constructive—downvote only if there’s clear trolling or spam, and be supportive of half-baked ideas. The goal is to unlock creativity, not judge premature thoughts.
Looking forward to hearing your thoughts and ideas!
P.S. Your answer could potentially help people with career choice, cause prioritization, building effective altruism, policy, and forecasting.
P.P.S. AI is moving fast, so we need new ideas to make it safe; you can compare the ideas here with the ones we had last month.
Yes, the only realistic, planet-wide, 100% safe solution is this: put all GPUs into secure clouds controlled by international scientists, who run only mathematically proven safe AIs and stream only the outputs to users.
Each user can use their GPU from the cloud for free on any device (even a phone); when they aren't using it, they can choose to earn money by letting others use it (see the toy sketch below).
You could still do everything you do now, even buy or rent GPUs; they would just be cloud-hosted, provably safe GPUs instead of physical ones. GPUs are like nukes: we either want none of them, or we want them kept deep underground in one place where international scientists can control them.
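To make the "free cloud GPU, lend it when idle" part concrete, here is a minimal toy sketch of how the accounting could work. All names, the free-hour allocation, the credit price, and the even-split payout rule are my own illustrative assumptions, not part of any existing system.

```python
from dataclasses import dataclass


@dataclass
class UserAccount:
    """Hypothetical per-user record in the shared GPU cloud."""
    free_gpu_hours: float = 10.0   # assumed free monthly allocation
    lent_gpu_hours: float = 0.0    # hours offered to the shared pool
    earnings: float = 0.0          # credits earned from others' usage


class GpuCloudLedger:
    """Toy ledger: users lend idle GPU hours; renters pay per hour."""
    PRICE_PER_HOUR = 0.5  # assumed credit price, purely illustrative

    def __init__(self) -> None:
        self.users: dict[str, UserAccount] = {}
        self.pool_hours: float = 0.0
        self.lenders: list[str] = []

    def register(self, user_id: str) -> None:
        self.users[user_id] = UserAccount()

    def lend_idle_hours(self, user_id: str, hours: float) -> None:
        """Move a user's unused free hours into the shared rental pool."""
        acct = self.users[user_id]
        hours = min(hours, acct.free_gpu_hours)
        acct.free_gpu_hours -= hours
        acct.lent_gpu_hours += hours
        self.pool_hours += hours
        self.lenders.append(user_id)

    def rent_hours(self, renter_id: str, hours: float) -> float:
        """Renter consumes pooled hours; lenders split the payment evenly."""
        hours = min(hours, self.pool_hours)
        self.pool_hours -= hours
        payment = hours * self.PRICE_PER_HOUR
        if self.lenders:
            share = payment / len(self.lenders)
            for lender in self.lenders:
                self.users[lender].earnings += share
        return hours


ledger = GpuCloudLedger()
ledger.register("alice")
ledger.register("bob")
ledger.lend_idle_hours("alice", 5.0)   # Alice lends idle time to the pool
ledger.rent_hours("bob", 3.0)          # Bob rents 3 hours from the pool
print(ledger.users["alice"].earnings)  # Alice earns credits -> 1.5
```

A real system would obviously need metering, verification, and pricing far beyond this, but the point is that the incentive side of the proposal is simple bookkeeping; the hard part is the physical and institutional control of the hardware.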
We still haven't fully solved computer viruses (my mom had an Android virus recently); even the iPhone and the Nintendo Switch were jailbroken almost immediately, and there are companies that jailbreak iPhones as a service. As far as I know, Google Docs has never been jailbroken or majorly hacked, because it is a cloud service. We should base our AI and GPU security on that best example and keep all our GPUs in a cloud controlled by international scientists.
Otherwise, any hacker can write a virus (just to steal money) with an AI agent component and grab consumer GPUs like cupcakes. The AI agent can even become autonomous, and we know agents can become evil in major ways if given an evil goal (a recent paper found models that wanted to have a tea party with Stalin and Hitler). Will anyone align AIs for hackers, or will the hackers themselves do it perfectly (they won't), so that an agent built just to steal money stays an obedient slave and does nothing else bad?