AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I’d love to hear bold, unconventional, half-baked or well-developed ideas for improving AI safety. You can also share ideas you heard from others.
Let’s throw out all the ideas—big and small—and see where we can take them together.
Feel free to share as many as you want! No idea is too wild, and this could be a great opportunity for collaborative development. We might just find the next breakthrough by exploring ideas we’ve been hesitant to share.
A quick request: Let’s keep this space constructive—downvote only if there’s clear trolling or spam, and be supportive of half-baked ideas. The goal is to unlock creativity, not judge premature thoughts.
Looking forward to hearing your thoughts and ideas!
P.S. Your answer can potentially help people with their career choice, cause prioritization, building effective altruism, policy, and forecasting.
P.P.S. AI is moving quickly, so we need new ideas to make it safe; you can compare the ideas here with the ones we had last month.
Interesting, David, I understand: if I had found a post by Melon and flaws in it, I wouldn't have been happy either. But this whole forum topic is about both crazy and ordinary ideas, preliminary ideas, about steel-manning each other and the little things that might just work. Not dismissing ideas based on flaws; of course there will be a lot of those.
We live in unreasonable times, and the solution to AI safety will quite likely look utterly unreasonable at first.
Chatting with a bot is a good idea, actually; I'll pass it along. You never said what the flaw was, though. You just bashed the idea and made it personal (the thing we agreed not to do here: this thread is for preliminary and crazy ideas, remember?).
Thank you for your open mind and your time.