This is a special post for quick takes by [anonymous]. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
I had an idea for a new concept in alignment that might allow nuanced and human-like goals (if it can be fully developed).
Has anyone explored using neural clusters found by mechanistic interpretability as part of a goal system?
So you would look for clusters corresponding to certain concepts, e.g. happiness or autonomy, and include those neural clusters in the goal system. If the system learned over time, it could refine those concepts.
This was inspired by how human goals seem to contain concepts that themselves change over time.
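To make the idea a bit more concrete, here is a minimal sketch of what "a neural cluster in the goal system" could mean, assuming the cluster has been reduced to a direction in activation space (e.g. a cluster centroid found by interpretability work). All names, shapes, and the update rule are illustrative assumptions, not a worked-out proposal.

```python
import numpy as np

class ConceptGoal:
    """Scores model states by how strongly they express a concept direction
    (e.g. a 'happiness' cluster found via mechanistic interpretability)."""

    def __init__(self, concept_direction: np.ndarray):
        # Assumed input: a vector in activation space identified by
        # interpretability tooling (e.g. a cluster centroid or probe weight).
        self.direction = concept_direction / np.linalg.norm(concept_direction)

    def score(self, activations: np.ndarray) -> float:
        # Goal value = projection of the current activations onto the concept.
        return float(activations @ self.direction)

    def refine(self, new_examples: np.ndarray, lr: float = 0.1) -> None:
        # Refine the concept over time by nudging the direction toward the
        # mean activation of newly labelled examples of the concept.
        target = new_examples.mean(axis=0)
        self.direction = (1 - lr) * self.direction + lr * target
        self.direction /= np.linalg.norm(self.direction)


# Usage sketch with random stand-in activations.
rng = np.random.default_rng(0)
happiness = ConceptGoal(rng.normal(size=512))
state = rng.normal(size=512)
print(happiness.score(state))                 # how strongly the state expresses the concept
happiness.refine(rng.normal(size=(8, 512)))   # update the concept from new examples
```

The point of the `refine` step is just to show where the "concept changes over time" part could live; a real version would have to say where the labelled examples come from and how to stop the concept drifting into something unintended.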
I've got an idea for a business that could help biosecurity by helping stop accidental leaks of data to people who shouldn't have it. I'm thinking about proving the idea with personally identifiable information (a sketch of what that first step might look like is below). Looking for feedback and collaborators.
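A minimal sketch of the PII proof-of-concept, assuming the first step is scanning outgoing text for PII-like strings before it leaves an organisation. The patterns and field names are illustrative assumptions, not a product design.

```python
import re

# Illustrative PII patterns; a real system would need far more coverage and
# validation (checksums, context, allow-lists) to keep false positives down.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\d[ -]?){9,12}\d\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for any PII-like strings found in text."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        hits.extend((kind, m) for m in pattern.findall(text))
    return hits

if __name__ == "__main__":
    outgoing = "Please send the results to jane.doe@example.com or call 07700 900123."
    for kind, match in find_pii(outgoing):
        print(f"possible {kind} leak: {match}")
```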
How should important ideas around topics like AI and biorisk be shared? Is there a best practice, or are there government departments that specialise in handling that?
My blog might be of interest to people.