This is a special post for quick takes by Minh Nguyen. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Suggestion for EAGs/EAGx:

Please give more detail on your Swapcard about your research/projects. Even just the title of the research paper, or any links.

I always review the entire attendee list. I actively want to meet people, but if someone just puts "I am trying to get into AI alignment research", I literally have nothing to go off, and can't think of a reason to reach out with so few details.

This also saves time during 1-on-1s, so you don't have to re-introduce yourself and your areas of interest every 30 minutes.

If you don't have a project/experience, you can even put ideas you're fascinated by, or groups you identify with. I literally Ctrl+F "ADHD", "startup", "Singapore" and other keywords related to my research interests.

That's a great comment.

In general I would have liked to see 30-50% more info on people's Swapcards, but I know there is a tradeoff with effort and readability there as well. Like you say, projects and specific interests can be super useful.

Hello! I'm looking for AI Safety proposals/project ideas to deploy within CivitAI, the biggest open-source AI image model hosting platform.

Civitai AI: A platform for sharing AI-generated art

I own AI Hub, an AI voice model hosting platform with ~1 million users. We are discussing integration/merging with CivitAI, an open-source AI model hosting platform and the 7th most visited AI website, with 27 million monthly users. Most of their hosting is Stable Diffusion image models, but I'm helping them introduce AI voice cloning model hosting. I'd get a lot of autonomy over distribution for voice and possibly text models.

The founders mentioned they’re also exploring content moderation solutions, and when I asked, they said they were open to working with AI Safety companies. CivitAI is very influential in the open-source community, so I think shaping policy here could significantly shape AI Safety wrt open-source and local models. Some directions I’m exploring:

  • Content and open source model moderation frameworks/tools
  • Collecting data and running tests that aid alignment research

If anyone has ideas, or knows anyone who might be interested, do LMK! This is exploratory, but I aim to formulate an idea and deploy to users within ~3 months, with 80% confidence. I am literally open to trying any suggestions - for-profit, nonprofit, research, product, governance, alignment research etc.

Even if you have an idea that’s not AI image-related, I have a decent (40%) chance of proposing it to Github, Huggingface etc. Plus, my task is to diversify outside of image models anyway, so LLM-related proposals could still be relevant.

(this is still in exploratory/idea phase, and I wasn't sure if this should be a full post)

In AI Safety, it seems easy to feel alienated and demoralised.

The stakes feel vaguely existential, most normal people have no opinion on it, and it never quite feels like "saving a life". Donating to global health or animal welfare feels more direct, if still distant.

I imagine a young software developer discovers EA, and then AI Safety, hoping to do something meaningful. But the day-to-day work after that feels much the same as a normal job would.

Curious if others feel the same.
