Richard Möhn

software developer @ Spark Wave
305 karma · Working (6–15 years) · Kagoshima, Japan
littlepluses.com

Comments (70)

What does the research say about the fraction of people who decided on ethical grounds to have/not to have children and then were happy with their choice on emotional grounds vs. regretted their choice?

It might be better to have them for impact and then find out that it's great to have them regardless of impact, than not to have them because one didn't feel like having them. (Similar to my experience. I think babies are gross rather than cute. Only my own baby is cute.)

I know there is more nuance in your post, but if I take your title at face value, I would say: When I'm evaluating candidates and I catch you not being honest (i.e. lying or distorting the truth), I'm going to reject your application. If I catch you lying outright, I'm never going to consider you again as a candidate. If I find out after you were hired that you lied during the application process, I would probably do my best to get you fired. (I mean the ‘you’ in a general sense. I don't expect that you, JDLC, would lie.)

If you give honest, but unspecific answers, and it's about an important skill, I'm going to ask you follow-up questions to figure out what's going on.

Thanks! I've written it down for next time.

‘Down-to-earth interventions’ sounds better to me, too. I like all of the examples.

Just a quick thought: Supporting (investigative) journalism is also a possible intervention. I think journalism is considered a pillar of democracy? Currently, Kelsey Piper's work on OpenAI/Sam Altman is a good example.

Good to know about the small donations! Although I wonder: There are lots of (small) AI safety orgs that have a donate button on their website, and it's easy to donate a small amount. Is this also secretly very inefficient? To make bigger donations, I would have to save up and then make a good decision. Personally, I prefer a drive-by spray-and-pray approach.

NTI is a good suggestion, too! Even if it's not just bio, at least it's not AI safety. (Nothing against AI safety – as I stated above: I already donate in other ways.)

I think Yanni isn't writing about personal favourites. Assuming there is such a thing as objective truth, it makes sense to discuss cause prioritization as an objective question.
