Christopher Clay

Non-Trivial Fellow @ Non Trivial
66 karma · Pursuing an undergraduate degree · United Kingdom

Bio

I'm on an (unintended) Gap Year at the moment and will study maths at university next year. Right now I'm exploring cause prioritisation.

Previously I focused on nuclear war, but I no longer think it's worth working on: it's very intractable and the extinction risk is very low. I've also explored AI safety (through the AI Safety Fundamentals course), but my coding isn't up to scratch at the moment.

The main thing I'm focusing on right now is cause prioritisation - I'm still quite sceptical of the case for working on extinction risks.

Things I've done:

  • Non Trivial Fellowship. I produced an explainer of the risks posed by improved precision in nuclear warfare.
  • AI Safety Fundamentals. I produced this explainer of superposition: https://chrisclay.substack.com/p/what-is-superposition-in-neural-networks 

How others can help me

I'm looking for opportunities to gain career capital this summer, particularly in EA-related orgs. I'm open to many things, so if you think I might be a good fit, feel free to reach out!

How I can help others

If you'd like advice on Non-Trivial or are interested in talking about cause prioritisation, send me a message!

Posts (1)

Comments (6)

I see the argument about the US Government's statistical value of a life used a lot, and I'm not sure I agree. I don't think it echoes public sentiment so much as a government's desire to absolve itself of blame. Note how much more is spent per life on, say, air transport than on disease prevention.

Interesting argument - I don't know much about it, but I don't see much value in thinking in terms of conditional value. If AI safety is doomed to fail, there's little point focusing on good outcomes that won't happen when there are great global health interventions available today. Arguably, those global health interventions could also help at least some parts of humanity have a positive future.

Unless you work in practical AI Safety/AI policy, I disagree.

The press is very bad at conveying the scale of problems. For example, it constantly covers murders, which are not a pressing problem in most parts of the world right now.

The press also tends to focus on the unimportant parts of stories. For instance, when they talk about scientific papers, they often focus on a small aspect of the papers that they think readers will find exciting and blow it out of proportion.

I also often don't have enough time in a day to both read the news and read a book, and I think reading books is much more valuable.

I've heard this argument before, but I find it uncompelling on tractability grounds. If we don't go extinct, it's likely to be a silent victory; most humans on the planet won't even realise it happened. Individual humans working on x-risk reduction will probably only affect the morals of the people around them.

Wow - of all the replies this makes the most sense to me! That's a great way of looking at things!