Johan de Kock

101 karma · Joined · Working (0-5 years) · Maastricht, Netherlands
www.effectivealtruismmaastricht.nl/

Bio

Participation
5

An aspiration in my life is to make the biggest positive impact in the world that I can. I started working on this goal in 2018 as a junior paramedic and in 2019 by beginning my training as a physiotherapist. My perspective shifted significantly after reading Factfulness by Hans Rosling, which inspired me to explore larger-scale global issues. This led me to pursue an interdisciplinary degree in Global Studies and to discover the research field and social community of Effective Altruism.

Since 2022, I’ve been actively involved in projects ranging from founding a local EA university group to launching an AI safety field building organization. Through these experiences and the completion of my bachelor programme, I discovered that my strengths seem to align best with AI governance research, a field I believe is fundamental for ensuring the responsible development of artificial intelligence.

Moving forward, my goal is to deepen my expertise in AI governance as a researcher and contribute to projects that advance this critical area. I am excited to connect with like-minded professionals and explore opportunities that allow me to make a meaningful impact.

Comments
19

Thank you for your reply!

Summary: My main intention in my previous comment was to share my perspective on why relying too much on the outside view is problematic (and, to be fair, that wasn’t clear because I addressed multiple points). While I think your calculations and explanation are solid, the general intuition I want to share is that people should place less weight on the outside view than this article seems to suggest.

I wrote this fairly quickly, so I apologize if my response is not entirely coherent.


Emphasizing the definition of unemployment you use is helpful, and I mostly agree with your model of total AI automation, where no one is necessarily looking for a job.

Regarding your question about my estimate of the median annual unemployment rate: I haven’t thought deeply enough about unemployment to place a bet or form a strong opinion on the exact percentage points. Thanks for the offer, though.

To illustrate the main point in my summary, I want to share a basic reasoning process I'm using.

Assumptions:

  • Most people are underestimating the speed of AI development.
  • The new paradigm of scaling inference-time compute (instead of training compute) will lead to rapid increases in AI capabilities.
  • We have not solved the alignment problem and don’t seem to be making progress quickly enough (among other unsolved issues).
  • An intelligence explosion is possible.

Worldview implications of my assumptions:

  • People should take this development much more seriously.
  • We need more effective regulations to govern AI.
  • Humanity needs to act now and ambitiously.

To articulate my intuition as clearly as possible: the lack of action we’re currently seeing from various stakeholders in addressing the advancement of frontier AI systems seems to be, in part, because they rely too heavily on the outside view for decision-making. While this doesn’t address the crux of your post (though it is what prompted me to write my comment in the first place), I believe it’s dangerous to place significant weight on an approach that attempts to make sense of developments for which we have no clear reference classes. AGI hasn’t happened yet, so I don’t understand why we should lean heavily on historical data to assess such a novel development.

What’s currently happening is that people are essentially throwing their hands up and saying, “Uh, the probabilities are so low for X or Y impact of AGI, so let’s just trust the process.” If people placed more weight on assumptions like those above, or reasoned more from first principles, the situation might look very different. To be clear, my issue is with putting too much weight on the outside view, not with your object-level claims.

I am open to changing my mind on this.

Thank you for the post! How much weight do you think one should allocate to the inside and outside view, respectively, in order to develop a comprehensive estimate of the potential future unemployment rate?

Your calculations look solid, but it seems inappropriate to me to put as much weight on historical data as you do, especially because this ignores the fact that the development of intelligent systems more capable than humans has never occurred in history. That fundamentally changes the game.

The more the world changes, the less weight I think one should put on the outside view (this needs more nuance). People are scared, they don't update in the face of new evidence, and they dislike change.

I know you are not saying that the inside view doesn't matter, but I am concerned that a post like this anchors people toward a base rate that is a lot lower than what things will actually be like. It reinforces status quo bias, and that frustrates me because so many people don't seem to understand the seriousness of our situation.
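To make the weighting question concrete, here is a minimal sketch of the kind of blending I have in mind (the `novelty` parameter, the linear schedule, and all of the numbers are purely illustrative assumptions on my part, not figures from the post):

```python
# Minimal sketch: blend an outside-view base rate with an inside-view estimate.
# The "novelty" knob captures how unlike the historical reference class the
# situation is; the linear schedule and the example numbers are arbitrary.

def blended_estimate(outside_view: float, inside_view: float, novelty: float) -> float:
    """Weighted mix of two unemployment-rate estimates (as fractions, e.g. 0.05 = 5%).

    novelty = 0.0 -> business as usual, trust the historical base rate fully.
    novelty = 1.0 -> unprecedented situation, trust the inside view fully.
    """
    weight_outside = 1.0 - novelty
    return weight_outside * outside_view + novelty * inside_view

# Example: 5% historical base rate vs. a 30% inside-view estimate under rapid AI automation.
print(blended_estimate(0.05, 0.30, novelty=0.2))  # mostly outside view -> 0.10
print(blended_estimate(0.05, 0.30, novelty=0.8))  # mostly inside view  -> 0.25
```

The point is not the exact numbers, only that as the situation becomes less like anything in the reference class, the blended estimate should move away from the historical base rate; how quickly it should move is exactly the disagreement here.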

I think it makes a lot of sense to reason bottom-up when thinking about topics like these, and I actually disagree with you a lot. It seems to me that there is a deeply correlated failure happening in the AI safety community: people are putting way too much weight on the outside view. I am happy to elaborate.

Thank you for sparking this discussion.

Thank you for writing this! I just took the time to write a letter.

Would you consider adding your ideas for 2 minutes? - Creating a comprehensive overview of AI x-risk reduction strategies
------

Motivation: To identify the highest impact strategies for reducing the existential risk from AI, it’s important to know what options are available in the first place.

I’ve just started creating an overview and would love for you to take a moment to contribute and build on it with the rest of us!

Here is the work page: https://workflowy.com/s/making-sense-of-ai-x/NR0a6o7H79CQpLYw

Some thoughts on how we collaborate:

  • Please don’t delete others’ bullet points; instead, use the comment feature to suggest changes or improvements.
  • If you’re interested in discussing this further, feel free to add your name and contact details here. I may organize a follow-up discussion.
     

Thank you for sharing, Zach! I think it is valuable to highlight the key parts of the podcast episode and share them here. With so many podcast episodes to choose from, this helps people selectively engage with the parts that are most relevant to them.

Thank you for writing this up, Akash! I am currently exploring my aptitude as an AI governance researcher and consider the advice provided here valuable. I have especially come to appreciate the point about bouncing ideas off people early on, and throughout the research process.

For anyone who is in a similar position, I can also highly recommend checking out this and this post.

If you are a (junior or senior) researcher interested in expanding your pool of people to reach out to for feedback on your research projects, or if you simply want to connect, feel free to reach out on LinkedIn or schedule a call via Calendly! I look forward to chatting.

I think this is an interesting post. I don’t agree with the conclusion, but I think it’s a discussion worth having. In fact, I suspect that this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts, and I could have added a lot more nuance.

Edit: I just noticed that your title includes the word "sentient". Hence, my second perspective is not as applicable anymore. My own take that I offer at the end seems to hold up nonetheless.
 

  1. If we develop an ASI that exterminates humans, it will likely also exterminate all other species that might exist in the universe. 
     
  2. Even if one subscribes to utilitarianism, it does not seem clear at all that an ASI would be able to experience any joy or happiness, or that it would be able to create it. Sure, it can accomplish objectives, but one can argue from a strong position that these won’t accomplish any utilitarian goals. Where is the positive utility here? And, even more importantly, how should we frame positive utility in this context?

I think a big reason not to buy your argument is that humans are a lot more predictable than an ASI. We know how to work together (at least a bit), and we have managed to improve the world pretty well over the last centuries. Many people dedicate their lives to helping others (such as this lovely community), especially once their more basic needs on Maslow's hierarchy are met. Sure, we humans have many flaws, but it seems far more plausible to me that we will be able to accomplish full-scale cosmic colonisation that actually maximises positive utility, provided we don't go extinct in the process. On the other hand, we don't even know whether an ASI could create positive utility, or experience it.

I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one's evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting at a desk for 10 hours a day is not what our bodies evolved for. "So let's go for a run!" When it comes to large-scale coordination, however, things get trickier...

"I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility." -> I agree with this and your following points. 

Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is because a large part of our human nature is to strive to maximize resources and to consistently improve our conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more took us from picking berries and hunting mammoths to living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don't think the human condition was different in 2010; it was pretty much the same as it is now, just as it was 20,000 years ago. "Bigger, better, faster."

The combination of this human tendency and our short-sightedness is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with greater wisdom and better coordination. It seems that we are far from that point, unfortunately.

A key takeaway for me is the increased likelihood of success with interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible as they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean that they should be favoured though, only that they will be more likely to succeed. I would be glad to get pushback here.

I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6). 

Ryan, thank you for your thoughts! The distinctions you brought up are something I had not thought about yet, so I am going to take a look at the articles you linked in your reply. If I have more to add on this point, I will do so. Lots of work ahead to figure out these important things. I hope we have enough time.
