Bio

Superpower: Can hold two opposing ideas in his head simultaneously
Weakness: Time

Jesper is a very curious individual with interests in basically everything. He is particularly interested in ontology, information theory, and heavy metal. He is getting quite happy with his current modeling of reality. 

He also believes human thinking peaked ca. 600 BC-200 CE, but that we have a good chance to catch up now.

He holds an M.Sc. in cell and molecular biology from Finland. He likes fantasy, sci-fi, philosophy, science, and ancient history, but somehow also has strong social skills (according to himself). Yet he is writing his own bio in third person, because that's what the forum guide did.

He has not pursued a PhD in cell biology, but briefly worked in the biotech field, developing lectin technologies. He currently works in a non-scientific corporate role where he holds final administrative responsibility for 11 countries.

Jesper recently decided (2025) that he should focus his talents and precious time on AI safety and ethics, as they are both meaningful and frontiers where many of his interests align.

Aspiring independent researcher.

How I can help others

I can assist with negotiations, with strong skills in diplomatic and strategic negotiation.

Comments

This is a very important and underrated topic in tech circles. I strongly agree that mutual trust is essential.

In the last three years, I have been part of two major restructurings within a corporate branch employing over 6,000 people, itself part of a much larger company.

My own department faced serious challenges. We went from fragmented and overworked teams to resilient and flourishing ones in 1.4 years. Here are my takes on what had the biggest impact in turning it all around and setting us up for success.


  1. Top leader(s) have to personally communicate and engage with ALL layers of the company/branch. This is leadership 101.
  2. Leadership has to force team managers to sit down and work out their differences. Forcing practical cooperation at an operational level is essential for starting to build mutual trust, and for building new and better infrastructure down the road.
  3. Reward top performers and workhorses with PTO, not just bonuses.

In summary: Talk to each other, and get some rest once in a while.

I think one reason it is hard to break into the AI safety field is that people are almost exclusively working on either a) alignment theory, which is very hard and largely mirrors slow-moving academic work, or b) AI governance, which is somewhat dry and also inherently slow-moving.

I think we need more diversity in the field, and more focus on creating infrastructure for collaboration. We need coordinators, strategists, ethicists, startups, people working on control problems, people working on engineering, auditors, networkers, lobbyists, and more.

--

I am a strategist. My personal top interests in AI safety are: a) industry collaboration mechanisms, b) transparency mechanisms, and c) shared ontologies and shared ethical frameworks.

None of these are traditional AI safety focus areas.

--

Diversity makes categorization harder, but we should not be purists. One example idea I was briefly working on: a self-funded, decentralized (ledger-based) insurance scheme for young people. Most would join out of job-security concerns, some out of AI safety concerns. Is this an AI safety topic, or something else? Many would say something else, but then again, if implemented, it would build awareness and a mobilized network. Many small streams form a river...?