Jonas Hallgren

384 karma · Joined · Uppsala, Sweden

Bio

Participation
5

Curious explorer of interesting ideas. 

I try to write as if I were having a conversation with you in person. 

I like Meditation, AI Safety, Collective Intelligence, Nature, and Civilization VI. 

I would like to claim that my current safety beliefs are a mix of Paul Christiano's, Andrew Critch's, and Def/Acc's.

Currently the CSO of a startup working to bring safe collective systems of AIs into the real world (Collective Intelligence Safety or Applied Cooperative AI, whatever you want to call it).

I also think that Yuval Noah Harari has some of the best takes on the internet.
 

Comments
48

Topic contributions
3

Great point. I wasn't thinking of the specific 5% claim when considering the scale, but rather whether more effort should be spent in general.

My brain basically did a motte-and-bailey on me emotionally when it comes to this question, so I appreciate you pointing that out!

It also seems like you're mostly critiquing the tractability of the claim rather than the underlying scale or neglectedness?

It kind of gives me some GPR vibes as to why it's useful to do right now, with more or fewer resources spent depending on initial results?

Super exciting! 

I just wanted to share a random perspective here: Would it be useful to model sentience alongside consciousness itself? 

If you read Daniel Dennett's book Kinds of Minds or take some of the Integrated Information Theory stuff seriously, you arrive at a view of consciousness as a kind of field. This view is similar to Philip Goff's or to more Eastern traditions such as Buddhism. 

Also, even in theories like Global Workspace Theory, the amount of localised information at a point in time matters alongside the type of information processing that you have. 

I'm not a consciousness researcher or anything, but I thought it would be interesting to share. I wish I had better links to research here and there, but if you look at Dennett, Philip Goff, IIT or Eastern views of consciousness, you will surely find some interesting stuff.

Wild animal welfare and longtermist animal welfare versus farmed animal welfare? 

There's this idea of the truth as an asymmetric weapon; I guess my point isn't necessarily that the approach vector will be something like:
Expert discussion -> Policy change

but rather something like
Expert discussion -> Public opinion change -> Policy change

You could say something about memetics and that it is the most understandable memes that get passed down rather than the truth, which is, to some extent, fair. I guess I'm a believer that the world can be updated based on expert opinion. 

For example, I've noticed a trend in the AI Safety debate: the quality seems to get better and more nuanced over time (at least, IMO). I'm not sure what this entails for the general public's understanding of the topic, but it feels like it affects policymakers.

Yeah, I guess the crux here is to what extent we actually need public support, or at least what type of public support we need for it to become legislation?

If we can convince 80-90% of the experts, then I believe that this has cascading effects on the population, and it isn't like AI being conscious is something that is impossible to believe either. 
I'm sure millions of students have had discussions about AI sentience for fun, so it isn't fully outside the Overton window either.

I'm curious to know if you disagree with the above or if there is another reason why you think research won't cascade to public opinion? Any examples you could point towards? 

A crux that I have here is that research that takes a while to explain is not going to inspire a popular movement. 

Okay, what comes to mind for me here is quantum mechanics and how we've come up with some pretty good analogies to explain parts of it. 

Do we really need to communicate the full intricacies of AI sentience to say that an AI is conscious? I guess that this isn't the case.

The world where EA research and advocacy for AI welfare is most crucial is one where the reasons to think that AI systems are conscious are non-obvious, such that we require research to discover them, and require advocacy to convince the broader public of them.
But I think the world where this is true, and the advocacy succeeds, is a pretty unlikely one. 

I think this is creating a potential false dichotomy? 
Here's an example of what I believe might happen with AI sentience without any intervention:

1. Consciousness is IIT (Integrated Information Theory)- or GWT (Global Workspace Theory)-based in some way or another. In other words, there is some sort of underlying field of sentience, like the electromagnetic field, and when parts of the field interact in specific ways, "consciousness" appears as a point load in that field.
2. Consciousness is then only verifiable if this field has consequences for the other fields of reality; otherwise, it is non-Popperian, like multiverse theory. 
3. Number 2 is really hard to prove, so we're left with largely correlational evidence. It is also tightly connected to what we think of as metaphysics, meaning we're going to be quite confused about it. 
4. Therefore, legislators and researchers leave this up to chance and don't develop any complete metrics, as it is too difficult a problem. They just hope that AIs don't have sentience. 

In this world, adding some AI sentience research from the EA Direction could have the consequences of:

1. Making AI labs have consciousness researchers on board so that they don't torture billions of iterations of the same AI.
2. Making governments create consciousness legislation and think tanks for the rights of AI.
3. Creating technical benchmarks and theories about what is deemed to be conscious (see this initial, really good report, for example).

You don't have to convince the general public; you have to convince the major stakeholders of tests that check for AI consciousness. It honestly seems kind of similar to what we have done for the safety of AI models, but for their consciousness instead?

I'm quite excited for this week, as it's a topic I'm very interested in but also one I feel I can't really talk about much or take seriously because it's a bit fringe, so thank you for having it!

Damn, I really resonated with this post. 

I share most of your concerns, but I also feel that I have some even more weird thoughts on specific things, and I often feel like, "What the fuck did I get myself into?"

Now, as I've basically been into AI Safety for the last 4 years, I've really tried to dive deep into the nature of agency. You get into some very weird parts of trying to computationally define the boundary between an agent and its surroundings, and the division between individual and collective intelligence just starts to break down a bit. 

At the same time, I've meditated a bunch and tried to figure out what the hell the "no-self" theory of the mind-body problem was all about, and I'm basically leaning towards some sort of panpsychist IIT interpretation of consciousness at the moment. 

I also believe that only the "self" can suffer and that the self is only in the map, not the territory. The self is rather a useful abstraction that is kept alive by your belief that it exists, since you will interpret incoming evidence as being part of "you." It is therefore a self-fulfilling prophecy, or part of "dependent origination".

A part of me then thinks the most effective thing I could do is examine the "self" definition within AIs to determine when it is likely to develop. This feels very much like a "what?" conclusion, so I'm just trying to minimise x-risk instead, as it seems like an easier pill to swallow. 

Yeah, so I kind of feel really weird about it, so uhh, to feeling weird, I guess? Respect for keeping going in that direction though, much respect.

So I've been working in a space very adjacent to these ideas for the last 6 months, and I think the biggest problem I have with this is just its feasibility.

That being said, we have thought about some ways of approaching a go-to-market (GTM) strategy for a very similar system. The system I'm talking about here is an algorithm to improve the interpretability and epistemics of organizations using AI.

One is to sell it to the C-suite as a way to "align" management teams lower down in the organization, since this actually incentivises people to buy it.

A second is to apply the system fully to AIs to prove that it increases the interpretability of AI agents.

A third is to prove it for non-profits by creating an open-source solution and directing it to them.

At my startup we're doing number two, and at a non-profit I'm helping we're doing number three. After doing some product-market-fit testing, people weren't really that excited about number one, so we had a hard time getting traction, which meant a hard time building something.

Yeah, that's about it really, just reporting some of my experience working on a very similar problem.

I appreciate you putting out a post in support of someone who might have some EA leanings that would be good to pick up on. I may or may not have done the same in the past and then removed the post because people absolutely shat on it on the forum 😅 so respect.
