Jonas Hallgren

367 karma · Joined Uppsala, Sweden


Damn, I really resonated with this post. 

I share most of your concerns, but I also feel that I have some even weirder thoughts on specific things, and I often feel like, "What the fuck did I get myself into?"

Now, as I've basically been into AI Safety for the last 4 years, I've really tried to dive deep into the nature of agency. You get into some very weird territory when trying to computationally define the boundary between an agent and its surroundings, and the division between individual and collective intelligence just starts to break down a bit.

At the same time, I've meditated a bunch and tried to figure out what the hell the "no-self" theory of the mind-body problem was all about, and I'm basically leaning towards some sort of panpsychist IIT interpretation of consciousness at the moment.

I also believe that only the "self" can suffer, and that the self exists only in the map, not the territory. The self is rather a useful abstraction that is kept alive by your belief that it exists, since you will interpret incoming evidence as being part of "you." It is therefore a self-fulfilling prophecy, or part of "dependent origination".

A part of me then thinks the most effective thing I could do is examine how a "self" definition forms within AIs, to determine when it is likely to develop. This feels very much like a "what?" conclusion, so I'm just trying to minimise x-risk instead, as it seems like an easier pill to swallow.

Yeah, so I kind of feel really weird about it, so uhh, to feeling weird, I guess? Respect for keeping going in that direction though, much respect.

So I've been working in a space very adjacent to these ideas for the last 6 months, and I think the biggest problem I have with this is just its feasibility.

That being said, we have thought about some ways of approaching a go-to-market (GTM) strategy for a very similar system. The system I'm talking about here is an algorithm that uses AI to improve the interpretability and epistemics of organizations.

One is to sell it to the C-suite as a way to "align" management teams lower down in the organization, since this actually incentivises people to buy it.

A second is to run the system fully on AI agents, to prove that it increases their interpretability.

A third is to prove it out for non-profits by creating an open-source solution and directing it at them.

At my startup we're doing number two, and at a non-profit I'm helping we're doing number three. After doing some product-market-fit testing, people weren't really that excited about number one, so we had a hard time getting traction, which meant a hard time building anything.

Yeah, that's about it really; just reporting some of my experience working on a very similar problem.

I appreciate you putting out a support post for someone who might have some EA leanings that would be good to pick up on. I may or may not have done so in the past and then removed the post because people absolutely shat on it on the forum 😅 so respect.

How will you address the conflict-of-interest allegations raised against your organisation? It feels like the two organisations are awfully intertwined. For God's sake, the CEOs are sleeping with each other! I bet they even do each other's taxes!

I'm joining the other EA. 

It makes sense for the dynamics of EA to naturally go this way (not that I endorse it). It is just applying the intentional stance plus the free energy principle to the community as a whole. I find myself generally agreeing with the first post at least, and I notice the large regularization pressure being applied to individuals in the space.

I often feel the bad vibes that are associated with trying hard to get into an EA organisation. As a consequence, I'm doing for-profit entrepreneurship for AI safety adjacent to EA, and it is very enjoyable (and more impactful, in my view).

I will however say that the community in general is very supportive, and that it is easy to get help with things if one has a good case and asks for it, so maybe we should make our structures more focused around that? I echo some of the points about making it more community-focused, however that might look. Good stuff OP, peace.
