Jonas Hallgren

363 karma · Uppsala, Sweden

Comments: 39 · Topic contributions: 3

I appreciate you putting out a post in support of someone who might have some EA leanings that would be good to pick up on. I may or may not have done the same in the past and then removed the post because people absolutely shat on it on the forum 😅, so respect.

How will you address the conflict of interest allegations raised against your organisation? It feels like the two organisations are awfully intertwined. For God's sake, the CEOs are sleeping with each other! I bet they even do each other's taxes!

I'm joining the other EA. 

It makes sense for the dynamics of EA to naturally go this way (not endorsing it). It is just applying the intentional stance plus the free energy principle to the community as a whole. I find myself generally agreeing with the first post at least, and I notice the large regularisation pressure being applied to individuals in the space.

I often feel the bad vibes associated with trying hard to get into an EA organisation. As a consequence, I'm doing for-profit entrepreneurship for AI safety adjacent to EA, and it is very enjoyable. (And more impactful, in my view.)

I will, however, say that the community in general is very supportive and that it is easy to get help with things if one has a good case and asks for it, so maybe we should make our structures more focused around that? I echo some of the things about making it more community-focused, however that might look. Good stuff OP, peace.

I did enjoy the discussion here in general. I hadn't heard of the "illusionist" stance before, and it does sound quite interesting, yet I also find it quite confusing.

I generally find there to be a big confusion about the relation of the self to what "consciousness" is. I was in this rabbit hole of thinking about it a lot, and I realised I had to probe the edges of my "self" to figure out how it truly manifested. A thousand hours into meditation, some of the existing barriers have fallen down.

The complex attractor state can actually be experienced in meditation, and it is what you would generally call a case of dependent origination or a self-sustaining loop (literally, lol). You can see through this by the practice of realising that the self-property of mind is co-created by your mind and that it is "empty". This is a big part of the meditation project (alongside loving-kindness practice; please don't skip the loving-kindness practice).

Experience itself isn't mediated by this "selfing" property; rather, it is an artificial boundary we have created around our actions in the world for simplification reasons. (See Boundaries as a general way this occurs.)

So, the self cannot be the ground of consciousness; it is rather a computationally optimal structure for behaving in the world. Yet realising this fully is most easily done through your own experience, or through n=1 science, meaning that to fully collect the evidence you will have to discover it through your own phenomenological experience (which makes it weird to take into Western philosophical contexts).

So, the self cannot be the ground, and partly as a consequence of this and partly since "consciousness" is a very conflated term, I like thinking more about different levels of sentience instead. At a certain threshold of sentience, the "selfing" loop is formed.

The claims and evidence he's talking about may be true, but I don't believe they justify the conclusions he draws from them.

Thank you for this post! I will make sure to read the 5/5 books that I haven't read yet. I'm especially excited about Joseph Henrich's book from 2020; I had read The Secret of Our Success before but not that one.

I actually come to moral progress from an AI Safety interest. For me, the question is to some extent how we can set up AI systems so that they continuously improve "moral progress", since we don't want to leave our fingerprints on the future.

In my opinion, the larger AI Safety dangers come from "big data hell" scenarios like those described in Yuval Noah Harari's Homo Deus, or from Paul Christiano's slow take-off scenarios.

Therefore, we want to figure out how to set up AIs in such a way that moral progress is automatically improved through the structure of their use. I also believe that AI will most likely go through a process similar to the one described in The Secret of Our Success in the future, and that we should prepare appropriate optimisation functions for it.

So, if you ever feel like we might die from AI, I would love to see some work in that direction! 
(happy to talk more about it if you're up for it.)

The number of applications will affect the counterfactual value of applying. Now, sharing your expected number might lower the number of people who apply, but I would still appreciate having a range of expected applicants for the AI Safety roles.

What is the expected number of people applying for the AI Safety roles?
