
JuliaHP

39 karma

Comments (2)

A hypothesis that springs to mind; it might or might not be useful for engaging with this productively, and might be wrong depending on what class of people you have been having meetings with.

When you select for people working on AI risks, you select for people who are generally less respectful of the status quo and of social norms. You select for the kind of person who is less likely to do something merely because it is a social norm. For this reference class of person to do a thing, they have to reach, through their own reasoning, the conclusion "It would be worth the effort for me to change myself to become the kind of person who shows up on time consistently, compared to other things I could be spending my effort on." They might just figure it's a better use of their effort to think about their research all day.

I would guess that most people on this earth don't show up on time because they reasoned through that it is a good idea; they do it because it has been drilled into them through social norms, and they value those norms highly.

(note: this comment is not intended to be an argument that showing up on time is a waste of time)

In my experience there are lots of young people (many of whom I know personally) who want to help with AI alignment and are, in my opinion, capable of doing so; they just need to spend a year or two trying and learning things to build the necessary skills.

These are people who usually lack a track record of prior achievement and therefore cannot access various EA-adjacent grants to buy themselves the slack needed to put in the time and focus on a single goal. What they really need is not salary-sized grants but simply a guarantee that they will have food, housing, and a supportive environment and community while they take a pause from formal education or from the jobs meant to keep them afloat.

I personally know one such person who was helped out of a prior dependence through CEEALAR and has started to become productive. My own (and Orthogonal's) stay at CEEALAR was strongly positive as well.

I've heard that grantmakers are often operationally constrained when giving out smaller hits-based grants, for example to individuals. By giving to CEEALAR, grantmakers would outsource this operational cost and be able to bootstrap people to work on AI alignment in a hits-based manner, at low cost compared to alternatives such as individual grants. I do think this is one of the most cost-effective ways to help with AI alignment. Many others I know doing good work in alignment, myself included, would not be in the space if not for personal hits-based grants.

I'm very confused and sad that CEEALAR has not received more funding. Not only do I wish that CEEALAR could stay afloat and expand, but I also think we would STRONGLY benefit from similar institutions aimed at low-cost housing for motivated but potentially unproven individuals in other geographical locations, say the East and West Coasts of the US, as well as somewhere in Europe. If CEEALAR were funded consistently, that would give people the confidence to start similar organizations elsewhere.