
SeLo

115 karma
Interests:
Forecasting

Bio

I work on applying best practices from safety-conscious industries to AI governance.

Comments (3)

SeLo

Hi Eli, thanks for this update. 

On the particular point of cutting the number of sessions you record, I wanted to quickly mention:

1. You say that the videos have few views, but maybe that is also partly due to a lack of promotion? I recall finding the YouTube channel only after having been involved in EA for quite a while; nobody had ever mentioned it to me as a good source of information. Now I'm a regular viewer (see also point 2).
2. I have the cached heuristic that most of the content on the primary and secondary stages gets recorded, which I believe reduces my FOMO quite a bit. Without the anticipated recordings to watch later, I would probably schedule a couple fewer 1on1s so I could watch the talks live. I'm not sure how common this is, but it should maybe be a factor in the calculation of the (impact) cost.
3. The videos are also particularly useful for staying roughly up to date with topics outside my immediate cause areas. These talks don't cross my impact bar for attending directly at the event (since they likely won't guide any actions of mine), but they are great to watch over dinner, so I don't feel like I'm becoming an "AI-EA" and instead stay engaged with other parts of the community.

(As an independent aside, fwiw, the cost category with the highest delta to my expectations, in relative terms, is the 100k printing and signage cost. The maps are always really professional and pretty, sure, but wow!)

SeLo

I think this event is valuable from multiple perspectives:
(1) I'm generally excited about more longtermist phase 2 work, since I think establishing such a track record has multiple benefits: signaling effects for implementation-focused people, moving us forward on the implementation learning curve for building real-world things, and simply being more “believable” by putting skin in the game as opposed to theorizing.

(2) On an object level, I believe shelters may turn out to be an important element of our portfolio for reducing existential risk, potentially in both a response and a resilience function.

(3) I am curious whether this approach to catalyzing organizational development for an ex ante defined project may be a model for future events.

(4) Establishing a project focused on existential risk where we can show a straightforward causal chain towards reduced existential risk would (if successfully executed) extend our list of legible achievements, thereby strengthening the case that there are high-EV interventions out there, waiting to be executed.

SeLo

I like this advice and plan to follow it myself.

I'd like to note, though, that one part of my brain insists that this approach increases "false positive hires": for every application, there is some probability that the employer selects me for a role someone else would be more suitable for, which reduces counterfactual impact.

Spending time to figure out whether I consider myself suitable, instead of just applying, would reduce this probability.

"Just applying" is likely still be the optimum default for the community as a whole by reducing false negatives (people not applying for roles they end up being a superior fit for compared to others) and accepting the false positives. Additionally, the false positives seem to have less downsides as they can be ideally identified quickly, with the false negatives not having a feedback loop to identify this counterfactual loss.