
leillustrations🔸


Considering only the EA charities most commonly cited in these areas, I think global health charities are much better evidenced.

I think the most effective animal welfare interventions are probably more effective, I'm just much less sure what they are.

Thanks for your work here! I can see that the data here is limited, and I think that makes projects like this much harder but still very valuable. 

A couple of questions/suggestions: 

  • I'm unable to find any CSV files to download on that page. Could you point out where they are?
  • It looks to me like all the data you have is at the country level – is this correct?
    • I'm generally a big fan of geospatial work, but I'm not sure it's helping in your case. The map quickly becomes confusing if I turn on more than one layer at once, and I can't see any of the correlations you discuss.
    • You say: "layer overlapping, statistical variable calculations using GeoDa software, and interpolation were used to search for relevant indicators for spatial analysis" – can you specify what you did here? 

At the risk of sounding naive: I'd like to point out that you can go work for a frontier AI company and give lots of money to AI safety (or indeed any other cause area you believe in). 

If nothing else, giving at least the salary difference between a frontier job and a lower-paying non-frontier AI safety job prevents you from lying to yourself: thinking you are working at a frontier company because you believe it's good, while actually doing it for the benefits to you.

This is great! I think it's extremely important and underrated (dare I say 'neglected'?) work to identify and shift resources towards more effective charities in smaller contexts, even if those charities are unlikely to be the most globally effective.

Are you able to share more of your analysis or data? I'm curious about the proportion of charities in the categories you identify above, and what numerical/categorical values, if any, you assigned.

Upvoted because -50 karma strikes me as excessive for a joke (even if in poor taste).

  • Presentations from any of the individuals who work on evaluation, getting "into the weeds" of how decisions are made, and recent work
  • Presentations from GiveWell grantees on what they're currently working on
  • Bill / Melinda Gates, or otherwise someone from the Gates foundation
  • Elon Musk, or people from Tesla, Neuralink, and SpaceX
  • People from pharmaceutical companies
  • Board members of EVF
  • Sal Khan
  • A talk from successful edutainment/social media people who discuss EA-adjacent ideas like CGP Grey, Kurzgesagt, etc. (who did not necessarily start out EA-funded)
  • Podcast interviewers who discuss EA-relevant content, e.g. Ezra Klein (as already mentioned), Lex Fridman, Joe Rogan
  • People running non-cause-area EA interest groups, e.g. SEADS, High Impact [Engineers, Law, Professionals, Medicine, etc.], religious EA groups, on what they're working on/how EA is different in their communities

I suspect you would get a much wider applicant pool for EAGxSingapore if it were a week later.

The time requirements (<10 hours/week for most roles for most of the process, then full-time the week of the conference) are not really viable for most working professionals, and are more suited to students who would be on winter break - but it looks like NUS (Singapore), Ateneo de Manila University and De La Salle University (Philippines), and Fulbright University (Vietnam), i.e. (I think) the majority of the EA university groups in South East Asia, have school terms running up to the week of the conference.

If we are correct about the risk of AI, history will look kindly upon us (assuming we survive).

Perhaps not. It could be more like Y2K, where some believe problems were averted only by a great deal of effort and others believe there would have been minimal problems anyway.

I sometimes downvote comments and posts mostly because I think they have "too much" karma - comments and posts I might upvote or not vote on if they had less karma. As I look at the comment now it has 2 karma with 11 votes - maybe at some point it had more and people voted it back to 2?

I would have downvoted this comment if it had more karma because I think Deborah's comment can be read as antagonistic: "utterly blind", "dire state", "for heaven's sake!", calling people ignorant. In this context I didn't read it this way, but I often vote based on "what would the forum be like if all comments were more like this" rather than "what intentions do I think this person has".
