
OllieBase

AI Events Program Lead @ Centre for Effective Altruism
6552 karma · Working (6-15 years)

Sequences (1): CEA Community Events Retrospective

Comments: 360

This is a great post!

> ITN estimates sometimes consider broad versions of the problem when estimating importance and narrow versions when estimating total investment for the neglectedness factor (or otherwise exaggerate neglectedness), which inflates the overall results

I really like this framing. It isn't an ITN estimate, but a related claim I think I've seen a few times in EA spaces is:

"billions/trillions of dollars are being invested in AI development, but very few people are working on AI safety"

I think this claim: 

  • Seems to ignore large swathes of work geared towards safety-adjacent things like robustness and reliability.
  • Discounts other types of AI safety "investments" (e.g., public support, regulatory efforts).
  • Smuggles in a version of "AI safety" that actually means something like "technical research focused on catastrophic risks motivated by a fairly specific worldview".

I still think technical AI safety research is probably neglected, and I expect there's an argument here that does hold up. I'd love to see a more thorough ITN on this.

> By my count, barring Trajan House, it now appears that EA has officially been annexed from Oxford


Do you mean Oxford University? That could be right (though it's a little strong; I'm sure the university still has its sympathisers). Worth noting that Oxford is still one of the cities (towns?) with the highest density of EAs in the world, and people here are also very engaged (i.e., many probably work in the space).

> I assumed the main reason for doing something like that is to get people engaged and actually thinking about ideas

I don't know what motivations people usually have, but I also feel skeptical of this vague "activation" theory of change. If session leads don't know what actions they want session participants to take, I'm not optimistic about attendees generating useful actions themselves by discussing the topic for 10 minutes in a casual no-stakes, no-rigour, no-guidance setting. I'm more optimistic if the ask is "open a doc and write things that you could do".

> I would do a meeting of people filtered for being high context and having relevant thoughts, which is much more likely to work.

Yep, the thing you've described here sounds promising for the reasons Alex covered :) I realise I was thinking of the conference setting in my critique (and probably should've made that explicit), but I'm much more optimistic about brainstorming in small groups of people with shared context and shared goals, using something like the format you've described.

It's not clear that EA funding relies on Facebook/Meta much anymore. The original tweet has been deleted, and this post is three years old, but in it Holden wrote of Cari and Dustin's wealth:

> I also note that META stock is not as large a part of their portfolio as some seem to assume

You could argue Facebook/Meta is what made Dustin wealthy originally, but it's probably not correct to say that EA funding "deeply relies" on Meta today.

Yep, I think this is right, but we don't totally rely on these kinds of surveys!

We also conduct follow-up surveys to check what actually happens a few months after each event, and unsurprisingly, you do see intentions and projects dissipate (as well as many materialise). A problem we face is that these follow-up surveys have much lower response rates.

Other, more reliable evidence about the impact of EAG comes from surveys that ask people how they found impactful work (e.g., the EA Survey, Open Phil's surveys), and EAG is cited a lot in those. We'll usually turn to this kind of evidence to think about our impact, though end-of-event feedback surveys are useful for feedback about content, venue, catering, attendee interactions, etc. You can also do things like discounting the impact reported in end-of-event surveys using follow-up survey data.

I'm reading "OK" as "morally permissible" rather than "not harmful". E.g., I think it's also "OK" to eat meat, even though I think it's causing harm.

(Not saying you should clarify the poll, it's clear enough and will probably produce interesting results either way!)

I thought this was a great post, thanks for sharing! I think you're unusually productive at identifying important insights in ethics and philosophy, please keep it up!

I strongly upvoted this. I don't endorse all your claims, but this is really easy to engage with, it covers a very important topic, and I admire how you charitably worked within the framework Shapira offered while ending up in a very different place.

Thanks. In the original quick take, you wrote "thousands of independent and technologically advanced colonies", but here you write "hundreds of millions".

If you think there's a 1 in 10,000 or 1 in a million chance of each independent and technologically advanced colony creating astronomical suffering, it matters whether there are thousands or millions of colonies. Maybe you think it's more like 1 in 100, and then even thousands (or more) would make it extremely likely.
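To make the sensitivity to these numbers concrete, here's a minimal sketch of the underlying arithmetic. Assuming colonies are independent with a per-colony probability p, the chance that at least one of n colonies creates astronomical suffering is 1 − (1 − p)^n (the p and n values below are just the hypothetical figures from this thread, not estimates):

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent colonies
    causes the outcome, given per-colony probability p."""
    return 1 - (1 - p) ** n

for p in (1e-6, 1e-4, 1e-2):
    for n in (1_000, 100_000_000):
        print(f"p={p:g}, n={n:,}: {p_at_least_one(p, n):.6f}")
```

With p = 1 in a million and a thousand colonies, the overall chance stays around 0.1%; with p = 1 in 100 and the same thousand colonies, it's already above 99.99%, and hundreds of millions of colonies push even tiny per-colony probabilities toward certainty.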

> probably near 100% if digital sentience is possible… it only takes one


Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.
