
JDLC

164 karma · Joined

Participation (7)

  • Completed the Introductory EA Virtual Program
  • Completed the In-Depth EA Virtual Program
  • Completed the Precipice Reading Group
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Comments (15)

This is a great resource: very detailed, and something I think all group organisers should be aware of! Great work on it.

Two questions:

  1. How likely do you think it is that this is a complete list? (Or, how likely do you think it is that a relevant organisation isn’t on this list?)
  2. Do you have plans to maintain this list as orgs open/close/change, and if so roughly how often?

Hey! Firstly - massive kudos for this post and your marketing efforts. That's a LOT of work done in total. A couple of thoughts:

  1. Do you know what the breakdown of attendees by outreach method was? The amount of stuff done might make this an unusually useful sample of what works.
  2. Your in-lecture pitches might actually have decreased the number of attendees to a first meeting (in a good way)!
    1. I can imagine last year, several people came to the first meeting thinking "EA sounds potentially interesting, but I don't have enough info to know if I'll like it. Let me go to their first meet and find out."
    2. I can imagine this year, several people heard the pitch, and made the judgement that EA wasn't for them, so they didn't turn up to the first meeting.
  3. I think the number of attendees at the first meeting is largely unimportant/Goodharting compared to the number of attendees at the (say) fifth meeting.
    1. EA is quite high-commitment as societies go ("Hey, you should come and change your whole life plan to help others"). Heavy-tailed impact and such.
    2. I think the more interesting question is whether increased marketing resulted in higher-quality/better-fit attendees (ie. likely to stick around and take the ideas really seriously), rather than just a higher number. Fifth-meeting attendance might be a partial datapoint for this.

Here are some less important/certain factors that I think you could also take into account with your model:

  • This intervention can't prevent first incidents, which might make it much less effective.
    • Intuitively, I agree the harm from the first incident is likely larger than subsequent incidents. At a complete guess, I'd say the first incident is maybe 20-25% of total harm.
    • This intervention by nature cannot prevent first incidents (reporting requires an incident to take place).
    • The linear model therefore (perhaps significantly) overestimates the benefits of this intervention.
  • The bar for 'interacting with' 30 children might be high.
    • A teacher sees a child regularly over a long period of time. They therefore build a rapport that could lead to disclosures.
    • Doctors or police (mostly) see children relatively few times over a short period. It seems less likely they would be disclosed to because of the weaker rapport.
    • However, this might be outweighed by these professions being more likely to discover CSA (eg. noticing signs of CSA during a medical checkup; investigating other crimes which correlate with CSA offences).
  • Not all disclosures result in stoppages (sadly).
    • More precisely, the key factor is not whether a disclosure causes a stoppage, but how much quicker a stoppage occurs after disclosure, compared to no disclosure.
    • Depending on the length and complexity of the investigative process, this might not prevent much harm (although I hope I'm wrong).
  • It might be better to assume an average of 1.5 extra years without disclosure (quick sketch after this list).
    • This is half the time of the average CSA 'cycle', and assumes that each disclosure happens at a 'random' point.
    • The 1-year figure is also sensible, because I assume the chance of disclosing is proportional to the length of abuse that has taken place.
    • However, maybe the opposite is true. After a few incidents disclosure is likely, but after several incidents it becomes 'normalised' in some way, and the chance of disclosing drops dramatically.
    • This could make the intervention more or less cost-effective, depending on how disclosure rates correlate with the length of CSA.
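As a quick sanity check on the 1.5-year point, here's a minimal sketch. The 3-year average 'cycle' length is from the discussion above; the assumption that disclosure lands at a uniformly random point within the cycle is mine, not the post's:

```python
import random

# If the average CSA 'cycle' lasts 3 years and a disclosure happens at a
# uniformly random point within it (my assumption), the expected remaining
# (preventable) time is half the cycle: 1.5 years.
CYCLE_YEARS = 3.0
remaining = [CYCLE_YEARS - random.uniform(0, CYCLE_YEARS) for _ in range(100_000)]
print(sum(remaining) / len(remaining))  # ~1.5
```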

Thanks for writing this Siobhan, and sorry this comment is very late. I currently see a few key issues (this comment), and a couple of broader concerns (future comment).

  1. 1 in 20 children will experience CSA (at some time). This does not mean 1 in 20 children are experiencing CSA (at the current time).
    1. On average, a child experiencing CSA experiences it for 3 years, at a random point between age 3 and 18 (15 years).
      1. (I'm ignoring children under 3, since it is unlikely they can report, so this intervention probably doesn't help them much.)
    2. For a given child, there is a 1 in 20 chance they experience CSA over 15 years, and the average duration is 3 years (1/5th of the 15 years), so there is a (1 in 20 * 1/5th = ) 1 in 100 chance that a given child is experiencing CSA in a given year.
    3. So the correct value for Assumption 2 is that 1 in 100 children are experiencing CSA in a given 12-month period. This makes the intervention less cost-effective.
  2. I think the £29.7K/year figure is wrong, and <£19.8K/year a better figure.
    1. Using the stats from your cited report:
    2. On your assumptions, this intervention causes 1 less year of CSA per disclosure, and the benefit per year is 1/3 of the harm to the victim ('cost as a consequence' in the link).
    3. Therefore, the intervention saves (1/3 x £59,300 = ) £19,800 per disclosure.
    4. On your assumptions, this intervention doubles the number of disclosures (from 10% chance to 20% chance).
    5. This does not double the 'cost in response' costs, because these are 'top down' costs. (See Section 2.5 of the report). However, doubling the number of reports probably would require an increase in 'costs in response'.
    6. I don't know what a sensible increase would be, but it would require more spending, and thus reduce the cost saved below the £19,800 above. This makes the intervention less cost-effective.
  3. I think the 1 in 20 figure might be wrong, and 1 in 10 a better figure.
    1. The 1 in 20 figure (4.8%) comes from asking a group of 11-17-year-olds.
    2. Asking 18-24-year-olds instead gave a figure of 1 in 10 (11.3%).
    3. The first number will be an underestimate (an 11-year-old might be asked, truthfully say no, and then experience CSA at 14; asking a group that is only partway through the under-18 'relevant period' skews the percentage downwards). This makes your intervention more cost-effective.

Combining these, the new cost-effectiveness is (1/5 * (19.8/29.7) * 2 * £4450 = ) £1190 averted per professional per year, which is £1680-£2520 per DALY.
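For anyone who wants to sanity-check the combination, here's the arithmetic as a minimal sketch; all inputs are the post's figures plus the three corrections above, nothing new:

```python
# Each correction is a multiplier on the post's original estimate.
# Point prevalence = lifetime risk x (avg duration / at-risk window):
point_prevalence = (1 / 20) * (3 / 15)               # = 1/100, vs the post's 1/20 (issue 1)
prevalence_correction = point_prevalence / (1 / 20)  # = 1/5
saving_correction = 19_800 / 29_700                  # <=£19.8K saved per disclosure, not £29.7K (issue 2)
lifetime_rate_correction = 2                         # 1-in-10 lifetime prevalence, not 1-in-20 (issue 3)
original_estimate_gbp = 4_450                        # the post's £ averted per professional per year

corrected = (prevalence_correction * saving_correction
             * lifetime_rate_correction * original_estimate_gbp)
print(round(corrected))  # 1187, i.e. roughly £1190 averted per professional per year
```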

I think it's possible that I've misunderstood some/all of these, so would appreciate sanity checks from others.

(Standard caveat, still only a single experience and not necessarily representative of all groups)
Some updates a year on:
General point: I did several things whilst 'strategising' (before term), then forgot about them during 'implementation' (during term). For example, I made SMART goals each term, but only remembered them during the semester review. Would strongly recommend setting aside ~1hr per month to read through your TOC and articles like this, in case you've missed things.
Backchaining: I didn't do enough of it.
SMART Goals for groups: I made them, didn't hit most of them, and didn't put too much stock in them. I think the specific numbers on goals (ie. 40 applications vs 30 applications) aren't too important, because they're not (fully) within your control and don't much change what you do (you would advertise the same either way). However, having the 'broad goals' (X applicants) frames the actions you take (advertising), so those are useful as part of the backchaining process.
SMART Goals for individuals: Tentatively EXTREMELY important. From experience, a semi-common failure this year was engaged members not working on anything specific. Each person having goals gives: 1. An incentive to make progress; 2. An opportunity to meet/1-1 to check progress; 3. A clearer idea of what everyone is aiming towards.
It's also really hard to do without sounding like you're giving people homework. I think it's very useful to create a (sub?)group culture where the default expectation is that everyone has a goal they're working on at all times. Suggestion:
1. Get your top 3 engaged organisers
2. Each set goals, have an accountability call/meeting each week to discuss progress (and actually hold each other accountable, the vibe should be 'friendly, but if I haven't done the thing I'm actually going to feel bad/embarrassed about it at the meeting')
3. Add highly engaged people to the call/meeting slowly (like 1-2/month) until it becomes a norm among a set group.
Outsourcing: Valuable - do it!
Personal Development: Personally, I should have spent ~3hr/wk less on EA organising and applied for jobs instead. Still strongly agree with having someone else be responsible for your development (Vice-Prez being responsible for Prez).
Safeguarding Values: Thanks for the link - article is now on my reading list! This didn't come up much this year, but will be a good personal reminder for me next year.
Opportunity vs Obligation: I think whenever you use an obligation framing, you should couple it with an opportunity framing. For example: "You really should give 10% of your income" is bad, sad and off-putting; "You really should give 10% of your income, because you can save several lives!" is better. (This second option might just be an opportunity framing in disguise).
Socials/Development: Agree socials should come soon after events. We didn't do this well enough.
Resources: EA Groups Resource Centre should be your top group organiser bookmark. OSP was very useful before term, and less useful (but still net-positive) during term time, depending on if there were any issues to discuss.

Hey Ben, here are some semi-critical thoughts I had reading this:

  • It seems like the roles you've identified require working in-person in the relevant countries (entrepreneur, government worker, maybe policy for taking meetings).
    • Are many/most diaspora individuals open to moving?
    • Would this 'requirement' to move significantly decrease the number of people available to reach?
  • Intuitively, the "1000 engage deeply --> 50 make career changes" seems high. I'd be surprised if 1 in 20 people who read a website (even thoroughly) later go on to make a career switch.
    • I'd (randomly) put it closer to 1/50 or 1/100.
    • However, maybe I'm not accounting enough for the audience being already predisposed to changing careers.
    • I'm not sure if the link to the 80K reports has info on this. I read the most recent report but couldn't see anything relevant.
  • What would be your success metrics for your MVP? Is it something like 'the website/articles get a lot of organic traffic' or 'the articles are shown to people and they find them useful / high-quality'?
    • The former seems more open to random variation; the latter harder to do.

Take this with salt - I don't have experience in any relevant fields. I also think it's a cool idea and worth exploring further! :-)

I've just come across this post (7 years after initial publication I think). Would be really interesting to hear if any of these people are still significantly involved with EA (directly via community engagement, or indirectly via job roles / donations to pressing causes). 

Hey Daria! 3 questions from me:

  1. Why do you think this is the most effective thing people can donate funds to right now? (Why do you think it’s more effective than these charities, for example: https://www.givewell.org/charities/top-charities)
  2. What data can you provide to back this up? (Ideally numerical data/stats)
  3. How much funding would each of the organisations linked be able to use effectively?

(These are the sort of questions that readers of this forum tend to care about most, so the fact that your post doesn’t address them much is probably some/most of the reason it’s been downvoted, in case you were confused)

Answer by JDLC

I received a DM from someone who wishes to remain anonymous, but made the following points in answer to the question:

  • TLDR: The Gates funding increase is likely a large counterfactual funding increase but hardly any funding increase in absolute terms
  • The foundation currently spends ~$9bn per year. This is the outcome of a (public) decision ~3 years ago to grow spending from ~$6bn p.a. at the time to a steady-state annual expenditure of $9bn, over a period of 2-3 years
  • This new update is only a very small increase in grants ($200bn over 20 years = $10bn p.a., an increase of only $1bn, i.e. 1/9th; arithmetic sketched below this list).
  • Since the $9bn decision, Warren Buffett withdrew his future contributions (also all public). It became clear through the reporting around this that the majority of Foundation contributions to date had actually been Buffett money, not Gates money. So one should have expected a pretty meaningful drop from the $9bn off the back of that, or for Gates to significantly step up his giving.
  • So it’s fair to say that this is a very meaningful counterfactual increase vs a world where the Foundation had dropped back down to $4/5/6bn.
  • It is not a meaningful increase in what the world of global health will see at all, especially once you compare the $1bn increase to the many billions of reduced spending from the US, UK, Germany, Switzerland, Belgium, etc.
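To make the 'hardly any increase in absolute terms' point explicit, here's the arithmetic with the (approximate) figures quoted above:

```python
pledge_total_bn = 200      # announced pledge, $bn over 20 years
pledge_years = 20
current_annual_bn = 9      # current steady-state spending, $bn p.a.

new_annual_bn = pledge_total_bn / pledge_years   # 10.0
increase_bn = new_annual_bn - current_annual_bn  # 1.0
print(increase_bn / current_annual_bn)           # ~0.111, i.e. about 1/9th more per year
```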

Considered writing a similar post about the impact of anti-realism in EA, but I’m going to write here instead. In short, I think accepting anti-realism is a bit worse/weirder for ‘EA as it currently is’ than you think:

Impartiality 

It broadly seems like the best version of morality available under anti-realism is contractualism. If so, this probably significantly weakens the core EA value of impartiality, in favour of only those with whom you have a ‘contract’. It might rule out spatially distant people; it might rule out temporally distant people (unless you have an ‘asymmetrical contract’, whereby we are obligated to future generations because past generations were obligated to us); and it probably rules out impartiality towards animals and other non-agents/morally incapable beings.

‘Evangelism’

EA generally seems to think that we should put resources into convincing others of our views (bad phrasing, but the gist is there). This seems much less compelling under anti-realism, because your views are literally no more correct than anyone else’s. You could counter that ‘we’ have thought more and can therefore help people who are less clear. You could counter that other people have inconsistent views (“Suffering is really bad but factory farms are fine”); however, there’s nothing compellingly bad about inconsistency on an anti-realist viewpoint either.

Demandingness

Broadly, turning morality into conditionals means a lot of the ‘driving force’ behind doing good is lost. It’s very easy to say “if I want to do good I should do X”, but then say “wow, X is hard, maybe I don’t really want to do good after all”. I imagine this affects a bunch of things that EA would like people to do, and makes it much harder in practice to cause change if you outright accept it’s all conditional.

Note: I’m using Draft Amnesty rules for this comment; I reckon that after a few hours of reflection I might disagree with some/all of these.
