David Bernard

Program Officer @ Coefficient Giving
816 karma · Joined · Working (6-15 years) · Fernando de la Mora, Paraguay

Bio

Participation (4)

Program Officer at Coefficient Giving. Former GCR Cause Prio researcher at CG and researcher on multiple teams at Rethink Priorities. Did a PhD at the Paris School of Economics.

Comments (34)

You should turn your project into an organization

If your team's work is worth doing, it's worth doing as an org

When a few people are doing good work together, the question of whether to formally incorporate into an organization can feel like a distraction from doing the actual work. Why take time away from your exciting research project to create an org? There are some real up-front costs to incorporating – dealing with bureaucracy, legal overhead, governance obligations – but I think the benefits of doing so are usually greater and underappreciated.

Orgs are sticky

A project that loses its founder usually just ends. An org that loses its founder can usually recruit a replacement and persist. Orgs can outlast their founders in a way that projects almost never do, because orgs have a persistent identity, infrastructure, culture and mutual commitments that projects lack, and these allow them to live on. In other words, the org itself is a form of capacity, with a ‘spirit’ that survives the individuals involved. If the work matters, you don't want it to depend on any one person choosing to stay, and forming an org reduces that dependency.

Orgs can hire

Orgs hire people; people join projects. The difference is larger than it sounds. There's a large pool of people who will respond to a job posting at a real organization with a website, but a much smaller pool of people who would respond to a vaguer ask to join a project. When you hire someone, they quit their current job, accept a salary, and take on a defined role with actual responsibility and accountability. When you add someone to a project, they help out at whatever level of commitment they find convenient, which is often not that much, and even that can change at any point. The quality and reliability of the people you can attract and retain is substantially different, and orgs give you the option value to grow in ways that projects don't.

Orgs are legitimate

A formal organization is a more credible actor along basically every relevant dimension. Funders take you more seriously if you have good governance. Potential hires prefer to work at places with some structure and processes, and want to be able to tell people confidently where they would work. For journalists and policymakers, a real organization is a credible signal that engaging with you is worth their time. You can also make credible long-term commitments – receive multi-year grants or investments, make long-term hires, establish lasting institutional relationships – in a way a project simply can't. These advantages compound over time in ways that are easy to underestimate at the start.

Orgs force clarity

Incorporating forces you to answer questions that a project lets you defer indefinitely: who's actually in, what you are aiming for, and what everyone's role is. Projects have no forcing function to resolve these questions productively, so they can stay unresolved for years and eventually become the reason people drift away and the whole thing falls apart. The forced clarity of forming an org is usually good, even when it's uncomfortable in the moment.

The marginal cost of forming an org is low but the benefits compound. If your team’s work is worth doing, it's probably worth doing as an org.

Post on Substack

A couple of further questions that would help me interpret the results:

"with 20 multiply imputed datasets" - What does this mean? What are you imputing and how are you imputing it? What are the results if you don't do any imputation?
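For readers unfamiliar with the term: "multiply imputed datasets" usually means missing values were filled in m separate times, the analysis was run on each completed dataset, and the per-dataset results were combined with Rubin's rules. A minimal sketch of that pooling step (the numbers here are illustrative, not from the study under discussion):

```python
# Rubin's rules for pooling results across m multiply imputed datasets.
# Illustrative sketch only; the inputs below are made-up numbers.

def pool_rubin(estimates, variances):
    """Combine per-dataset point estimates and variances into one result."""
    m = len(estimates)
    q_bar = sum(estimates) / m                    # pooled point estimate
    w_bar = sum(variances) / m                    # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = w_bar + (1 + 1 / m) * b                   # total variance
    return q_bar, t

est, var = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.04, 0.04])
```

The key point is that the total variance `t` includes a between-imputation term, so uncertainty from the imputation itself is carried into the final standard errors.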

How can you say the effect strengthens or is maintained after 1 month if you don't observe the control group outcomes after 1 month? Generally, control group outcomes continue to improve over time even without treatment (as you can see by doing control-group pre-post comparisons for every outcome), so it doesn't seem like you can claim much about whether the effect grows or shrinks over time.

I have a paper that can help answer this, which uses J-PAL and IPA studies! However, if you think observational-study overestimates come from selection bias during the publication process, our result doesn't say anything about that.

https://www.jondequidt.com/pdfs/Lalonde30.pdf

"First, we find that there is little bias on average. Using our best-performing observational method (DDML), there is a statistically insignificant and modest negative mean bias of −0.025 standard deviations. This implies that observational studies do not systematically over- or underestimate the welfare impact of the programs they evaluate."

Thanks for flagging this, Ozzie. I led the GCR Cause Prio team for the last year before it was wound down, so I can add some context.

The honest summary is that the team never really achieved product-market fit. Despite the name, we weren't really doing “cause prioritization” as most people would conceive of it. GCR program teams have wide remits within their areas and more domain expertise and networks than we had, so the separate cause prio team model didn't work as well as it does for GHW, where it’s more fruitful to dig into new literatures and build quantitative models. In practice, our work ended up being a mix of supporting a variety of projects for different program teams and trying to improve grant evaluation methods. GCR leadership felt that this set-up wasn’t on track to answer their most important strategy and research questions and that it wasn’t worth the opportunity cost of the people on the team. GCR leadership are considering alternative paths forward, though haven’t decided on anything yet.

I don't think there are any other comparably major structural changes at Coefficient to flag, other than that we're trying to scale Good Ventures' giving and work with other partners, as described in our name change announcement post. I'll also note that the Worldview Investigation team wound down in H2, although that was because team members left for other high-impact roles (e.g. Joe), not through a top-down decision. This means there's no longer much dedicated pure research capacity within GCR, though grantmaking here is fairly contiguous with research in practice.

Thanks for flagging this, I just made a submission!

Section 2.2.2 of their report is titled "Choosing a fixed or random effects model". They discuss the points you make and clearly say that they use a random effects model. In section 2.2.3 they discuss the standard measures of heterogeneity they use. Section 2.2.4 discusses the specific 4-level random effects model they use and how they did model selection.
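For readers who want the fixed- vs random-effects distinction made concrete: a random-effects model adds a between-study variance term (tau²) to each study's weight, so heterogeneous studies are down-weighted less aggressively toward the most precise one. A minimal sketch of the classic DerSimonian-Laird estimator follows; this is not the 4-level model the report actually uses, and the inputs are illustrative:

```python
# Minimal DerSimonian-Laird random-effects pooling, for contrast with a
# fixed-effect model. Illustrative only: NOT the 4-level model in the report.

def dersimonian_laird(effects, variances):
    """Return (pooled effect, tau^2) under a basic random-effects model."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

pooled, tau2 = dersimonian_laird([0.1, 0.3, 0.2], [0.01, 0.01, 0.01])
```

When observed heterogeneity (Q) is no larger than expected by chance, tau² is truncated at zero and the estimate collapses back to the fixed-effect one.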

I reviewed a small section of the report prior to publication but none of these sections, and it only took me 5 minutes now to check what they did. I'd like the EA Forum to have a higher bar (as Gregory's parent comment exemplifies) before throwing around easily checkable suspicions about what (very basic) mistakes might have been made.

Innovations for Poverty Action just released their Best Bets: Emerging Opportunities for Impact at Scale report. It covers what they consider the best evidence-backed opportunities in global health and development. The opportunities are:

  1. Small-quantity lipid-based nutrient supplements to reduce stunting
  2. Mobile phone reminders for routine childhood immunization
  3. Social signaling for routine childhood immunization
  4. Cognitive behavioral therapy to reduce crime
  5. Teacher coaching to improve student learning
  6. Psychosocial stimulation and responsive care to promote early childhood development
  7. Soft-skills training to boost business profits and sales
  8. Consulting services to support small and medium-sized businesses
  9. Empowerment and Livelihoods for Adolescents to promote girls’ agency and health
  10. Becoming One: Couples’ counseling to reduce intimate partner violence
  11. Edutainment to change attitudes and behavior
  12. Digital payments to improve financial health
  13. Childcare for women’s economic empowerment and child development
  14. Payment for ecosystem services to reduce deforestation and protect the environment

Thanks Vasco, I'm glad you enjoyed it! I corrected the typo and your points about inverse-variance weighting and lognormal distributions are well-taken.

I agree that doing more work to specify what our priors should be in this sort of situation is valuable although I'm unsure if it rises to the level of a crucial consideration. Our ability to predict long-run effects has been an important crux for me hence the work I've been doing on it, but in general, it seems to be more of an important consideration for people who lean neartermist than those who lean longtermist.

Hi Michael, thanks for this.

On 1: Thorstad argues that if you want to hold both claims (1) Existential Risk Pessimism - per-century existential risk is very high, and (2) Astronomical Value Thesis - efforts to mitigate existential risk have astronomically high expected value, then TOP is the most plausible way to jointly hold both. He does look at two arguments for TOP - space settlement and an existential risk Kuznets curve - but says these aren't strong enough to ground TOP, and that we instead need a version of TOP that appeals to AI. It's fair to think of this piece as starting from that point, although the motivation for appealing to AI here was more that it seems to be the most compelling version of TOP to x-risk scholars.

On 2: I don't think I'm an expert on TOP and was mostly aiming to summarise premises that seem to be common, hence the hedging. Broadly, I think you only need the 4 claims that formed the main headings: (1) high levels of x-risk now, (2) significantly reduced levels of x-risk in the future, (3) a long and valuable / positive-EV future, and (4) a moral framework that places a lot of weight on this future. I think the slimmed-down version of the argument focuses solely on AI as it's relevant for (1), (2) and (3), but as I say in the piece, I think there are potentially other ways to ground TOP without appealing to AI, and I'd be very keen to see those articulated and explored more.

(2) is the part where my credences feel most fragile, especially the claims that AI will be sufficiently capable to drastically reduce both other x-risks and risk from misaligned AI, and that AI will remain aligned near indefinitely. It would be great to have a better sense of how difficult various x-risks are to solve and how powerful an AI system we might need to near-eliminate them. No unknown unknowns seems like the least plausible premise of the group, but its very nature makes it hard to know how to cash this out.

Yep, I agree you can generate the time of perils conclusion if AI risk is the only x-risk we face. I was attempting to describe empirically a view that seems to be popular in the x-risk space, that x-risks besides AI are also cause for concern, but you're right that we don't necessarily need this full premise.
