
In my past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward. 

The ideas I think could have the highest impact are: 

  1. Government placements/secondments in key GHW areas (e.g. international development), and
  2. Expanded (ultra) high-net-worth ([U]HNW) advising

Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I’m excited about — particularly for students or recent graduates — could involve virtual GHW courses or action-focused student groups. 

I can’t commit to supporting any particular project based on these ideas ahead of time, because the likelihood of success would heavily depend on details (including the people leading the project). Still, I thought it would be helpful to articulate a few of the ideas I’ve been considering. 

I’d love to hear your thoughts, both on these ideas and any other gaps you see in the space!

Introduction

I’m Mel, a Senior Program Associate at Open Philanthropy, where I lead grantmaking for the Effective Giving and Careers program[1] (you can read more about the program and our current strategy here).

Throughout my time in this role, I’ve encountered great ideas, but have also noticed gaps in the space. This post shares a list of projects I’d like to see pursued, and would potentially want to support. These ideas are drawn from existing efforts in other areas (e.g., projects supported by our GCRCB team), suggestions from conversations and materials I’ve engaged with, and my general intuition. They aren’t meant to be a definitive roadmap, but rather a starting point for discussion.

At the moment, I don’t have capacity to more actively explore these ideas and find the right founders for related projects. That may change, but for now, I’m interested in gauging interest and identifying people with the right skills to take the ideas forward. If you’re interested, I’ve created an expression of interest form to collect potential leads. Please be aware that because of low capacity, I won’t be able to follow up with each respondent.

Project ideas

Fellowships & Placements

Placement orgs for governments and think tanks

Some organizations address expertise gaps across various fields of U.S. policy through programs that place talented individuals in government and think tanks (for example, via the Intergovernmental Personnel Act). I’d love to see more such initiatives for GHW. While the current U.S. political climate and USAID situation might make this difficult in the U.S., there could be opportunities in other countries.

Fellowships/Placements at GHW Organizations

We previously recommended a grant to Ambitious Impact (AIM) to support placing top candidates in incubated charities. Well-structured placements can be mutually beneficial: fellows gain career capital and training, while host organizations gain talented contributors for key projects.

That being said, there are some key challenges to address:

  • Talent Recruitment & Vetting: AIM’s strong selection process helped identify top candidates. Scaling this effort to a bigger pool of candidates would require robust talent identification systems.
  • Host Organization Capacity: Placements only work if organizations have clear projects and sufficient management bandwidth.

More, and different, effective giving organizations

More (U)HNW advising

In the last few months, I’ve heard time and again that we need more (U)HNW advising — “a Longview for GHW” is a common refrain. While there are some (U)HNW initiatives in the GHW space — some driven by single advisors leveraging their networks, others by teams focusing on certain niche areas — I think there’s room for more. UHNW individuals accounted for close to 40% of individual giving in recent years. Further, many national effective giving organizations receive a significant amount of their donations from HNW donors, suggesting that engaging more with wealthy individuals is a promising avenue. That being said, (U)HNW advising is a complicated endeavor, and the organizations with the highest chance of success are those with:

  • Connections to an existing network of prospective donors.
  • Enough staff to provide significant capacity for advising (and perhaps research).

Two existing examples in the GHW space are Founders Pledge and Generation Pledge (more below).

Targeting different niche demographics

I believe there are other promising niches that could be explored (e.g., celebrities or other wealthy groups). With these opportunities, you need the right founder(s); it seems very hard to succeed unless they belong to the target audience, as is the case with the organizations mentioned above.

Filling more geographic gaps

Asia remains a significant gap in the effective giving ecosystem. Cultural differences mean standard messaging may not resonate, but local founders who adapt effective giving messages to their contexts could be quite valuable.

Infrastructure support for GHW organizations

I recently recommended a grant to Good Impressions, which fills a gap in the ecosystem by supporting highly impactful organizations (across many areas) with marketing services. I’d love to see more of this for other aspects of organizational infrastructure. I’m not sure what the main needs are, but I’d guess that promising options include:

  • Recruitment
  • Fundraising
  • Communications

EA-inspired GHW courses

Courses strike me as a great way to broadly disseminate EA concepts by incorporating them into general GHW content. You can choose different approaches based on your target audience.

BlueDot Impact for GHW

BlueDot Impact runs virtual courses to support high-impact careers in AI safety (they’ve also run courses on biosecurity in the past). They aim to help their participants with knowledge, skills and connections. The courses have been useful for people who want to transition into those spaces (e.g., students, recent graduates, professionals looking to switch paths), and for professionals already working in those spaces who want to have more impact.

I’d like to see different versions of this for different fields within GHW, and for different key audiences (e.g., civil servants, staff at relevant organizations, public health students).

Incorporating EA content into university courses

Another avenue that could be interesting to explore is incorporating more EA content into relevant university courses. A couple of examples:

  • I recently recommended a grant to Armando Meier, who teaches health economics to undergraduate and graduate students at the University of Basel. He’ll use the grant to create and run two courses that include cause prioritization components and information on how to choose impactful research questions/areas for a dissertation.
  • Another of our grantees, High Impact Medicine, has incorporated high impact career concepts into the curriculum of a selective course at a medical school (Georgetown University School of Medicine in the U.S.).

This is higher touch/lower scale than the virtual course option mentioned above. I think the two represent good complementary strategies.

Useful GHW events

I’m still not sure what the most useful types of GHW events are, but I think we should explore different options to test whether any of them seem particularly promising. I suggest a couple of ideas below.

Events bringing together EA and mainstream GHD orgs

Unlike in some other EA cause areas, the mainstream global health and development (GHD) community and the EA GHD community share a lot of similarities. RCTs and cost-effectiveness analyses are not new to the GHD world. I do believe the EA world has something to offer the wider community (e.g., the focus on cause prioritization and the relentless attempt to maximize impact), but I also think the EA community has a lot to learn from the mainstream (e.g., what people with ample field experience have learned about tractability and beneficiary preferences). I think we should be networking more with GHD orgs outside of EA. I’m not sure what the right type of event is, but it could include a mix of:

  • Teams from EA orgs attending a major GHD conference
  • Inviting representatives from major GHD orgs to EA Global
  • Creating new, dedicated events with representatives from both communities (in person and/or virtually)

Career panels or similar

Career-focused events targeting specific groups, such as late-stage PhD students in economics or public health, could be impactful during critical decision-making periods.

More, and different, student groups

Action-focused student groups

I like student groups as a way to bring more people into the community, though I’m probably biased because I first got into EA through a student group. It wasn’t a traditional student group, though — it was called the Philanthropy Advisory Fellowship. An interdisciplinary group of (mostly grad) students met weekly to work on a research project requested by foundations that wanted our help to increase the impact of their donations. We were also given talks and trainings on various relevant topics. 

Again, I’m likely quite biased, but I loved that model; the focus on action made it much more engaging. The fellowship set me on the path to my current role, and I think it led many other members to rethink their career paths. That specific model is hard to scale, especially because it requires the organizer to have or build connections with a number of foundations, but I think the action focus is a great way to increase engagement (and hopefully impact, as some of those actions lead to change!). For example, other action-focused groups might:

  • Lead pledge drives or other fundraising efforts
  • Give talks about EA or effective giving in various circles
  • Carry out short research projects

Policy-focused grad student groups

I think there could be value in running EA groups for niche populations of students who might go on to have high-leverage roles. Policy students strike me as one of the clearer targets for this.

Less thought-through ideas

Here are a few additional, more speculative ideas:

  • A colleague recently pointed out that there is a lack of strong, accessible global health content to share with relevant decision-makers (e.g., funders). This suggests that starting a media training fellowship and network (similar to Tarbell but focused on GHW) could be useful.
  • There are a few LMICs that could be great hubs for new projects. India and Nigeria strike me as two examples.
    • Maybe we need more organizations like Ambitious Impact, each based in one of those countries and focused entirely on within-country projects.
    • I could also see myself supporting research and networking that contributes to LMIC-led policy and advocacy efforts, like my recent prizes to Uttej and Haindavi for their work to improve social welfare policy in India.
    • I imagine there are plenty more initiatives that would benefit from being founded in those countries.
  • Getting celebrity ambassadors/spokespeople to shine a spotlight on major GHW issues and cost-effective ways to address them.
  • Running prizes to incentivize the launch of new impactful initiatives in GHW.

Perceived impact and fit

Of the ideas listed, my guesses for which are likely to be the most impactful are:

  • Placement orgs for governments and think tanks
  • More (U)HNW advising
  • New communities with shared principles

That being said, I think the likelihood of success for each of those is much lower than for many of the other ideas, and that all three require very specific leadership talent to go well. I expect that leadership quality would be by far the most important determinant of success.

For students or recent graduates, initiatives like BlueDot Impact for GHW or policy/action-focused student groups may be more tractable and still worth pursuing.

I’d also be excited to see the other items from the list pursued, and I’m sure there are many other ideas that are worth exploring. Keen to hear from you about those!

  1. ^

    Formerly called Effective Altruism (Global Health and Wellbeing).
