Bio

Feedback welcome: www.admonymous.co/mo-putera 

I currently work with CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, and general org-boosting to support policies that incentivise innovation and ensure access to antibiotics to help combat AMR.

I was previously an AIM Research Program fellow, was supported by an FTX Future Fund regrant and later Open Philanthropy's affected grantees program, and before that I spent 6 years doing data analytics, business intelligence, and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

Comments

I admire influential orgs that publicly change their minds in response to external feedback, and GiveWell is as usual exemplary of this (see also their grant "lookbacks"). From their recently published Progress on Issues We Identified During Top Charities Red Teaming, here's how external feedback changed their bottom-line grantmaking:

In 2023, we conducted “red teaming” to critically examine our four top charities. We found several issues: 4 mistakes and 10 areas requiring more work. We thought these could significantly affect our 2024 grants: $5m-$40m in grants we wouldn’t have made otherwise and $5m-$40m less in grants we would have made otherwise (out of ~$325m total).

This report looks back at how addressing these issues changed our actual grantmaking decisions in 2024. Our rough estimate is that red teaming led to ~$37m in grants we wouldn't have made otherwise and prevented ~$20m in grants we would have made otherwise, out of ~$340m total grants. The biggest driver was incorporating multiple sources for disease burden data rather than relying on single sources.1 There were also several cases where updates did not change grant decisions but led to meaningful changes in our research. 
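As a quick sanity check on those headline figures, here's a back-of-envelope sketch in Python; the dollar amounts are the ones quoted above, and the "share of grant dollars affected" framing is my own rough summary rather than anything GiveWell reports:

```python
# Rough share of GiveWell's 2024 grantmaking that red teaming moved, using the
# figures quoted above (all approximate).
grants_made_due_to_red_teaming = 37e6   # ~$37m in grants they wouldn't have made otherwise
grants_prevented_by_red_teaming = 20e6  # ~$20m in grants they would have made otherwise
total_grants_2024 = 340e6               # ~$340m total grants

share_affected = (grants_made_due_to_red_teaming + grants_prevented_by_red_teaming) / total_grants_2024
print(f"~{share_affected:.0%} of 2024 grant dollars moved one way or the other")  # ~17%
```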

Some self-assessed progress that caught my eye — incomplete list, full one here; these "led to important errors or... worsened the credibility of our research" (0 = no progress made, 10 = completely resolved):

  • Failure to engage with outside experts (8/10): We spent 240 days at conferences/site visits in 2024 (vs. 60 in 2023). We think this type of external engagement helped us avoid ~$4m in grants and identify new grant opportunities like Uduma water utility ($480,000). We've established ongoing relationships with field experts. (more)
  • Failure to check burden data against multiple sources (8/10): By using multiple data sources for disease burden, we made ~$34m in grants we likely wouldn't have otherwise and declined ~$14m in grants we probably would have made. We've implemented comprehensive guidelines for triangulating data sources. (more)
  • Failure to account for individuals receiving interventions from other sources (7/10): We were underestimating how many people would get nets without our campaigns, reducing cost-effectiveness by 20-25%. We've updated our models but have made limited progress on exploring routine distribution systems (continuous distribution through existing health channels) as an alternative or complement to our mass campaigns. (more)
  • Failure to estimate interactions between programs (7/10): We adjusted our vitamin A model to account for overlap with azithromycin distribution (reducing effectiveness by ~15%) and accounted for malaria vaccine coverage when estimating nets impact. We've developed a framework to systematically address this. (more)

(As an aside, I've noticed plenty of claims, both on the forum and elsewhere, of cost-effectiveness figures that beat GW's top charities, and I basically never give them the credence I'd give to GW's own estimates. That's because of the kind of (usually downward) adjustments mentioned above, like beneficiaries receiving interventions from other sources or interactions between programs, and because of GW's sheer reasoning thoroughness behind those adjustments; seriously, click on any of those "(more)"s.)

Some other issues they'd "been aware of at the time of red teaming and had deprioritized but that we thought were worth looking into following red teaming" — again incomplete list, full one here:

  • Insufficient attention to inconsistency across cost-effectiveness analyses (CEAs) (8/10): We made our estimates of long-term income effects of preventive health programs more consistent (now 20-30% of benefits across top charities vs. previously 10-40%) and fixed implausible assumptions on indirect deaths (deaths prevented, e.g., by malaria prevention that aren’t attributed to malaria on cause-of-death data). We've implemented regular consistency checks. (more)
  • Insufficient attention to some fundamental drivers of intervention efficacy (7/10): We updated our assumptions about net durability and chemical decay on nets (each changing cost-effectiveness by -5% and 11% across geographies) and consulted experts about vaccine efficacy concerns, but we haven't systematically addressed monitoring intervention efficacy drivers across programs. (more)
  • Insufficient sideways checks on coverage, costs, and program impact (7/10): We funded $900,000 for external surveys of Evidence Action's water programs, incorporated additional DHS data in our models, and added other verification methods. We've made this a standard part of our process but think there are other areas where we’d benefit from additional verification of program metrics. (more)
  • Insufficient follow-up on potentially concerning monitoring and costing data (7/10): We’ve encouraged Helen Keller to improve its monitoring (now requiring independent checks of 10% of households), verified AMF's data systems have improved, and published our first program lookbacks. However, we still think there are important gaps. (more)

I always had the impression GW engaged outside experts a fair bit, so I was pleasantly surprised to learn they thought they weren't doing enough of it and then actually followed through so seriously. This is an A+ example of organisational commitment to (and follow-through on) self-improvement, so I'd like to quote this section in full:

In 2024, we spent ~240 days at conferences or site visits, compared to ~60 in 2023. We spoke to experts more regularly as part of grant investigations, and tried a few new approaches to getting external feedback. While it’s tough to establish impact, we think this led to four smaller grants we might not have made otherwise (totalling ~$1 million) and led us to deprioritize a ~$10 million grant we might’ve made otherwise.

More detail on what we said we’d do to address this issue and what we found (text in italics is drawn from our original report):

  • More regularly attend conferences with experts in areas in which we fund programs (malaria, vaccination, etc.).
    • In 2024, our research team attended 16 conferences, or ~140 days, compared to ~40 days at conferences in 2023.35
    • We think these conferences helped us build relationships with experts and identify new grant opportunities. Two examples:
      • A conversation with another funder at a conference led us to re-evaluate our assumptions on HPV coverage and ultimately deprioritize a roughly $10 million grant we may have made otherwise.36
      • We learned about Uduma, a for-profit rural water utility, at a conference and made a $480,000 grant to them in November 2024.37
    • We also made more site visits. In 2023, we spent approximately 20 days on site visits. In 2024, the number was approximately 100 days.38
  • Reach out to experts more regularly as part of grant investigations and intervention research. We’ve always consulted with program implementers, researchers, and others through the course of our work, but we think we should allocate more relative time to conversations over desk research in most cases.
    • Our research team has allocated more time to expert conversations. A few examples:
      • Our 2024 grants for VAS to Helen Keller International relied significantly on conversations with program experts. Excluding conversations with the grantee, we had 15 external conversations.
      • We’ve set up longer-term contracts with individuals who provide us regular feedback. For example, our water and livelihoods team has engaged Daniele Lantagne and Paul Gunstensen for input on grant opportunities and external review of our research.
      • We spoke with other implementers about programs we’re considering. For example, we discussed our 2024 grant to support PATH’s technical assistance to support the rollout of malaria vaccines with external stakeholders in the space.39
    • This led to learning about some new grant opportunities. For example:
  • Experiment with new approaches for getting feedback on our work.
    • In addition to the above, we tried a few other approaches we hadn’t (or hadn’t extensively) used before. Three examples:
      • Following our red teaming of GiveWell’s top charities, we decided to review our iron grantmaking to understand what were the top research questions we should address as we consider making additional grants in the near future. We had three experts review our work in parallel to internal red teaming, so we could get input and ask questions along the way.41 We did not do this during our top charities red teaming, in the report of which we wrote “we had limited back-and-forth with external experts during the red teaming process, and we think more engagement with individuals outside of GiveWell could improve the process.”
      • We made a grant to Busara to collect qualitative information on our grants to Helen Keller International's vitamin A supplementation program in Nigeria.42
      • We funded the Center for Global Development to understand why highly cost-effective GiveWell programs aren’t funded by other groups focused on saving lives. This evaluation was designed to get external scrutiny from an organization with expertise in global health and development, and by other funders and decision-makers in low- and middle-income countries.

Some quick reactions:

  • I like that GW thinks they should allocate more time to expert conversations vs desk research in most cases
  • I like that GW are improving their own red-teaming process by having experts review their work in parallel
  • I too am keen to see what CGD find out re: why GW top-recommended programs aren't funded by other groups you'd expect to do so
  • the Zipline exploratory grant is very cool, I raved about it previously
  • I wouldn't have expected that the biggest driver in terms of grants made/not made would be failure to sense check raw data in burden calculations; while they've done a lot to redress this there's still a lot more on the horizon, poised to affect grantmaking for areas like maternal mortality (prev. underrated, deserves a second look)
  • funnily enough, they self-scored 5/10 on "insufficient focus on simplicity in cost-effectiveness models"; as someone who spent my whole corporate career pained by working with big messy spreadsheets, and who's also checked out GW's CEAs over the years, I think they're being a bit harsh on themselves here...

Ben Kuhn has a great essay about how 

all my favorite people are great at a skill I’ve labeled in my head as “staring into the abyss.”1

Staring into the abyss means thinking reasonably about things that are uncomfortable to contemplate, like arguments against your religious beliefs, or in favor of breaking up with your partner. It’s common to procrastinate on thinking hard about these things because it might require you to acknowledge that you were very wrong about something in the past, and perhaps wasted a bunch of time based on that (e.g. dating the wrong person or praying to the wrong god). However, in most cases you have to either admit this eventually or, if you never admit it, lock yourself into a sub-optimal future life trajectory, so it’s best to be impatient and stare directly into the uncomfortable topic until you’ve figured out what to do. ...

I noticed that it wasn’t just Drew (cofounder and CEO of Wave) who is great at this, but many of the people whose work I respect the most, or who have had the most impact on how I think. Conversely, I also noticed that for many of the people I know who have struggled to make good high-level life decisions, they were at least partly blocked by having an abyss that they needed to stare into, but flinched away from.

So I’ve come to believe that becoming more willing to stare into the abyss is one of the most important things you can do to become a better thinker and make better decisions about how to spend your life.

I agree, and I think there's an organisational analogue as well, which GiveWell exemplifies above.

CE/AIM-incubated orgs run lean. (Some past discussion here if you're interested.) I also don't live in a high CoL country, which helps. 

I really like Bob Fischer's point #4 from deep within the comment threads of his recent post and thought to share it more widely; it seemed like wise advice to me:

FWIW, my general orientation to most of the debates about these kinds of theoretical issues is that they should nudge your thinking but not drive it. What should drive your thinking is just: "Suffering is bad. Do something about it." So, yes, the numbers count. Yes, update your strategy based on the odds of making a difference. Yes, care about the counterfactual and, all else equal, put your efforts in the places that others ignore. But for most people in most circumstances, they should look at their opportunity set, choose the best thing they think they can sweat and bleed over for years, and then get to work. Don't worry too much about whether you've chosen the optimal cause, whether you're vulnerable to complex cluelessness, or whether one of your several stated reasons for action might lead to paralysis, because the consensus on all these issues will change 300 times over the course of a few years.

It's mind-blowing to me that AMF's immediate funding gap is $462M for 2027-29. That's roughly 58,000-154,000 lives (mostly under-5 children) at $3-8k per life saved, maybe fewer going forward due to evolving resistance to insecticides, but that wouldn't change the bottom line that this seems to be a gargantuan ball dropped. Last time, AMF's immediate funding gap was over $300M for 2024-26, so it's grown ~50%(!) this time round. Both times the main culprit was the same: the Global Fund's replenishment falling short of its target, which affects programmatic planning in countries. I'd like to think we're collectively doing our part (e.g. last year GiveWell directed $150M to AMF, more than to any other charity, which by their reckoning is expected to save ~27k lives over the next 1-2 years), but it's still nuts to me that such a longstanding, high-profile, "shovel-ready" giving opportunity as AMF can still have such a big and growing gap!
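A rough back-of-envelope for that range, assuming the $3-8k per-life-saved figure above (a GiveWell-style ballpark I'm using, not an AMF-specific estimate):

```python
# Back-of-envelope: lives not saved if AMF's stated 2027-29 gap goes unfilled.
funding_gap = 462e6                                   # AMF's stated immediate gap for 2027-29
cost_per_life_low, cost_per_life_high = 3_000, 8_000  # assumed $/life-saved range

lives_upper = funding_gap / cost_per_life_low    # optimistic cost -> ~154,000 lives
lives_lower = funding_gap / cost_per_life_high   # conservative cost -> ~58,000 lives
print(f"Unfilled gap ~ {lives_lower:,.0f} to {lives_upper:,.0f} lives")

# Growth vs the 2024-26 round, which was "over $300M":
previous_gap = 300e6
print(f"Gap growth ~ {funding_gap / previous_gap - 1:.0%}")  # ~54%, i.e. roughly 50%
```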

After looking into Rethink's methodology, I feel they grossly overestimate the percentage chances of small animal sentience, and I'm not compelled by associating negative-stimuli responses with meaningful feelings of pain.

Just wanted to link to your post on this for readers who haven't seen it, as I really appreciated it: Is RP's Moral Weights Project too animal friendly? Four critical junctures 

And your flow diagram:

Do you have a sense as to why people haven't quite bridged the inferential gap between wherever they are and your work, despite your (patient, repeated, very thorough) attempts to explain? 

Out of curiosity, what do you think of GPT5-medium's attempt at sketching an answer to Seth's request for the "start here to understand my work" post?

I just learned via Martin Sustrik about the late Sofia Corradi,

the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”

Sustrik points out that none of the glowing obituaries for her mention the sheer scale of Erasmus. The Fulbright in the US is the 2nd largest comparable program, but it's a very distant second:

So far, approximately sixteen million people have taken part in the exchanges. That amounts to roughly 3% of the European population. And with the ever-growing participation rates the ratio is going to gradually get even higher.

In short, this thing is HUGE.

Sustrik argues that the Erasmus programme is gargantuan-scale social engineering done right:

A substantial portion of students actually does want to spend some time abroad. It’s no different from the Western European marriage pattern, where young people left their parental homes to work as servants, farmhands, or apprentices before they married and set up their own households.

The much-maligned idea of social engineering, in this case, doesn’t mean forcing people to do something they don’t want to do. It means removing the obstacles that prevent them from doing what they already want.

Before Erasmus, studying abroad was seen as having fun rather than as serious academic work, something to be punished rather than rewarded. Universities were reluctant to recognize studies completed elsewhere. Erasmus, with its credit transfer system, changed that and thus unleashed a massive wave of student exchanges.

The backstory to how Sofia came to focus on Erasmus is touching:

In 1957, in her fourth year of studies, she received the opportunity to study in the United States thanks to a Fulbright scholarship. She spent a year at Columbia University where she attended a master’s course in comparative university legislation.[3][4] Upon her return to Rome in 1958, however, her degree was not recognised by the Italian educational system.[4][5] She recalled how she felt humiliated in front of other students as her time in the US was dismissed as a "vacation", and how a functionary had told her "Columbia, you say? I've never heard of that before".[5][6] She had to spend an extra year to obtain her Italian degree.[5] The experience led her to the idea of creating a system of recognition of courses taken abroad and the promotion of university exchanges.[5][7]

Such ideas had already been put forward in Italy, but without any concrete results.[7] After graduating, Corradi pursued research on the right to education at the United Nations and became a scientific consultant for the Association of Rectors of Italian Universities at the age of 30.[5][6] It was a post she gained in part thanks to her diploma from Columbia, and she used her position to lobby intensively for her idea of a university exchange programme and mutual recognition. ... (more on Wikipedia)

I've previously wondered what a shortlist of people who've beneficially impacted the world at the scale of ~100 milliBorlaugs might look like, and suggested Melinda & Bill Gates and Tom Frieden. (A "Borlaug" is a unit of impact I made up; it means a billion lives saved, so ~100 milliBorlaugs is ~100 million lives.) If you buy Corradi's argument that the Erasmus programme is at heart really a peace programme and that it deserves some credit for the long period of relative peace we've experienced globally post-WWII, then Sofia Corradi seems eminently deserving of inclusion.

Gemini 3 Pro's attempt to visualise Sofia Corradi's beneficial impact in Shapley value terms:

Let's define our terms:

  • Y-Axis: "Level of European Youth-Driven Integration" (0-100%)
    • This is not economic integration (like the Euro) or political integration (like the Parliament).
    • It specifically measures the socio-cultural intermingling, mutual understanding, and reduction of nationalistic stereotypes among young Europeans. This is the "peace program" aspect.
    • It starts at a low baseline post-WWII, as even with the EEC, borders remained strong culturally.
  • X-Axis: Time (1950 - 2025)
  • Key Event 1: Treaty of Rome (1957) - Establishes the EEC. A step towards economic integration, but limited youth movement.
  • Key Event 2: Erasmus Program Launch (1987) - The crucial inflection point.
  • Key Event 3: Schengen Agreement (1995) - Eliminates internal border checks. Facilitates Erasmus, but Erasmus already laid the cultural groundwork.
  • Key Event 4: Euro Adoption (1999/2002) - Further economic integration, making cross-border life easier.

Now, let's plot two scenarios:

  1. "Actual Timeline: With Corradi & Erasmus" (Solid Blue Line): Represents the observed trajectory of youth integration.
  2. "Counterfactual Timeline: Without Corradi & Delayed Erasmus" (Dashed Red Line): This is where we attribute Corradi's Shapley value.
    • Delayed Launch: As argued, without her tireless 30-year lobbying, a pan-European student exchange might have emerged, but likely much later (e.g., 2002).
    • Fragmented Design: Even if it launched, it would likely have been a collection of bilateral agreements, lacking the standardized credit transfer (her key design contribution). This means a slower, less efficient ramp-up of integration.

The "Area Between the Curves" will visually represent Sofia Corradi's Shapley Value, showing the accelerated and enhanced integration due to her efforts.

Sounds like a cause X, helping people gain clarity on tractable subsets of the general issue you mentioned... although as I write this I realise 80K and Probably Good etc. are a thing; their qualitative advice is great, and they've argued against doing the quantitative version. (Some people disagree, but they're in the minority and it hasn't really caught on in the couple of years I've paid attention to this.)

Thanks for sharing, really energising to see. Their blog post has really nice illustrations, sharing some of them below.

These are some of the results from the $50M they've donated so far, via GiveDirectly, to every adult in the Khongoni subdistrict of Malawi:

Impact stats from Canva's cash transfers

This was what recipients spent it on:

Average amount spent from each direct cash transfer:

This is the framework Canva came up with to track impact on recipients via continuous surveys:

Guiding stars for basic human needs

In the next phase with their $100M commitment to GiveDirectly, these are some of the research questions they're helping GD explore:

Next phase of cash transfers - focuses

And from GiveDirectly's side of the story — the first 2 phases were funded entirely by Canva:

Despite the transfers doubling Khongoni's local GDP, inflation was negligible; quoting GD:

Markets were able to absorb large-scale transfers without driving significant price inflation. That gives us strong evidence that this model can be scaled responsibly and rapidly without triggering harmful inflation.

This stability was a product of how recipients and markets responded:

  • 🗓️ Recipients didn’t spend all at once: Findings showed households did not spend all of the transfer immediately and gradually increased their spending over months.
  • 🚲 They also shopped around: Recipients had access to multiple markets, locally and in nearby Lilongwe city, allowing them to choose between sellers if someone didn’t have what they needed or tried to raise prices.
  • 🤝 Vendors chose not to raise prices: Traders reported keeping prices steady to maintain trust, saying opportunistic hikes would damage their reputation once the cash was gone.
  • 📦 Markets adapted to demand: Many vendors simply ordered more stock and new traders entered markets, both without significantly driving up prices.

I'm a fan of cash transfers and think they're a tad underrated in EA circles, so I'll end by quoting Canva on cash:

In over 500 randomised controlled trials, direct cash transfers have significantly improved lives:

You might be wondering why cash transfers are able to have an impact in so many different areas. We believe the impact lies in giving people the autonomy to choose what they need most. In all the examples we’ve seen, each person put the money towards ensuring they and their families were able to access their most basic needs while also putting the money to work to generate income in an ongoing manner.

Research keeps revising the estimated cost-effectiveness of cash transfers upwards. Last year, GiveWell reviewed the latest evidence from GiveDirectly and increased their estimates of cost-effectiveness by 3 to 4 times.
