Frank Fredericks

Executive Director @ One for the World
1fortheworld.org

TL;DR: As the sponsoring organization, we are extremely skeptical of the value of these pledges. We attribute their low value to the temporary nature of these pledges, as compared to permanent alternatives, though we need more testing to verify this hypothesis. We do, however, think this method of tabling is a great top-of-funnel tactic, and we commend Middlebury's team!

Thank you for summarizing this! As the sponsoring organization of this work, I’m happy to share how we are thinking about it, because I think we have a much more bearish interpretation of these efforts so far.

For background, One For the World (OFTW) has had a chapter at Middlebury since January 2025, founded by the awesome folks behind this blog post. To be an OFTW chapter means you promote our 1% Pledge on campus to prevent child deaths globally, and you get invited to our annual in-person training event, access to our funding for on-campus events and activism, and coaching on effective giving promotion from our dedicated and experienced staff. We have 35 campus chapters, with some variation as new ones launch and others sunset. To date, these chapters have moved $3.4M in effective giving, preventing some 730 deaths as a result.

During the first semester, the Middlebury chapter leaders expressed that they faced an unusual amount of friction around our "hard pledge" with students, compared to our other campus chapters. To "pledge" at One For the World, you currently must submit your credit card (or Apple Pay, etc.), even if you won't start paying until years in the future (our "future pledging" feature is one of the more distinctive aspects of Donational, our proprietary donor platform). Because we process these donations ourselves (even if years in the future), we have an incredibly detailed view of our donors, donations, attrition rates, and donor lifetime value. With over 10 years of donor data, we created a pledge value estimator that is categorical by pledge type and includes a sensitivity variable based on starting ARR (annualized recurring revenue); a rough sketch follows below. Needless to say, we are data obsessed and treat outcome-level data as an essential way of validating our value proposition.
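
For the curious, here is a minimal sketch of what an estimator like that can look like. The category names, base values, reference ARR, and the linear sensitivity curve are all hypothetical placeholders for illustration, not our actual model:

```python
# Minimal sketch of a pledge value estimator: a categorical base value per
# pledge type, scaled by a sensitivity factor on the pledge's starting ARR.
# All figures below are hypothetical placeholders, not OFTW's actual model.

BASE_VALUE = {          # hypothetical expected lifetime giving (USD) per pledge type
    "hard_1pct": 1000.0,
    "soft_1pct": 400.0,
    "trial": 100.0,
}

REFERENCE_ARR = 300.0   # hypothetical baseline starting ARR (USD/year)

def estimate_pledge_value(pledge_type: str, starting_arr: float) -> float:
    """Expected lifetime giving for one pledge, assuming (hypothetically)
    that value scales linearly with the pledge's starting ARR."""
    return BASE_VALUE[pledge_type] * (starting_arr / REFERENCE_ARR)

# A hard 1% pledge starting at $600/year of recurring giving:
print(estimate_pledge_value("hard_1pct", 600.0))  # -> 2000.0
```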

However, we also take an entrepreneurial and experimental view of our work, so when the Middlebury leadership (Camiel, Tyler, and Sam) presented this challenge, we thought it would be a great idea to design a test around it. Will a larger number of "soft pledges" (those without a credit card submission) lead to at least the same amount of actual effective giving as a smaller number of "hard pledges"?

To aid in our test, we decided together with the campus leaders to use the pledge partnership platform between OFTW and Giving What We Can (GWWC). We've partnered over the past few years to help drive GWWC pledges, and I believe we're among the biggest recruiters of GWWC pledges in the pledge partnership program (Luke from GWWC can confirm whether that is true). So we were more than happy to leverage this existing technology to test the hypothesis.

The results have been fascinating, but inconclusive. From an output metric point of view, we've seen a great number of trial pledges come from these efforts. Specifically, this academic year, we've seen ~130 trial pledges and one 10% pledge (I'm not sure where the 300 number came from, as we don't see that on our pledge platform dashboard, but perhaps some pledges were recruited outside the OFTW-GWWC pledge partnership platform). But these pledges are only output metrics, and if they don't lead to money moved, that would invalidate the hypothesis and suggest these efforts aren't nearly as impactful as our existing methods. Furthermore, we don't attribute the larger pledge count to tabling itself, as most of our chapters engage in tabling; we think both the frequency of tabling and the charisma of the Middlebury chapter leaders (who are delightful people if you get to talk with them) played a role. That said, we definitely encourage people to heed the tabling advice in the article, which follows OFTW best practices brilliantly.

Determining the outcome of effective money moved is a more challenging endeavor. The most empirical method would be a longitudinal study of actual money moved, but that would delay institutional learning to a degree that would be unhelpful. So there are three current heuristics being deployed to estimate these trial pledge values:

  1. GWWC's internal Trial Pledge value: GWWC estimates trial pledges to be worth about $2k. However, in internal conversations between GWWC and OFTW, we both agree that student trial pledges (most often just six months at 1%) cannot possibly be valued that highly. I believe Aiden from GWWC said that only $200 in effective giving has come through all OFTW trial pledges this year (if I recall correctly), which further affirms our shared skepticism.
  2. OFTW’s funder’s proposed value: many of our funders who have reviewed our impact data have been on average incredibly skeptical that any value comes from trial pledges, and have essentially valued them at $0 to student GWWC trial pledges until they either convert to a OFTW hard pledge, or a GWWC 10% pledge.
  3. OFTW’s internal M&E value:  For our institutional learning, we’ve created our own internal method that we will use in deciding whether the hypothesis is validated or not. We’ve taken the average value of this year’s OFTW undergraduate hard pledges ($1,031/pledge), and multiplied it by our estimated probability that they will become permanent 1% pledges (10%), for an estimate of $103/trial pledge. For any 10% pledge conversations, we’d do a step up value (10% pledge value minus $103 already counted). Given the actual giving from trial pledges, plus the lack of evidence of conversions from trial to either OFTW 1% or GWWC 10% pledges, I suspect that $103 is an over-estimate.

So while our initial test was about "hard" versus "soft" pledges, the performance seems to be largely driven by "trial" versus the permanent, ongoing pledges of either OFTW or GWWC. What does that mean for promoting trial pledges on campuses? While it's too early to fully invalidate the hypothesis that soft pledges can generate sufficient returns in effective giving, we see mounting evidence that temporary pledges are not valuable to recruit on campuses.

While we wait to see if more trial pledges recruited by the awesome Middlebury team can be upgraded to either OFTW 1% hard pledges or GWWC 10% pledges, OFTW is still exploring the feature set around "soft pledges." Specifically, our tech team has developed a "soft pledge" donor experience through Donational, our proprietary donor platform, which we will offer to a subset of chapters this year for a more robust test, both by increasing the N and by providing a more comparable donor experience (small differences between the OFTW and GWWC pledge experiences can affect outcomes). Also, we don't know whether the low value of these pledges is driven by the temporary nature of a "trial pledge" or by the soft nature of not inputting a credit card (I'm more skeptical of the latter because I believe GWWC has evidence that their 10% soft pledge drives real effective giving, even in the absence of a credit card on file). In short, we haven't invalidated "soft pledges" entirely, but we are entering the next test with heightened skepticism that soft pledging will outperform our existing campus chapter offering with hard pledges, even in the face of student hesitancy around inputting payment information at the point of pledging. Perhaps students who are unwilling to put in payment methods (or make permanent commitments) weren't credible donors in the first place; the results above certainly make sense if that were true. Further testing should be done exclusively using "soft," permanent pledges, rather than offering any trial pledges at all (in our opinion).

 

So our main takeaways at OFTW:

  1. More students may be willing to pledge if credit card information isn't required (increasing pledge count as an output) and/or if the initial pledge is temporary. We should figure out which is the stronger signal.
  2. These additional pledges appear to be of little to no value in actual effective giving (the outcome we use to determine effectiveness), and we suspect it's the temporary nature of those pledges that is responsible.
  3. Follow-ups from trial pledges are essential, and should be done personally by the student organizers themselves to maximize conversion, per OFTW best practices.
  4. These takeaways are preliminary, and future testing will give us stronger validation or invalidation evidence.
  5. We would not endorse expanding this pledge offering in its current form, but would rather focus campus chapters on promoting our 1% Pledge and GWWC's 10% pledge (neither of which is temporary). We do, however, endorse tabling as an effective method of pledge driving.
  6. Experiments like this are crucial to driving more innovation in effective giving, even if the initial hypothesis is ultimately invalidated.
  7. Student leaders like those at Middlebury are the backbone of effective giving and driving innovation in the space. They have our utmost admiration!

Despite the length of this comment, this is a brief summary of our take on this test, if you can believe it! We'd love to hear any questions or comments from you all, including readers, Middlebury organizers, and GWWC partners.

Also, if you want to bring effective giving to your campus or workplace, or want to use our proprietary tech at your own effective giving organization, reach out to us at One For the World!

 

Frank Fredericks

Executive Director, One For the World

Thanks for the question, and great answer, Kestrel. I'll add some color.

There are a few reasons why Open Phil (now CG) no longer funds OFTW. As for the weight between these reasons, they can jump in if they'd like, but here are a few dynamics:

  1. CG funds either new effective giving orgs or highly effective ones. We used to be a new org, and while we are no longer "new," we haven't met the benchmark of the more established (and funded) orgs.
  2. During my first year as ED, we modestly improved our money moved and slightly reduced our budget (this happened before we identified and implemented our new growth strategy, which just began in July), but the entire baseline of what counts as fundable seems to have increased due to the amazing work of other orgs. That's good for the movement, even if it wasn't ideal for us funding-wise.
  3. CG shifted to a competitive, RFP-style grant round, which meant we were no longer evaluated alone but against all other orgs in our space simultaneously. Given the previous point, that made it a defensible decision not to fund us for our current year.

We are looking to become competitive enough to receive CG funding within the next two years by better capturing off-platform data (we are likely missing six figures' worth of money moved a year due to the dynamic Kestrel mentioned), dramatically improving our money moved metrics, and, lastly, maintaining operational excellence through austere overhead (while still investing in our team appropriately). However, the early signal suggests our growth strategy is working, with the 4x pledge count YTD as a leading indicator.

I don't think I'd interpret this pattern to mean that restricting immigration would reduce communal violence. Rather, that places where communal violence is happening may correlate with any number of the side effects of colonialism, including a weakened concept of a nation state due to borders that don't reflect identity groups.

In other words, the context in which diversity happens may play a role in whether communal violence happens (forced cohabitation versus elected migration). There's not enough data to sufficiently support either idea; I just want to be clear that I don't see evidence to suggest anti-immigration is an effective peacebuilding mechanism.

The Institute for Economics and Peace has the best details on that in their Cost of Violence Containment research, but my understanding is that the number includes government spending. However, it defines peacekeeping (military presence as a deterrent for violence) as something separate from peacebuilding (any nonprofit programming designed to reduce the likelihood of violence). This number is focused on peacebuilding, not peacekeeping.

Hi Josh, thank you for your thoughtful questions! Here are my answers.

  1. We can set aside the proposed process from my article in terms of tractability (there's a conversation to be had there, but it's not core to this one). There is compelling evidence that, on the interpersonal level, you can use social nudges to change human behaviors (probabilistically); Daniel Kahneman, Eldar Shafir, and others have shown this in a few ways. Contact Theory suggests this is the case for violence too, but we don't have a measured way to turn the concept into probabilities. I would love to find out if it's possible, so it's merely a hypothesis at this point. I believe the social payoff would be so large that even if it's unlikely to be found, its pursuit is worthwhile (high risk, high reward in social good terms).

  2. Fair enough, and perhaps worth taking up with the Institute for Economics and Peace. Also, these numbers are in PPP, which annoys me since interventions would likely be funded from outside sources, so nominal terms would be more helpful. I think the amount spent on military spending versus peacebuilding is more telling/helpful, which in nominal terms is $1.7T vs. $6B respectively. This is crucial because if a peacebuilding intervention is presented with the scientific rigor of medical interventions, we know the resources exist to scale worthy solutions. The DoD, USAID, State Dept, and USIP already fund in this space, but increased funding would be possible with more viable solutions (or more scientific backing of existing solutions).

  3. I think we're talking about one and the same thing. I'm speaking of peacebuilding (as opposed to peacekeeping). My hypothesis is to focus on the science of the individual's response to a peacebuilding intervention, not the wider systems where violence is happening. Incidentally, there are already several organizations looking at systems modeling and violence, both in predicting when/where violence will happen and in interventions on that level. I believe if it's possible, those existing initiatives will find it. However, my invitation here isn't focused on validating my own hypothesis (I'm working on that elsewhere), but rather on evaluating this problem space from an EA point of view.

You've asked some great questions here. Is this a topic you're interested in digging into?

Great! Feel free to email me; I'm happy to connect. Frank at worldfaith dot org.

This sounds interesting as a model of both community building and fostering collective action. I wonder if there's an MED (minimally effective dose) that can happen in town, rather than at a retreat. I can imagine having a hard time getting people in NYC (where I'm based) to commit to this, but perhaps we could do a minimally effective version in 6-8 hours in town. Has anyone tried something similar but shorter?

I'd love to see the comparison in multiplier for donating stock versus cash in the UK. In the US, our largest donors often give in stock because the donor avoids capital gains tax AND can deduct the FMV (fair market value) of the stock. It would be valuable to see how that plays out in the UK for your larger donors.
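
For illustration, here is a minimal sketch of the US math; the tax rates and cost basis are hypothetical, and it ignores AGI deduction limits, state taxes, and holding-period rules:

```python
# Illustrative US comparison: donating appreciated stock directly vs.
# selling the stock and donating the proceeds. All numbers are hypothetical.

fmv = 10_000.0         # fair market value of the stock at donation
cost_basis = 2_000.0   # what the donor originally paid for it
cap_gains_rate = 0.20  # hypothetical long-term capital gains rate
income_tax_rate = 0.35 # hypothetical marginal income tax rate

# Donate the stock directly: deduct the full FMV, pay no capital gains tax.
stock_benefit = fmv * income_tax_rate

# Sell first, donate the proceeds: same deduction, but capital gains tax is owed.
cash_benefit = fmv * income_tax_rate - (fmv - cost_basis) * cap_gains_rate

print(stock_benefit)                 # 3500.0
print(cash_benefit)                  # 1900.0
print(stock_benefit - cash_benefit)  # 1600.0 extra benefit from donating stock
```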

I'm not sure I am completely convinced by the premise that EA needs to be first in blockchain to be positioned to effect positive change in the blockchain or crypto spaces. If you look at technological innovation through a historical lens, often the first mover fails, and from its ashes rises another company/entity that picks up the concept and runs with it. While not the thesis of The Innovator's Dilemma by Clayton Christensen, it's certainly a recognized pattern throughout the book. For us, I think that means both the cryptocurrencies and the actual blockchain systems as they stand will likely fail, but we'll see someone else build an improved version of blockchain technology that can actually be mainstreamed, and most of what we know today will be as relevant as the companies of the first dotcom bust. That's not a fact, but the empirical evidence makes it probable.

Super interesting, and I was just having this conversation recently. There's one issue I have with the analysis of psychotherapy (assuming we even get a control group, which few studies do): the data points we're using to calculate effectiveness are self-reported. In other words, we have no external method of evaluating the actual positive impact impartially, only as it was experienced by the participants. The sunk-cost fallacy, the Hawthorne effect, etc., could inspire truly believed but ultimately inaccurate reporting.
