
Brad West🔸

Founder & CEO @ Profit for Good Initiative
2353 karma · Roselle, IL, USA · Profit4good.org/

Bio


Looking to advance businesses with charities in the vast-majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.

Comments (347)

Great piece! This connects directly to something I've been thinking about in my recent post on "orthogonal impact." The key insight isn't just that government roles have impact—it's that they often have the most impact through levers that aren't measured or incentivized by the job itself.

Consider the contrast: if you work at GiveWell or an EA-aligned charity, your performance metrics likely align with doing good. The counterfactual person who'd take your job would also be trying to maximize impact—that's literally what they're paid to do. But in government (or corporations, academia, etc.), the situation is different. A USDA official is evaluated on processing efficiency and regulatory compliance, not on whether they quietly shifted procurement standards to help millions of animals. A congressional staffer gets promoted for serving their boss well, not for adding crucial language to obscure bills that could save lives.

This misalignment between job incentives and impact opportunities is exactly what creates the leverage. The counterfactual government employee would likely focus on what gets measured—hitting their KPIs, avoiding controversy, climbing the ladder. They wouldn't spend political capital pushing for open data standards or championing unglamorous but vital legislation. These orthogonal opportunities for good sit neglected precisely because they don't help anyone's career.

Your point about government being "essential infrastructure for EA goals" is spot-on. We need people willing to occupy these positions and utilize both the official powers AND the unmeasured discretionary opportunities they contain.

This sounds valuable! Quick question about participation: I'm an EA-aligned lawyer concerned about AI safety, though not currently at a top firm or working directly in AI regulation. Would someone with general legal expertise and strong motivation to contribute to AI safety be useful for this, or are you specifically looking for lawyers already working in tech/AI policy?

I imagine fresh perspectives from lawyers outside the usual AI circles could be valuable for spotting overlooked risks, but wanted to check if that fits what you're envisioning.

Thanks for sharing your piece, Sam. There's a critical insight here that impact-maximizers might miss if they pattern-match "treat community as intrinsically valuable" to "prioritize feelings over outcomes." The actual claim is deeply pragmatic: authentic relationships are instrumentally superior for maximizing long-run expected impact.

Our current model optimizes for legible short-term proxies (fellowship completions, cause-area conversions) that fit neatly in grant reports but poorly predict what matters: who's still contributing meaningfully in 5-10 years, who's thinking independently rather than deferring, and who's building things that wouldn't exist otherwise. In expected-value terms, 20 people with genuine conviction working for a decade dominate 100 people weakly deferring for two years—especially when those 20 bring epistemic diversity and new ideas for impact rather than reproducing consensus.
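To make that arithmetic concrete, here's a toy comparison (every parameter is an illustrative assumption, not a measurement). Note that the raw person-years are identical, 200 on each side, so the claim rests entirely on value per person-year:

```python
# Toy model: cohort impact = people x years x value per person-year.
# All parameters are illustrative assumptions, not measurements.
convicted_person_years = 20 * 10   # 20 people over a decade = 200
deferring_person_years = 100 * 2   # 100 people over two years = 200

# Hypothetical: a convinced, independent contributor produces 3x the
# impact per year of someone weakly deferring.
conviction_multiplier = 3.0

convicted_value = convicted_person_years * conviction_multiplier  # 600.0
deferring_value = deferring_person_years * 1.0                    # 200.0
print(convicted_value, deferring_value)
```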

If we're serious about maximizing impact, we should at least question whether our current metrics actually maximize it. What would it look like to measure success differently—tracking 36-month retention, independent project initiation, or comfort disagreeing with group consensus? If authentic community building could produce superior long-term outcomes (as history and successful movements suggest), then resisting it isn't principled; it's optimizing the wrong proxies. I'm curious what others think: are we measuring the right things, or are we leaving impact on the table?

Thanks for raising this. A large‑scale, preventable humanitarian crisis with mass civilian suffering clearly belongs on the EA radar—at minimum as a candidate problem for more systematic investigation. Right now the post reads more like a signal (“why aren’t we talking about this?”) than a case, so it may not spark the engagement you’re hoping for.

Two quick suggestions that could help:

  1. Recast as a Quick Take or add a two‑paragraph “why this matters” section. Even a concise sketch—e.g. expected mortality, tractable intervention channels (cash relief, medical supply corridors, policy advocacy), and how they compare on cost‑effectiveness to other EA global‑health staples—would give readers a foothold.
  2. Pose a few concrete questions for the community. For example:
    • What existing orgs have the logistical reach to deliver aid inside Gaza right now, and what are their marginal funding gaps?
    • How do political‑risk–adjusted cost‑effectiveness estimates compare with GiveWell‑style benchmarks?
    • Are there neglected advocacy levers (e.g. U.S. or EU policy pressure) where an additional EA dollar or career choice could move substantial resources?

Framing it this way signals that you recognise the need for the usual EA toolkit—scale, neglectedness, tractability—while inviting others to help fill in the numbers. I’d be keen to see a deeper dive or a collaborative back‑of‑the‑envelope if you (or anyone reading) has the bandwidth.
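To seed that, here is the bare shape such a back-of-the-envelope could take. Every number below is a placeholder to be swapped for real figures; I'm not asserting any of them:

```python
# Skeleton for a political-risk-adjusted cost-effectiveness estimate.
# Every number below is a placeholder, not a real figure.
budget = 1_000_000              # marginal dollars of funding
cost_per_life_saved = 5_000     # placeholder delivery cost per life
delivery_success_prob = 0.5     # placeholder political/logistical risk

risk_adjusted_cost = cost_per_life_saved / delivery_success_prob
lives_saved = budget / risk_adjusted_cost
print(f"{lives_saved:.0f} lives per ${budget:,} at these placeholder values")
# Compare risk_adjusted_cost with GiveWell-style benchmarks to see
# whether the intervention clears the usual bar.
```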

Sofia, love this framework—and love what you're doing with Hive!

Your post sparked a thought: Many constraints you mention (funding, visa support, networks) are actually transferable within EA. Yet we optimize mostly at the "cause area → org" level, not "whose potential is trapped by a solvable constraint?"

What if your calibration tools included asking: "What resources could the community provide to make this realistic for me?" Things like:

  • Micro-grants for career pivots
  • "Lendable" operations talent
  • Treating introductions as community infrastructure, not private assets

I suspect many high-impact projects never happen because founders correctly identify they lack resource X, without realizing it's sitting idle elsewhere in the community. Your framework helps people see constraints clearly—the next step might be making those constraints more permeable.

Yeah, the central idea is that PFGs can have operational parity (or superiority) because how they do good lies in the identity of their shareholder, rather than in some feature of their operations. And stakeholders (consumers, employees, media, suppliers, partners, lenders) have a non-zero preference for the PFG (they'd rather a charity benefit from their transaction than a random shareholder). This is why PFGs should have a competitive advantage over normal firms.

This competitive advantage potentially creates an arbitrage opportunity for philanthropists: channel your money through PFGs and you get more than what you pay for.
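A stylized version of that arithmetic (all figures hypothetical, chosen only to illustrate the mechanism):

```python
# Stylized arithmetic for the Profit for Good (PFG) arbitrage claim.
# Every figure is hypothetical, chosen only to illustrate the mechanism.
donation = 1_000_000         # philanthropist capitalizes a PFG
annual_return = 0.10         # assumed market-rate return on that capital
years = 15                   # assumed horizon of profitable operation

# With a charity as shareholder, each year's profit flows to charity.
total_to_charity = donation * annual_return * years
print(f"${total_to_charity:,.0f} to charity, "
      f"{total_to_charity / donation:.1f}x the original donation")
```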

This is a very simple and intuitively plausible mechanism for philanthropic leverage, yet there has been very little curiosity about this model's potential to multiply philanthropic funding.

What might help conceptually is not to think of the donation and the shareholders as separate things (i.e., donations as something that limits returns), but rather to think of it as a business where charities are the shareholders, which confers no more disadvantage than any other shareholder would.

The returns are not lower; they are higher, because economic actors have a non-zero preference for charities, while PFGs can operationally do everything normal businesses can (hence Humanitix's meteoric rise).

The limiting factor right now is philanthropic capital. If philanthropists realize they can get more money to charity through this model, they will be motivated to use it, because it offers the opportunity to multiply impact. And as the evidence base gets stronger, they could use debt (leveraged buyouts) to expand beyond what philanthropic resources alone would allow.

See the article below on why PFGs should have a competitive advantage.

Stakeholder non-zero preference → business advantage → philanthropic multiplier opportunity

https://profit4good.org/from-charity-choice-to-competitive-advantage-the-power-of-profit-for-good/

Yeah, you are very limited in the ability to exchange equity for cash, so for regular investors you could raise money with bonds.

The idea is that philanthropists would occupy the position that for-profit investors hold in normal businesses, because they could multiply the money that reaches charity. See the article below for more information (and the whole preceding blog series):

https://profit4good.org/above-market-philanthropy-why-profit-for-good-can-surpass-normal-returns/

Then your issue is with systematically flawed reasoning that overestimates the likelihood of low-probability events. The solution would be to deflate those probability estimates by a factor that corrects for this systematic epistemic bias, and then proceed with risk-neutral EV maximization (again, with the caveats I mentioned in my initial comment).
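In code form, the move is just this (a minimal sketch; the deflation factor is something you would have to estimate, and the 10x below is purely hypothetical):

```python
# Sketch: deflate stated probabilities of low-probability events to correct
# for systematic overestimation, then maximize expected value as usual.
def adjusted_ev(stated_prob: float, payoff: float, bias_factor: float) -> float:
    """EV after deflating a possibly overconfident probability estimate."""
    return (stated_prob / bias_factor) * payoff

# Hypothetical: a claimed 1-in-1,000 chance at a $1B payoff, deflated 10x.
print(adjusted_ev(stated_prob=1e-3, payoff=1e9, bias_factor=10.0))  # 100000.0
```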
