Verónica Suárez M.

Founder - Executive Director @ Laboratory of Social Entrepreneurship
154 karma · Working (6–15 years)
emprendimientosocial.org/

Participation (2)

  • Completed the Introductory EA Virtual Program
  • Attended an EAGx conference

Sequences (1)

Theory of Change Makers

Comments (9)

Hi Vasco,

Thanks for the question, happy to answer.

The CEAs were built with different levels of involvement from the fellows, our team, and LLM support, with Claude being the main supporting tool.

The template and overall structure of the CEAs were developed outside of Claude, based on our methodology (and on heavily comparing templates from different organizations).

As for the content of the models:

  • Costs: fellows first analyzed the cost structure of the interventions they were drawing evidence from (i.e., identifying the key cost drivers). Based on that structure, they built localized budgets using country-specific prices and costs. All organizational budgets were developed by the fellows within our template for direct implementation. With those benchmarks and certain assumptions, we then calculated the intervention cost at scale; no Claude involvement here.
  • Key assumptions (e.g., reach, limiting factors): fully defined by the teams based on their contextual knowledge. We then used scale factors to project reach at scale; no Claude involvement here.
  • Evidence (RCTs, meta-analyses, etc.): reviewed as part of the research process (we used other LLMs for this as well, mainly Elicit and Perplexity); Claude mainly helped structure and organize the information within the template.
  • Calculations: done with Claude, but step by step, so we reviewed every output it generated (making this the most time-consuming part of the process). I can confidently say we reviewed every cell, adjusting assumptions, asking questions, and catching errors. Of course, this is also the part where errors are most likely to occur, and these are only the first versions of the models, so I am also confident there are still mistakes we did not catch.
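For intuition, the cost-at-scale step described above reduces to simple arithmetic: a pilot budget and reach give a cost-per-person benchmark, which scale factors and assumptions then project forward. This is a hypothetical sketch; all figures and variable names below are made-up placeholders, not values from our actual models:

```python
# Hypothetical sketch of a cost-at-scale projection.
# All numbers are illustrative placeholders, not real model values.

# Localized budget built by a fellow (country-specific prices)
pilot_budget_usd = 50_000   # total direct-implementation budget
pilot_reach = 2_000         # people reached in the pilot

# Benchmark: cost per person reached at pilot scale
cost_per_person_pilot = pilot_budget_usd / pilot_reach  # 25.0 USD

# Key assumptions defined by the team
scale_factor = 10           # projected growth in reach
efficiency_gain = 0.8       # assume unit costs fall 20% at scale

# Projection
reach_at_scale = pilot_reach * scale_factor
cost_per_person_at_scale = cost_per_person_pilot * efficiency_gain
total_cost_at_scale = cost_per_person_at_scale * reach_at_scale

print(reach_at_scale)            # 20000
print(cost_per_person_at_scale)  # 20.0
print(total_cost_at_scale)       # 400000.0
```

In the real models, assumptions like the efficiency gain are exactly the cells we reviewed and adjusted with each team.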

Once the models were completed, we held review sessions with each team to revisit key assumptions and refine the models. For example, we asked whether certain implied assumptions made sense in the real world, and because the fellows themselves know their context and intervention details, many things were changed. The models are far from final, but the template was designed to be used as a planning tool (not only a fundraising tool), so teams will definitely adjust them over time as their internal evidence becomes available.

In full honesty, building the models was the part of the incubation process that scared me the most, so I’m very glad Claude appeared at the right time. It also made me more confident that templates like this are a way to make CEAs more accessible: once organizations have clear cost structures (and many orgs already have detailed budgets), assumptions, and external evidence, building a first model becomes much more feasible. It’s not perfect, but it’s a strong starting point.

Very happy to share the template or walk through it together; we’re very keen to improve it and learn further from feedback.

Hi Gabrielle,

Thank you for your thoughtful reflection.

Regarding the research question: while fellows are making the decisions, they do so within a fairly structured methodology (we provide the tools, templates, and step-by-step process). For example, problem selection is guided by specific thresholds (e.g., only selecting problems affecting >600k people or ~6M animals in the first country of implementation), alongside other criteria like the depth, breadth, and trajectory of the problem in the region.
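As a rough illustration, a threshold screen like the one above can be expressed as a simple filter. The thresholds come from the text; the candidate problems and counts below are entirely hypothetical:

```python
# Hypothetical sketch of the problem-selection threshold screen.
# Thresholds are from the methodology; the candidate data is made up.
PEOPLE_THRESHOLD = 600_000
ANIMALS_THRESHOLD = 6_000_000  # approximate (~6M) in the methodology

candidates = [
    {"problem": "A", "people_affected": 1_200_000, "animals_affected": 0},
    {"problem": "B", "people_affected": 150_000, "animals_affected": 0},
    {"problem": "C", "people_affected": 0, "animals_affected": 9_000_000},
]

# A problem passes if it clears either threshold in the first
# country of implementation.
eligible = [
    c for c in candidates
    if c["people_affected"] > PEOPLE_THRESHOLD
    or c["animals_affected"] >= ANIMALS_THRESHOLD
]

print([c["problem"] for c in eligible])  # ['A', 'C']
```

In practice this is only a first screen; the qualitative criteria (depth, breadth, trajectory) still require the fellows' judgment.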

Similarly, intervention selection is constrained by requirements such as being evidence-based (e.g. supported by RCTs, meta-analyses, or strong evidence equivalents for animal welfare), proven to be cost-effective in other contexts, and feasible to adapt locally. We also have a (small) research support team helping throughout the process. And of course, we have used the help of certain LLMs (like Elicit and Perplexity).

Additionally, fellows go through theoretical training (e.g. M&E principles) to guide their reasoning, and we have the support from IPA Colombia, who provided lectures and office hours to review parts of the work.

We don’t think this replaces the depth of a trained researcher, but in a resource-constrained setting, it allows for reasonably rigorous, structured decision-making. It also has the advantage of making the reasoning process explicit, so if something doesn’t work (as sometimes happens in the real world during implementation), fellows can revisit and iterate more effectively.

Always happy to receive feedback on how to improve things!

Hi Tony, thanks for the thoughtful questions!

Regarding Ambitious Impact/CEIP, we’ve definitely drawn inspiration from their model and have received direct mentoring from them throughout our implementation. They have been very generous with their knowledge and support! Some key differences:

  1. Founders act as researchers, guided by our methodology: they choose the problem (based on the ITN framework) and the intervention (evidence-based and cost-effective), and they decide what to focus on.
  2. They build in their own countries.
  3. They implement an early prototype during the incubation.
  4. It’s a part-time program with a longer duration (12 months total), delivered in the founders’ native language.
  5. We don’t have a seed funding circle (yet), but we aim to demonstrate the types of interventions being incubated and build toward that over time.

Thanks for wanting to support! We are hosting a Meet the Founder session on April 9th to facilitate this; connections, mentorship, and feedback are exactly what we’re looking for. If you’d like to join, please fill out this short form (≈3 minutes), and we’ll share the meeting link and details: https://forms.gle/sb5bYBUihReiexJv8

It would be great to have you there!

As for the Laboratory of Social Entrepreneurship, we are a relatively new organization (~1 year), and this is our first cohort, but it builds on prior experience in the sector (M&E, program design, and implementation) and in the region.

Happy to share more if helpful, and thanks again for engaging!

Hi John,

Thanks for the comment and for flagging the issues.

  1. We have solved the formatting.
  2. & 3. Here is the GRANA website, for ease of exploring the organization.

Happy to talk if you would like more information!

Thank you so much, Vaidehi, for this thoughtful comment and for taking the time to engage.

On motivations: we saw a wide spectrum. Some applicants were driven by very personal experiences, e.g. having lived close to poverty or discrimination themselves, and wanting to “fix” what they endured. Others were motivated by specific issues they’ve worked on professionally (education, environment, public health). A few were drawn by the “founder identity” itself, the idea of building something new and leading a team. Part of our methodology is to surface motivations early and help participants refine them. Even with evidence-based tools, unclear or misaligned motivations can steer an org sideways over time. I’ll write a dedicated post on motivations later, but it’s important to flag certain drivers we need to watch out for, such as resentment, ego, the need for power, feelings of superiority, or even a saviour complex. Unfortunately, these do exist in the sector, and because we work with vulnerable populations, we have to be especially careful, not only for founders, but all of us that work on these issues.

On geography and cohort diversity: you’re right, there can be real benefits to multiple orgs in the same geography, especially around resource-sharing and peer support. We didn’t avoid that altogether; in fact, we do have overlaps. Out of the 20 fellows, five are the sole representatives of their country, with one of them currently living in another, more represented country. The constraint was more about balance: we had many strong candidates from a handful of countries, but since this is the very first program of its kind in the region, we felt it was important to deliberately seed it across more geographies, so that in the future we can create regional clusters while still representing the breadth of Latin America. It’s definitely a trade-off.

On the “good intentions vs. impact” point: thanks for catching that nuance. I didn’t mean to suggest that EA as a whole dismisses the broader social sector, more that I’ve heard an impression that “traditional NGOs (led by people in the Global South) care less about impact than EA orgs.” Like you, I strongly disagree with that oversimplification. In our applicant pool, and in the sector generally, people who’ve worked for years in constrained environments deeply care about whether their interventions work. What they often lack is the time, tools, or funding to evaluate rigorously, not the will. And when given those tools, they show remarkable openness to learning and reframing. That’s one of the things that excites me most about bridging EA methods with practitioners already in the field.

Great idea!

For future consideration: enabling different languages would reach a broader audience. And maybe consider something like an Emergency Fund (similar to Founders Pledge’s Rapid Response Fund, or even one for disasters, supporting highly cost-effective emergency responses), as people tend to give more one-off donations in those situations.

Also, I like the idea of a single place to manage charitable activities, so GoodWallet could potentially become a recurring-donation platform.

Keep up the good work!