Summary
ACTRA’s first year was deliberately ambitious: we partnered with government actors from month two, built a curriculum in record time, and ran an in-house Randomized Controlled Trial (RCT) before our organization was six months old. This gave us unusually fast and concrete learnings, but also created significant overhead and avoidable inefficiencies.
Looking back, we would still choose to design for scale from day one — but we would sequence learning more deliberately, separate R&D from scale-oriented delivery earlier, and prioritize learning value per cost more ruthlessly.
This post summarizes our key milestones and eight learnings that we believe are relevant to other early-stage, entrepreneurial nonprofits — especially those aiming to combine rigor, speed, and eventual scale.
Context: ACTRA’s key milestones in year one
Acción Transformadora (ACTRA) emerged from Ambitious Impact’s (AIM) August–October 2024 incubation round and started operations in November 2024. The milestones we present here are meant to give context for the learnings; for a broader organizational overview (including funding needs), you can explore our Year 2 plans brochure or November forum update.
- Month 2 (Dec 2024): Confirmed first government partnership
- Month 3 (Jan 2025): Development of a Minimum Viable Product (MVP) curriculum (9 sessions)
- Months 4–5 (Feb–Mar 2025): First implementation cycle through our government partner, training 10 facilitators and evaluating the program through an in-house RCT (10 treatment groups vs. 12 active control groups)
- Months 6–7 (Apr–May 2025): Integration of learnings and development of Curriculum 2.0 (20 sessions)
- Months 8–9 (Jun–Jul 2025): Curriculum development and testing in our learning lab; development of Curriculum 2.5
- Months 10–13 (Aug–Nov 2025): Full pilot implementation designed for scale (20 sessions), including a second, slightly scaled-back in-house RCT (6 treatment facilitators/groups vs. 6 active control groups)
- In parallel: A second implementation and curriculum development cycle in a small, direct-delivery setting
Eight learnings from our first year of implementation
We group these learnings into four domains (Implementation, MEL, Fundraising and HR), with two core lessons per domain. In a nutshell:
- Design for scale from day one — but don’t confuse designing for scale with scaling
- Don’t forget that you need a learning lab / R&D space
- Measure rigorously even if it feels obvious — especially early steps in your Theory of Change
- Maximize early immersion and qualitative learning
- Don’t overspend resources on formal grants before your org is sufficiently mature
- Develop long-term project planning early — but not for all projects
- Hire in anticipation of capacity gaps, not once they are already painful
- Volunteers are invaluable — but they should not replace core ownership roles
I. Implementation
1. Design for scale from day one — but don’t confuse designing for scale with scaling
One of ACTRA’s core strengths is that we implemented “at scale” from the beginning — meaning through a government partner and using a train-the-trainer model in which ACTRA staff trained external facilitators.
We still believe this was the right default. It taught us many things that a direct delivery setting would not have revealed.
However, in hindsight, we went slightly too large, too early. If we could go back, we would still partner with the government and use a train-the-trainer model — but we could have learned as much (or more) by working with 4–5 government facilitators in the first cycle instead of ~10. This would have:
- Reduced organizational overhead substantially
- Allowed deeper qualitative learning in fewer territories
- Preserved the core technical-assistance and scale logic
Generalizable learning: Even when you design for scale, be mindful to right-size your MVP. Early on, smaller “units of scale” that preserve the scaling mechanism may generate the same or more learning.
2. Don’t forget that you need a learning lab / R&D space
Learning for scale is valuable — but where do you actually build the product?
We realized (probably a bit late) that we needed a separate space to experiment and iterate quickly on program design without risking government relationships or affecting large beneficiary groups. We call this Research and Development (R&D) space our “learning lab”.
A learning lab should:
- Have different MEL priorities than scale-oriented delivery
- Optimize for fast MVP-style iteration
- Allow partial, unfinished, or “ugly” versions of the intervention
- Allow you to control most aspects of the implementation
- Be as close as possible to the “ideal” implementation quality that you aim to achieve at scale
This space removes the uncertainty of scale so that you can keep an eye on the core product. It answers the question: if everything went right, what effect would we see in participants? For example, after we saw null results in our first implementation cycle, we didn’t know whether this was a consequence of the curriculum itself or of the transmission of information from us to trainers to participants. So we redesigned our 20+ session curriculum and tested it with only one group, in a calm and controlled setting, with a professional facilitator. We realized then that our curriculum had potential; we just needed to make it stick at scale. In hindsight, we would have been better off:
- Developing only 10–12 sessions to start
- Iterating those aggressively in controlled settings
- Trying to teach those sessions to external facilitators only once we had evidence that those sessions were understandable and acceptable to our population
Generalizable learning: Early-stage orgs benefit from explicitly separating learning at scale from building the product to scale — even if both happen in parallel.
II. Monitoring, Evaluation, and Learning (MEL)
3. Measure rigorously even if it feels obvious — especially early steps in your Theory of Change
One co-founder initially proposed measuring facilitators’ basic understanding of core concepts via before-and-after surveys. The other co-founder felt this was unnecessary — the improvement seemed “obvious”.
We measured it anyway. When we ended up with null participant-level results, that decision turned out to be critical to understanding where the theory of change was failing. Thanks to it, we knew that:
- Facilitator knowledge did improve, but not to the level we expected
- Participant knowledge, surprisingly, was worse for treatment than for control
- This confirmed our hypothesis that the intervention was being distorted in delivery and confusing participants
- Without measuring the early step, we would not have known for sure whether the knowledge transfer was failing at the ACTRA -> facilitator level, at the facilitator -> participant level, or both.
Generalizable learning: Measuring “obvious” early steps rigorously is rarely wasted effort. It often becomes the anchor that allows later interpretation of more complex or disappointing results.
4. Maximize early immersion and qualitative learning
Halfway through our first implementation cycle, qualitative observation already showed substantial implementation challenges.
Despite this, we followed through on our commitment to run an in-house RCT with ~400 participants as a five-month-old organization. This taught us a lot about:
- Evaluation logistics
- Data pipelines
- Managing measurement at scale
But it also created huge organizational overhead. In our effort to develop, manage, and collect the 400+ surveys, we probably missed out on:
- Observing more sessions in the field ourselves
- Having more qualitative conversations with facilitators, including from the control group
- Running more qualitative learning activities, such as focus groups, cognitive interviews for our measurement tools, and program-perception interviews with participants
On the other hand, having null RCT results helped us justify to our government partner the decision to scale back the next cycle. Still, in hindsight, we would have invested significantly fewer resources in the RCT this early by:
- Reducing treatment size from the beginning (e.g. 4–5 groups instead of 10)
- Starting with a clearer learning plan rather than a binary focus on “does it work against a control?”
- Observing control groups more qualitatively and systematically: What alternative treatment do they get? How do the data collection sessions happen in practice?
- Investing resources in analysis and reporting instead of coordinating large volunteer teams
We also believe that control groups this early are not strictly necessary. In our case, we had a set-up that made it very easy to include a control group, and we still think it was the right choice to take advantage of that. But we don’t think organizations should go out of their way to get a control group this early on.
Generalizable learning: Early rigor should be proportional to learning value, not symbolic credibility. Immersion and qualitative insight often dominate marginal learning returns in year one.
III. Fundraising
5. Don’t overspend resources on formal grants before your org is sufficiently mature
We applied to several large funders very early (one within our first month). Despite good thematic fit, we were rejected in all cases where no prior relationship existed.
Looking back, we should have focused more on:
- On-the-ground experience
- A credible project pipeline
- Relationship-building with high-affinity funders
We only got past the initial expression of interest in one of our applications, around month nine (although we didn’t end up getting the grant) — suggesting that timing and maturity mattered more than we realized.
Starting more formalized fundraising around months 6–9 seems like a sensible rule of thumb to us today, while relationship-building should obviously start from day one. For example, attending a social entrepreneurship or philanthropy conference in our first nine months (which we didn’t do) might have been a better use of our time than early grant applications.
Generalizable learning: For most early-stage orgs (especially with seed funding), the opportunity cost of early formal grant writing is high. Six to nine months of implementation and informal relationship-building is often a better investment.
6. Develop long-term project planning early — but not for all projects
We learned that high-quality institutional fundraising requires projects planned 12+ months ahead, which often clashes with fast iteration cycles and government partner timelines. As an early-stage organization, you will never be able to plan all your projects that far ahead. But maybe you can plan one project that far ahead?
We are increasingly defining certain projects as “fundraisable” mainly based on timelines and partner relations, while keeping others out of grant writing, aiming to:
- Have some projects planned far in advance for institutional fundraising
- Keep other work streams flexible and short-term for experimentation
For the “far in advance” projects, you will probably want to include some low-cost early stages (covered by unrestricted funds) that allow you to get to know the partner along the way. We also believe it is very useful to define and communicate clear go/no-go thresholds for each consecutive project stage in advance, both to manage the expectations of partners and funders and to support internal decision-making scrutiny.
Generalizable learning: Long-term planning is not just operational — it is a core fundraising capability. But fast iteration is not compatible with long-term certainty. Clearly separating partnerships into “far in advance” workstreams (used for institutional fundraising) and “flexible experimentation” workstreams (covered by unrestricted funds) may help to resolve this dilemma.
IV. HR and team management
7. Hire in anticipation of capacity gaps, not once they are already painful
Once capacity is already overstretched, running a hiring process makes things worse before they get better — especially for first core team hires.
In our case, we postponed our first formal core-team hiring process by a few months because:
- At the time, hiring didn’t feel very urgent, as our focus was on specific projects
- We weren’t certain we would have the funding to sustain a larger team long-term (but probably relied on an overly conservative forecast)
- We felt that our capacity was overstretched in the short term and that hiring might fit in better later (in reality, we were only more overstretched later)
In hindsight, we wish we had deprioritized other workstreams to strengthen the team earlier. If runway allows, we now believe it is better to hire slightly early rather than slightly late.
Generalizable learning: It is usually better to hire slightly early than slightly late. Hiring earlier reduces the risk of overload, delays, and missed opportunities that are hard to recover from once capacity is already stretched.
8. Volunteers are invaluable — but they should not replace core ownership roles
Volunteers were extremely helpful in the early development of our young organization, especially for well-defined research tasks. When crediting our first-year volunteers, we were stunned to see that there had been 14 of them (and we might have forgotten some…)! However, using volunteers for roles requiring:
- Deep organizational context
- Ongoing responsibility
- Strategic judgment
turned out to be risky and unfair, and sometimes caused disappointment — both for us and for the volunteers.
Volunteers can leave at any time, and asking for near-staff-level responsibility without compensation is rarely sustainable. We probably should have contracted a few more paid consultants in year one instead of relying too heavily on volunteers.
There may be exceptions (e.g. explicit long-term time donations, exceptional technical know-how for a time-bounded need), but they are rare.
Generalizable learning: Volunteers are best used for bounded, modular contributions. Core ownership and oversight roles require paid, accountable capacity.
Closing thoughts
Early-stage organizations should optimize explicitly for learning per unit of effort, not for speed, scale, frugality, or rigor in isolation. That’s easy to say in theory; what it looks like in practice is highly context-dependent, which is what makes it tricky to operationalize.
Many of our “mistakes” were pitfalls that — in the abstract — we knew we should avoid from the outset:
- From day one, we internally treated and communicated our first-cycle RCT as an MVP — yet in reality we overspent resources and built it too big for an MVP.
- Everyone told us we should not worry too much about fundraising in the first six months — yet when our seed funding fell a bit short, we misallocated resources because we felt the urge to extend our runway quickly.
- We always thought in terms of lean, iterative learning cycles — yet the idea of keeping our curriculum significantly shortened for more than one implementation cycle only arose once we had already fully developed version 2.0.
Our first year still compressed an incredible amount of learning into a short time. We are very grateful for the wisdom the AIM community and our advisors have passed on to us. Without them, we would have probably fallen into quite a few additional pitfalls.
We hope these reflections are useful to others building in similar spaces and help illustrate what prioritizing learning can look like in practice. We are happy to discuss or clarify any of the points above — or to have our learnings contrasted with different experiences (feel free to comment!).
