Generalist with 15+ years of experience in people leadership, project management, and impact consulting. My Ikigai is optimising experiences, processes, and systems for people, with people.
Fun facts:
*Successfully transitioned into AI safety after a year-long career sabbatical
*Despite my 10 years as a school principal, my children (7 & 10 yo) are unschooled - the world is their classroom!
Insights on effective leadership hacks
+ Effective meetings (e.g. Roundspeak)
+ Trust & accountability in team building & org culture (e.g. Care & Growth model)
+ Effective feedback, conflict prevention, conflict resolution (Non-violent communication)
+ Consent-based decision making (Convergent facilitation)
+ Dynamic, values-based org governance (Sociocracy)
+ MEL toolkit (Theory of change, mechanisms of change, impact evaluation framework, data collection)
Red-teaming/troubleshooting...
+ Personal/organisation's theory of change
+ People leadership-related pain points/uncertainties
+ Your high-impact career shift plan/approach
Tips on alternative education (for children!)
+ Home-ed, Montessori, unschooling, self-directed learning, agile learning, etc
Local surf spots ;)
+ Santa Barbara, Beirut, Casablanca/Rabat, Dubai/Abu Dhabi
Hi Tania! Your ops/strategy background is relevant for impact organizations - many of the challenges you're describing (sectoral gaps, positioning transferable skills) are covered in our recent post "Challenges from Career Transitions".
Your leadership and user-centered design experience could be valuable in impact orgs, though the transition often requires deep networking and strategic upskilling alongside applications - all with a 'winning or learning' mindset.
I went through a similar journey myself, which I wrote about in "To the Bat Mobile!! My Mid-Career Transition into AI Safety" - found that connecting authentically with people already doing the work was more valuable than cause-area expertise initially - although developing 'context' later in my journey proved essential.
Operations/strategy roles do seem open to career switchers in my experience. As a rough heuristic, worth noting that smaller orgs may expect more cause-specific familiarity since ops roles wear multiple hats, while larger orgs tend to have more specialized roles requiring less cause-specific knowledge (although not a hard rule!). Also, "ops" varies widely between organizations - always check the actual role description to see if it's focused on finance, HR, compliance, etc or combines everything.
All the best,
Moneer (Career Advisor at Successif)
Thanks for your question! Your background in math, software development, and strategic thinking - as well as familiarity with history and politics - may actually be quite relevant for AI governance strategy work, especially in technical policy roles that bridge the gap between research and implementation.
Without knowing the specifics of your career profile (e.g. years of experience, location), here are some very general direct roles for reducing AI risk:
Indirect but valuable:
Resources to explore:
Next steps:
All the best,
Moneer (Career Advisor at Successif)
Hi Christoph, thank you for such an honest reflection. Your self-awareness about what energizes versus drains you is actually a significant strength that will serve you well in finding the right path. I suspect many others will resonate with your question too.
I think you're right - many high-impact activities do require significant social interaction and can bias toward extroverted working styles.
However, the landscape is more nuanced. Some organizations actively consider inclusion across 'diversity dimensions', designing cultures that attempt to meet diverse needs. At Successif, for example, we're working to identify opportunities and mitigate barriers across gender, race, neurodiversity, and nationality - both for our team and our advisees alike.
While general EA culture may seem extroverted, remember it's not homogeneous - there are sub-cultures and streams with different working styles.
Areas where introverted engineers may thrive:
A key suggestion here would be to treat organizational culture as a selection criterion. Use the application process (e.g. job requirements, work tests, interviews) to assess remote work policies, team structures, and communication norms - to actively weed out options that wouldn't be sustainable for you.
Questions to consider:
On earning-to-give: Absolutely legitimate and impactful with your background. Consider starting small to explore direct applications - open-source contributions to EA projects, small freelance work, or informational interviews.
Finally, rather than seeing this as an either/or choice, consider starting small: What are some cheap tests/low hanging actions you could take on this week/month?
All the best,
Moneer (Career Advisor at Successif)
Here is an interactive example of what a framework like SyDFAIS can lead to.
Imagine we had something similar for AI Safety!!
Some have asked what the impact of applying SyDFAIS could be.
I've given this ~2 hours of thought, aligning my estimates with current literature on Bridgespan's 5 observable characteristics of field-building across 35+ fields.
Below is an impact assessment table, presenting quantified estimates across 5 key outcome areas, along with underlying assumptions and uncertainties.
https://docs.google.com/document/d/1gm0LJ2nDifUfnQn0T7ZqWbo4RNf42MwEjd7Bc8jq0Gg/edit?tab=t.0
I may develop this into a separate post...
Thanks Matt.
Based on limited desktop research and two 1:1s with people from BlueDot Impact and GovAI, the existing analyses appear fragmented, not conducted as part of a holistic, systems-based approach. (I could be wrong.)
Examples: What Should AI be Trying to Achieve identifies possible research directions based on interviews with AI safety experts; A Brief Overview of AI Safety Alignment Orgs identifies actor groupings and specific focus areas; the AI Safety Landscape map provides a visual of actor groupings and functions.
Perhaps an improved version of my research would include a complete literature review of such findings, not only to qualify my claim (and that of others I've spoken to) that we lack a holistic approach for both understanding and building the field, but also to use existing efforts as starting points (which I hint at in Application Step 1).
As for Open Phil, your comment spurred me to ask them this question in their most recent grant announcement post!
Happy for you to signpost me to other orgs/specific individuals. I'm keen to turn my research into action.
Has Open Phil (or others) conducted a comprehensive analysis for both understanding and building the AI safety field?
If yes, could you share some leads to add to my research?
If not, would Open Phil consider funding such work? (either under the above or other funds)
Here is a recent example: Introducing SyDFAIS: A Systemic Design Framework for AI Safety Field-Building
I'd rank this article amongst the top 10% of the 20+ Theories of Change that I've co-developed/evaluated as an impact consultant.
Key Strengths:
-coherent change logic [output-->outcomes (short/med)-->impact]
-depth of thought on:
*assumptions (with evidence, cited literature, reasoning transparency)
*anticipated failure modes (including mitigation strategies and risk level)
*key uncertainties (on program, organisational, and field level)
Potential Considerations:
-Think about breaking down ERA's theory of change by stakeholder group, to expand your impact net. Stakeholder group examples: (Fellows) (Mentors) (ERA Staff) (Partners: Uni of Cambridge? Volunteers?). Then ask what the potential outcomes of ERA's activities are for each group over time. The current ToC seems to focus mainly on Fellow-related outcomes. What about other groups? Although many Fellow-related outcomes may apply to other stakeholder groups, there may be other outcomes particular to a stakeholder group that are not yet fully understood/measured/improved upon. Speculative examples:
Outcomes:
(ERA Staff) --> Build program management and operations expertise; create sustainable/effective talent development models for the AIS field
(Mentors) --> Develop teaching and mentorship skills; gain recognition as field leaders
(Uni of Cam) --> Access a talent pipeline of future researchers/students; strengthen position as a leader in an emerging field
-Think about 'mechanisms' of change, which seek to identify what it is about your activities that causes your intended outcomes. In other words, which outcomes would not occur if your activities did not have qualities a, b, c, etc.? A fellowship doesn't just automatically lead to intended outcomes, right? So what is it about the location, timing, duration, content, messaging, format, application process, mentor matching process, alumni relations process, etc. that makes it more likely to produce intended outcomes? I've observed that organisations are better positioned to start thinking more intentionally about mechanisms once they've already developed a robust ToC and have some outcome evidence to support existing assumptions - which I think is where ERA are.
-For the benefit of the wider community (e.g. new fellowships in the making), it could be helpful to see your impact evaluation framework (bits and pieces of which can already be inferred from your post above), maybe even sharing the specific indicators and tools used to gather evidence across outcomes.
-Your 'Key Uncertainties' section proposes such critical questions! I don't see comments from the wider community. I'm unsure if you've received individual emails/anonymous feedback. Perhaps a shared document would spark collaboration, and offer the community a live glimpse at how you (or others) are attempting to answer these questions?
Thank you for sharing, Christen. Trying to break into a space that seems to require the very experience you're seeking is tricky. Actually, your comment prompted me to realize some biases I'm likely operating under, which I've now included in the section 'What this List is Not'.
I'd think about "high-impact experience" in a CV differently. It doesn't have to come in the form of formal job titles. What hiring managers could find equally valuable is evidence that you understand the space and can contribute meaningfully, which can be demonstrated in several ways.
In my own CV, I didn't have traditional AI governance experience either (I had "school principal" and "consultant"). After several months of 'journeying', though, I included:
(To see an example, visit my LinkedIn --> Experience --> Career transition)
This approach essentially "substituted" formal experience with what I call "acquiring context".
Regarding those 170 actions - let me break that down because it's less daunting than it sounds! Over one year, that's roughly 3 actions per week, or one every couple of days. I acknowledge that available time greatly affects this, especially while holding down a full-time job.
I was never good at keeping a diary, but I found tracking actions helpful as a project management tool (I used RAG colors - red for rejected/door closed, amber for pending, green for accepted/success), with hyperlinks to easily retrieve previous applications or contacts, and finally as a way to motivate myself with a small sense of progress (it's a game mechanic that works for me).
As for a more direct path - while there isn't really a "streamlined" route, there are definitely "conveyor belts" or "nodes" with high traffic under those three prongs: networking, small projects/pro bono consulting, and upskilling:
EAGs - I attended 2 during the year and found them (extremely) helpful for developing context and walking away with warm networks (I prefer the term 'relationships'), volunteering opportunities, etc. Not sure where you are in the world or what your capacity is, but the next couple of months are EAG season.
Upskilling courses like BlueDot Impact (apart from the content) provide positive market signals and connections to networks. The capstone projects were an opportunity for me to work on something directly in the space (e.g. milestone #10, #12, #14).
Other conveyor belts (alternatively: on-ramps?) include direct career transition support, such as 80k Hours (one-off career advising), Successif (long-term, relationship-focused career advising specifically in AI risk mitigation for professionals with 5+ years of experience), and the Impact Accelerator Program (6-week structured program within a cohort). AI Safety Collab (8-week course) and fellowships (e.g. GovAI; Arcadia Impact) are likely conveyor belts too, although I did not do these myself.
The whole point of my post is to show that you won't find a streamlined path, but to invite you to create your own path and take full advantage of your circle of control. And to make it less of a cliché: whatever time you're spending on your career transition, consider redistributing it to ~40% applications, ~20% deep networking, ~20% upskilling, and ~20% small projects/volunteering/pro bono consulting (i.e. a portfolio approach!). Then again, I'm increasingly conscious of my stated biases.