I advance global challenges where transformative impact can be achieved by integrating frontier (research) knowledge with innovative practice.
To achieve this, I build theories of change and execute program models end-to-end, in particular where strategic convening across disciplines, sectors and systems will drive innovation critical to impact.
Currently skill-building:
- Innovation incentive models (problem framing & solution design, e.g. prizes, challenges, accelerators, networks)
- Biosecurity, pandemic & health emergency preparedness
- AI safety & governance
- Storytelling & communications
I am generally super curious and love encountering surprising-but-probably-important new ideas and findings.
I'm seeking connections and career opportunities to engage in building, implementing & evaluating theories of change and program models.
I have years of deep and broad experience, but would really love to pair with others to build some new things that we can design, prototype and test-test-test, then maybe get some funds to implement.
I have experience in strategic planning, operational planning, measurement & evaluation, reporting, and fundraising.
Other EAers have told me they find it helpful to talk out their ideas & plans with me and hear me synthesize back what they've said, including unrecognized gaps or opportunities.
Thanks for surfacing this -- in the AI safety courses & organizations I've been researching, the ominous absence from agenda-setting of the vast majority of the world, by both geographic and population scale, is really frightening. So this is me giving an ineffectual +1; I have no solutions.
There's a related question, somewhat alongside this one, that I've been hovering around. I'm in Canada, and from my perspective the US-China frontier-development poles make the current intense focus on the US make sense. At the same time, I'm increasingly confused why the potential for middle-power impact seems limited to our failed leverage to shape (i.e. stop) the frantic American development speed. Surely in concert we can do more than helplessly hang on and hope to benefit more than we're screwed?
I finally found a perspective on this worded way better than I could hope to put it, here: https://substack.com/home/post/p-185388441 (How AI Safety Is Getting Middle Powers Wrong - The case for pivoting from global governance to national interests, Anton Leicht).
Interesting to me is the case for these countries to act explicitly in national self-interest, with AI safety integrated as national security, to gain salience and strategic action. I could see this picking up traction even in non-democratic contexts.
I'm curious about your thoughts on how this might resonate in Nigeria, SA, etc?
Just adding a me-as-well, Ana! To all of this.
(I even asked at a program info presentation how adults with care-giving responsibilities could be included in the described activities, which seemed to assume one had no other needs to take care of during the day, but my question went unacknowledged lol!)
Mental health IS health. Period.
Integrating mental health with physical health understanding & treatment would really transform outcomes. I started my career in mental health intervention research, and it's hard to see, so many years later, the same efforts still having to be made. The org I first worked with now has a campaign, "Mental health is health," making the case that it's debilitating nonsense that some illnesses are treated like illnesses while others are treated with judgement, neglect, and silence.
I appreciate stand-alone, MH-illness-specific action since it's so neglected, and I also wonder if whole-health integration as a goal for effective interventions would raise all boats, so to speak?
The simple answer to your question about the noteworthy salaries at core EA orgs: Symbolic Capitalism.
A truly EA approach to EA work would be for everything to be carried out at very reasonable wages in the lowest-cost labour markets in the world, across every level of an organization, because even paying outright for staff members to undertake whatever niche skill training a role might need would still never add up to anything close to the entry-level salaries at some of these US- and UK-based places.
Nice opportunity to share, thanks for posting.
I was just sifting through NATO-DIANA Challenge/Accelerator topics to identify shared opportunities under different names. I think space, defense, remote communities, extreme environments, etc. could bring much more synergy (and funding) than GCR folks recognize. I might map some of this out in the coming weeks, if only to expand the field of funding opportunities people are thinking about.
Adding some weight to others' comments: since 80k went whole-hog for AI-more-AI-nothing-but-AI, what was initially interesting & compelling AI content for me to listen to, as part of a broader repertoire of distinctly EA takes on things, has come to feel like a firehose, and there isn't interesting content I look to the podcast for now. I miss the other areas of content a lot.
Encountering these episodes, which I'd listen to in 30-45min chunks over a few days, was indescribably useful. The ones with Ajeya Cotra on worldview diversification, Rachel Glennerster on market shaping, Karen Levy on program development & evaluation, and Hugh White on Donald Trump/US change were so genuinely novel and informative to me that the perspectives they shared are now baked into how I think about things. The podcast's change since then to 1000 angles of AI risk has nowhere near this value.
Editing to add something less crabby:
Some AI risk content that would be substantially interesting and useful, and would re-engage me, would be around building out an actual understanding of AI risk. The AI discourse given any attention here has been representative of a dangerously homogeneous group for something prioritized for its existential level of risk, global impact, etc. (mostly white men, almost entirely W.E.I.R.D. countries, middle-class, narrowly technical interests, etc.). More or less a mirror of the same people causing the risk. For novel + valuable content, I want to hear perspectives that can help fill out even a bit more of the ENTIRE REST OF HUMANITY's views on this one -- countries/regions, ethnicities, life stages, genders, walks of life, socio-economic statuses, faiths, sectors, families, education experiences. I have a sense we can't possibly have a good grasp of what the major risks are if our understanding is based exclusively on what's most valued by the most narrow group of people. It would also open up so much rich space for new problem frames --> new solutions. I would avidly listen to this kind of content. The podcast team expansions would ideally reflect people with the abilities to build this out...
Thank you for this! I think the literacy angle is really powerful as it taps into knowledge-is-power through informing action without reducing its value to whether we can directly affect global power development.
I also realize my comment may be too tangential to your original post to really belong here -- I've started a new post on the topic: https://forum.effectivealtruism.org/posts/oELJZFY9LBAkpCccw/is-safe-ai-development-intractable-for-middle-powers-the