This is a special post for quick takes by Savva_Kerdemelidis. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I am a patent attorney and IP/IT legal advisor interested in how we can design a more rational system for incentivising the market to develop medical innovations that maximise health impact, using "financial innovation" or a legal/contractual approach. I wanted to get feedback from the EA community on my social enterprise project, which I hope will be of interest.

This project is based on the principles of EA and my frustration with the lack of clinical trial evidence for many therapies that could be viable treatments or cures but lack private incentives to fund the large clinical trials (Phase II+) that would convince the broader medical community of their safety and efficacy. These include new uses for off-patent/generic drugs, diets, and lifestyle interventions. I call these "unmonopolisable therapies" because it is not possible (or is uneconomic) to enforce a monopoly price using patents, which is currently the only way the private pharmaceutical industry can recover the costs of pre-clinical development and clinical trials. The result is that unmonopolisable therapies often lack supporting clinical trial evidence, regardless of their potential health impact, because grant funding is rarely available for large clinical trials (with some notable exceptions).

The following Medium article briefly summarises the charitable/social enterprise project and what we are trying to achieve: a matching and facilitation service that establishes "pay for success" contracts (e.g. prizes and social impact bonds) under which healthcare payers pay impact investors for successful clinical trials that repurpose generic drugs or other unmonopolisable therapies. We have proposed raising a US$10-50m Covid Prize Fund and/or Social Impact Bond as a pilot, because the obvious economic and health burden on payers/governments may encourage them to back this "new" approach: https://medium.com/@savvak/can-we-develop-new-affordable-medicines-without-patents-1032399cd428. The project builds on my LLM thesis, which analysed how the current patent system fails to incentivise the development of new uses for generic drugs and other "unmonopolisable therapies" (see https://ir.canterbury.ac.nz/bitstream/handle/10092/9826/thesis_fulltext.pdf?sequence=1&isAllowed=y).

As far as I am aware, only a few non-profit organisations are trying a similar "financial innovation" approach of using pay for success contracts to incentivise impact investors to reduce costs for healthcare payers by repurposing generic drugs: Cures Within Reach in the US (https://cureswithinreach.org/reflections-on-the-approach-and-challenges-of-developing-social-impact-bonds-to-fund-drug-repurposing-clinical-trials-a-conversation-with-dr-rick-thompson), Mission: Cure in the US (https://mission-cure.org/), and Findacure in the UK (https://www.findacure.org.uk/the-rare-disease-drug-repurposing-social-impact-bond/). Nobody is focussed on using pay for success contracts to repurpose generic drugs to treat Covid-19.

The main benefit of pay for success over grant funding is that it involves private industry in crowdsourcing medical innovation that currently lacks market incentives. This should, in theory, be more efficient than grants, and is at least worth a try: healthcare payers bear no risk, because the risk of failed clinical trials is taken on by impact investors. That de-risking of Phase II+ clinical trials could persuade healthcare payers to back a much larger Prize Fund or Social Impact Bond than they would otherwise be willing to. It also helps bridge the "valley of death" between pre-clinical and applied clinical research.

Our main bottleneck is developing a financial model that convinces payers to back a pay for success contract and make "outcome payments" for successful clinical trials. We are looking for healthcare economists, or anyone who has worked on financial models used to justify funding by healthcare payers. Once we can convince a healthcare payer (or perhaps a UHNWI) to put a price on successful Phase II+ clinical trials on the basis of health savings, setting up a fund of impact investors to repurpose generic drugs (and fund clinical trials for other unmonopolisable therapies) will be relatively easy.
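To make that bottleneck concrete, here is a minimal back-of-the-envelope sketch of the kind of model we need: the payer only pays if the trial succeeds, and the outcome payment must sit between what investors need (given the failure risk they absorb) and the present value of the payer's health savings. All figures below (trial cost, success probability, savings, return multiple) are hypothetical placeholders, not estimates:

```python
# Minimal pay-for-success financial model sketch (all figures hypothetical).
# Impact investors fund a Phase II+ repurposing trial up front and bear the
# failure risk; the healthcare payer makes an outcome payment only on success.

def pv_of_savings(annual_savings: float, years: int, discount_rate: float) -> float:
    """Present value of the payer's health-cost savings if the trial succeeds."""
    return sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical inputs
trial_cost = 20e6        # cost of the Phase II+ trial, funded by investors
p_success = 0.3          # assumed probability the trial succeeds
annual_savings = 50e6    # payer's assumed annual savings from the repurposed generic
years = 10               # horizon over which savings accrue
discount_rate = 0.03     # payer's discount rate
investor_multiple = 1.5  # gross expected return investors require on trial cost

savings = pv_of_savings(annual_savings, years, discount_rate)

# Investors need: p_success * outcome_payment >= trial_cost * investor_multiple
outcome_payment = trial_cost * investor_multiple / p_success

print(f"PV of payer savings if successful: ${savings / 1e6:.0f}m")
print(f"Outcome payment investors require: ${outcome_payment / 1e6:.0f}m")
print(f"Deal viable for payer: {outcome_payment < savings}")
```

With these placeholder numbers the required outcome payment (~US$100m) is well below the present value of the payer's savings (~US$427m), which is the gap a real model, built with proper health-economic estimates, would need to demonstrate.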

I have recently relaunched my NZ-based charity, the Medical Prize Charitable Trust, which has tax-free status (see crowdfundedcures.org). The intention is to incorporate a wholly-owned social enterprise, seek grant funding/investment, and scale by charging management/consulting fees for the matching and facilitation service. I am largely indifferent as to where this vehicle is incorporated, but would prefer the UK or NZ as I have more experience in those jurisdictions.

Looks interesting! I think you might have some interest in MichaelA's shortform about impact certificates. I saw you mentioned some orgs that are in this space. You may also want to check out Dr. Aidan Hollis' paper, An Efficient Reward System for Pharmaceutical Innovation, and his organization, the Health Impact Fund, which takes a pay-for-success approach.
