This is a special post for quick takes by John Salter. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
We're sadly no longer accepting sign-ups for our founders' programme. We've had an influx of demand and are now fully at capacity for the foreseeable future. Its funding situation is precarious, and I've got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs and meta funders don't like mental health interventions, so it's a challenging category to even survive in.
For now, I've got to focus on doing a good job for our existing clients. I'm sorry!
I opened your profile and website and couldn't tell what this referred to? I'm intrigued, even if it's no longer accepting sign ups!
We don't have a public page for it; people sign up via word-of-mouth and are invited via incubators. We handpick and train mental health coaches for EA founders from the people who got the best results for regular EAs. The thesis was that people founding or scaling an EA charity face a ton of mental health challenges that can be resolved quickly, helping both them and their charity succeed.
I figured getting the results would be the hard part, or convincing founders we could, but no. Within ~2 years, over half of AIM-incubated charities have had one or more founders successfully resolve a mental health problem with us. ~90% of people who do the first session complete the programme, and ~50% decide to keep going after it ends to work on their next most pressing problem. This is waaaaaaaay better than our stats for regular EAs and regular people - founders underinvest in themselves so hard, and are so focussed on making their organisation succeed, that tons of low-hanging fruit remain.
The problem is getting someone to fund it long-term:
- Early stage founders are broke, irrationally self-sacrificial, and time-poor
- Mental health funders, for good reason, care mostly just for LMICs
- Meta funders, for good reason, don't want to choose for others what service would work best for them / their incubatees.
So, while finding seed funding to demonstrate proof of concept was really easy, getting something durable isn't. Donors think incubators should fund it. Incubators think donors should; after all, it's an ecosystem-wide service.
It only costs ~$80k a year to run. I'll figure out a way to do it; the question is whether I can do that in time to avoid losing talent I can't replace. I have one coach with a ~90% success rate, who only costs ~$33k a year, who is considering quitting because she doesn't believe the job will exist in 2 years. The founders she supports collectively have a budget in the tens of millions, and several are widely cited as examples of EA's most successful charities ever. We can't replace her: she's dramatically better than anyone else we employ, miles better than me, and neither she nor I understand how she does it.
I don't really want a grant. I want some mechanism whereby we can be paid by results or just compete in an open-market that isn't so distorted by the expectation that donors will cover everything.
Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)
I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically a seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your way to success, or you're denied funding without good reason), but it's at least ideally fairly results-based.
(I'd be inclined to agree that ideally the founders/participants themselves would pay, but if you have evidence that they are "irrationally self-sacrificial" and will continue to underpay for the service relative to what they'd endorse themselves with hindsight etc, then that seems like a decent case for grant funding.)
Are the services available to founders (or other EAs that might be interested) for a fee?
It seems that part of the reason communism is so widely discredited is the clear contrast between neighboring countries that pursued more free-market policies. This makes me wonder— practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group?
Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps the influence EA's approach could have by definitively proving its impact would be greater than trying to maximise the good it does directly.
This is a really interesting idea and would obviously need a relatively uncorrupt country that is on board with the project.
To some extent this kind of thing already happens, with aid organisations focusing their funding on countries which use it well. Rwanda is an interesting example over the last 20 years: it has attracted huge foreign funding after its dictator basically fixed low-level corruption and organised the country surprisingly well. This has led to disproportionate improvements in healthcare and education compared with surrounding countries, although economically the jury is still out.
The big problem in my eyes, then, is how you know it's your interventions making the difference, rather than just really good governance - very hard to tease apart.
Superficially, it sounds similar to the idea of charter cities. The idea does seem (at face value) to have some merit, but I suspect that the execution of the idea is where lots of problems occur.
So, practicality aside, it seems like a massive amount of effort/investment/funding would allow a small country to progress rapidly toward less suffering and better lives.
My general impression is that "we don't have a randomized control trial to prove the efficacy of this intervention" isn't the most common reason why people don't get helped. Maybe some combination of lack of resources, politics & entrenched interests, and trade-offs are the big ones? I don't know, but I'm sure some folks around here have research papers and textbooks about it.
It feels unlikely either that this would create an actually valid natural experiment (as you acknowledge, it's not a huge proportion of aid, and there are many other factors that affect a country) or that it would persuade people to do aid differently.
Particularly since EA's GHD programmes tend to already be focused on interventions that are well-evidenced at a granular level (malaria treatment and vitamin supplementation), targeted at specific countries with those problems (not all developing countries have malaria), and delivered by organisations that are not necessarily themselves EA, while a lot of non-EA funders are also trying to solve those problems in similar or identical ways.
It also feels like it would be a poor decision for, say, a Charity Entrepreneurship founder who identified a problem she could make a major difference on, based on her extensive knowledge of poverty in India, to instead try the programme in a quite different Guinean context she doesn't have the same background understanding of, simply because other EAs happened to have diverted funding to Guinea for signalling purposes.
Y-Combinator wants to fund Mechanistic Interpretability startups
"Understanding model behavior is very challenging, but we believe that in contexts where trust is paramount it is essential for an AI model to be interpretable. Its responses need to be explainable.
For society to reap the full benefits of AI, more work needs to be done on explainable AI. We are interested in funding people building new interpretable models or tools to explain the output of existing models."
Link
https://www.ycombinator.com/rfs (Scroll to 12)
What they look for in startup founders
https://www.ycombinator.com/library/64-what-makes-great-founders-stand-out
What AI tools have made the biggest difference to your or your organisation's productivity?
This was from 2018. Does anyone have up-to-date estimates of the value per co-founder per charity?
It's often easier to get responses from the most senior people in a field.
1. Most people are too intimidated to get in touch with them
2. They're senior for a reason - they tend to be way more productive and opportunity seeking
3. They have VAs, secretaries, and other people to bring serious requests to their attention.
I work in global mental health, and am looking for charities to refer clients to me. The two best-connected people in my field (according to GPT-4) are Dr Vikram Patel and Dr Shekhar Saxena. I sent out ~50 identical cold emails to people I thought could connect me to relevant charities / hospitals etc. Patel and Saxena were the only two people to reply!
I've also seen this argued by Tim Ferriss and other highly productive people, but it resonated so poorly with my prior beliefs that I didn't update sufficiently. The implications here are huge - it could be way easier to gain access to influential people than the average EA perceives, and influence is power-law distributed!
I've strongly had this experience. I've sent cold emails to 5 NYT bestselling authors, and 3 replied. I get good reply rates with C-levels and the poorest rates at lower levels.
But it strongly depends on your story or organisation, in my experience. Your org has a strong story, so it warrants a reply. But I did a lot of marketing and some PR for dime-a-dozen companies, and if you lack a strong story, you can expect reply rates from senior people to be close to zero.
Yes, though use this power wisely. I think it's good to imagine how much you'd pay to talk to said person and scale your effort as the number gets bigger.
If I waste this person's time, they may become less willing to be open and hence I'll have damaged the commons.
I'm hiring a full-time remote administrator from an LMIC to take repetitive tasks off my core team's hands. Got any tips on how best to hire / manage them?
ChatGPT deep-research users: What type of stuff does it perform well on? How good is it overall?
Bit the bullet and paid them $200. So far, it's astonishingly good. If you're in the UK/EU, you can get a refund, no questions asked, within 14 days, so if you're on the fence I'd definitely suggest giving it a go.
Will do now