While reading The Economist yesterday, an article in their excellent "The Africa gap" series felt strangely familiar: I'd read these ideas last year in @Karthik Tadepalli's fantastic series on economic growth in LMICs. I appreciated this section:
Instead of many large firms with salaried staff, Africa has lots of micro-enterprises and informal workers. More than 80% of employment in Africa is informal, according to the International Labour Organisation. Roughly half of informal workers in cities are self-employed, doing everything from crafting Instagram advertising to fixing roofs. Many Africans mix formal work with informal hustles, which are often poorly paid. Most would love a steady job. Mr Tadepalli suggests that many of the “self-employed” may just be the unemployed “in disguise”
I shouldn't have been surprised to see Karthik's quotes and research directly referred to in the article itself! Nice work Karthik and great to see your work get recognised in the mainstream as well as on the neglected global development corners of the EA forum ;).
Disclaimer: I think the instant USAID cuts are very harmful; they directly affect our organisation's wonderful nurses and our patients. I'm not endorsing the cuts. I just think exaggerating numbers for dramatic effect (or out of ignorance) when communicating is unhelpful and doesn't build trust in institutions like the WHO.
Sometimes the lack of understanding, or care in calculations, from leading public health bodies befuddles me.
"The head of the United Nations' programme for tackling HIV/AIDS told the BBC the cuts would have dire impacts across the globe.
"AIDS related deaths in the next five years will increase by 6.3 million" if funding is not restored, UNAIDS executive director Winnie Byanyima said."
https://www.bbc.com/news/articles/cdd9p8g405no
There just isn't a planet on which AIDS-related deaths would increase that much. In 2023, an estimated 630,000 people died of AIDS-related causes. The WHO estimates about 21 million Africans are on HIV treatment. Maybe 5 million of these, in South Africa, aren't funded by USAID. Other countries like Kenya and Botswana also contribute to their own HIV treatment.
So out of those 16ish million on USAID-funded treatment, ...
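The rough arithmetic in the paragraph above can be sketched in a few lines (all figures are the estimates quoted there, not independent data; the 5 million South Africa figure is the author's rough guess):

```python
# Back-of-envelope sanity check, using only the figures quoted above.
total_on_treatment = 21_000_000   # WHO estimate: Africans on HIV treatment
non_usaid_funded = 5_000_000      # rough figure for South Africa's self-funded cohort
at_risk = total_on_treatment - non_usaid_funded  # ~16 million on USAID-funded treatment

baseline_annual_deaths = 630_000      # estimated AIDS-related deaths in 2023
claimed_excess_deaths = 6_300_000     # the UNAIDS figure for the next five years
years = 5

# The claim implies 1.26 million *extra* deaths per year -- i.e. an amount
# equal to double the entire current annual death toll, added on top of it,
# every year for five years.
implied_extra_per_year = claimed_excess_deaths / years
ratio_to_baseline = implied_extra_per_year / baseline_annual_deaths

print(at_risk)                 # 16 million at risk
print(implied_extra_per_year)  # 1.26 million extra deaths per year
print(ratio_to_baseline)       # 2x the current annual toll, as an *increase*
```

This doesn't settle the mortality question on its own, but it shows the scale of the claim against the quoted baseline.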
"AIDS related deaths in the next five years will increase by 6.3 million" if funding is not restored, UNAIDS executive director Winnie Byanyima said.
This is a quote from a BBC news article, mainly about US political and legal developments. We don't know what the actual statement from the ED said, but I don't think there's enough here to infer fault on her part.
For all we know, the original quote could have been something like predicting that deaths will increase by 6.3 million if we can't get this work funded -- which sounds like a reasonable position to take. Space considerations being what they are, I could easily see a somewhat more nuanced quote being turned into something that sounded unaware of counterfactual considerations.
There's also an inherent limit to how much fidelity can be communicated through a one-sentence channel to a general audience. We can communicate somewhat more in a single sentence here on the Forum, but the ability to make assumptions about what the reader knows helps. For example, in the specific context here, I'd be concerned that many generalist readers would implicitly adjust for other funders picking up some of the slack, which could lead to dou...
Thanks Jason - those are really good points. Maybe this wasn't such a useful thing to bring up at this point in time, and it's good that she is campaigning for funding to be restored. I do think the large exaggeration, though, makes this a bit more than a nitpick.
I've been looking for her saying the actual quote, and have struggled to find it. A lot of news agencies have used the same quote I used above, with similar context. Ms Byanyima even reposted the exact quote above on her Twitter...
"AIDS-related deaths in the next 5 years will increase by 6.3 million"
I also didn't explain this properly, but even on the most generous reading (something like "after 5 years, deaths will increase by 6.3 million if we get zero funding for HIV medication"), the number is still wildly exaggerated. Besides the obvious point that many people would self-fund the medications if there were zero funding available (I would guess 30%-60%), and that even short periods of self-funded treatment (a few months) would greatly increase their lifespan, the 6.3 million is still incorrect by at least a factor of 2.
Untreated HIV in adults in the pre-HAART era in Africa had something like an 80% surv...
Has anyone talked with/lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes.
"On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams."
It seems a disaster that the Gates Foundation is funding and promoting the rapid scale-up of factory farming in Africa, and reversing this seems potentially tractable to me. Could individuals, Gates insiders, or the big animal rights orgs take this up?
I was encouraged to read this Economist article, "The demise of foreign aid offers an opportunity: donors should focus on what works. Much aid currently does not", which I would say has at least some EA-adjacent ideas.
They mention health spending, which, as all four of GiveWell's top charities demonstrate, can often be more cost-effective than other options, plus pandemic prevention.
"What should they do? One answer is to stop spending on programmes that do not work, and to focus on the things that might, such as health spending. Even here, however, governments ...
Can we call it the Meat EatING problem?
The currently labelled "meat eater problem" has been referred to a number of times during debate week. The Forum wiki on the "meat eater" problem summarises it like this:
“Saving human lives, and making humans more prosperous, seem to be obviously good in terms of direct effects. However, humans consume animal products, and these animal products may cause considerable animal suffering. Therefore, improving human lives may lead to negative effects that outweigh the direct positive effects.”
I think this is an important issue to discuss, although I think we should be extremely sensitive and cautious while discussing it.
On this note I think we should re-label this the meat eating problem, as I think there are big upsides with minimal downside.
It's true that meat eating is closer to what we actually care about, but it's worth singling out causal pathways from saving lives and increasing incomes/wealth, as potential backfire effects. "Meat eating problem" seems likely to be understood too generally as the problem of animal consumption, without explanation. I'd prefer a more unique expression to isolate the specific causal pathways.
Some other ideas:
(Eggs and other animal products besides meat matter, too.)
The value of redirecting non-EA funding to EA orgs might still be under-appreciated. While we (rightly) obsess over where EA funding should be going, shifting money from one EA cause to another "better" one might often only make an incremental difference, while moving money from a non-EA pool to fund cost-effective interventions might make an order-of-magnitude difference.
There's nothing new to see here. High-impact foundations are being cultivated to shift donor funding to effective causes, the Center for Effective Aid Policy was set up (then shut down) to shift government money to more effective causes, and many great EAs work in public service jobs partly to redirect money. The Lead Exposure Action Fund, spearheaded by OpenPhil, is hopefully redirecting millions to a fantastic cause as we speak.
I would love to see an analysis (I might have missed it) which estimates the "cost-effectiveness" of redirecting a dollar into a 10x or 100x more cost-effective intervention. How much money/time would it be worth spending to redirect money this way? Also, I'd like to get my head around how much the working "cost-effectiveness" of an org might improve if its budget shifted from 10%...
The CE of redirecting money is simply (dollars raised per dollar spent) × (difference in CE between your use of the money and the counterfactual use). So if GD raises $10 from climate mitigation for every $1 it spends, and that money would otherwise have been neutral, that's a cost-effectiveness of 10x in GiveWell units.
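That formula can be made concrete in a few lines (a minimal sketch; the numbers are the illustrative ones from the comment above, not real figures):

```python
def leverage_cost_effectiveness(dollars_raised_per_dollar_spent,
                                ce_of_use, ce_of_counterfactual_use):
    """Cost-effectiveness of redirecting money, in the same units as
    ce_of_use (e.g. 'GiveWell multiples')."""
    return dollars_raised_per_dollar_spent * (ce_of_use - ce_of_counterfactual_use)

# Illustrative example: an org raises $10 for every $1 spent, from a source
# whose counterfactual use was neutral (0x), and spends it at 1x GiveWell
# cost-effectiveness.
print(leverage_cost_effectiveness(10, 1.0, 0.0))  # -> 10.0, i.e. 10x in GiveWell units

# If the counterfactual use was itself half as good, the leverage halves:
print(leverage_cost_effectiveness(10, 1.0, 0.5))  # -> 5.0
```

The second call shows why the counterfactual term matters: leverage over already-decent spending is worth much less than leverage over neutral spending.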
There's nothing complicated about estimating the value of leverage. The problem is actually doing leverage. Everyone is trying to leverage everyone else. When there is money to be had, there are a bunch of organizations trying to influence how it is spent. Melinda French Gates is likely deluged with organizations trying to pitch her for money. The CEAP shutdown post you mentioned puts it perfectly:
The core thesis of our charity fell prey to the 1% fallacy. Within any country, much of the development budget is fixed and difficult to move. For example, most countries will have made binding commitments spanning several years to fund various projects and institutions. Another large chunk is going to be spent on political priorities (funding Ukraine, taking in refugees, etc.) which is also difficult for an outsider to influence.
What is left is fought over by hundreds, i
I feel like 5% of EA-directed funding is a high bar to clear to agree with the statement "AI welfare should be an EA priority". I would have pitched for maybe 1-2% as the "priority" bar, which would still be 10 million dollars a year even under quite conservative assumptions about what counts as unrestricted EA funding.
This would mean that across all domains (x-risk, animal welfare, GHD) a theoretical maximum of 20 causes, more realistically maybe 5-15 causes (assuming some causes warrant 10-30% of funding), would be considered EA priorities. 80,000 Hours doesn't have AI welfare in their top 8 causes, but it is in their top 16, so I doubt it would clear the 5% bar, even though they list it under their "Similarly pressing but less developed areas", which feels priority-ish to me (perhaps they could share their perspective?).
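The arithmetic behind the "maximum number of priorities" point is mechanical (the $1bn total is just what "$10m/yr at a 1% bar" implies, stated here as an assumption):

```python
# If "priority" means receiving at least some minimum fraction of total
# funding, the bar mechanically caps how many priorities can coexist.
def max_priorities(bar_fraction):
    return int(1 / bar_fraction)

assumed_unrestricted_ea_funding = 1_000_000_000  # implied by "$10m/yr at 1%"

print(max_priorities(0.05))  # 20 causes max at a 5% bar
print(max_priorities(0.01))  # 100 causes max at a 1% bar
print(0.01 * assumed_unrestricted_ea_funding)  # $10m per year at the 1% bar
```

In practice the realistic count is far below the cap, since a handful of causes absorb 10-30% each.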
It could also depend how broadly we characterise causes. Is "Global Health and development" one cause, or are Mosquito nets, deworming and cash transfers all their own causes? I would suspect the latter.
Many people could therefore consider AI welfare an important cause area in their eyes but disagree with the debate statement because t...
I'm a little confused as to why we consider the leaders of AI companies (Altman, Hassabis, Amodei etc.) to be "thought leaders" in the field of AI safety in particular. Their job descriptions are to grow the company and increase shareholder value, so their public personas and statements have to reflect that. Surely they are far too compromised for their opinions to be taken too seriously; they couldn't make strong statements against AI growth and development even if they wanted to, because of their job and position.
The recent post "Sam Altman's chip ambitions undercut OpenAI's safety strategy" seems correct and important, while also almost absurdly obvious: the guy is trying to grow his company, and they need more and better chips. We don't seriously listen to big tobacco CEOs about the dangers of smoking, or oil CEOs about the dangers of climate change, or factory farming CEOs about animal suffering, so why do we seem to take the opinions of AI bosses about safety in even moderate good faith? The past is often the best predictor of the future, and the past here says that CEOs will grow their companies while trying however possible to maintain public goodwill so as to minimise th...
Who is considering Altman and Hassabis thought leaders in AI safety? I wouldn't even consider Altman a thought leader in AI - his extraordinary skill seems mostly social and organizational. There's maybe an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even updated us towards its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
I think it's fairest to compare HLI's charity analysis with other charity evaluators like GiveWell, ACE, and Giving Green.
Giving Green has been criticised regularly and robustly (just look up any of their posts). GiveWell publishes its analysis and engages with criticism; HLI themselves have actually criticised them pretty robustly! I don't know about ACE because I don't stay up to date on animals, but I bet it's similar there.
The dynamics are quite different in, for example, charitable foundations, which don't need to convince anyone to donate differently, or charities that deliver a service, which only need to convince their funders to continue donating.
HLI fucked up their analysis, but because it was public we found out about it. Most EAs are too fearful to expose their work to scrutiny. Compare them to others who work on mental health within EA...
Most coaches and therapists in EA don't do any rigorous testing of whether what they are doing actually works. They don't even allow you to leave public reviews for them. I think we're the only organisation to even have a TrustPilot!!!
I don't think the problem is that HLI got too much hate for fucking up, it's that everyone else gets too little hate for being opaque.
Now HLI have been dragged through the mud, you can bet your ass they won't be making the same mistakes again. So long as they keep being transparent, they'll keep learning and growing as an org. Others will keep making the same mistakes indefinitely, only we'll never know about it and will continue blindly trusting them.
Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair.
One general problem with online discourse is that even if each individual makes a fair critique, the net effect of a lot of people doing this can be disproportionate, since there's a coordination problem. That said, a few things make me think the level of criticism leveled at HLI was reasonable, namely:
Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny than that development orgs got less.
I agree with you that GHD organizations tend to be scrutinized more closely, in large part because there is more data to scrutinize. But there is also some logic to balancing scrutiny levels within cause areas. When HLI solicits donations via Forum post, it seems reasonable to assume that donations they receive more likely come out of GiveWell'...
When HLI solicits donations via Forum post, it seems reasonable to assume that donations they receive more likely come out of GiveWell's coffers than MIRI's. This seems like an argument for holding HLI to the GiveWell standard of scrutiny, rather than the MIRI standard (at least in this case).
I am concerned that rationale would unduly entrench established players and stifle innovation. Young orgs on a shoestring budget aren't going to be able to withstand 2023 GiveWell-level scrutiny . . . and neither could GiveWell at the young-org stage of development.
In this post, HLI explicitly compares its evaluation of StrongMinds to GiveWell's evaluation of AMF, and says:
"At one end, AMF is 1.3x better than StrongMinds. At the other, StrongMinds is 12x better than AMF. Ultimately, AMF is less cost-effective than StrongMinds under almost all assumptions.
Our general recommendation to donors is StrongMinds."
This seems like an argument for scrutinizing HLI's evaluation of StrongMinds just as closely as we'd scrutinize GiveWell's evaluation of AMF (i.e., closely). I apologize for the trite analogy, but: if every year Bob's blueberry pie wins the prize for best pie at the state fair, and this year Jim, a newcomer, is claiming that his blueberry pie is better than Bob's, this isn't an argument for employing a more lax standard of judging for Jim's pie. Nor do I see how concluding that Jim's pie isn't the best pie this year—but here's a lot of feedback on how Jim can improve his pie for next year—undermines Jim's ability to win pie competitions going forward.
This isn't to say that we should expect the claims in HLI's evaluation to be backed by the same level of evidence as GiveWell's, but we should be able to take a hard look at HLI's report and determine that the strong claims made on its basis are (somewhat) justified.
Applying my global health knowledge to the animal welfare realm, I'm requesting 1,000,000 dollars to launch this deep net positive (Shr)Impactful charity. I'll admit the funding opportunity is pretty marginal...
Thanks @Toby Tremlett🔹 for bringing this to life. Even though she doesn't look so happy I can assure you this intervention nets a 30x welfare range improvement for this shrimp, so she's now basically a human.
I'm intrigued where people stand on the threshold at which farmed animal lives might become net positive. I'm going to share a few scenarios I'm very unsure about, and I'd love to hear thoughts or be pointed towards research on this.
Animals kept in homesteads in rural Uganda, where I live. Often they stay inside with the family at night, then are let out during the day to roam free around the farm or community. The animals seem pretty darn happy most of the time, for what it's worth, playing and gallivanting around. Downsides here include poor veterinary care, so sometimes parasites and sickness are pretty bad, and often pretty rough transport and slaughter methods (my intuition: net positive).
Grass-fed sheep in New Zealand, my birth country. They get good medical care, are well fed on grass, and usually have large roaming areas (intuition: net positive).
Grass-fed dairy cows in New Zealand. They roam fairly freely and have very good vet care, but have their calves taken away at birth, have constantly uncomfortably swollen udders, and are milked at least twice daily (intuition: very unsure).
Free range pigs. Similar to the above except often space is smaller but they do get little hous
It's really hard to judge whether a life is net positive. I'm not even sure when my own life is net positive—sometimes if I'm going through a difficult moment, as a mental exercise I ask myself, "if the rest of my life felt exactly like this, would I want to keep living?" And it's genuinely pretty hard to tell. Sometimes it's obvious, like right at this moment my life is definitely net positive, but when I'm feeling bad, it's hard to say where the threshold is. If I can't even identify the threshold for myself, I doubt I can identify it in farm animals.
If I had to guess, I'd say the threshold is something like
it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering.
To this point, I think the most important things are
Wanted to give a shoutout to Ajeya Cotra (from OpenPhil) for her great work explaining AI on a recent Freakonomics podcast series. Her explanations of her work on the development of AI, and her easy-to-understand predictions of how AI might progress from here, were great; she was my favourite expert on the series.
People have been looking for more high quality public communicators to get EA/AI safety stuff out there, perhaps Ajeya could be a candidate if she's keen?
Ajeya is already doing that with Kelsey Piper over at their blog Planned Obsolescence :)
Although I'm delighted, as a mostly-neartermist, that the front page (for the first time in my experience) is devoid of AI content, I really would like to hear the job experiences and journeys of a few AI safety/policy workers for this jobs week. The first 10-ish wonderful people who shared are almost all neartermist-focused, which probably doesn't represent the full experience of the community.
I'm genuinely interested to understand how your AI safety job works and how you wonderful people motivate yourselves day to day, when seeing clear progress and wins must be hard a lot of the time. I find it hard enough some days working in global health!
Or maybe your work is so important, neglected, and urgent that you can't spare a couple of hours to write a post ;).
The net net effect could be positive or negative. Let me untangle it for you.
In favour of net positivity is the net positive human lives saved through net negative, negative net effects on mosquitos and malaria causing a net positive effect on humans. The insecticides positively embed these bed net, net positive effects.
However there’s a catch in favour of net negativity - as nets are positively dragged to net fish, the netted fish suffer net negative net negativity.
While caught between the net positive net positivity to humans and net negative net negativ...
I've been on the Forum for maybe 9 months now, and I've been intrigued by the idea of "hits-based" giving, explained well in this 2016 article by Holden Karnofsky: the idea that "we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support."
1) Is there a database with a list of donations considered "hits-based" by OpenPhil? If not, that would be a helpful and transparent way of tracking success on these. I had a quick look through their donations, but it's not c...