Pronouns: she/her or they/them.
I got interested in EA back before it was called EA, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where EA fits into my life and what it means to me.
I think you behaved inappropriately, as I and others explained in the comments on that post about the dubious "fraud" accusation. I completely understand why Sinergia said they don’t want to engage with your criticism anymore.
Upvotes and downvotes are not meaningless numbers in this context; they are a signal of EA Forum users’ opinions on whether you behaved appropriately and whether your claim that Sinergia committed fraud was true or misleading. You can see this in the comments on that post as well. So far, the agreement seems unanimous that you behaved inappropriately and that your claim was misleading or false.
I’m not sure if Vetted Causes is a salvageable project at this point. Its reputation is badly damaged. It might be best to bring the project to an end and move on to something else.
Speaking for myself, I will never trust any evaluation that Vetted Causes ever publishes about any charity, and I would feel an obligation to warn people in this community that Vetted Causes is an untrustworthy and unreliable source for charity evaluations.
I’m guessing this is probably a response to the post that unfairly accused a charity of fraud? (The post I’m thinking of currently has -60 karma, 0 agree votes, 6 disagree votes, and 4 top-level comments that are all critical.)
Some criticism might be friendly and constructive enough that giving the organization a chance to write a reply before publishing is not that important. Or, if the organization is large, powerful, and well-funded, like Open Philanthropy, and especially if your criticisms are of a more general or philosophical kind, it might not be important to send them a copy before you publish. This depends partly on how influential you are in EA and on how harsh your criticisms are.
Accusing a small charity of fraud is definitely something you should run by the charity beforehand. In that case, though, the charity was already so frustrated with the critic’s poor-quality criticism that they had publicly stated (before the fraud accusation) they didn’t want to engage with it anymore.
AGI by 2028 is more likely than not
In a post here, I gave a number of reasons I think AGI by 2030 is extremely unlikely.
Here’s the link to the original post: https://epochai.substack.com/p/the-case-for-multi-decade-ai-timelines
One important point in the post — illustrated with the example of the dot com boom and bust — is that it’s madness to just look at a trend and extrapolate it indefinitely. You need an explanatory theory of why the trend is happening and why it might continue or why it might stop. In the absence of an explanatory understanding of what is happening, you are just making a wild, blind guess about the future.
(David Deutsch makes this point in his awesome book The Beginning of Infinity and in one of his TED Talks.)
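To make the extrapolation point concrete, here is a minimal Python sketch. The numbers are made up purely for illustration (they stand in for any metric that grew exponentially for a while, like a dot-com-era index); the point is that a curve fit alone says nothing about whether the cause of the growth will persist.

```python
# Illustrative only: hypothetical values for a metric that grew ~50%/year.
import numpy as np

years = np.arange(1995, 2001)                                     # observed window
values = np.array([100, 150, 230, 350, 530, 800], dtype=float)    # made-up data

# Fit an exponential trend (a linear fit on the log of the values).
slope, intercept = np.polyfit(years, np.log(values), 1)

# Naively extrapolate the fitted trend a decade beyond the data.
future_years = np.arange(2001, 2011)
extrapolated = np.exp(intercept + slope * future_years)

for year, value in zip(future_years, extrapolated):
    print(year, round(value))

# The extrapolation keeps compounding at ~50%/year forever. Nothing in the
# fitted curve tells you whether the underlying driver of the growth will
# continue or stop; that requires an explanatory theory, not just a fit.
```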
A pointed question that Ege Erdil does not ask in the post, but should: is there any hard evidence of AI systems invented within the last 5 years, or even the last 10 years, doing any labour automation or providing any measurable productivity augmentation of human workers?
I have looked and I have found very little evidence of this.
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems to me like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
If the amount of labour automation or productivity improvement from LLMs is zero or negative, then naively extrapolating this trend forward would mean full labour automation by AI is an infinite amount of time away. But of course I’ve just argued why these kinds of extrapolations are a mistake.
It continually strikes me as odd that people write 3,000-word, 5,000-word, and 10,000-word essays on AGI and don’t ask fundamental questions like this. You’d think that if the trend you are discussing is labour automation by AI, you’d want to check whether AI is actually automating any labour in a way we can rigorously measure. Why are people ignoring that obvious question?
Nvidia revenue is a really bad proxy for AI-based labour automation or for the productivity impact of AI. It’s a bad proxy for the same reason capital investment into AI would be a bad proxy. It measures resources going into AI (inputs), not resources generated by AI (outputs).
LLMs seem to be bringing down the costs of software.
Are you aware of hard data that supports this, or is this just a guess/general impression?
I've seen very little hard data on the use of LLMs to automate labour or enhance worker productivity. I have tried to find it.
One of the few pieces of high-quality evidence I've found on this topic is this study, which looked at the use of LLMs to aid people working in customer support: https://academic.oup.com/qje/article/140/2/889/7990658
The results are mixed, suggesting that in some cases LLMs may decrease productivity:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
Anecdotally, what I've heard from people who code for a living is that AI does somewhat improve their productivity, but only about as much as, or less than, other tools that make writing code easier. They've said that having the LLM fill in code saves them the time they would otherwise have spent going to Stack Overflow (or wherever) and copying and pasting a code block from there.
Based on this evidence, I am highly skeptical that software development is going to become significantly less expensive in the near term due to LLMs, let alone 10x or 100x less expensive.
One of the best comments I've ever read on the EA Forum! I agree on every point, especially that making up numbers is a bad practice.
I also agree that expanding the reach of effective altruism (including outreach and funding) beyond the Anglosphere countries sounds like a good idea.
And I agree that the kind of projects that get funded and supported (and the kind of people who get funded and supported) seems unduly biased toward a Silicon Valley worldview.
I believe Bob Jacobs is a socialist, although I don't know what version of socialism he supports. "Socialism" is a fraught term, and even when people try to clarify what they mean by it, that often doesn't make it much less confusing.
I'm inclined to be open-minded towards Bob's critiques of effective altruism, but I get the sense that his critiques of EA and his ideas for reform are going to end up being a microcosm of socialist or left-wing critiques of society at large and socialist or left-wing ideas for reforming society.
My thought on that is summed up in the Beatles' song "Revolution":
You say you got a real solution, well, you know
We'd all love to see the plan
In principle, democracy is good, equality is good, less hierarchy is better than more hierarchy, not being completely reliant on billionaires and centimillionaires is good... But I need to know some more specifics on how Bob wants to achieve those things.
I see it primarily as a social phenomenon because I think the evidence we have today that AGI will arrive by 2030 is less compelling than the evidence we had in 2015 that AGI would arrive by 2030. In 2015, it was a little more plausible that AGI could arrive by 2030 because that was 15 years away and who knows what can happen in 15 years.
Now that 2030 is a little less than 5 years away, AGI by 2030 is a less plausible prediction than it was in 2015 because there's less time left and it's more clear it won't happen.
I don't think the reasons people believe AGI will arrive by 2030 are primarily based on evidence; I think the belief is primarily a sociological phenomenon. People were ready to believe this regardless of the evidence, going back to Ray Kurzweil's The Age of Spiritual Machines in 1999 and Eliezer Yudkowsky's "End-of-the-World Bet" in 2017. People don't really pay attention to whether the evidence is good or bad; they ignore obvious evidence and arguments against near-term AGI, and they mostly choose to ignore or attack people who express disagreement while tuning into the relentless drumbeat of people agreeing with them. This is sociology, not epistemology.
Don't believe me? Talk to me again in 5 years and send me a fruit basket. (Or just kick the can down the road and say AGI is coming in 2035...)
Expert opinion has changed? First, expert opinion is not itself evidence; it's people's opinions about evidence. What evidence are the experts basing their beliefs on? That seems far more important than someone stating a number based on an intuition.
Second, expert opinion does not clearly support the idea of near-term AGI.
As of 2023, the expert opinion on AGI was... well, first of all, really confusing. The AI Impacts survey found that the experts believed there is a 50% chance by 2047 that "unaided machines can accomplish every task better and more cheaply than human workers." It also found a 50% chance that by 2116 "machines could be built to carry out the task better and more cheaply than human workers." I don't know why these predictions are 69 years apart.
Regardless, 2047 is sufficiently far away that it might as well be 2057 or 2067 or 2117. This is just people generating a number using a gut feeling. We don't know how to build AGI and we have no idea how long it will take to figure out how to. No amount of thinking of numbers or saying numbers can escape this fundamental truth.
We actually won't have to wait long to see that some of the most attention-catching near-term AI predictions are false. Dario Amodei, the CEO of Anthropic (a company that is said to be "literally creating God"), has predicted that by some point between June 2025 and September 2025, 90% of all code will be written by AI rather than humans. In late 2025 and early 2026, when it's clear Dario was wrong about this (when, not if), maybe some people will start to be more skeptical of attention-grabbing expert predictions. But maybe not.
There are already strong signs of AGI discourse being irrational and absurd. On April 16, 2025, Tyler Cowen claimed that OpenAI's o3 model is AGI and asked, "is April 16th AGI day?". In a follow-up post on April 17, seemingly in response to criticism, he said, "I don’t mind if you don’t want to call it AGI", but seemed to affirm he still thinks o3 is AGI.
On one hand, I hope that in 5 years the people who promoted the idea of AGI by 2030 will lose a lot of credibility and maybe will do some soul-searching to figure out how they could be so wrong. On the other hand, there is nothing preventing people from being irrational indefinitely, such as:
I think part of the sociological problem is that people are just way too polite about how crazy this all is and how awful the intellectual practices of effective altruists have been on this topic. (Sorry!) So, I'm being blunt about this to try to change that a little.
This post is about criticism of EA organizations, so it doesn’t apply to OpenAI or the U.S. government.
I interpreted this post as mostly being about charities with a small number of employees and relatively small budgets that either actively associate themselves with EA or that fall into a cause area EA generally supports, such as animal welfare or global poverty.
For example, if you wanted to criticize 80,000 Hours, New Harvest, or one of these charities focusing on mental health in poor countries, then I’d say you should send them a copy of your criticism before publishing and give them a chance to prepare a reply before you post. These organizations are fairly small in terms of their staff, have relatively little funding, and aren’t very well-known. So, I think it’s fair to give them more of an opportunity to defend their work.
If you wanted to criticize Good Ventures, Open Philanthropy, GiveWell, GiveDirectly, or the Against Malaria Foundation, then I think you could send them a courtesy email if you wanted, but they have so much funding and, in the case of Open Philanthropy at least, a large staff. They’re also already the subject of media attention and public discourse. With one of the smaller charities, you could plausibly hurt them with your post, so I think more caution is warranted. With these larger, better-resourced charities that are already being debated and criticized a lot, an EA Forum post has a much lower chance of doing accidental harm.