Yarrow🔸

889 karma · Canada

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Comments: 211

Topic contributions: 1

[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.

I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)

But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the EA Forum. I thank the people who have been warm to me, who have had good humour, and who have said interesting, constructive things.

But negativity bias being what it is (and maybe “bias” is too biased a word for it; maybe we should call it “negativity preference”), the few people who have been really nasty to me have ruined the whole experience. I find myself trying to remember names, to remember who’s who, so I can avoid clicking on reply notifications from the people who have been nasty. And this is a sign it’s time to stop.

Psychological safety is such a vital part of online discussion, or any discussion. Open, public forums can be a wonderful thing, but psychological safety is hard to provide on an open, public forum. I still have some faith in open, public forums, but I tend to think the best safety tool is giving authors the ability to determine who is and isn’t allowed to interact with their posts. There is some risk of people censoring disagreement, sure. But nastiness online is a major threat to everything good. It causes people to self-censor (e.g. by quitting the discussion platform or by withholding opinions) and it has terrible effects on discourse and on people’s minds. 

And private discussions are important too. One of the most precious things you can find in this life is someone you can have good conversations with who will maintain psychological safety, keep your confidences, “yes, and” you, and be constructive. Those are the kind of conversations that loving relationships are built on. If you end up cooking something that the world needs to know about, you can turn it into a blog post or a paper or a podcast or a forum post. (I’ve done it before!) But you don’t have to do the whole process leading up to that end product in public.

The EA Forum is unusually good in some important respects, which is kind of sad, because it shows us a glimpse of what maybe could exist on the Internet, without itself realizing that promise.

If anyone wants to contact me for some reason, you can send me a message via the forum and I should get it as an email. Please put your email address in the message so I can respond to you by email without logging back into the forum. 

Take care, everyone.

"The economic data seems to depend on one's point of view. I'm no economist and I certainly can't prove to you that AI is having an economic impact. Its use grows quickly though: Statistics on AI market size"


This conflates two different concepts. Revenue generated by AI companies or by AI products and services is a different concept from AI’s ability to automate human labour or augment the productivity of human workers. By analogy, video games (another category of software) generate a lot of revenue, but they automate no human labour and don’t augment the productivity of human workers.

LLMs haven’t automated any human jobs, and the only scientific study I’ve seen on the topic found that LLMs slightly reduced worker productivity. (Mentioned in a footnote to the post I linked above.)

If AI is having an economic impact by automating software engineers' labour or augmenting their productivity, I'd like to see some economic data or firm-level financial data or a scientific study that shows this.

Your anecdotal experience is interesting, for sure, but the other people I've heard from who write code for a living have said, more or less, that AI tools save them the time it would take to copy and paste code from Stack Exchange, and that's about it.

I think AI's achievements on narrow tests are amazing. I think AlphaStar's success on competitive StarCraft II was amazing. But six years after AlphaStar and ten years after AlphaGo, have we seen any big real-world applications of deep reinforcement learning or imitation learning that produce economic value? Or do something else practically useful in a way we can measure? Not that I'm aware of.

Instead, we've seen companies working on real-world applications of AI, such as Cruise, shut down. The current hype about AGI reminds me a lot of the hype about self-driving cars that I heard over the last ten years, from around 2015 to 2025. In the five-year period from 2017 to 2022, the rhetoric about solving Level 4/5 autonomy was extremely aggressive and optimistic. In the last few years, there have been signs that some people in the industry are giving up, Cruise's closure being the most prominent example.

Similarly, several companies, including Tesla, Vicarious, and Rethink Robotics, have tried to automate factory work and failed.

Other companies, like Covariant, have had modest success on relatively narrow robotics problems, like sorting objects into boxes in a warehouse, but nothing revolutionary.

The situation is complicated and the truth is not obvious, but it's too simple to say that predictions about AI progress have overall been too pessimistic or too conservative. (I'm only thinking about recent predictions, but one of the first predictions about AI progress, made in 1956, was wildly overoptimistic.[1])

I wrote a post here and a quick take here where I give my other reasons for skepticism about near-term AGI. That might help fill in more information about where I'm coming from, if you're curious.

  1. ^

    Quote:

    An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

"This is a Forum Team crosspost from Substack"

What does this mean? Is the author of this post, Matt Reardon, on the EA Forum team? Or did a moderator/admin of the EA Forum crosspost this from Matt Reardon's Substack, under Matt's EA Forum profile? 

I feel completely the same way: racism and sympathy toward far-right and authoritarian views within effective altruism are a reason for me to want to distance myself from the movement. So is the way some people, while not agreeing with these views, basically shrug and act like they're fine.

Here's a point I haven't seen many people discuss:

...many people could have felt betrayed by the fact that EA leadership was well aware of FTX sketchiness and didn't say anything (or weren't aware, but then maybe you'd be betrayed by their incompetence). 

What did the EA leadership know and when did they know it? About a year ago, I asked in a comment here about a Time article that claims Will MacAskill, Holden Karnofsky, Nick Beckstead, and maybe some others were warned about FTX and/or Sam Bankman-Fried. I might have missed some responses to this, but I don't remember ever getting a clear answer on this.

If EA leaders heard credible warnings and ignored them, then maybe that shows poor judgment. Hard to say without knowing more information. 

Most people who make linkposts to their own writing include both the link and the full text.

More people will read this if you put the full text of the post here. 

Thanks for this post! Your forum bio says you're a professional economist at the Bank of Canada, which makes me trust your analysis more than if you were just a random layperson.

I don't know if you're interested in creating a blog or a newsletter, but it seems like this analysis should be shared more widely!

"It seems in a lot of cases you have disagreed with concepts before understanding them fully. Would you agree? And if so, why do you think this happened here, where I'm sure that you are great at making evidence-based judgements in other areas?"

This comes across as passive-aggressive. Neel's patient response below is right on the money. 

If I recommend a book to someone on the EA Forum (or any forum), there's a slim chance they're going to read it. The only realistic chance they'll read it is if I said something so interesting about it that it made them curious, or if they were already curious about that topic area and decided the book is up their alley.

The same idea applies, to varying extents, to any other kind of media — blog posts, papers, videos, podcasts, etc. 

A few of your other comments also contain stuff that comes across as passive-aggressive. (Particularly the ones that have zero or negative karma.)

I can empathize with your position, in that I know what it's like to try to engage with people who have really different perspectives on a topic that's important to me, and how frustrating that often feels.

All I can say is that if your goal is persuasion or to have some kind of meeting of the minds, then saying stuff like this just pushes people further away.
