Benjamin M.

226 karma · Joined · Working (0-5 years)

To be fair, I do think there's also a significant chance of a larger bubble affecting the big AI companies. But my instinct is that a sudden fall in investment in small startups, with many of them going bankrupt, would get called a bubble in the media, and that that investment wouldn't necessarily just flow to the big companies instead.

Benjamin M. · 60% agree

What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

I'm not exactly sure about the operationalization of this question, but it seems like there's a bubble among small AI startups at the very least; the big players might be unaffected, however. My evidence for this is some mix of: not seeing a revenue pathway for a lot of these companies that wouldn't require a major pivot, few barriers to entry for larger players if a startup's product becomes successful, and having met a few people who work at AI startups who claim to be optimistic about earnings but can't really back that up.

I think you're right that frugality is good, but I'm not sure where you're getting the idea that it isn't discussed at all, although it could maybe use a bit more discussion on the margin. I also think the main con is that it would alienate people who aren't willing to be particularly frugal but will donate some anyway. The personal finance tag has some posts you might be interested in.

This might not fit the idea of a prioritization question, but it seems like there are a lot of "sure bets" in global development, where you can feel highly confident an intervention will be useful, and not that many in AI-related causes (where there's a high chance an intervention either does nothing or is harmful), with animal welfare somewhere in between. It would be interesting to find projects in global development that look good for risk-tolerant donors, and ones in AI (and maybe animal welfare or other "longtermist" causes) that look good for less risk-tolerant donors.

Not really a criticism of this post specifically, but I've seen a bunch of enthusiasm about the idea of some sort of AI safety+ group of causes and not that much recognition of the fact that AI ethicists and others not affiliated with EA have already been thinking about and prioritizing some of these issues (particularly thinking of the AI and democracy one, but I assume it applies to others). The EA emphases and perspectives within these topics have their differences, but EA didn't invent these ideas from scratch.

For me at least, that name implies an institute founded by or affiliated with somebody named Petrov, not just inspired by him, and it would seem slightly sketchy for it not to be.

Benjamin M. · 10% agree

I'd be doing less good with my life if I hadn't heard of effective altruism

The only thing I think EA has actually done counterfactually for me is encourage me to cut eggs out of my diet. I'm pretty confident I could have gotten everything else from non-EA sources; a class my freshman year, taught by somebody who afaik isn't an EA but had independently come to agree with a lot of the same principles, was pretty impactful on my life, since it led to me changing my major.

Edit: Oh, and I've won small-ish amounts of money in random Metaculus contests, which I probably heard about through EA?

I wrote up something for my personal blog about my relationship with effective altruism. It's intended for a non-EA audience (at this point my blog subscribers are mostly friends and family), so I didn't think it was worth cross-posting, since I spend a lot of time explaining what exactly effective altruism is, but some people might still be interested. My blog is mostly about books and whatnot, not effective altruism, but if I do write more detailed stuff on effective altruism I'll try to post it to the forum as well.

I think this is a good analysis and I agree with your conclusions, but I have one minor point:

If younger people are disproportionately not taking jobs that are more exposed to AI, there are two possibilities:

  1. They can't get the jobs because firms are using AI instead.
  2. They don't try to enter those fields because they expect that there will be decreased demand due to AI.

Your claim seems to be that a decrease would be due to point 1, but I think it could equally well be due to point 2. Anecdotally, people interested in translation and interpretation do tend to think seriously about whether demand will decline due to computer systems, so I think point 2 would be plausible if we saw an effect. I might also want to compare the proportion of young workers in AI-affected occupations to those in AI-proof occupations (physical labor? heavily licensed industries?) over time, to make sure any effects aren't due to overall changes in how easy it is for young people to enter the labor force. But this is really interesting, and my comments are mostly moot since we aren't seeing an effect in the main data.

Benjamin M. · 20% disagree

There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention

Possible candidates:

  • We're severely underrating tractability and importance (specifically in terms of sentience) for wild animals
  • We're severely underrating neglectedness (and maybe some other criteria?) for improving data collection in LMICs
  • We're severely underrating tractability and neglectedness for some category of political interventions
  • Something's very off in our model of AI ethics (in the general sense, including AI welfare)
  • We're severely underrating tractability of nuclear security-adjacent topics
  • There's something wrong with the usual EA causes that makes them ineffective, so we get left with more normal causes
  • We have factually wrong beliefs about the outcome of some sort of process of major political change (communism? anarchism? world government?)

None of these strike me as super likely, but combining them all you still get an okay chance.
