
Benjamin M.

195 karma · Joined · Pursuing an undergraduate degree

Bio

Here to talk about phytomining for now.

Comments: 27
Topic contributions: 3

I think this is a good analysis and I agree with your conclusions, but I have one minor point:

If younger people are disproportionately not taking jobs that are more exposed to AI, there are two possibilities:

  1. They can't get the jobs because firms are using AI instead.
  2. They don't try to enter those fields because they expect that there will be decreased demand due to AI.

Your claim seems to be that a decrease would be due to point 1, but I think it could just as easily be due to point 2. Anecdotally, people interested in translation and interpretation do tend to think seriously about whether demand will decline because of computer systems, so point 2 would be plausible if we saw an effect. I might also want to compare the proportion of young workers in AI-affected occupations to the proportion in AI-proof occupations (physical labor? heavily licensed industries?) over time, to make sure that any effect isn't driven by overall changes in how easy it is for young people to enter the labor force (a rough version of that comparison is sketched below). But this is really interesting, and my comments are mostly moot since we aren't seeing an effect in the main data.
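To make that comparison concrete, here's a minimal, purely illustrative sketch in Python/pandas; the dataset, column names, and exposure labels are all hypothetical assumptions on my part, not from any real data source:

```python
# Hypothetical sketch: track the share of young workers in AI-exposed vs.
# AI-proof occupations over time, so a decline specific to exposed occupations
# can be separated from a general decline in young people entering the labor
# force. All column names and labels below are made up for illustration.
import pandas as pd

# Assume one row per worker-year with columns: year, age, occupation_group
df = pd.DataFrame({
    "year": [2019, 2019, 2019, 2024, 2024, 2024],
    "age": [24, 45, 23, 52, 26, 41],
    "occupation_group": ["ai_exposed", "ai_exposed", "ai_proof",
                         "ai_exposed", "ai_proof", "ai_proof"],
})

young = df["age"] < 30
shares = (
    df.assign(young=young)
      .groupby(["year", "occupation_group"])["young"]
      .mean()                      # share of workers under 30 in each cell
      .unstack("occupation_group")
)
# Quantity of interest: does the young-worker share fall faster in
# AI-exposed occupations than in AI-proof ones?
shares["gap"] = shares["ai_exposed"] - shares["ai_proof"]
print(shares)
```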

20% disagree

There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention

Possible candidates:

  • We're severely underrating tractability and importance (specifically in terms of sentience) for wild animals
  • We're severely underrating neglectedness (and maybe some other criteria?) for improving data collection in LMICs
  • We're severely underrating tractability and neglectedness for some category of political interventions
  • Something's very off in our model of AI ethics (in the general sense, including AI welfare)
  • We're severely underrating tractability of nuclear security-adjacent topics
  • There's something wrong with the usual EA causes that makes them ineffective, so we get left with more normal causes
  • We have factually wrong beliefs about the outcome of some sort of process of major political change (communism? anarchism? world government?)

None of these strike me as super likely, but combining them all you still get an okay chance.

30% agree

Should EA avoid using AI art for non-research purposes?

I'm unconvinced that the first-order harms (environment, copyright) are big enough to matter much, but I think it's worthwhile to send a signal that EA is anti-giving-AI-too-much-power. I also think most AI art is mediocre. Still, this is only a mild agree vote, because it's not really something worth policing. Maybe that's what people mean by disagree-reacting to the post itself?

Hmm, it seems like the linked Metaculus poll actually resolves on a somewhat arbitrary selection of benchmarks being defined as a weakly general intelligence. If I have to go by the poll's resolution criteria, I think there's a much greater chance (I'm not going to look into how difficult the Atari-game requirement would be yet, so I'm not sure how much greater).

0% agree

Bioweapons are an existential risk

I don't buy the Parfitian argument, so I'm not sure what a binary yes-no about existential risk would mean to me. 

78% disagree

AGI by 2028 is more likely than not

I agree with a bunch of the standard arguments against this, but I'll throw in two more that I haven't seen fleshed out as much: 

  1. The intuitive definition of AGI includes some physical capabilities (and even definitions that nominally exclude physical capabilities probably require some), and AI systems seem really far behind where I would expect them to be at manipulating physical objects.
  2. AIs make errors in systematically different ways than humans do, and often have major vulnerabilities. This means we'll probably want AI that works with humans at every step, and so will want more specialized AI. I don't really buy some of the arguments I've seen against this, but I don't know enough to have a super confident rebuttal.

Cats' economic growth potential likely has a heavy-tailed distribution, because how else would cats knock things off shelves with their tail. As such, Open Philanthropy needs to be aware that some cats, like Tama, make much better mascots than other cats. One option would be to follow a hits-based strategy: give a bunch of areas cat mascots, and see which ones do the best. However, given the presence of animal welfare in the EA movement, hitting cats is likely to attract controversy.

A better strategy would be to identify cats that already have proven economic growth potential and relocate them to areas most in need of economic growth. Tama makes up 0.00000255995% of Japan's nominal GDP (or something thereabouts; I'm assuming all Tama-related benefits to GDP occurred in the year 2020). If these benefits had occurred in North Korea, they would be 0.00086320506% of nominal GDP or thereabouts. North Korea is also poorer, so adding more money to its economy goes further. Japan and North Korea are near each other, so transporting Tama to North Korea would be extremely cheap. Assuming Tama's benefits are the same each year and are independent of location (which seems reasonable; I asked ChatGPT for an image of Tama in North Korea and it is still cute), catnapping Tama would be highly effective.

One concern is that there might be downside risk, because people morally disapprove of kidnapping cats. On the other hand, people expressing moral disapproval of kidnapping cats are probably more likely to respect animals' boundaries by not eating meat, thus making this an intervention that spans cause areas. In conclusion: EA is solved; all we have to do is kidnap some cats.
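For what it's worth, the percentage scaling here is internally consistent. A minimal sketch of the check, using only the two percentages quoted above plus an assumed Japan nominal GDP of roughly $5 trillion (2020) that is my own illustrative figure, not from the comment:

```python
# Toy sanity check of the Tama GDP-share arithmetic above.
# The only inputs taken from the comment are the two quoted percentages;
# the Japan GDP figure is an outside assumption used purely for illustration.

pct_japan = 0.00000255995 / 100        # Tama's share of Japan's nominal GDP (fraction)
pct_north_korea = 0.00086320506 / 100  # quoted share if the same benefit accrued to North Korea

# Same absolute benefit, different denominator: pct_NK = pct_JP * (GDP_Japan / GDP_NK)
implied_gdp_ratio = pct_north_korea / pct_japan
print(f"Implied GDP ratio (Japan / North Korea): {implied_gdp_ratio:.0f}x")  # ~337x

ASSUMED_GDP_JAPAN = 5.0e12  # USD; assumption, not from the comment
tama_benefit = pct_japan * ASSUMED_GDP_JAPAN
implied_gdp_nk = ASSUMED_GDP_JAPAN / implied_gdp_ratio
print(f"Implied annual Tama benefit: ~${tama_benefit:,.0f}")        # ~$128,000
print(f"Implied North Korean GDP: ~${implied_gdp_nk / 1e9:.1f}B")   # ~$15B
```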

From the chart in the appendix, it seems like more active outreach sources produce higher-engagement EAs. Is this actually true, or does it reflect a confounder (such as age)? If it's true, it seems very surprising; I would have expected people who sought out EA on their own to be the most engaged, because they want something from EA specifically. Maybe this has something to do with how engagement was measured (i.e., the measure seems to weight things that active outreach pushes people toward, like contact with the EA community, more heavily than EA-endorsed behaviors like charitable donations).

My rough sense is that one reason for EA's historical lack of focus on systemic change is that it's really hard to convert money into systemic change (it's difficult to measure effectiveness, hard to coördinate on an optimal approach, etc.). On the other hand, I do think this leads to an undervaluing of careers that work on systemic change (and on important considerations that cross cause areas, since those are also hard to donate to). This might not hold if your AI timelines are too short for systemic changes to come into being.

Not super confident about this, though. Feel free to try to change my mind.

There's probably something that I'm missing here, but:

  • Given that dangerous AI capabilities are generally said to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment toward narrower AI systems? Or try to regulate general-purpose and agentic systems specifically?

Possible reasons: 

  • This is harder than it sounds
  • General-purpose and agentic systems are inevitably going to outcompete other systems
  • People are trying to do this, and I just haven't noticed, because I'm not really an AI person
  • Something else

Which is it?
