Benjamin M.

178 karma · Joined · Pursuing an undergraduate degree

Bio

Here to talk about phytomining for now.

Comments: 21

Topic contributions: 3

Cats' economic growth potential likely has a heavy-tailed distribution, because how else would cats knock things off shelves with their tail. As such, Open Philanthropy needs to be aware that some cats, like Tama, make much better mascots than other cats. One option would be to follow a hits-based strategy: give a bunch of areas cat mascots, and see which ones do the best. However, given the presence of animal welfare in the EA movement, hitting cats is likely to attract controversy.

A better strategy would be to identify cats that already have proven economic growth potential and relocate them to areas most in need of economic growth. Tama makes up 0.00000255995% of Japan's nominal GDP (or something thereabouts, I'm assuming all Tama-related benefits to GDP occurred in the year 2020). If these benefits had occurred in North Korea, they would be 0.00086320506% of nominal GDP or thereabouts. North Korea is also poorer, so adding more money to its economy goes further. Japan and North Korea are near each other, so transporting Tama to North Korea would be extremely cheap. Assuming Tama's benefits are the same each year and are independent of location (which seems reasonable, I asked ChatGPT for an image of Tama in North Korea and it is still cute), catnapping Tama would be highly effective.

One concern is that there might be downside risk, because people morally disapprove of kidnapping cats. On the other hand, people expressing moral disapproval of kidnapping cats are probably more likely to respect animals' boundaries by not eating meat, thus making this an intervention that spans cause areas. In conclusion: EA is solved; all we have to do is kidnap some cats.
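For anyone who wants to check the relocation arithmetic, here is a minimal Python sketch. The GDP figures (roughly $5 trillion for Japan in 2020 and roughly $16 billion for North Korea) are my own rough assumptions rather than numbers from the comment, so the output only approximately reproduces the North Korea percentage quoted above.

```python
# Back-of-the-envelope version of the Tama relocation arithmetic above.
# GDP figures are rough assumptions for illustration, not from the original comment.

JAPAN_GDP_2020 = 5.0e12        # assumption: Japan's 2020 nominal GDP, ~$5 trillion USD
NORTH_KOREA_GDP = 1.6e10       # assumption: North Korea's nominal GDP, ~$16 billion USD
TAMA_SHARE_OF_JAPAN = 0.00000255995 / 100  # share of Japan's GDP quoted above

# Implied annual Tama-related benefit in dollars.
tama_benefit = JAPAN_GDP_2020 * TAMA_SHARE_OF_JAPAN

# The same dollar benefit expressed as a share of North Korea's much smaller economy.
tama_share_of_nk = tama_benefit / NORTH_KOREA_GDP

print(f"Implied Tama benefit: ~${tama_benefit:,.0f} per year")
print(f"As a share of North Korea's GDP: {tama_share_of_nk * 100:.8f}%")
```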

It seems, from the chart in the appendix, that more active outreach sources produce higher-engagement EAs. Is this actually true, or does it reflect a confounder (such as age)? If true, it's very surprising; I would have expected people who sought out EA on their own to be the most engaged, because they want something from EA specifically. Maybe this has something to do with how engagement was measured (i.e., it seems to weight activities that active outreach pushes people toward, like contact with the EA community, rather than EA-endorsed behaviors like charitable donations).

My rough sense is that one reason for EA's historical lack of focus on systemic change is that it's really hard to convert money into systemic change (effectiveness is difficult to measure, it's hard to coördinate on the optimal approach, etc.). On the other hand, I do think this leads to an undervaluing of careers that work on systemic change (and of important considerations that cross cause areas, since those are also hard to donate to). This might not hold if your AI timelines are too short for systemic changes to come into being.

Not super confident about this, though. Feel free to try to change my mind.

There's probably something that I'm missing here, but:

  • Given that dangerous AI capabilities are generally said to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment into narrower AI systems? Or try to regulate general-purpose and agentic systems specifically?

Possible reasons: 

  • This is harder than it sounds
  • General-purpose and agentic systems are inevitably going to outcompete other systems
  • People are trying to do this, and I just haven't noticed, because I'm not really an AI person
  • Something else

Which is it?

This is an understandable point to leave out, but one issue with the portfolio analogy is that, as far as I can tell, it assumes all "EA" money is basically the same. However, big donors might have advantages in certain areas, for instance if a project is hard to evaluate without extensive consultation with experts, or if a project can only succeed with a large and guaranteed funding stream. As such, I'm not sure it follows that somebody who thinks Open Phil is underinvesting in longtermism compared to the ideal allocation should give to longtermist charities: the opportunities available to Open Phil might be significantly stronger than the ones available to individual donors, especially donors without a technical background in the area.

I'll try to write a longer comment later, but right now I'm uncertain and lean towards global health because of some combination of the following:
1. I suspect negative lives are either rare or nonexistent, which makes it harder to avoid logic-of-the-larder-type arguments

2. I'm more uncertain about this, but I lean towards non-hedonic forms of consequentialism (Rethink Priorities' moral parliament tool confirms that this generally lowers the returns to animal interventions)

3. Mostly based on the above, I think many moral weights for animals are too high

I'm also not sure if the 100 million would go to my preferred animal welfare causes or the EA community's preferred animal welfare causes or maybe the average person's preferred animal welfare causes. This matters less for my guesses about the impact of health and development funding.

Both my state senator and my state representative have responded to say that they'll take a look at it. It's non-committal, but it still shows how easy it is to contact these people.

Do you like SB 1047, the California AI bill? Do you live outside the state of California? If you answered "yes" to both of these questions, you can e-mail your state legislators and urge them to adopt a similar bill for your state. I've done this and am currently awaiting a response; it really wasn't that difficult. All it takes is a few links to good news articles or opinions about the bill and a paragraph or two summarizing what it does and why you care about it. You don't have to be an expert on every provision of the bill, nor do you have to have a group of people backing you. It's not nothing, but at least for me it was a lot easier than it sounded like it would be. I'll keep y'all updated on whether I get a response.
