
Midtermist12

329 karma · Joined

Comments (41)

"Charitable purpose" can be pretty broad under U.S. law, for instance, and could probably encompass such funding. Or do you mean that funders are not interested in it?

A lot of the specific things you've mentioned make a lot of sense.

Generally, I would be cautious about asking for help in contexts where review and/or supervision will be necessary, or where reliance on the volunteer could be detrimental. People are often excited to help and overstate what they can actually do. Often the time spent supervising volunteers is worth more than what they produce.

Another virtue of the houses as a way of doing it is that they would be a pretty strong signal that someone using the resource is in it for the right reasons. There are a lot more constraints than with just sending someone a check and hoping that the person is trying to make an impact.

Teaching counterfactual reasoning in economics education

A crucial EA concept for high school economics is counterfactual reasoning – systematically asking "what would have happened if agent X had not done action Y?" This is essential for understanding the actual impact of interventions.

Why it matters:

  • Many interventions don't create as much value as they appear to, because something similar would have happened anyway
  • The true impact is only the additional change caused by the intervention – the difference between what actually happened and what would have happened in the counterfactual scenario
  • It's counterintuitive – our brains naturally credit actions without considering what would have occurred otherwise

Methods to evaluate counterfactual impact:

Randomized controlled trials (RCTs): Randomly assign some groups to receive an intervention and others not, then compare outcomes. The control group approximates what would have happened without the intervention.
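
As a classroom illustration, here is a minimal sketch in Python of why random assignment makes the control group a stand-in for the counterfactual. All of the numbers (class size, baseline scores, the built-in effect) are invented for the example:

```python
# Toy RCT simulation; every number here is hypothetical.
import random
from statistics import mean

random.seed(0)

# Hypothetical class of 200 students with baseline test scores.
scores = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: half receive the (hypothetical) intervention.
random.shuffle(scores)
treated, control = scores[:100], scores[100:]

TRUE_EFFECT = 5.0  # the effect we build in, so we can check the estimate
treated = [s + TRUE_EFFECT for s in treated]

# Because assignment was random, the control group approximates what
# would have happened anyway, so the difference in means estimates
# the counterfactual impact (it should come out near 5).
print(f"Estimated effect: {mean(treated) - mean(control):.1f}")
```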

Before-and-after with comparison groups: Compare changes in a treated group to changes in a similar untreated group over the same period. This helps account for broader trends that would have occurred anyway.
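
A quick sketch of the arithmetic behind this method (often called difference-in-differences), with made-up group means:

```python
# Hypothetical before/after means for a treated and a comparison group.
treated_before, treated_after = 40.0, 52.0
control_before, control_after = 41.0, 48.0

treated_change = treated_after - treated_before   # 12.0
control_change = control_after - control_before   # 7.0, the background trend

# Only the extra change in the treated group, beyond what the comparison
# group experienced anyway, is attributed to the intervention.
did_estimate = treated_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:.1f}")  # 5.0
```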

Trend analysis: Plot pre-intervention trends and project them forward. If post-intervention outcomes match the projected trend, the intervention may have had little counterfactual impact.
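
A minimal sketch of that check in Python, with invented data: fit the pre-intervention trend by least squares, project it one year forward, and compare to the observed outcome.

```python
# Hypothetical pre-intervention data, rising roughly +2 per year.
years = [1, 2, 3, 4, 5]
outcomes = [10.0, 12.1, 13.9, 16.0, 18.1]

# Plain least-squares line fit, no external libraries.
n = len(years)
x_bar = sum(years) / n
y_bar = sum(outcomes) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, outcomes)) \
        / sum((x - x_bar) ** 2 for x in years)
intercept = y_bar - slope * x_bar

projected_year6 = intercept + slope * 6
actual_year6 = 20.0  # hypothetical observation after the intervention

# If the actual outcome sits on the projected trend (as it does here,
# both come out near 20), the intervention may have added little.
print(f"Projected: {projected_year6:.1f}, actual: {actual_year6:.1f}")
```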

Natural experiments: Find situations where an intervention occurred in one place but not another similar place due to arbitrary reasons, allowing comparison.

Classroom applications:

  • Analyze case studies using these methods (e.g., evaluating a job training program's effectiveness)
  • Have students design simple evaluation plans for school or community interventions
  • Critique news articles that claim causation without proper counterfactual analysis

This teaches students both to think counterfactually and to evaluate causal claims empirically.

(Comment made in collaboration with generative AI)

I have thought for a long time that the EA power centers lack curiosity about, and appropriate respect for, the potential of the EA community, and rely on a pretty specific set of markers to decide who should be given the time of day. I have the impression that there is a pretty small "nerve center" that sets priorities and considers how the EA community might help address those priorities along the paths that it sees.

This seems to me to limit the power of EA significantly: if we could get more perspectives and ideas taken seriously and dedicate resources in those directions, rather than just to a relatively narrow set of agents of the "nerve center", we might be able to accomplish a lot more. Right now it seems pretty sad that EA is often identified with the people who hold power in it, rather than with its more basic and important idea of maximizing good with the resources that we have.

I suppose there have probably been posts along these lines, but a post on "democratizing EA funding and power as maximizing epistemic hygiene and reach" would be appreciated.

Yeah, I understand the need for credibility with the animal rights community, but it probably would be helpful if there were more prominent omnivores who emphatically identified as animal advocates. Probably one of the reasons factory farming can be so successful is that the perceived barrier to entry for fighting it is becoming vegan. The more that vegans reinforce the narrative that "to be on our side, you need to be vegan", the more they alienate potential allies and make it easier for the monstrous system to persist. What might matter most in broadening the movement would be prominent animal rights activists who are omnivores.

I don't really understand why I am getting downvoted/disagreevoted. I was just pointing out the contrast between humans - who also have the traits discussed in this post - and AI, which is basically that human beings have intrinsic motivations and virtues that AI does not have. I thought that this was a critical piece that was not really emphasized. It is pretty dispiriting to read through an article, point out something you think might be helpful, and have this happen.

Thank you for this detailed analysis. I found the human analogy initially confusing, but I think there's an important argument here that could be made more explicit.

The essay documents extensive DSL (deceitful, sycophantic, lazy) behavior in humans, which initially seems to undermine the claim that AI will be different. However, you do address why humans accomplish difficult things despite these traits:

"People do not appreciate how much of science relies on the majority of people practicing it to be rigorous, skilled, and ideologically dedicated to truth-seeking."

If I understand correctly, your core argument is:

Humans have DSL traits BUT also possess intrinsic motivation, professional pride, and ideological commitment to truth/excellence that counterbalances these tendencies. AI systems, trained purely through reward optimization, will lack these counterbalancing motivations and therefore DSL traits will dominate more completely.

This is actually quite a sophisticated claim about the limits of instrumental training, but it's somewhat buried in the "vibe physics" section. Making it more prominent might strengthen the essay, as it directly addresses the apparent paradox of "if humans are DSL, why do they succeed?"

Does this capture your argument correctly?

(note the above comment was generated with the assistance of AI)

Thanks for sharing this. While I think there are strong reasons to invest heavily in AI safety, I'm concerned this particular cost-benefit framing may not be as compelling as it initially appears.

The paper uses a $10 million value of statistical life (VSL) to justify spending $100,000 per person to avoid a 1% mortality risk. However, if we're being consistent with cost-effectiveness reasoning, we should note that GiveWell-recommended charities save lives in the developing world for approximately $5,000 each—roughly 2,000 times cheaper per life saved.

By this logic, the same funding directed toward global health interventions would save orders of magnitude more lives with near-certainty, compared to reducing AI x-risk with uncertain probability.
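
For concreteness, here is the arithmetic behind these figures, taking the comment's stated numbers at face value:

```python
# Figures as stated above: $10M VSL, 1% risk, ~$5,000 per GiveWell life saved.
VSL = 10_000_000          # value of a statistical life, from the paper
mortality_risk = 0.01     # mortality risk the spending would avoid
givewell_cost = 5_000     # approximate GiveWell cost per life saved

per_person_spend = VSL * mortality_risk   # $100,000, matching the paper
cost_ratio = VSL / givewell_cost          # implied cost per life vs. GiveWell

print(f"Justified spend per person: ${per_person_spend:,.0f}")
print(f"Cost ratio vs. GiveWell:    {cost_ratio:,.0f}x")  # ~2,000x
```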

This doesn't mean AI safety is a bad investment—there are strong arguments based on:

  • The value of preserving future generations (which Jones notes would increase spending estimates)
  • Diminishing returns or bottlenecks in scaling proven global health interventions
  • The categorical importance of preventing existential catastrophe
  • Portfolio diversification across different types of risk

(Note: comment generated in collaboration with AI)

As someone who endorses offsetting (or donating to animal charities in excess of the offset) as a form of being an ally to animals, wouldn't being an omnivore who donates far in excess of the offset make you more credible on this position?
