Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
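For programmatic access, ForumMagnum-based sites like the EA Forum typically expose a GraphQL endpoint at `/graphql`. Here is a minimal sketch of querying it from this mirror; the host name, view name, and field names below are assumptions, so verify them against the live schema before relying on them:

```python
import json
import urllib.request

# Hypothetical host for this bot mirror; substitute the actual domain you are on.
BOT_SITE = "https://forum-bots.effectivealtruism.org"

def build_recent_posts_query(limit: int = 5) -> dict:
    """Build a GraphQL payload requesting the most recently posted items."""
    query = """
    query RecentPosts($limit: Int) {
      posts(input: {terms: {view: "new", limit: $limit}}) {
        results { title pageUrl postedAt }
      }
    }
    """
    return {"query": query, "variables": {"limit": limit}}

def fetch_recent_posts(limit: int = 5) -> dict:
    """POST the query to the /graphql endpoint and return the decoded JSON."""
    req = urllib.request.Request(
        f"{BOT_SITE}/graphql",
        data=json.dumps(build_recent_posts_query(limit)).encode(),
        headers={
            "Content-Type": "application/json",
            # Identify your bot so site operators can reach you if it misbehaves.
            "User-Agent": "example-research-bot/0.1 (contact: you@example.org)",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_recent_posts(limit=3)
# for post in data["data"]["posts"]["results"]:
#     print(post["postedAt"], post["title"])
```

Sending a descriptive User-Agent with contact details is good practice for any bot, and doubly so on a mirror explicitly set up to keep bot load off the main site.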

Quick takes

Show community
View more
Set topic
Frontpage
Global health
Animal welfare
Existential risk
Biosecurity & pandemics
11 more
Me: "Well, at least this study shows no association between painted houses and kids' blood lead levels. That's encouraging!" Wife: "Nothing you have said this morning is encouraging, Nick. Everything that I've heard tells me that our pots, our containers, and half of our house are slowly poisoning our baby." Yikes, touché... Thanks @Lead Research for Action (LeRA) for this unsettling but excellently written report. Our house is full of aluminium pots and green plastic food containers. Now to figure out what to do about it! https://drive.google.com/file/d/1pqRUeejiRCX2bXekeZnL0zGi34zbK23w/view
Linch · 4d
Recent generations of Claude seem better at understanding blog posts and making fairly subtle judgment calls than most smart humans. These days, when I read an article that presumably sounds reasonable to most people but has what seems to me a glaring conceptual mistake, I can put it into Claude, ask it to identify the mistake, and more often than not Claude lands on the same mistake I identified. I think before Opus 4 this was essentially impossible: the Claude 3.x models can sometimes identify small errors, but it's a crapshoot whether they can identify central mistakes, and they certainly can't judge them well.

It's possible I'm wrong about the mistakes here and Claude is just being sycophantic, identifying which things I'd regard as the central mistake, but if that's true, in some ways it's even more impressive. Interestingly, both Gemini and ChatGPT failed at these tasks. (They can sometimes directionally approach the error I identified, but their formulation is imprecise and broad, and they bury it in a longer list of potential quibbles rather than zeroing in on the most damning issue.)

For clarity purposes, here are 3 articles I recently asked Claude to reassess (Claude got the central error in 2/3 of them). I'm also a little curious what the LW baseline is here; I did not include my comments in my prompts to Claude.
https://terrancraft.com/2021/03/21/zvx-the-effects-of-scouting-pillars/
https://www.clearerthinking.org/post/what-can-a-single-data-point-teach-you
https://www.lesswrong.com/posts/vZcXAc6txvJDanQ4F/the-median-researcher-problem-1
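The workflow described above (paste an article into Claude, ask it to name the single most central mistake) can be sketched as follows. The prompt wording, model id, and anthropic SDK usage here are my assumptions, not anything the author specified:

```python
def build_critique_prompt(article_text: str) -> str:
    """Assemble a prompt that asks for the one central error, not a quibble list."""
    return (
        "Below is an article that I believe contains one glaring conceptual "
        "mistake. Identify the single most central error, not a list of "
        "minor quibbles, and explain why it undermines the argument.\n\n"
        f"<article>\n{article_text}\n</article>"
    )

def ask_claude(article_text: str) -> str:
    """Send the critique prompt to Claude and return its reply text."""
    # pip install anthropic; imported lazily so the prompt builder stays stdlib-only.
    import anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-opus-4-20250514",  # hypothetical model id; use a current one
        max_tokens=1024,
        messages=[{"role": "user", "content": build_critique_prompt(article_text)}],
    )
    return msg.content[0].text
```

Asking for "the single most central error" rather than "any problems" matters here: the post's observation is precisely that weaker models produce a broad list of quibbles instead of zeroing in on the damning issue.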
Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)? It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).
Here’s a random org/project idea: hire full-time, thoughtful EA/AIS red teamers whose job is to seriously critique parts of the ecosystem, whether that’s the importance of certain interventions, movement culture, or philosophical assumptions. Think engaging with critics or adjacent thinkers (e.g., David Thorstad, Titotal, Tyler Cowen) and translating strong outside critiques into actionable internal feedback.

The key design feature would be incentives: instead of paying for generic criticism, red teamers receive rolling “finder’s fees” for critiques that are judged to be high-quality, good-faith, and decision-relevant (e.g., identifying strategic blind spots, diagnosing vibe shifts that can be corrected, or clarifying philosophical cruxes that affect priorities).

Part of why I think this is important is that I have the intuition that the marginal thoughtful contrarian is often more valuable than the marginal agreer, yet most movement funding and prestige flows toward builders rather than structured internal critics. If that’s true, a standing red-team org, or at least a permanent prize mechanism, could be unusually cost-effective. There have been episodic versions of this (e.g., red-teaming contests, some longtermist critique efforts), but I’m not sure why this should come in waves rather than exist as ongoing infrastructure (an org, or just a prize pool that’s always open to sufficiently good criticisms).
Is the recent partial lifting of US chip export controls on China (see e.g. here: https://thezvi.substack.com/p/selling-h200s-to-china-is-unwise) good or bad for humanity? I’ve seen many takes from people whose judgment I respect arguing that it is very bad, but their arguments, imho, just don’t make sense. What am I missing? For transparency, I am neither Chinese nor American, nor am I a paid agent of either. I am not at all confident in this take, but imho someone should make it.

I see two possible scenarios: A) you are not sure how close humanity is to developing superintelligence in the Yudkowskian sense. This is what I believe, and what many smart opponents of the Trump administration’s move to ease chip controls believe. Or B) you are pretty sure that humanity is not going to develop superintelligence any time soon, let’s say in the next century. I admit that the case against the lifting of chip controls is stronger under B), though I am ultimately inclined to reject it in both scenarios.

Why is easing of chip controls, imho, a good idea if the timeline to superintelligence might be short? If superintelligence is around the corner, here is what should be done: an immediate international pause of AI development until we figure out how to proceed. Competitive pressures and the resulting prisoner’s dilemmas have been identified as the factor that might push us toward NOT pausing even when it would be widely recognized that the likely outcome of continuing is dire. There are various relevant forms of competition, but plausibly the most important is that between the US and China. In order to reduce competitive dynamics and thus prepare the ground for a cooperative pause, it is important to build trust between the parties and beware of steps that are hostile, especially in domains touching AI. Controls make sense only if you are very confident that superintelligence developed in the US, or perhaps in liberal democracies more generally, is going to turn out well for humanity