Thomas Kwa

Researcher @ MATS/Independent
2939 karma · Joined Feb 2020 · Working (0-5 years) · Berkeley, CA, USA

Mechinterp researcher under Adrià Garriga-Alonso.


Being attention-getting and obnoxious probably paid off in the case of slavery because abolition was tractable. But animal advocacy is different. I think a big question is whether he was being strategic or just obnoxious by nature. If we put Benjamin Lay in 2000, would he start cage-free campaigns or become PETA? Or would he find some angle we're overlooking?

My comment is not an ad hominem. An ad hominem attack would be if someone is arguing point X and you distract from X by attacking their character. I was questioning only Remmelt's ability to distinguish good research from crankery, which is directly relevant to the job of an AISC organizer, especially because some AISC streams are about the work in question by Forrest Landry. I apologize if I was unintentionally making some broader character attack. Whether it's obnoxious is up to you to judge.

Crossposted from LessWrong.

Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against? I know that AISC used to be good by talking to various colleagues, but I have no particular reason to believe in its current quality.

  • MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
    • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.
  • Why does the founder, Remmelt Ellen, keep linkposting writing by Forrest Landry which I'm 90% sure is obvious crankery? It's not just my opinion; Paul Christiano said "the entire scientific community would probably consider this writing to be crankery", one post was so obviously flawed it gets -46 karma, and generally the community response has been extremely negative. Some AISC work is directly about the content in question. This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding. [Edit: To clarify, this is not meant to be a character attack. I am concerned that Remmelt does not have the skill of distinguishing crankery from good research, even if he has substantially contributed to AISC's success in the past.]
  • All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now. (though see this reply from Linda on why most of AISC's impact is from upskilling)
  • The impact assessment was commissioned by AISC, not independent. They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don't have time to do this.

Surely, for their gold-standard "career change" pin-up story, they could find a higher-EV career change.

You're assuming that the EV of switching from global health to biosecurity is lower than the EV of switching from something else to biosecurity. Even though global health is better than most cause areas, this could be false in practice for at least two reasons:

  • If the impact of biosecurity careers is many times higher than the impact of global health, and people currently in global health are slightly more talented, altruistic, or hardworking.
  • If people currently in global health are not doing the most effective global health interventions.

This article just made HN. It's a report saying that 39 of 50 top offsetting programs are likely junk, 8 "look problematic", and 3 lack sufficient information, with none found to be good.

I think most climate people are very suspicious of charities like this, rather than or in addition to not believing in ethical offsetting. See this Wendover Productions video on problematic, non-counterfactual, and outright fraudulent climate offsets. I myself am not confident that CATF offsets are good and would need to do a bunch of investigation, and most people are not willing to do this starting from, say, an 80% prior that CATF offsets are bad.

Upvoted. I don't agree with all of these takes but they seem valuable and underappreciated.

But with no evidence, just your guesses. IMO we should wait until things shake out, and even then the evidence will require lots of careful interpretation. Also, EA is 2/3 male, which means that even a minor share of scandal involvement by women could amount to harm proportionate to their share of the community.

I'm looking for AI safety projects with people with some amount of experience. I have 3/4 of a CS degree from Caltech, one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but willing to do anything that builds research and engineering skill.

If you've published 2 papers in top ML conferences or have a PhD in something CS related, and are interested in working with me, send me a DM.
