Tim Hua

MARS/AISC
86 karma · Joined · Working (0-5 years) · Berkeley, CA, USA
timhua.me

Bio

Participation (6)

Used to run Middlebury Effective Altruism
Worked as an economist at Walmart and gave a bunch of money away
Now working in AI safety
See timhua.me

Comments (14)

I think these "preserve trees" offsets might be somewhat fake to begin with. I've personally given to Make Sunsets (stratospheric aerosol injection) and to Climeworks (direct air capture of carbon, which is then injected into underground rock formations).

In case people don't know, Oath is a Democratic Party-affiliated org that identifies underfunded, close races where your marginal donations could really matter.

I'd encourage you to stop making these sorts of posts. I think they're off-putting for people who might otherwise engage more with more reasonable EA ideas.


I strong-downvoted this comment because I think this type of discourse censorship is terrible. Effective Altruism should be about figuring out how to do the most good, and then doing just that.

"This idea is off putting" can be use as a fully general counterargument against any new intervention or pivot. Helping farmed animals is off putting to many. Helping people abroad before helping those at home is off putting to many. 

This is, by the way, not to say that you can't dismiss an argument if its logic leads to absurd conclusions. Reasoning from first principles can be a dangerous activity if you take your ideas seriously (see, e.g., epistemic learned helplessness and the memetic immune system). But when trying to figure out how to do the most good, I think it's really, really bad to have any sort of internal thought censor.

(I think it's comparatively better to consider "does this sound off-putting?" when deciding what actions to take.)

While you can use o1 and Gemini with internet access, I think they almost certainly ran the evaluation without such access (see the original paper here).

I really, really do not think you should put the plot there. It's like comparing two students' performance when one of them has access to the internet. I think it's extremely misleading. If you want to illustrate progress, you could just use the FrontierMath/GPQA results, or even ARC-AGI.

Wait, that Humanity's Last Exam plot is super misleading, right? Since the other models did not have access to the internet but Deep Research does?

I don't see the fireside chat with Forethought on the agenda; will it be added later? I'd love to attend!

I endorse moral reasoning where you start from a conclusion and then work backwards to discover general principles.

I think this community is much more at risk of being led astray by convincing-sounding but actually incorrect arguments than of having starting assumptions that vastly limit its ability to do good (I would probably give the opposite advice to most other people).

See, e.g., Epistemic Learned Helplessness and Memetic Immune System.

I strongly disagree that we should avoid doing something just because its optics/vibes might not be mainstream, or because it requires people to change what they're doing.


I am also strongly against "we shouldn't do this because it is culturally insensitive." There are lots of cultural practices I find abhorrent (e.g., female genital mutilation). I don't care if stopping it "offends" other people. Cultures are perfectly capable of promoting very bad practices. 

This is not trying to do the most good with limited resources. This is "trying to do the most good with limited resources, subject to the constraint of not making some people angry or seeming too weird." 

(For what it's worth, I voted fairly strongly on the side of spending more on global health as opposed to animal welfare.)

I'm not sure I understand: on one side, we have a stronger obligation to those close to us, but on the other side, it is good to help strangers who are thousands of kilometers away.


I don't see how this is contradictory? For example, you might prefer saving 10 American lives to saving 11 non-American lives, but prefer saving 100 non-American lives to 5 American lives.

That, together with the anti-expanding-moral-circle argument, suggests that it's OK (and in fact, in my opinion, good) to assign different weights to different entities.

I don't believe in complete impartiality. I think we have a stronger moral obligation to those who are closer to us--be it family, friends, or co-nationals. The vast majority of my donations have gone to global health simply because it is much, much more cost-effective to help the poorest in the world.

I also think that a blind push to expand the moral circle is misguided. See: https://gwern.net/narrowing-circle.
