Daniel_Eth

2618 karma · 17 posts · 250 comments

I think my introductory explainer on the topic is a pretty good resource for that sort of audience:
https://medium.com/@daniel_eth/ai-alignment-explained-in-5-points-95e7207300e3

OpenAI taking security more seriously seems good, and I also expect it to be good for reducing race dynamics (the less US adversaries are able to hack US labs, the less tight I expect a race to be).

I think there's a decently-strong argument for there being some cultural benefits from AI-focused companies (or at least AGI-focused ones) – namely, because they are taking the idea of AGI seriously, they're more likely to understand and take seriously AGI-specific concerns like deceptive misalignment or the sharp left turn. Empirically, I claim this is true – Anthropic and OpenAI, for instance, seem to take these sorts of concerns much more seriously than do, say, Meta AI or (pre-Google DeepMind) Google Brain.

Speculating, perhaps the ideal setup would be for an established organization to absorb an AGI-focused effort, as with Google DeepMind (or for an AGI-focused company to be nationalized and put under a government agency with a strong safety culture).

I'm pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal suffering vegans by tabooing poultry at the expense of beef.


Generally disagree, because the meat eaters don't get anything out of this agreement. "We'll both agree to eat beef but not poultry" doesn't benefit the meat eater. The one major possible exception, imho, is people in relationships – I could imagine a couple where one person is vegan and the other is a meat eater deciding that both doing this is a Pareto improvement.

I think it is worth at least a few hours of every person's time to help people during a war and humanitarian crisis. 

 

I don't think this is true, and I don't see an a priori reason to expect cause prioritization research to reach that conclusion. I also find it a little odd how often people make this sort of generalized argument for focusing on this particular conflict, when the same statement should apply equally well to many other conflicts that are far more neglected and lower-salience, yet where people rarely make this sort of argument (it feels like a selective invocation of a generalizable principle).

My personal view is that being an EA implies spending some significant portion of your efforts being (or aspiring to be) particularly effective in your altruism, but it by no means demands that you spend all your efforts doing so. I'd seriously worry about the movement if there were an expectation that EAs devote themselves completely to EA projects and neglect things like self-care and personal connections (even with an exception for self-care and connections insofar as they help one be more effective in their altruism).

It sounds like you developed a personal connection with this particular dog rather quickly, and while this might be unusual, I wouldn't consider it a fault. At the same time, while I don't see a problem with EAs engaging in that sort of partiality with those they connect with, I would worry a bit if you were making the case that this sort of behavior was in itself an act of effective altruism, as I think prioritization, impartiality, and good epistemics are really important to exhibit when engaged in EA projects. (Incidentally, this is one further reason I'd worry if there were an expectation that EAs devote themselves completely to EA projects – I think it would lead to more backward rationalizations about why various acts people want to do are actually EA projects when they're not, which would hurt epistemics and so on.) But you don't really seem to be doing that.

IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn't have to spend time learning more or thinking through tradeoffs.

I am curious to know how many Americans were consulted about the decision to spend about $10,000 per tax-payer on upgrading nuclear weapons... surely this is a decision that American voters should have been deeply involved in, given that it impacts both their taxes and their chance of being obliterated in a nuclear apocalypse. 

I think there's a debate to be had about when it's best for political decisions to be decided by what the public directly wants, versus when it's better for the public to elect representatives who make decisions based on a combination of their personal judgment and deference to domain experts. I don't think this is obviously a case where the former makes more sense.

 

It feels like that much money could be much better spent in other areas.

Sure, but the alternative isn't the money being spent half on AMF and half on the LTFF – it's instead some combination of other USG spending, lower US taxes, and lower US deficits. I suspect the more important factor in whether this is good or bad will instead be its direct effects on nuclear risk (I assume some parts of the upgrade will reduce nuclear risk – for instance, better sensors might reduce the chance of a false positive of incoming nuclear weapons – while other parts will increase it).

 

Isn't there a contradiction between the idea that nuclear weapons serve as a deterrent and the idea that we need to upgrade them? The implication would seem to be that the largest nuclear missile stockpile on the planet still isn't a sufficient deterrent, in which case what exactly would constitute a deterrent?  

Not necessarily – the upgrade likely includes many measures aimed at reducing the chances that a first strike from adversaries could nullify the US stockpile (efforts toward this goal could include both hardening and redundancy), thus preserving US second-strike capabilities.

 

More to the point, is this decision being taken by people who see nuclear war as a zero-sum game - we win or we lose

I'm sure ~everyone involved considers nuclear war a negative-sum game. (They likely still think it's preferable to win a nuclear war than to lose it, but they presumably think the "winner" doesn't gain as much as the "loser" loses.)

 

If the US truly needs to upgrade its nuclear arsenal, then surely the same is true of Russia

Yeah, my sense is that multiple countries will upgrade their arsenals soon. I'm legitimately uncertain whether this will, on net, increase or decrease nuclear risk (largely I'm just ignorant here – there may be an expert consensus I'm unaware of, but I don't think the immediate reaction of "spending further money on nukes increases nuclear risk" is obviously correct). Even if it would be better for everyone not to, it may be hard to coordinate on avoiding it (though it may still be worth trying).

 

Given the success of Oppenheimer and the spectre of nuclear annihilation that has been raised by the war in Ukraine, this might be the moment to get the public behind such an initiative. 

I think it's not crazy to think there might be a relative policy window now to change course, given these reasons.

I don't have any strong views on whether this user should have been given a temporary ban versus a warning, but (unless the ban was for a comment that has since been deleted, or for a private message – each of which is possible, and feel free to correct me if so) from reading their public comments, I think it's inaccurate (or at least misleading) to describe them as "promoting violence". Specifically, they do not seem to have been advocating that anyone actually use violence, which I think is the most natural interpretation of "promoting violence". Instead, they appear to have been expressing that they would emotionally want people who hypothetically did the thing in question to face violence, that (in the hypothetical example) they would feel the urge to use violence themselves, and so on.

I'm not defending their behavior, but it does feel importantly less bad than what I initially assumed from the moderator comment, and I think it's important to use precise language when making these sorts of public accusations.

Worth noting that in humans (unlike in most other primates), status isn't determined solely by dominance (e.g., control via coercion), but is also significantly influenced by prestige (e.g., voluntary deference due to admiration). While both dominance and prestige play a large role in determining status among humans, if anything prestige probably plays the larger role.

 

(Note – I'm not an expert in anthropology, so anyone who is should feel free to chime in, but this is my understanding given my level of knowledge in the area.)
