While I agree with many points in this post, I think it would be stronger if it engaged more with the existing discussion within EA on mental health, on the Forum and elsewhere. 

For example:

  • The "Self-care and wellbeing in the EA community" tag contains over 270 posts, including some of the most highly-upvoted posts on the site.
  • 80,000 Hours has dedicated significant attention here:
  • Several programs have popped up over the years offering to provide or connect EAs with mental health services. Currently Rethink Wellbeing is active and provides CBT and IFS-based peer-facilitated programs. (There may be others I’m forgetting.)

A few of these seem to me like the sort of thing the suggestions were asking for, e.g. "a few podcast episodes that could succinctly demonstrate the way therapy might explore common blind spots held by EAs, providing a rapid update with less resistance than other methods."

I've personally experienced mental health challenges due to EA, so I'm certainly not saying the problems are all solved, or that the resources above cover everything. Publishing one podcast doesn't solve a community-wide problem. But parts of this post read to me as suggesting these resources and discussions don't exist, so I want to provide an alternate perspective.

What makes the agriculture transition stand out as the "point of no return"? Agriculture was independently invented several times, so I'd expect the hunter-gatherer -> agriculture transition to be more convergent than agriculture -> AGI.

I didn't say universal or 50% support.

Sorry, you're right, you didn't say this -- I misread that part of your comment.

I still think your framing misses something important: the logic "50% of people are women so I think women’s suffrage had a pretty strong support base" applies at all points in time, so it doesn't explain why suffrage was so unpopular for so long. Or to put it another way, for some reason the popularity and political influence of the suffrage movement increased dramatically without the percentage of women increasing, so I'm not sure the percentage of people who are women is relevant in the way you're implying.

The idea that you can go regulating without considering public support/resistance is silly

On the other hand, I didn't say this! The degree of public support is certainly relevant. But I'm not sure what your practical takeaway or recommendation is in the case of an unpopular movement.

For example, you point out abolition as a case where resistance caused massive additional costs (including the Civil War in the US). I could see points 1, 3, 7, and possibly 8 all being part of a "Ways I see the Quaker shift to abolitionism backfiring" post. They could indeed be fair points that Quakers / other abolitionists should have considered, in some way -- but I'm not sure what that post would have actually wanted abolitionists to do differently, and I'm not sure what your post wants EAs to do differently.

Maybe you just intend to point out possible problems, without concluding one way or another whether the GH -> AW shift is overall good or bad. But I get a strong sense from reading it that you think it's overall bad, and if that's the case, I don't know what the practical upshots are.

I'm not super knowledgeable about women's suffrage, but

  • It was not universally supported by women (see e.g. here; I couldn't quickly find stats, but I'd be interested).
  • Surely the relevant support base in this case is those who had political power, and the whole point is that women didn't. So "50% support" seems misleading in that sense.

I could similarly say ">99.999% of animals are nonhumans, so nonhuman animal welfare has an extremely large support base." But that's not the relevant support base for the discussion at hand.

if I assume that mosquitoes fall somewhere between black soldier flies and silkworms in their welfare range then killing 100-1000 mosquitoes a year (assuming this causes suffering) could be the moral equivalent to killing a human.

I don't think this is a correct reading of the welfare range estimates. If I understand correctly, these numbers would mean that a mosquito can have hedonic states 0.1%-1% as intense as a human's. So 100-1000 days of mosquito suffering might be on par with one day of human suffering. (And of course these numbers are wild guesses based on other insects, whose estimates are already very uncertain.)

The harm of death is a different question that RP's numbers don't straightforwardly address. Even a purely hedonic account has to factor in lifespan (mosquitoes live for about six weeks). And killing a human is bad for a whole host of additional reasons unrelated to preventing future happiness.
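To make the rough arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. The 0.1%-1% intensity figures and the ~six-week mosquito lifespan are the loose assumptions above (the 80-year human lifespan is my own round number), not outputs of RP's model:

```python
# Back-of-the-envelope sketch; all numbers are rough assumptions, not RP outputs.
relative_intensity_low, relative_intensity_high = 0.001, 0.01  # mosquito hedonic intensity vs. human

# Intensity comparison: mosquito-days of suffering roughly on par with one human-day.
print(f"{1 / relative_intensity_high:.0f} to {1 / relative_intensity_low:.0f} mosquito-days ~ 1 human-day")
# -> 100 to 1000

# Death also has to weigh days of life lost, even on a purely hedonic account.
mosquito_lifespan_days = 6 * 7    # ~six weeks
human_lifespan_days = 80 * 365    # ~80 years (assumed)
print(f"~{human_lifespan_days / mosquito_lifespan_days:.0f}x more days lost per human death")
# -> ~695x, before even applying the intensity discount
```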

So while I think the welfare range estimates suggest huge moral updates, they're not as huge as you say. It's good to be able to take bold conclusions seriously, but it's also worth taking seriously that there might be a good reason for a result to be extremely counterintuitive.


I'm surprised you say you have "no idea what people mean." The Manifest / Summer Camp / LessOnline trio made Manifest seem closer to "project the LessWrong team is deeply involved with" than "some organization is renting out our space."

Among the things that gave me this impression were Raemon's post "some thoughts on LessOnline" and the less.online website, both of which integrate content about Manifest without clear differentiation.

Now that I'm looking at these with a more careful eye, I can see that they all say Manifest is independently operated with its own organizers, etc. I can understand how, from the inside, it would be obvious that Manifest was run by completely different people and had (I'm now presuming) little direct LessWrong involvement. I just think that, from the outside, this is less clear, and it wouldn't be hard for someone to be confused on this point.

Granted, I didn't go to any of these; I've just seen some stuff about them online, so discount this take appropriately. But if a friend had asked me "hey, I heard about Manifest, is that a Rationalist thing?", I think "yes" would have been a less misleading answer than "no."

Yes, based on the last month, "the number of safety-focused researchers is dropping rapidly" certainly seems true.

I'd guess "most" is still an overstatement; I doubt the number of people has actually dropped by >50%. But the departures, plus learning that Superalignment never got their promised compute, have caused me to revise my fuzzy sense of "how much core safety work OpenAI is doing" down by a lot, probably over 50%.


Relevant: Émile Torres posted a "TESCREAL FAQ" today (unrelated to this article, I assume; they'd mentioned it was in the works for a while).

I've only skimmed it so far, but here's one point that directly addresses a claim from the article.

Ozy:

However, Torres is rarely careful enough to make the distinction between people’s beliefs and the premises behind the conversations they’re having. They act like everyone who believes one of these ideas believes in all the rest. In reality, it’s not uncommon for, say, an effective altruist to be convinced of the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism. All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.”

TESCREAL FAQ:

5. I am an Effective Altruist, but I don't identify with the TESCREAL movement. Are you saying that all EAs are TESCREALists?

[...] I wouldn’t say—nor have I ever claimed—that everyone who identifies with one or more letters in the TESCREAL acronym should be classified as “TESCREALists.” ... There are some members of the EA community who do not care about AGI or longtermism; their focus is entirely on alleviating global poverty or improving animal welfare. In my view, such individuals would not count as TESCREALists.

Having followed Torres's work for a while, I felt like Ozy's characterization was accurate -- I've shared the impression that many uses of TESCREAL have blurred the boundaries between the different movements / treated them like a single entity. (I don't have time to go looking for quotes to substantiate this, however, so it's possible my memory isn't accurate -- others are welcome to check this if they want.) Either way, it seems like Torres is now making an effort to avoid this (mis)use of the label.

the number of safety focussed researchers employed by OpenAI is dropping rapidly

Is this true? The links only establish that two safety-focused researchers have recently left, in very different circumstances.

It seemed like OpenAI made a big push for more safety-focused researchers with the launch of Superalignment last July; I have no idea what the trajectory looks like more recently.

Do you have other information that shows that the number of safety-focused researchers at OpenAI is decreasing?
