D0TheMath

1079 karma · College Park, MD 20742, USA
Interests: Forecasting

Bio

An undergrad at the University of Maryland, College Park, majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community and changed my career plans to AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group for the University of Maryland, College Park.

Also see my LessWrong profile

Sequences (1)

Effective Altruism Forum Podcast

Comments (167)

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept, and am actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, and friendship, are at odds with this fairness value in many circumstances.

But I think it's still useful to connect the drowning child argument with the parts of me which resonate with it, and to think about how much I actually care about those parts of me over other parts in such circumstances.

Human morality is complicated, and I would prefer that more people 'round these parts did moral reflection by doing & feeling rather than thinking, but I don't think there's no place for argument in moral reflection.

Even if most aren't receptive to the argument, the argument may still be correct, in which case it's still valuable to argue for and write about.

I agree with you about the bad argumentation tactics of Situational Awareness, but not about the object level. That is, I think Leopold's arguments are both bad and false. I'd be interested in talking more about why they're false, and I'm also curious about why you think they're true.

> Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations

Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both the podcast and the website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].


  1. In particular, they try to present as an authoritative source on strategic matters concerning job selection while not doing the homework needed to actually claim that status, and they put clarifications, if they add them at all, in articles (and parts of articles) that empirically nobody reads and that I've found hard to find. ↩︎

  2. Probably second to their horrendous SBF interview. ↩︎

The last two points don’t seem obviously correct to me.

First, the US already has a significant amount of food security, so it's unclear whether cultivated meat would actually add much.

Second, if cultivated meat destroys the animal agriculture industry, this could very easily lead to a net loss of jobs in the economy.

> rationalist community kind of leans right wing on average

Seems false. It leans right compared to the extreme left wing, but right compared to the general population? No. It's too libertarian for that. I bet rightists would also say it leans left, and centrists would say it's too extreme. Overall, I think it's just classically libertarian.

There's much thought in finance about this. Some general books are:

  1. Options, Futures, and Other Derivatives

  2. Principles of Corporate Finance

And more particularly, The Black Swan: The Impact of the Highly Improbable, along with other stuff by Taleb (this is kind of his whole thing).

The same standards applied to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.

So far the track record is heavily against. Indeed, many of the worst calamities in history took the form of "revolution".

Absent that track record, you need one hell of an argument to explain why your plan is better, which at minimum likely requires basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, though sufficiently good complexity economics would also be fine).

> More broadly I think Anthropic, like many, hasn’t come to final views on these topics and is working on developing views, probably with more information and talent than most alternatives by virtue of being a well-funded company.

It would be remiss not to also mention the large conflict of interest that analysts at Anthropic have when developing these views.
