OscarD🔸

1153 karma · Joined · Working (0-5 years) · Oxford, UK


Nice, you could consider making this an event: https://forum.effectivealtruism.org/events

(not sure how many people look at or find out about events from that page though)

Thanks for writing this, heaps of interesting points. Most surprising and saddening to me was that you think there is a 70% chance EA will be net-negative! Could you explain why you think this? Your various concerns about power centralisation and so forth make sense to me, but to my mind this isn't nearly enough to flip the sign, and EA still seems overwhelmingly good to me.

I was also struck by your melancholy tone - somehow I think I implicitly hoped that if I accomplished all the things you have I would feel more resoundingly happy with my impact! But maybe EAish people are unusually cognisant of missed opportunities and impact that could have been but wasn't.

I think I agree with you that many people won't want rapid change.

However, it seems inevitable that some people will (even if just part of the EA/rationalist sphere, though I think the set of people wanting explosive growth would be a fair bit broader). And so if even a small fraction of the population wants to undertake explosive growth, and they are free to do so, then it will happen and they will quickly come to comprise ~all of the world economy.

This is a huge if: maybe the status quo will have powerful enough proponents that they prevent anyone from pursuing explosive growth.

But I think it is also quite plausible a few people will go and colonise space or do some other explosive-growth-conducive thing, and that there will be a bunch of people kind of technologically 'left behind', perhaps by choice.

At the time of this comment the post has 25 karma from 12 votes which seems like not necessarily (or certainly not many) downvotes? Maybe it was different earlier. But I agree downvotes would be strange.

I am also quite sad that EA has a less positive reputation than it did, or than it could and 'should' have. I think EA ideas and communities being high-status and widely admired would be good for the world. And I think to make that happen it is useful for the Rutger Bregmans and Yuval Hararis of the world to be public about EA-ish things. For 'normal' people like us without a lot of name recognition or a large online following, my guess is that EA community building is better done by talking with friends and acquaintances than by having a strongly EA online presence.

So where I have come down so far is to be quite open and forthcoming about my EA-ness in person, and to have some EA markings online (GWWC diamond on LinkedIn etc) but I semi-recently removed my surname from my EAF account. I think this is mainly a tail-risk insurance policy and will probably not be useful/important, but even a ~5% chance of the EA community's reputation deteriorating further and me wanting to get a non-EA job later means this very weak partial anonymisation is worth it. But I'm not at all sure about that, maybe there are important benefits to having a bio on the EAF about who I am and linking to my LinkedIn profile and so forth (though my guess is most people reading something I write on the Forum would be able to fairly quickly find/infer my surname if they want to).

I was surprised to see you mention at the end that you didn't get a job offer partially because of being publicly EA, as this seems to cut against your thesis? I think you are saying that the benefits are just large enough that that cost is bearable. Which makes sense, but not getting a job offer seems like quite a big cost to me, or at least it is something I care a lot about.

I think this is a useful analysis and it reveals to me how opposed to the 'indefinite moratorium' proposal I think I am.

To get the moratorium lasting indefinitely, there seems to be a dilemma between:

  1. General scientific/technological progress stops permanently along with AGI progress stopping permanently, or
  2. We create a Bostrom-style arrangement where all people and all computers are constantly surveilled to prevent any AI-relevant advances from occurring.

I think both of these would constitute an existential catastrophe (in the traditional sense of failing to realise most of humanity's potential). 1 seems clearly bad (outside of e.g. a degrowth perspective). 2 would be necessary because, if 1 doesn't hold and science and technology keep progressing while AGI does not, it will become increasingly easy to build AGI (or at least to make progress towards it), and we will need ever more restrictive mechanisms to prevent people from doing so. Probably we just cannot access many regions of science-technology space without superintelligence, so eventually 2 may morph into 1 as well.

So I think for me a moratorium for years or decades could be very good and valuable, but only if we eventually lift it.

I would be interested in hearing from anyone taking the position that AGI/superintelligence should never be built, and potentially in writing an EAF dialogue together or otherwise trying to understand each other better.

Great points, I hadn't thought about the indirect benefits of poor cybersecurity before, interesting!

And yes, your point about considering non-humans is well-taken and I agree. I suppose even on that, my guess is that liberalism is more on track towards a pro-animal future than authoritarianism is, even if both are very far from it (though it is hard to tell).

Is there a principled place to disembark the crazy train?

To elaborate, if we take EV-maximization seriously, this appears to have non-intuitive implications about e.g. small animals being of overwhelming moral importance in aggregate, the astronomical value of X-risk reduction, the possibility of infinite amounts of (dis)value, suffering in fundamental physics (in roughly ascending order of intuitive craziness to me).

But rejecting EV maximization also seems problematic.

(Answered my own question: 'read' means stayed on the page for >30 seconds.)

However, one of my posts has a negative bounce rate, which seems like a bug! Or maybe my post was just that engaging ;)
 

Nice! I didn't actually know we had access to our author stats, cool. What is the difference between 'views' and 'reads'? Also, how 'true' do you think these numbers are? They seem rather surprisingly high to me, could there just be a bunch of bots racking up numbers?
