Nathan Young

Product Management @ Forecasting Consultancy
14,858 karma · Joined May 2019 · Working (0-5 years) · London, UK

Bio

Participation
4

Create prediction markets and forecasting questions on AI risk and biorisk. I also work part-time at a prediction market.

Use my connections on Twitter to raise the profile of these predictions and increase the chance that decision-makers discuss these issues.

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool.

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on Metaculus.

Talking to them about forecasting.

Sequences
1

Moving In Step With One Another

Comments
2,227

Topic contributions
19

Interesting take. I don't like it. 

Perhaps because I like saying overrated/underrated.

But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".

Over/underrated requires about three mental queries: "Is it better or worse than people in general think?" "Is it better or worse than my ingroup thinks?" "Am I gonna have to be clear about what I mean?"

Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4 will people think I'm crazy?"...

Like in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.


Also, would you prefer people used over/underrated less, or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than from some epistemic objection. And if that's the case, I think we should have a different discussion.

I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try to broaden my answer from over/underrated, just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing, often used because giving exact sizes in status is hard. Perhaps we should have numbers and words for it, but we don't.

How is this as a snapshot of the discussion so far?

You can edit the image here and post as a comment: https://link.excalidraw.com/l/82wslD39E6w/5wUzJOIPnRl 

Kind of frustrating that there isn't a single place for it to be discussed. 

There seem to be some pretty large things I disagree with in each of your arguments:

"The second is a situation in which some highly capable AI that developers and users honestly think is safe or fine turns out not to be as safe as they imagined."

This seems exactly the sort of situation I want AI developers to think long and hard about. Frankly, your counterexample looks like an example to me.

"Autonomy seems like a primary feature of the highly capable advanced AI currently being built. None of the comparators typically used shares this feature. Surely, that should matter in any analysis."

To me, where I cannot impose costs directly on the autonomous entity, autonomy again makes strict liability better, not worse. You seem to argue that if nuclear explosions or trains were autonomous, we shouldn't place strict liability on their creators. That seems the opposite of what I'd expect.

"Given the interests at play, strict liability will struggle to gain traction."

I do not trust almost anyone's ability to predict this stuff. If it's good on its merits, let's push for it. Notably, Robin Hanson and some other "risk is low" people support strict liability (because they don't think disasters will happen), so I think there is a case for a coalition around this. I don't buy that you can predict that this will struggle.

I am interested in what bad things you think might happen with strict liability, and how you think it has gone in the past.

I sense this post shouldn't be a community post. I know it's written to the EA community, but it's discussing a specific project. Feels like it shouldn't be relegated to the community section because of style.

I think that kind of thinking is appropriate in all these cases. The Wytham Abbey purchase was an investment, but it is reasonable to compare its cost to other investments in these terms.

Thanks for writing this.

I am confused why people are defensive of @Sam Bankman-Fried. I am fond of him as a person and he was gracious to me personally. I even checked up on him after the crash. But that doesn't change the fact that he committed a massive crime.

It doesn't seem hard to say that I want Sam to be well as a person (and Caroline, Nishad, Gary, and anyone else close to them) whilst also saying this was a huge and deliberate fraud. And I don't even think we need to have discussions about utilitarianism. Why trade so sloppily? Why hide it for so long from anyone who could have given better advice?[1] I don't get the temptation to say "Sam was trying his best".

  1. ^

    Unless many here are lying about not knowing, which I doubt.

I don't love this article, but it's fine. In general, many other articles about EA are too negative, so it doesn't really seem worth writing a big correction when the median person who hears about EA probably comes away with roughly the right impression.

Specifically, are new readers gonna believe that EA has done a load of useful soul-searching because this article says so? I doubt it. There are enough articles saying that EAs are a bunch of cynical psychopaths that many will probably assume this is the fluff piece (which it is).

I don't really think this meta discussion is that high a priority. The better question is how EA improves and refocuses on effective altruism, but I don't have a great answer to that, other than writing articles specifically aimed at that.

Maybe, but nonetheless it is true. I don't read 'em. Do you?

I guess I feel a lot of things:

  • Empathy - I try to save slugs and snails etc., so I get this feeling that we should take all lives mattering seriously. There is something caring and beautiful in this and I like this intuition.
  • Confusion - I have felt this about veganism a bit recently. I don't really think being vegan was worth the amount of stress it caused me in terms of animal lives saved. Perhaps I should do it for a month a year to remind me of the cost, but until I hit diminishing returns on my work, that's probably all I should do. I used to think "if I were in slave-owning times I should have divested entirely", but I dunno these days. Probably my anti-slavery resources would have been better spent first and foremost funding abolitionists. I don't know the exact costs.
  • Frustration - I find this story a bit insane. It's about someone I know who is very kind tying themselves in knots over a few hundred hours of micro-consciousness. I have the voice of a friend in my head saying "that's an insane story". For myself I'd allow it a bit, but at some point I think I'd say that it isn't the best way to help moths or all consciousness, and that most minds would agree with the parts of me that want to throw in the towel.
  • Sadness - I'm sad that you are sad, especially after trying to be so kind. And I agree that it's weird how we behave towards people we think might be doing bad things, like the insect guy.