Robert_Wiblin
5428 karma · Joined
Posts: 76
Comments: 463

Thanks for this post, it warmed our hearts! Glad we've been able to help you understand the world better over the years and maybe even have more impact too. ❤️

I threaded the top ten list here: https://x.com/robertwiblin/status/1834613676034113817

(By the way, the next episode we plan to release, one of Luisa's, actually has more pushback on AI and robotics; have a listen and see what you think.)

For what it's worth, SBF put this idea to me in an interview I did with him, and I thought it sounded daft at the time, for the reasons you give among others.

He also suggested putting private messages on the blockchain which seemed even stranger and much less motivated.

That said, at the time I regarded SBF as much more of an expert on blockchain technology than I was, which made me reluctant to entirely dismiss it out of hand, and I endorse that habit of mind.

As it turns out people are now doing a Twitter clone on a blockchain and it has some momentum behind it: https://docs.farcaster.xyz/

So my skepticism may yet be wrong — the world is full of wonders that work even though they seem like they shouldn't. Though I don't know how a project like that out-competes Twitter, given the network effects holding people on the platform.

Now having data for most of October, knowing our release schedule, and being able to see month-by-month engagement, I'd actually forecast that 80k Podcast listening time should grow 15-20% this year (not 5%), for ~300,000 hours of consumption total.

(If you forecast that Q4 2023 will be the same as Q3 2023 then you get 11% growth, and in fact it's going to come in higher.)
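The baseline arithmetic here can be sketched with hypothetical quarterly totals. The hours below are invented for illustration; only the "assume Q4 repeats Q3" logic comes from the comment:

```python
# Hypothetical quarterly listening hours, purely for illustration:
# only the "assume Q4 2023 matches Q3 2023" baseline is from the comment.
q_2022 = [60_000, 65_000, 68_000, 70_000]  # full-year 2022 (made up)
q_2023 = [70_000, 72_000, 73_000]          # Q1-Q3 2023 (made up)

# Naive baseline: assume Q4 2023 comes in equal to Q3 2023.
naive_2023 = sum(q_2023) + q_2023[-1]
naive_growth = naive_2023 / sum(q_2022) - 1

print(f"naive year-on-year growth: {naive_growth:.0%}")
```

A forecast that expects Q4 to come in above Q3 then yields a higher growth figure, which is the point being made.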

That is indeed still a significant reduction from last year when it grew ~40%.

Let me know if you'd like to discuss in more detail!

Basically we're grabbing analytics from Apple Podcasts, Spotify for Podcasters and Google Podcast Manager (which internally I call the 'Big 3'), and adding them up.

But Spotify and Google Podcasts only became available around/after Nov 2019. Drop me an email if you'd like to discuss! :)
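The aggregation itself is just a sum across the three platforms' exports. A minimal sketch, with invented per-platform figures:

```python
# Hypothetical per-platform listening-hour totals (invented figures).
# Caveat from the comment: the Spotify and Google series only begin
# around Nov 2019, so pre-2020 combined totals undercount real listening.
platform_hours = {
    "Apple Podcasts": 160_000,
    "Spotify for Podcasters": 110_000,
    "Google Podcast Manager": 30_000,
}

total_hours = sum(platform_hours.values())
print(f"combined listening: {total_hours:,} hours")
```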

"I'm really not sure what this means and surprised Rob didn't follow up on this."

Just the short time constraint. Sometimes I have to trust the audience to assess for themselves whether they find an answer convincing.

Ah OK, I agree it's not that consistent with GiveWell's traditional approach.

I think of high-confidence GiveWell-style giving as just one possible approach one might take in the pursuit of 'effective altruism', and it's one that I personally think is misguided for the sorts of reasons Shruti is pointing to.

High-confidence (e.g. GiveWell) and hits-based giving (e.g. Open Phil, all longtermism) are both large fractions of the EA-inspired portfolio of giving and careers.

So really I should just say that there's nothing like a consensus around whether EA implies going for high-confidence or low-confidence strategies (or something in the middle I guess).

(Incidentally from my interview with Elie I'd say GiveWell is actually now doing some hits-based giving of its own.)

Sorry in what sense does Shruti say that EA solutions aren't effective in the case of air pollution? Do you mean that the highest 'EV' interventions are likely to be ones with high uncertainty about whether they work or not?

(I don't think of EA as being about achieving high confidence in impact, if anything I'd associate EA with high-risk hits based giving.)

Seems like David agrees that once you were spread across many star systems this could reduce existential risk a great deal.

The other line of argument would be that at some point AI advances will either cause extinction or a massive drop in extinction risk.

The literature on a 'singleton' is in part addressing this issue.

Because there's so much uncertainty about all this, it seems overly confident to claim that it's extremely unlikely for extinction risk to drop near zero within the next 100 or 200 years.

Ah great, glad I got it!

I think I had always assumed that the argument for x-risk relied on the possibility that the annual risk of extinction would eventually either hit or asymptote to zero. If you think of life spreading out across the galaxy and then other galaxies, and then being separated by cosmic expansion, then that makes some sense.

To analyse it in the most simplistic way possible — if you think extinction risk has a 10% chance of permanently going to 0% if we make it through the current period, and a 90% chance of remaining very high even if we make it through the current period, then extinction reduction takes a 10x hit to its cost-effectiveness from this effect. (At least that's what I had been imagining.)
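That simplistic model can be written out directly. The value figures are arbitrary placeholders, not anything from the comment:

```python
# Simplistic model: after surviving the current period, extinction risk
# either drops permanently to zero (10% chance) or stays very high
# (90% chance), in which case almost all long-run value is lost anyway.
p_risk_ends = 0.10
value_if_risk_ends = 1.0      # arbitrary units of long-run value
value_if_risk_persists = 0.0  # crude approximation: value mostly wiped out

expected_value_of_survival = (
    p_risk_ends * value_if_risk_ends
    + (1 - p_risk_ends) * value_if_risk_persists
)
# Cost-effectiveness of extinction reduction scales with this factor,
# i.e. a 10x hit relative to the "risk permanently ends" case.
print(expected_value_of_survival)
```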

I recall there's an Appendix to The Precipice where Ord talks about this sort of thing. At least I remember that he covers the issue that it's ambiguous whether a high or low level of risk today makes the strongest case for working to reduce extinction being cost-effective: as I think you're pointing out above, while a low risk today makes it harder to reduce the probability of extinction by a given absolute amount, it simultaneously implies we're more likely to make it through future periods if we don't go extinct in this one, raising the value of survival now.

I'm not much good at maths, so I found this hard to follow.

Is the basic thrust that reducing the chance of extinction this year isn't so valuable if there remains a risk of extinction (or catastrophe) in future because in that case we'll probably just go extinct (or die young) later anyway?
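One way to make that thrust concrete (my framing, with a hypothetical constant annual risk): if an annual extinction risk r persists indefinitely, the expected number of future years conditional on surviving this one is the geometric sum 1/r, so preventing extinction this year buys a finite expectation rather than an astronomical one.

```python
def expected_future_years(annual_risk: float) -> float:
    """Expected future years under a constant, permanent annual risk.

    Survival for t further years has probability (1 - r) ** t, and the
    sum over t >= 0 of (1 - r) ** t equals 1 / r.
    """
    return 1.0 / annual_risk

# With 1% annual risk, saving the world this year buys ~100 expected
# years; with 0.1%, ~1,000. Lower persistent risk makes surviving the
# current period more valuable, not less.
print(expected_future_years(0.01))
print(expected_future_years(0.001))
```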
