Welcome!
If you're new to the EA Forum:
- Consider using this thread to introduce yourself!
- You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
- (You can also put this info into your Forum bio.)
Everyone:
- If you have something to share that doesn't feel like a full post, add it here! (You can also create a quick take.)
- You might also share good news, big or small (see this post for ideas).
- You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Other Forum resources
The blog post "This can't go on" is quite prominent in the introductory reading lists to AI Safety. I really struggle to see why. Most of the content in the post is about why the growth we currently have is very unusual and why we can't have economic growth forever. I think mainstream audience is already OK with that and that's a reason they are sceptical to AI boom scenarios. When I first read that post I was very confused. After organising a few reading groups other people seem to have similar confusions too. It's weird to argue from "look, we can't grow forever, growth is very rare" to "we might have explosive growth in our lifetimes." A similar reaction here.
I disagree with the following:
"very strong evidence against "the world in 100 years will look kind of similar to what it looks like today"."
Growth is an important kind of change, so arguing against the possibility of some kind of extreme growth actually makes it harder to argue that the future will be very different. Let me frame it this way (a rough worked example follows below):
Scenario -> Technological "progress" under scenario
- AI singularity -> Extreme progress within this century
- AI doom -> Extreme "progress" within this century
- Constant growth -> Moderate progress within this century
...
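For a sense of scale in the "constant growth" row above, again using an illustrative ~2% annual rate rather than a figure from the post:

$$1.02^{100} \approx 7.2, \qquad 1.02^{1{,}000} \approx 4 \times 10^{8}.$$

A century of constant growth leaves the economy roughly seven times larger: real change, but moderate next to singularity-style scenarios. A millennium already requires a several-hundred-million-fold increase, which is the "can't go on" part.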