Ray Raven
Well, I appreciate your counterpoint, though I don't know why you bothered to take it back.
First, your counterpoint on mass adoption is valid. I'm from a poor country in South Asia myself, and most of my friends think of AI as an 'entertaining chatbot' and nothing more. Even though the technology has advanced radically, most people can't access it. Many don't have a smartphone or any device, and most don't have sufficient knowledge. What's going on in the wider world doesn't matter to them.
But you have to see how rapidly things are changing. Who could have imagined half a century ago that most people would one day carry a wallet-sized computer in their pocket, far more powerful than the computers NASA used to send people to the moon? How the future might turn out is simply beyond us; we can't predict whether adoption will be slower or faster. Most people in my country are addicted to Facebook (even my parents); just ten years ago that was completely unthinkable to me. Mobile phones were a luxury, and my parents only had button phones. Things changed rapidly before my eyes.
 

Your second point is absolutely valid: our planning quality will improve drastically. But I'm afraid it still won't be able to predict an exponentially changing future. There are tons of Wall Street quants trying to beat the market with math more complex than quantum physics, but at the end of the day their returns are only slightly better than the market average.

Well, as human beings, it's our desire to predict the future. We fear uncertainty more than anything. Today we can predict the lifetime of a star millions of light-years away, yet we can't predict our own future back here on Earth. And that is something we don't like. We will keep trying to predict the future, even if it only serves our curiosity. I think we should keep going no matter how many times we fail.


What if the philosophical movement dedicated to securing humanity's distant future has fundamentally misunderstood the forces that will shape it?

In "Technology's Double Edge: Reassessing Longtermist Priorities in an Age of Exponential Innovation," I argue that longtermism's most influential thinkers, despite their sophisticated moral frameworks and rigorous analysis, have made a critical error. They treat artificial intelligence and biotechnology as variables to be managed in their utilitarian calculations, when these technologies actually represent something far more disruptive: forces that may render the entire longtermist project obsolete within decades.

The irony is striking. A movement built on taking the long view has failed to grasp how exponential technological change makes long-term planning increasingly meaningless. When AI could fundamentally alter human civilization by 2050, and genetic engineering may redesign human beings themselves, what does it mean to optimize for outcomes in the year 3000?

This isn't just an academic quibble. The essay reveals how longtermism's precautionary approach to dangerous technologies may actually increase both present suffering and future risk. While philosophers debate global AI governance, people die from diseases that biotechnology could cure. While ethicists worry about enhancement technologies, aging continues its relentless march. The very technologies longtermists fear may be our only tools for addressing existential threats.

The piece culminates in a provocative thesis: rather than trying to control humanity's technological trajectory, longtermists should accelerate beneficial innovations while building institutions capable of navigating radical uncertainty. This means abandoning comfortable assumptions about human nature, moral progress, and our ability to predict what conscious beings will value millennia hence.

Most unsettling of all, the essay suggests that longtermism's anthropocentric focus may be its greatest limitation. If technology transcends current human limitations, shouldn't our moral concern extend to whatever forms of consciousness emerge from that transformation—even if we cannot comprehend their values or experiences?

"Technology's Double Edge" challenges readers to confront an uncomfortable possibility: that in our rush to secure humanity's future, we may have misunderstood both technology's power and our own moral limitations.

Ray Raven

We can't determine the far future. The further out we try to predict, the more error we face. It's a lesson we've learned from the past: no one a decade or two ago could have imagined that information technology would change beyond exponentially, but it did. There are thousands of Wall Street quants whose job is to predict short-term stock prices using thousands of computers, and yet their returns are only a little better than the average investor's.