Links are to prediction markets, subscripts/brackets are my own forecasts, done rapidly. 

I open my eyes. It’s nearly midday. I drink my morning Huel. How do I feel? My life feels pretty good. AI progress is faster than ever, but I've gotten used to the upward slope by now. There has perhaps recently been a huge recession, but I prepared for that. If not, the West feels more stable than it did in 2024. The culture wars rage on, inflamed by AI, though personally I don't pay much attention.

Either Trump or Biden won the 2024 election. If Biden, his term was probably steady growth and good, boring decision making. If Trump, there is more chance of global instability due to pulling back from NATO, lack of support for Ukraine, and incompetence in handling the Middle East. Under both administrations there is a moderate chance of a global recession, slightly more under Trump. I intend to earn a bit more and prep for that, but I can imagine that the median person might feel worse off if they get used to the gains in between.

AI progress has continued. For a couple of years it has been possible for anyone to type a prompt for a simple web app and receive an entire interactive website. AI autocomplete exists in most apps; AI images and video are ubiquitous. Perhaps an AI has escaped containment. Some simple job roles have been fully automated. For the last 5 years the sense of velocity we felt from 2023 onwards hasn't abated. OpenAI has made significant progress on automating AI engineers.

And yet we haven't hit the singularity yet; in fact, it feels only a bit closer than it did in 2024. We have blown through a number of milestones, but AIs are only capable of doing tasks that took 1-10 hours in 2024, and humans are better at working with them. AI regulation has become tighter. With each new jump in capabilities the public gets more concerned and demands more regulation. The top labs are still in control of their models, with some oversight from the government, but they are red-teamed heavily, with strong anti-copyright measures in place. Political deepfakes probably didn't end up being as bad as everyone feared, because people are more careful with sources. Using deepfakes for scams is a big issue. People in the AI safety community are a little more optimistic.

The world is just "a lot" (65%). People are becoming exhausted by the availability and pace of change (60%). Perhaps rapidly growing technologies focus on bundling the many new interactions and interpreting them for us (20%).

There is a new culture war (80%), perhaps relating to AI (33%). Peak woke happened around 2024, peak trans panic around a similar time. Perhaps eugenics (10%) is the current culture war or polyamory (10%), child labour (5%), artificial wombs (10%). It is plausible that with the increase in AI this will be AI Safety, e/acc and AI ethics. If that's the case, I am already tired (80%).

In the meantime physical engineering is perhaps noticeably out of the great stagnation. Maybe we finally have self-driving cars in most Western cities (60%), drones are cheap and widely used, we are perhaps starting to see nuclear power stations (60%), house building is on the up. Climate change is seen as a bit less of a significant problem. World peak carbon production has happened and nuclear and solar are now well and truly booming. A fusion breakthrough looks likely in the next 5 years.

China has maybe attacked Taiwan (25%), but probably not. Xi is likely still in charge (75%) but there has probably been a major recession (60%). The US, which is more reliant on Mexico, is less affected (60%), but Europe struggles significantly (60%).

In the wider world, both Africa and India are deeply unequal. Perhaps either has shrugged off its dysfunction (50%) buoyed up by English speakers and remote tech jobs, but it seems unlikely either is an economic powerhouse (80%). Malaria has perhaps been eradicated in Sub-Saharan Africa (50%).

Animal welfare is roughly static, though cultivated meat is now common on menus in London (60%). It faces battles around naming conventions (e.g. can it be called "meat") (70%) and is growing slowly (60%).

Overall, my main feeling is that it's gonna be okay (unless you’re a farm animal). I guess this is partly priors-based but I've tried to poke holes in it with the various markets attached. It seems to me that I want to focus on the 5 - 15 year term when things get really strange rather than worry deeply about the next 5 years.


This is my first post like this. How could I make the next one better?

Crossposted from my blog here: https://nathanpmyoung.substack.com/p/the-world-in-2029 

Comments

I found the cultivated meat one a little surprising so made a market: 

Thanks!

You thought it was too low or too high?

Too high. I thought there were huge scaling barriers based on something Linch wrote ~2 years ago. Maybe that's wrong or been retracted. 

I think that's generally the picture I had, but I put some decent chance on people overcoming those kinds of barriers.

See e.g. https://scitechdaily.com/breakthrough-could-reduce-cultivated-meat-production-costs-by-up-to-90/ which seems in the category 'huge if true/generalisable'

Cultured meat might not have to progress that far to be a novelty item at many high-end restaurants. I don't think this would necessarily mean it had made any significant penetration or impact on the overall meat market.

Nice post, love the combination of storytelling and forecasting.

Minor correction from my field: as great as it would be if there were any real chance of malaria being eradicated in sub-Saharan Africa by 2029, that prediction was for a 50 percent chance of malaria being eradicated in only one country, not the whole region.

My biased suggestion for improvement? A couple more global health predictions ;).

This was really cool to read. All quite reasonable predictions IMHO. I'm interested in seeing another one of this type when you get a chance.

I was pulled in by your narrative at the beginning. Maybe end it off with something fun and cultural, maybe a little hint at what dating will be like in the future :)

If I were to do another, what should it be about?

Executive summary: The post outlines a speculative but moderately optimistic vision of the world in 2029, with continuing AI progress, economic and political changes, new cultural debates, and technological advancements, while acknowledging uncertainties and potential risks.

Key points:

  1. AI capabilities will continue advancing rapidly, automating many tasks, but without reaching a "singularity" level yet (60-90% probability estimates).
  2. Economic and political landscape shifts include potential global recession (30%), changes in US administration (Trump/Biden), China's economic challenges (60%), and geopolitical instability risks (Taiwan, Ukraine, Middle East).
  3. New technological developments are expected in various domains: self-driving cars (60%), nuclear power (60%), fusion breakthrough likelihood (in 5 years), climate change mitigation.
  4. Emerging cultural debates and societal changes revolve around AI safety/ethics, post-woke ideological shifts, cultivated meat adoption (60%), and animal welfare stagnation.
  5. Overall outlook is cautiously optimistic ("it's gonna be okay"), despite uncertainties, with a focus on more transformative changes in the 5-15 year timeframe.
  6. Key areas of uncertainty flagged include AI governance, political instability risks, and effects of continued rapid technological change on society.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Summary looks pretty solid.
