LLMs are getting much more capable, and progress is rapid. I use them in my daily work, and there are many tasks where they're usefully some combination of faster and more capable than I am. I don't see signs of these capability increases stopping or slowing down, and if they do continue I expect the impact on society to start accelerating as they exceed what an increasing fraction of humans can do. I think we could see serious changes in the next 2-5 years.

In my professional life, working on pathogen detection, I take this pretty seriously. Advances in AI make it easier for adversaries to design and create pathogens, so it's important to get a comprehensive detection system in place quickly. Similarly, more powerful AIs are likely to speed up our work in some areas (computational detection) more than others (partnerships) and increase the value of historical data, and I think about this in my planning at work.

In other parts of my life, though, I've basically been ignoring that I think this is likely coming. In deciding to get more solar panels and not get a heat pump, I looked at historical returns and utility prices. I book dance gigs a year or more out. I save for retirement. I'm raising my kids in what is essentially preparation for the world of the recent past.

From one direction this doesn't make any sense: why wouldn't I plan for the future I see coming? But from another it's more reasonable: most scenarios where AI becomes extremely capable look either very good or very bad. Outside of my work, I think my choices don't have much impact here: if we all become rich, or dead, my having saved, spent, invested, or parented more presciently won't do much. Instead, in my personal life my decisions have the largest effects in worlds where AI ends up being not that big a deal, perhaps only as transformative as the internet has been.

Still, there are probably areas in our personal lives where it's worth doing something differently. For example:

  • Thinking hard about career choice: if our kids were a bit older I'd want to be able to give good advice here. How is AI likely to impact the fields they're most interested in? How quickly might this go? What regulatory barriers are there? How might the portions they especially enjoy change as a fraction of the overall work?

  • Maybe either holding off on having kids or having them earlier than otherwise: if we were trying to decide whether to have (another) kid I'd want to think about how much of wanting to have a kid was due to very long-term effects (seeing them grow into adulthood, increasing the chance of grandchildren, pride in their accomplishments), how I'd feel if children conceived a few years from now had some advantages (embryo selection) or a lot of them (genome editing), how financial constraints might change, what if I never got to be a parent, etc.

  • Postponing medical treatment that trades short-term discomfort for long-term improvement: I'm a bit more willing to tolerate and work around the issues with my wrists and other joints than I would be in a world where I thought medicine was likely to stay on its recent trajectory.

  • Investing money in ways that anticipate this change: I'm generally a pretty strong efficient markets proponent, but I think it's likely that markets are under-responding here outside of the most direct ways (NVDA) to invest in the boom. But I haven't actually done anything here: figuring out which companies I expect to be winners and losers in ways that are not yet priced in is difficult.

  • Avoiding investing money in ways that lock it up even if the ROI is good: I think it's plausible that our installing solar was a mistake and keeping the money invested to retain option value would have been better. I might prefer renting to owning if we didn't already own.

What are other places where people should be weighing the potential impact of near-term transformative AI heavily in their decisions today? Are there places where most of us should be doing the same different thing?

Comments (5)

I don't have much confidence in how AI will go, so this is very speculative, but one consideration for personal planning that I think about:

If AI does become as powerful as some hope (and doesn't kill us all), then maybe your personal situation (money, power) at a particular crucial point will be very important. Examples:

  • are you still alive when crucial health advances come that could keep you alive much longer?
  • can you afford those crucial health advances? (for yourself and/or loved ones)
  • are you still alive when technology to "upload" your mind works well, and can you afford it?
  • is there going to be some future grab for resources at a crucial time (before or after uploading...), and will you be in a good position for that?
    • hard for me to speculate about what those resources are, but for a probably-quite-silly example: Maybe we'll auction off whole solar systems?

How you answer these questions could affect whether you live for the next million years, and what that life is like. I see those as reasons to prioritize personal health, money, and power more than you would otherwise.

Note: I'm not actually living my life according to this prescription. If I had to answer why, I think it's partly that I expect AI progress will probably stall out before creating the kind of scientific/tech breakthroughs that would allow for uploading minds. But even a small chance could be worth optimizing for, so I'm not sure I'm being rational about this.

(This is about personal planning, but sort of parallels some EA considerations, like "value lock-in".)

I see AI as just another tool, much like personal computers or computer programming were in their time. I believe that people will need to learn how to effectively use AI by mastering the art of writing prompts and distinguishing between the various AI tools available.

Just as learning to code was essential for harnessing the power of computers, developing skills in prompt engineering is becoming increasingly important in our AI-driven world. Understanding the strengths and limitations of different AI systems will be crucial for their effective utilization.

That said, I also recognize that AI has unique characteristics that set it apart from traditional tools. Its ability to learn, adapt, and make decisions autonomously introduces new challenges that we need to consider.

Overall, I think viewing AI as a tool to be learned and mastered is a pragmatic approach. It highlights the importance of education and skill development as we prepare for a future where AI plays a significant role in our lives.

I went to buy a ceiling fan recently. The salesperson said I might not want a particular model because it had a light with only 10,000 hours in it, and they've decommissioned replacements. I told him I wasn't worried 😭

Hi Yanni. If you like, I am open to a bet like the one I did with Greg.

PS. I liked and upvoted your comment.

Hey mate! I use the light for about 4 hours a day, which means I'll get 6.84 years out of it.
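That figure is just the bulb's rated lifetime divided by daily use (taking a year as roughly 365.25 days):

$$\frac{10{,}000\ \text{hours}}{4\ \text{hours/day} \times 365.25\ \text{days/year}} \approx 6.84\ \text{years}$$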

In case I wasn't clear, I was suggesting that tech will have progressed far enough in ~ 6.84 years that worrying about a light in a ceiling fan doesn't make sense.
