"The way I hope for you to read this series is with an entrepreneurial eye"
I appreciate this specific call to action, so I'll kick us off. How will AI advances affect the sectoral transformations that move people into the service sector? It will depend on where AI is a complement to labor vs. a replacement for it. In the complement case, high-quality instantaneous translation could dramatically expand the export market for services by eliminating English fluency as a barrier. In the replacement case, AI agents trained to write code could replace many routine contract software jobs.
Is there literature on skill gaps in low- and middle-income (LMI) countries that is more granular than proxy metrics like years of schooling? That could be a good place to start looking for where AI complements could open up opportunities.
I've been thinking about this for a while now and I agree with all of these points. Another route would be selling AI safety solutions to non-AI-safety firms. This solves many of the issues raised in the post but introduces new ones. As you mentioned in the Infertile Idea Space section, companies often start with a product idea, talk to potential customers, and then end up building a different product to solve a bigger problem that the customer segment has. In this context, that could look like offering a product with an alignment tax, finding that customers aren't willing to pay it, and pivoting to something accelerationist instead. You might think "I would never do that!", but it can be very easy to fool yourself into thinking you are still having a positive impact when the alternative is the painful process of firing your employees and telling your investors (who are often your friends and family) that you lost their money.
Outsourcing a function is usually not binary. For example, Red Bull's brand was originally developed by an outside agency (https://kastner.agency/work/red-bull-brand), and they still use a mix of internal and external marketing teams today. Often the internal team for a function serves as a bridge between the company and contractors.
With that said, I wonder if the people asking about outsourcing are thinking of it in the literal "employee vs. contractor" sense that you covered. When I have heard these debates, I believe people meant "If we have money now, why not hire non-EAs for these positions?".
Ah, yeah, I misread your opinion of the likelihood that humans will ever create AGI. I believe it will happen eventually unless AI research stops for some exogenous reason (civilizational collapse, a ban on development, etc.). Important assumptions I am making:
I'm not saying that I think this would be the best, easiest, or only way to create AGI, just that if every other attempt fails, I don't see what would prevent this from happening, particularly since we are already able to simulate portions of a mouse brain. I am also not claiming that this implies short timelines for AGI; I don't have a good estimate of how long this approach would take.
I'm going to attempt to summarize what I think some of your current beliefs are (please correct me if I am wrong!).
If I got that right, I would describe that as both having (appropriately loosely held) beliefs about AI safety and agreeing that AI poses a risk with some unspecified probability and magnitude.
What you don't have a view on, but believe people in AI safety do have strong views on, is (again, not trying to put words in your mouth, just my best attempt at understanding):
My (fairly uninformed) view is that people working on AI safety don't know the answer to the first or second question. Rather, they think that the probability and magnitude of the problem are high enough to swamp those questions in calculating the importance of the cause area. Some of these people have tried to model out this reasoning, while others lean more on intuition. I think reducing the uncertainty on any of these three questions would be useful in itself, so it would be great if you wanted to work on that.
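To make the "swamps those questions" idea concrete, here is a toy back-of-the-envelope sketch in Python. The numbers are invented placeholders, not anyone's real estimates; the only point is that a large enough magnitude term keeps the expected importance high even when the probability estimate varies by two orders of magnitude:

```python
# Toy sketch only: made-up placeholder numbers, not anyone's actual model.
# If the magnitude term is large enough, the expected importance stays high
# across a wide range of probability estimates, so uncertainty about the
# other questions doesn't change the prioritization much.

def expected_importance(probability: float, magnitude: float, tractability: float) -> float:
    """Crude score: P(problem is real) * how bad it would be * how much work on it helps."""
    return probability * magnitude * tractability

MAGNITUDE = 1e9       # hypothetical badness if the risk materializes
TRACTABILITY = 0.01   # hypothetical fraction of the problem that work today averts
BASELINE_CAUSE = 1e3  # hypothetical score for a comparison cause area

for probability in (0.001, 0.01, 0.1):
    score = expected_importance(probability, MAGNITUDE, TRACTABILITY)
    print(f"P={probability:>5}: importance ~ {score:,.0f} (baseline {BASELINE_CAUSE:,.0f})")
```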
"it would be ideal for you to work on something other than AGI safety!"
I disagree. Here is my reasoning:
"I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)" - I think it would be useful and interesting to explore this. Even if someone else has done this, I'd be interested in your perspective.
I want to strongly second this! I think a proof of the limitations of ML under certain constraints would be incredibly useful: it would narrow the area in which we need to worry about AI safety, or at least limit the types of safety questions that need to be addressed in that subset of ML.
Higher incomes are the goal, so why is it a problem if they come from staying in agriculture? Is the idea that this tops out at a lower level than manufacturing- or service-focused economies? Aren't there some developed countries, like New Zealand, where agriculture makes up more than half of exports?