Ozzie Gooen

8338 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
751

Topic contributions
1

Happy to see conversation and excitement on this!

Some quick points:
- Eli Lifland and I had a podcast episode about this topic a few weeks back. It goes into some detail on the viability of forecasting+AI as a cost-effective EA intervention.
- We at QURI have been investigating a certain thread of ambitious forecasting (which would require a lot of AI) for the last few years. We're a small group, but I think our writing and work would be interesting for people in this area.
- Our post Prioritization Research for Advancing Wisdom and Intelligence from 2021 described much of this area as "Wisdom and Intelligence" interventions, and I similarly came to the conclusion that AI+epistemics was likely the most exciting generic area there. I'm still excited for more prioritization work and direct work in this area.
- The FTX Future Fund made epistemics and AI+epistemics a priority. I'd be curious to see other funders research this area more. (Hat tip to the new OP forecasting team)
- "A forecasting bot made by the AI company FutureSearch is making profit on the forecasting platform Manifold. The y-axis shows profit. This suggests it’s better even than collective prediction of the existing human forecasters." -> I want to flag here that it's not too hard for a smart human to do as good or better. Strong human forecasters are expected to make a substantial profit. A more accurate statement here is, "This suggests that it's powerful for automation to add value to a forecasting platform, and to outperform some human forecasters", which is a lower bar. I expect it will be a long time until AIs beat Humans+AIs in forecasting, but I agree AIs will add value.

 

I've known Marisa for a few years and had the privilege of briefly working with her. I was really impressed by her drive and excitement. She seemed deeply driven and was incredibly friendly to be around. 

This will take me some time to process. I'm so sorry it ended like this. 

She will be remembered.

Sorry - it was automatically sent out to multiple platforms, but I don't think our system can post to Spotify. I recommend trying another podcasting platform.

(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI.

Then, when they raised money in 2019, they included a clause saying that investors' returns would be capped at 100x their investment.

"Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

"We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in their charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

  1. Make AGI
  2. Turn AGI into huge profits
  3. Give 100x returns to investors
  4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
  5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI).

This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed to Google's shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill its mission.
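To make the arithmetic of the cap concrete, here's a minimal, purely illustrative sketch. The $1B invested and $10T of eventual profit are made-up numbers, not OpenAI figures; the point is just that under a 100x cap, almost everything beyond the early returns flows to the nonprofit.

```python
# Illustrative only: how a 100x return cap might split hypothetical AGI profits.
# All figures are invented for the example; they are not OpenAI's numbers.

def split_profits(invested, total_profit, cap_multiple=100):
    """Return (investor payout, nonprofit payout) under a simple return cap."""
    investor_share = min(total_profit, invested * cap_multiple)
    nonprofit_share = total_profit - investor_share
    return investor_share, nonprofit_share

investors, nonprofit = split_profits(invested=1e9, total_profit=10e12)
print(f"Investors (capped at 100x): ${investors:,.0f}")  # $100,000,000,000
print(f"Nonprofit (the excess):     ${nonprofit:,.0f}")  # $9,900,000,000,000
```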

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors (the Chinese government, for example) would be. I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors.

But I'm sort of surprised that so few other people seem at least a bit concerned and curious about the proposal. My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and that most companies and governments just assume OpenAI is dramatically overconfident.


(Aside on the details of Step 5)
I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

  • Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
  • Arguably, OpenAI doesn't really need to figure out Step 5 until their odds of actually having a decisive AGI advantage look more plausible.
  • I assume it’s really hard to actually put together any reasonable plan now for Step 5. 

My guess is that we could really use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they understood it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count); better work on Step 5 seems clearly worthwhile.

[1] https://openai.com/blog/openai-lp/

[2] https://news.ycombinator.com/item?id=19360709

[3] https://openai.com/charter/
[4] This was termed a "decisive strategic advantage" in Nick Bostrom's book Superintelligence.

[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/


Also, see:
https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html
Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm

https://moores.samaltman.com/

https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/

Yea, this is what I was assuming the action/alternative would be. This strategy is very tried-and-true. 

Of course! In general I'm happy for people to make quick best-guess evaluations openly - in part, that helps others here correct things when there might be some obvious mistakes. :)

Thanks for the replies! Some quick responses.

First, again, overall, I think we generally agree on most of this stuff.

Perhaps, but I think you gain a ton of info from actually trying to do stuff and iterating. I think prioritization work can sometimes seem more intuitively great than it ends up being, relative to the iteration strategy.

I agree to an extent. But I think there are some very profound prioritization questions that haven't been researched much, and that I don't expect us to gain much insight into via experimentation in the next few years. I'd still like us to do experimentation (if I were in charge of a $50M fund, I'd start spending it soon, just not as quickly as I would otherwise). For example:

  • How promising is it to improve the wisdom/intelligence of EAs vs. others?
  • How promising are brain-computer-interfaces vs. rationality training vs. forecasting?
  • What is a good strategy to encourage AI that helps with epistemics, and where could philanthropists have the most impact?
  • What kinds of benefits can we generically expect from forecasting/epistemics? How much should we aim for EAs to spend here?

I would love for this to be true! I'm open to changing my mind based on a compelling analysis.

We might be disagreeing a bit on what the bar for "valuable for EA decision-making" is. I see a lot of forecasting as being like accounting - it rarely leads to a single clear and large decision, but it's good to do and steers organizations in better directions. I personally rely heavily on prediction markets for key understandings of EA topics, and people like Scott Alexander and Zvi seem to as well. I know less about the inner workings of OP, but the fact that they continue to pay for predictions on their own questions seems like a good sign. All that said, I think that ~95%+ of Manifold and a lot of Metaculus is not useful at all.

"I think you might be understating how fungible OpenPhil's efforts are between AI safety (particularly governance team) and forecasting"

I'm not sure how much to focus on OP's narrow choices here. I found it surprising that Javier went from governance to forecasting, and that previously it was the (very small) governance team that did forecasting. It's possible that if I evaluated the situation and had control of it, I'd recommend that OP move marginal resources from forecasting to governance. But I'm a lot less interested in that question than I am in, "Is forecasting competitive with some EA activities, and how can we do it well?"

"Seems unclear what should count as internal research for EA, e.g. are you counting OP worldview diversification team / AI strategy research in general?"

Yep, I'd count these. 

Quick notes on your QURI section:

"after four years they don't seem to have a lot of users" -> I think it's more fair to say this has been about 2 years. If you look at the commit history you can see that there was very little development for the first two years of that time. 
https://github.com/quantified-uncertainty/squiggle/graphs/contributors

We've spent a lot of time on blog posts/research and other projects, as well as Squiggle Hub. (Though in the last year especially, we've focused on Squiggle.)

Regarding users, I'd agree it's not as many as I would have liked, but we do have some. If you look through the Squiggle tag, you'll see several EA groups that have used Squiggle.

We've been working with a few EA organizations on Squiggle setups that are mostly private. 

"I like that it's for-profit."

I think for-profits have their space, but I also think that nonprofits and open-source/open organizations have a lot of benefits.

Obvious point, but it would be neat for someone to write forecasting questions for each one, if there's an easy way of doing so.
