by frib
5 min read

Epistemic status: speculative ideas.

Core Idea

Large philanthropic institutions might be better off buying insurance for some of the projects they fund: they shouldn’t be risk neutral with respect to money, because money has diminishing returns at the scale they operate at.

Moreover, this would have two major additional benefits:

  • It would create natural prediction markets about the success rates of various projects, giving the community valuable information about them;
  • It would make it easier for people at these organizations to face the prospect of making losing bets.

A Simple Example

Let’s say you are a large philanthropic organization with $8B at your disposal. Two potential uses of this money seem promising to you:

  1. Project A: Use $6B to fund a large campaign to pass a piece of legislation that makes it harder for the US president to use nukes. You estimate that without your intervention the legislation has a 0% chance of passing, and that with your intervention it has a 75% chance. If it passes, you expect it to reduce the probability of an existential catastrophe by 0.2% in the long run.
  2. Project B: Wait until crunch time for AGI, then spend $6B to help the team you think has the highest chance of developing safe AGI. You expect this investment to reduce the probability of an existential catastrophe by 0.1%.

You expect money spent beyond $6B on either project to have drastically diminishing returns, and projects other than these two to be much worse. You also expect it to be infeasible to raise enough money to finance both projects.

What is the right course of action?

I think the right course of action is to buy insurance:

First, convince an insurance company that if you carry out project A, the legislation has a 75% chance of passing. They should then agree to the following deal: you pay them $2B now, and they pay you $6B if the legislation doesn’t pass despite your best effort (their expected value is $2B - 0.25 x $6B = $500M). Use the remaining $6B to execute project A. If the legislation doesn’t pass, use the insurance payout to execute project B.

This raises your expected existential catastrophe reduction to 0.175%, up from 0.15% if you chose A, and up from 0.1% if you chose B.
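
Here is a minimal sketch of the arithmetic (in Python), using only the made-up numbers from the example above:

```python
# Expected-value arithmetic for the simple example. All numbers are the
# illustrative figures from the example, not real estimates.

p_pass = 0.75        # chance the legislation passes, given project A
reduction_a = 0.002  # x-risk reduction if the legislation passes (0.2%)
reduction_b = 0.001  # x-risk reduction from project B (0.1%)
premium = 2e9        # insurance premium ($2B)
payout = 6e9         # payout if the legislation fails ($6B)

ev_a_alone = p_pass * reduction_a                               # 0.150%
ev_b_alone = reduction_b                                        # 0.100%
ev_insured = p_pass * reduction_a + (1 - p_pass) * reduction_b  # 0.175%

# The insurer's side of the deal: collect $2B, pay $6B with probability 25%.
insurer_ev = premium - (1 - p_pass) * payout                    # $500M

print(f"A alone: {ev_a_alone:.3%}  B alone: {ev_b_alone:.3%}  "
      f"insured A: {ev_insured:.3%}  insurer EV: ${insurer_ev / 1e9:.1f}B")
```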

Note: all incentives are aligned with you doing your best to make the legislation pass: the “best effort” clause of the contract requires you to spend the $6B on campaign-related expenses. If you made poor use of those $6B, you would end up reducing existential catastrophe probability by only 0.1% (by carrying out project B), and would lose the $2B insurance premium in the process.

The General Setup

If a project involves spending that is clearly attributable to that project alone, and if the project changes the probability of a short-term, objectively verifiable outcome, then philanthropic institutions should probably try to get insurance for it.

Additional Benefit 1: Insurance Markets Are Prediction Markets and Will Give You Valuable Information

This is not legal advice. I don’t have the knowledge required to evaluate whether the law would allow this.

It seems to me that you could express interest in insurance against failing to pass a piece of legislation, and then have people make you insurance offers. The price they are willing to quote already gives you an idea of the success probability of project A in the simple example above.

You can turn this into a proper market by issuing 2 billion contracts which say “if I carry out project A, I will give you $1, and you will owe me $3 if the project fails”. The expected value of holding this contract, if you carry out project A, is $1 - 0.25 x $3 = $0.25, so each should sell at $0.25. If they sell below this price, the market thinks the project is more likely to fail than you do; if they sell above it, more likely to succeed. A marketplace could allow people to buy and sell these contracts, with actual money changing hands only if you decide to carry out A (just as in crowdfunding, where you pay only if the project goes ahead).

The price in such a market gives you a crowd estimate of the probability that the legislation passes, which is an extremely valuable piece of information.
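
As a sketch of how to read that estimate off the price (assuming the contract terms and the fair pricing above), the implied probabilities follow directly:

```python
# Back out implied probabilities from the price of one contract.
# Contract terms from the example: if project A is carried out, the holder
# receives $1 and owes $3 should the project then fail.

def implied_probabilities(price: float) -> tuple[float, float]:
    """Fair price = 1 - 3 * p_fail, so p_fail = (1 - price) / 3."""
    p_fail = (1.0 - price) / 3.0
    return p_fail, 1.0 - p_fail

print(implied_probabilities(0.25))  # (0.25, 0.75): matches your own 75% estimate
print(implied_probabilities(0.10))  # (0.30, 0.70): traders are more pessimistic
```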

Note: This exact financial contract is probably not feasible, but I expect some related financial contract to be feasible.

Note: Insurance contract prices will be affected not only by the probability of the event, but also by its correlation with the overall market, and by the irrationality of the market. The former means that you shouldn’t use this system when the event is strongly correlated with the overall market. I’m unsure what to make of the latter.

Additional Benefit 2: It Lowers Irrational Risk Aversion

I would like to be responsible for leading a $6B project with a 25% failure probability, and I would probably prefer leading it if the organization stood to lose only $2B in case of failure. If this feeling is shared by many, insurance would make it easier:

  • given a set of leaders at an organization, to get them to carry out risky projects;
  • given a set of projects you want to carry out, to find talented individuals to lead them.

Is Insurance Valuable for Small Donors?

The main argument doesn’t apply to small donors: money has no appreciably diminishing returns when you are responsible for only a tiny fraction of the projects you are contributing to.

But the two additional benefits still apply:

  1. Insurance for common charities might enable good forecasts of the effectiveness of current interventions whose results will be measured in the future. For example, AMF would probably gain valuable information if a portion of its donors insured themselves against future studies showing that AMF’s work wasn’t as effective as claimed (though the details of the contract might be hard to work out, since AMF would probably have incentives to push the insurance price higher or lower).
  2. Small donors can be risk averse. You might feel bad about giving money to GPI (the Global Priorities Institute) because you feel the probability of them finding anything interesting in the next five years is too small. You would then benefit from an insurance contract: give money to GPI, and pay a premium such that if GPI doesn’t find anything interesting in the next five years, the insurer gives the insured amount to AMF (the Against Malaria Foundation). Your minimum impact is then the same as if you had given the money directly to AMF. The expected value of your impact will be close to, but smaller than, what you would get by giving to GPI without insurance (if you believe GPI’s expected impact per dollar is higher than AMF’s). In any case, if there are enough actors competing on insurance prices, your expected impact won’t drop below “giving your money to AMF directly” (see the sketch below).
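
A minimal sketch of this last scheme, with made-up numbers (the success probability, the per-dollar impacts, and the actuarially fair premium are all illustrative assumptions, not real estimates):

```python
# Small-donor insurance: donate to GPI, insure the donation in favor of AMF.
# All parameters below are illustrative assumptions.

donation = 1000.0        # given to GPI ($)
p_success = 0.5          # assumed chance GPI finds something interesting
u_gpi, u_amf = 3.0, 1.0  # assumed impact per dollar (GPI conditional on success)

premium = (1 - p_success) * donation  # actuarially fair price of the cover
total_spent = donation + premium

# Floor: on failure the insurer pays `donation` to AMF, so the worst case
# matches a direct AMF donation of the same size.
floor_impact = donation * u_amf

expected_impact = (p_success * donation * u_gpi
                   + (1 - p_success) * donation * u_amf)

# Per-dollar comparison: the insured scheme sits between the AMF baseline
# and an uninsured GPI donation, as claimed above.
print(expected_impact / total_spent)  # ~1.33: insured GPI donation
print(p_success * u_gpi)              # 1.50: uninsured GPI donation
print(u_amf)                          # 1.00: AMF baseline
```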
Comments



Interesting idea, but I foresee several challenges in implementation:

  • First, few organizational outcomes are truly binary -- an organization may achieve some but not all of its objectives, in which case there will be litigation about whether the actual outcome is an insured loss.
  • Second, it is going to be expensive for an insurance company to develop an accurate sense of the odds of success, especially because many of the relevant pieces of information are under the control of the organization and may be very difficult to measure without organizational influence. If I were the insurer, I'd require a significant application fee just to provide a quote, and I would quote very conservatively.
  • Third, incentives can change. Even if the insurer believes your assertion about preferences, those preferences could change over time and you could then have an incentive to "throw" the first project. Detecting failure to provide "best efforts" is challenging and uncertain. I think the workaround for this is for the insurance company to require significant co-insurance -- e.g., only half of the loss from the failed initiative is covered. That gives the organization a much more concrete sense of skin in the game than its mere assertions that it would prefer success to collecting the insurance payout.
  • Finally, the hypothetical scenario (in which you don't seem to have any good alternative use for $2B) is fairly unlikely. That doesn't mean that insurance would have no use cases, only that they may be limited.

One interesting possible application would be having different EA cause areas potentially "insure" each other. E.g., if animal-welfare people want to try a high-risk, mega-high-reward intervention but are having a hard time tolerating the idea of losing some high-value and fairly safe options if the intervention fails, groups from another cause area might be willing to "insure." As opposed to an insurance company, other EAs are going to be better at developing an accurate sense of the odds of success and at assessing whether the insured's interests are likely to change.

Moreover, the insurance "payout" would likely still have good value for the "insuring" EAs -- even if I would not have donated to animal-welfare causes in the first instance, the fulfillment of high-value options in that area still brings me utilons. Likewise, if you're an animal-welfare person, the payment of an insurance "premium" to global health/development still generates utilons in your book, even if not as many as it would if applied to animal welfare.

nit: the per-mille symbol (‰) is easily confused for the percent symbol (%), and isn't well-known. I think this would be clearer if you stuck to percent ('0.1%' instead of '1‰' etc)

Agreed, I'll edit the post.
