Epistemic status: I probably have no idea what I'm talking about.

Prediction markets have several constraints due to laws against online gambling (at least in the US). PredictIt, for example, must operate as a nonprofit, limit each contract to 5,000 total traders, and limit investment to $850 per contract. [1]

Given the usefulness of prediction markets, it would be great if we could remove these constraints on financial incentives for the users and operators of these platforms. [2] Currently, we rely on fake internet points as the incentive on websites like Metaculus, although people do seem to like those.

So I tried to think of an alternative model. In a betting market, the winners are paid with the losers' money. Instead, imagine the prediction platform as a bank. Forecasters deposit money in the bank and are given ballots to vote on questions. The bank, as banks do, lends that money out to earn interest. The interest earned on all forecasters' deposits is then paid only to the winners. Winners win by earning interest on others' money, and losers lose through opportunity cost and inflation.

Legally, I don't think this counts as online gambling. It would be like your regular bank letting you earn interest on the money in your account only if you guess how many jelly beans are in the jar in the lobby.

Here's a simple version of the model. Forecasters buy 1-year CDs from the prediction bank, with the stipulation that they earn the interest only if they get questions right. The bank buys 1-year US Treasury bills with the deposits. At the end of the year, the earned interest is divided among the winners of each question, minus what the bank keeps as profit.
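To make the payout concrete, here is a minimal sketch of a single question under this simple model. All the numbers and simplifications are my own illustrative assumptions (equal deposits, a fixed T-bill rate, a flat bank cut, and an even split among winners), not part of any real product:

```python
# Hypothetical single-question payout under the "prediction bank" model.
# Assumptions (illustrative only): every forecaster deposits the same amount,
# the bank earns a fixed T-bill rate on the pooled deposits, keeps a flat cut
# of the interest, and splits the rest evenly among the winners.

def payout_per_winner(deposits, tbill_rate, bank_cut, n_winners):
    """Interest paid to each correct forecaster on a single question."""
    pool_interest = sum(deposits) * tbill_rate
    distributable = pool_interest * (1 - bank_cut)
    return distributable / n_winners

# Example: 1,000 forecasters deposit $850 each; 1-year T-bills yield 4%;
# the bank keeps 20% of the interest; 400 forecasters answer correctly.
deposits = [850] * 1000
per_winner = payout_per_winner(deposits, 0.04, 0.20, 400)
print(round(per_winner, 2))  # each winner earns $68 of interest on an $850 deposit
```

With these made-up numbers, a winner's effective yield is 68/850 = 8%, twice the 4% base rate, while a loser earns nothing but keeps their principal.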

There are many variations on this model, but hopefully you get the general idea. Note that risk for forecasters is limited relative to the betting-market approach, which constrains both profits and losses. There's also the risk that the bank makes a bad investment and cannot pay out interest.

I think it could work, but I'm pretty confident I'm missing something here, so I wanted to share it for feedback. Let me know if (a) I got the economics of this right, and (b) it seems feasible. 



I think another possible route around gambling restrictions to prediction markets is to ensure all proceeds go to charity, but the winners get to choose which charity to donate to. I wrote about this more here:

https://forum.effectivealtruism.org/posts/d43f6HCWawNSazZqb/charity-prediction-markets

Great idea! It makes me think it would be interesting to see a political prediction market where the winnings go to your preferred candidate in the race. I'm not sure whether that would have a positive impact, but it would be cool to study.

Edit: Just read your post and see that you discuss this haha

In the crypto world, Hedgehog Markets is built around this concept: you stake your money in a tournament and then bet with their play money; the staking rewards go to the winners, and everyone gets their initial money back.

At https://manifold.markets/ we elected to start with fake internet points, but hope that careful rationing of the fake points can make them valuable the way in-game currencies can become valuable.

One more spinoff of your idea: if the information provided by the prediction market is valuable enough, perhaps the platform could pay out without ever having to take in money, and not qualify as gambling. E.g., what if the platform sells early access to market data to a hedge fund and distributes the proceeds to its users?

Came here to say this.

Oh wow, that's cool! Do you know how Hedgehog invests the play money?

Your last idea is a lot like a company giving bonuses to internal forecasters; that seems promising. If prediction markets prove themselves worthy, maybe an EA organization will eventually decide it's worth it to regularly sponsor forecasting tournaments for global priorities work. I'm excited to see where the space goes.

This pays far too little to the winners to make it worthwhile to have any money in this. It wouldn't have much more liquidity than a moneyless prediction book.

That entirely depends on the return on the bank’s investment, right? I have no idea what that could be in practice. If it were similar to the stock market, say 8% annually, then I think that would be very attractive to forecasters. Being right on a poll where 50% of respondents were also right would be like doubling what you expect to earn from stocks. But obviously that’s risky investing, so probably not feasible. Or if you were already a big bank, you could afford such risk, and make it worthwhile for winners. Was that what you were thinking?
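The "doubling" arithmetic in that comment can be checked directly. The sketch below is just my restatement of the commenter's numbers, assuming the losers' forfeited interest is redistributed evenly among the winners:

```python
# Sanity check of the return math from this thread (illustrative numbers):
# if the bank's investments return 8% and half of forecasters win, the
# interest forfeited by the losing half doubles the winners' effective yield.

def effective_winner_return(investment_rate, winner_fraction):
    """Effective annual return for a winning forecaster, assuming losers'
    interest is redistributed evenly among the winners."""
    return investment_rate / winner_fraction

print(effective_winner_return(0.08, 0.5))  # 0.16, i.e. 16% vs. the 8% base rate
```

The same formula also shows the downside the next comment raises: if the investment returns -10%, no interest exists to redistribute, so forecasters bear investment risk on top of forecasting risk.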

This is a good effort, but I'm not sure the return would be worth it. You mentioned the stock market in another comment; yes, it returns 8% annually on average, but what happens when it returns -10%? You're subjecting the forecasters to investment risk, whereas what you want is to subject them only to forecasting risk (and compensate them appropriately).

CFTC regulations have been at least as much of an obstacle as gambling laws. It's not obvious whether the CFTC would allow this strategy.

It seems like there is a lot of value in creating something that feels legitimate. Ideally more than just a niche group of EA/Rationalist folk would be interested in participating, and I don't think a "the feds would never know" argument would be convincing for a big audience. Thanks for sharing those examples though!
