I build web apps (e.g. viewpoints.xyz) and make forecasts. Currently I have spare capacity.
Talking to people in forecasting to improve my forecasting question generation tool.
Writing forecasting questions on EA topics.
Meeting EAs I become lifelong friends with.
Connecting them to other EAs.
Writing forecasting questions on Metaculus.
Talking to them about forecasting.
I am pretty sure I thought this, yes. That's how it is in the UK. And all prediction markets push in this direction. I thought that the benefits would outweigh the costs, but I am less confident of that now (though I think the benefits are really, really large).
I weakly support regulation of huge sports-gambling losses, which seems very possible to do.
[COI: I work at the Swift Centre as a forecaster, I have worked for a prediction market, and I am very involved in forecasting. My current work, however, is on community notes.]
A few points, attempting to say things other commenters haven't, though I largely agree with the critical comments and the things they agree with Marcus on:
I agree that the $100M doesn't seem super well allocated. Not because forecasting is useless, but because the money flowed to big institutions and platforms rather than smaller, weirder, mechanism-design bets. I like Metaculus, but it has absorbed a lot of money in the last 5 years and not clearly changed much. I don't know if FRI has been worth it; I am glad someone has done the research, but, again, how much are we talking? I would have preferred that smaller projects were funded on the margin. Coefficient's strategy in forecasting has felt poor to me, often ignoring the community, who in my view come up with the most interesting projects, and opting instead for marginal spending on incumbents.
Nobody funds mechanism design or institutional epistemics. I recently spoke to someone at a household-name, enormous tech company who described their institutional process. It was almost unbelievably dysfunctional to me. Who is funding the work to help institutions think better? It doesn't promise near-term wins and frankly shouldn't be the priority of any non-research org. So basically no one. Forecasting is an attempt. How much value is there in the joint-stock company, or in democracy? To me, that's what we are talking about: figuring out fundamentally better ways of making decisions. It is a problem at scale, it is neglected, and, given the deregulation of prediction markets, tractable (though maybe bad; more on that later).
On "feels useful when it isn't" (point 6). I don't entirely disagree. I deliberately try not to spend time forecasting unless I'm being paid to. It can be a distraction. Where I disagree is that some forecasting is genuinely mentally sharpening, at least for me as a thinking discipline. And I think it's a not unreasonable status hierarchy. Do I endorse the status that Peter Wildeford or Eli Leifland have gotten from forecasting? Yes. Frankly, who do I not endorse having got status from being a forecaster?
Why don't AI 2027 and Ajeya count? These are tangible forecasting outputs that demonstrably moved discourse and decision-making. AI 2027 is clearly informed by judgemental forecasters and was read by (I think) the Vice President. Habryka said something like 'too much time has been wasted down the resolution criteria mines', and I disagree, but even if one agrees, I'm not sure even he thinks the whole field is a waste of time.
Prediction markets may be net-harmful, but not useless. I've said publicly that I'm less sure PMs are net-positive: bankruptcies and intimate partner violence are real and huge harms that may be as large as any coordination benefits. But 'bad on net' and 'useless' are different claims, and the latter seems more obviously incorrect to me. I would be more interested in a post entitled "EA forecasting efforts have caused massive harm".
EA focus has shifted towards AI and longtermism
Is there good discussion of that on here either? I have been tempted to put an hour aside to read longform articles and comment on them, but I rarely want to.
I think the problem is that discussion happens internally because it's not that fun or alive to discuss technical stuff here. Note that this isn't the case on LessWrong.
The thing it's fun to discuss here is community drama. But I'm not sure that's good for me, so I try to avoid it.
Yeah, I think you make good points. I think that forecasts are useful on balance, and that people should then investigate them. Do you think that forecasting like this will hurt the information landscape on average?
Personally, people engaged in this kind of forecasting seem more capable of changing their minds. I think the AI 2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academics?
Seems like a lot of specific, quite technical criticisms.
Sure, so we agree?
(Maybe you think I'm being derogatory, but no, I'm just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)
Some thoughts:
On the other hand:
I don't expect this to change your mind, but maybe there are reasons you aren't convincing very informed people besides us being blind to reality. I admit I'd enjoy being rich, but I'm not particularly convinced I'll go try and work for a lab. And I don't think I bend my opinions towards Coefficient, either, and have never been funded by them.
I think you're right to say that a large proportion of the public will come to agree with you. But I also expect a large proportion of the public to repeat talking points about water and energy use, and to say that Disney has a moral right to their characters for as long as copyright says they do. This doesn't seem good to me. I sense it seems fine to you.
I don't think this is all our war. I guess that you do. If so, we disagree. I will help to the extent I agree with you and be flat-footed and confused to the extent that I don't. I get that that's annoying. I feel some of that annoyance myself at the ways I disagree with the community. But to me it feels like part of being in a community. I have to convince people. And you haven't convinced me.
I feel this quite a lot:
And so I think Holly's advice is worth reading, because it's fine advice.
Personally I feel a bit differently. I have been hurt by EA, but I still think it's a community of people who care about doing good per $. I don't know how we get to a place that I think is more functional, but I still think it's worth trying, given the amount of people and resources attached to this space. But yes, I am less emotionally involved than I once was.
Interesting take. I don't like it.
Perhaps because I like saying overrated/underrated.
But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".
Over/underrated requires about 3 mental queries: "Is it better or worse than my ingroup thinks?" "Is it better or worse than the population at large thinks?" "Am I gonna have to be clear about what I mean?"
Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4 will people think I'm crazy?"...
Like, in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.
Also, would you prefer people used over/underrated less, or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.
I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated, just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing, often used because giving exact sizes of status is hard. Perhaps we should have numbers and words for it, but we don't.