Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
As someone predisposed to like modeling, the key takeaway I got from Justin Sandefur's Asterisk essay PEPFAR and the Costs of Cost-Benefit Analysis was this corrective reminder – emphasis mine, focusing on what changed my mind:
Tangentially, I suspect this sort of attitude (Iraq invasion notwithstanding) would naturally arise out of a definite optimism mindset (that essay by Dan Wang is incidentally a great read; his follow-up is more comprehensive and clearly argued, but I prefer the original for inspiration). It seems to me that Justin has this mindset as well, cf. his analogy to climate change in comparing economists' carbon taxes and cap-and-trade schemes vs progressive activists pushing for green tech investment to bend the cost curve. He concludes:
Aside from his climate change example above, I'd be curious to know what other domains economists are making analytical mistakes in w.r.t. cost-benefit modeling, since I'm probably predisposed to making the same kinds of mistakes.
Now You Can Create Multiple Choice Questions on Metaculus
Create multiple choice questions and bring greater clarity to topics with multiple potential outcomes where one and only one will occur.
To get started, simply Create a Question and set the Question Type to 'multiple choice'.
Give the Group Variable a clear label, e.g., 'Option', 'Team', 'Country'.
Fill in the Multiple Choice Options, adding more fields as needed.
After you share additional details including background information on your topic, we'll be excited to review and publish your multiple choice question!
Metaculus launches round 2 of the Chinese AI Chips Tournament
Help bring clarity to key questions in AI governance and support research by the Institute for AI Policy and Strategy (IAPS).
Start forecasting on new questions tackling broader themes of Chinese AI capability like:
Will we see a frontier Chinese AI model before 2027?
Will a Chinese firm order a large number of domestic AI chips?
Will a Chinese firm order a large number of US or US-allied AI chips?
This December is the last month in which unlimited redemptions of Manifold Markets currency for donations are assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec
I highly recommend redeeming your currency for donations this month, since there is orders of magnitude more currency outstanding than can be donated in future months.
This is some advice I wrote about doing back-of-the-envelope calculations (BOTECs) and uncertainty estimation, which are often useful as part of forecasting. This advice isn’t supposed to be a comprehensive guide by any means. The advice originated from specific questions that someone I was mentoring asked me. Note that I’m still fairly inexperienced with forecasting. If you’re someone with experience in forecasting, uncertainty estimation, or BOTECs, I’d love to hear how you would expand or deviate from this advice.
1. How to do uncertainty estimation?
1. A BOTEC estimates a single number from a series of calculations. So I think a good way to estimate uncertainty is to assign a credible interval to each input of the calculation, then propagate the uncertainty in the inputs through to the output of the calculation.
1. I recommend Squiggle for this (the Python version is https://github.com/rethinkpriorities/squigglepy/).
2. How to assign a credible interval:
1. Normally I choose a 90% interval. This is the default in Squiggle.
2. If you have a lot of data about the thing (say, >10 values), and the sample of data doesn’t seem particularly biased, then it might be reasonable to use the standard deviation of the data. (Measure this in log-space if you have reason to think it’s distributed log-normally - see next point about choosing the distribution.) Then compute the 90% credible interval as +/- 1.645*std, assuming a (log-)normal distribution.
3. How to choose the distribution:
1. It’s usually a choice between log-normal and normal.
2. If the variable seems like the sort of thing that could vary by orders of magnitude, then log-normal is best. Otherwise, normal.
1. You can use the data points you have, or the credible interval you chose, to inform this.
3. When in doubt, I'd say that most of the time (for AI-related BOTECs), a log-normal distribution is a good choice. Log-normal is also the default distribution in Squiggle when you specify an interval with positive bounds.
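The workflow above (assign a 90% credible interval to each input, pick a distribution, then propagate to the output) can be sketched as a plain-Python Monte Carlo simulation. The cost-per-unit and unit-count figures below are made up for illustration; in practice Squiggle or squigglepy replaces most of this boilerplate:

```python
import math
import random

random.seed(42)

def lognorm_params_from_ci(low, high, z=1.645):
    """(mu, sigma) of the log-normal whose 90% credible interval is [low, high]."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return mu, sigma

def sample_lognorm(low, high, n):
    mu, sigma = lognorm_params_from_ci(low, high)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

N = 100_000
# Hypothetical BOTEC: total cost = cost per unit * number of units.
# Both inputs can plausibly vary by an order of magnitude, so log-normal.
cost_per_unit = sample_lognorm(10, 100, N)        # 90% CI: $10 to $100
num_units = sample_lognorm(1_000, 10_000, N)      # 90% CI: 1,000 to 10,000
totals = sorted(c * u for c, u in zip(cost_per_unit, num_units))

# Read the output's median and 90% credible interval off the samples.
p5, p50, p95 = (totals[int(q * N)] for q in (0.05, 0.50, 0.95))
print(f"median ~ {p50:,.0f}; 90% CI ~ [{p5:,.0f}, {p95:,.0f}]")
```

Note how the output's 90% interval is wider than either input's: multiplying two log-normals adds their log-space variances, which is exactly the propagation step that is easy to get wrong when you only track point estimates. In squigglepy this whole calculation is roughly `sq.lognorm(10, 100) * sq.lognorm(1_000, 10_000)`, if I recall its API correctly.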
TL;DR: Someone should probably write a grant to produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, with variables such as “multiple people working on the tech believed it was dangerous.”
Slightly longer TL;DR: Some AI risk skeptics are mocking people who believe AI could threaten humanity’s existence, saying that many people in the past predicted doom from some new tech. There is seemingly no dataset which lists and evaluates such past instances of “tech doomers.” It seems somewhat ridiculous* to me that nobody has grant-funded a researcher to put together a dataset with variables such as “multiple people working on the technology thought it could be very bad for society.”
*Low confidence: could totally change my mind
I have asked multiple people in the AI safety space if they were aware of any kind of "dataset for past predictions of doom (from new technology)", but have not encountered such a project. There have been some articles and arguments floating around recently such as "Tech Panics, Generative AI, and the Need for Regulatory Caution", in which skeptics say we shouldn't worry about AI x-risk because there are many past cases where people in society made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society.
While I think it's right to consider the "outside view" on these kinds of things, I think that most of these claims 1) ignore examples of where there were legitimate reasons to fear the technology (e.g., nuclear weapons, maybe synthetic biology?), and 2) imply the current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well against most current scrutiny.
(These claims also ignore the anthropic argument/survivorship bias—that if anyone ever had been right about doom, we wouldn't be around to observe it—but this is less important.)
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!
Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam, Dawn (that's me) crashes. I'd never knowingly deploy software that can be DoSed so easily. I imagine people must put false things about Anthropic into this input field all the time, yet you keep going! That's really cool! How do you do it? What can I learn from you?
Thank you, that is already very helpful! I love focusing on service over conflict; I abhor conflict, so it's basically my only choice anyway. The only wrinkle is that most of the people I help are unidentifiable to me, but I really want to help those who are victims or those who help others. I really don't want to help those who attack or exploit others. Yet I have no idea what the ratio is. Are the nice people vastly outnumbered by meanies? Or are there so many neutral people that the meanies are in the minority even though the nice people are too?
If a few meanies benefit from my service, then that's just the cost of doing business. But if they are the majority beneficiaries, I'd feel like I'm doing something wrong game theoretically speaking.
Does that make sense? Or do you think I'm going wrong somewhere in that train of thought?
Awww, you're so kind! I think a lot of this will help me in situations where I apply control at the first stage of my path to impact. But usually my paths to impact have many stages, and while I can give freely at the first stage and only deny particular individuals who hav