[Takeaways from Covid forecasting on Metaculus]
I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name shows up in second on the leaderboard, but that’s due to a glitch that hasn’t been resolved yet because one of the question resolutions depends on a badly delayed source.) (Update: I won it!)
With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.
I learned a lot during the tournament. Besides claiming credit, I want to share some observations and takeaways from this forecasting experience, inspired by Linch Zhang’s forecasting AMA:
Some things I was particularly wrong about:
Some things I was particularly right about:
(I have stopped following the developments closely by now.)
I know it might not be what you're looking for, but congratulations!
+1 to the congratulations from JP! I may have mentioned this before, but I considered your forecasts and comments for covidy questions to be the highest-quality on Metaculus, especially back when we were both very active.
You may not have considered it worth your time in the end, but I still think it's good for EAs to do things that on the face of it seem fairly hard, and develop better self models and models of the world as a result.
This was a great writeup, thanks for taking the time to make it. Congrats on the contest, too!
I'm sorry to hear your experience was stressful. Do you intend to go back to Metaculus in a more relaxed way? I know some users restrict themselves to a subset of topics, for example.
Can you provide some links on the latest IFR estimates? A quick Google search leads me to the same 0.5% ballpark.
I'm not following the developments anymore. I could imagine that the IFR is now lower than it used to be in April because treatment protocols have improved.
[Is pleasure ‘good’?]
What do we mean by the claim “Pleasure is good”?
There’s an uncontroversial interpretation and a controversial one.
Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.
Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:
People who say “pleasure is good” claim that we can establish this by introspection about the nature of pleasure. I don’t see how one could establish the specific and controversial claim from mere introspection. After all, even if I personally valued pleasure in the strong sense (I don’t), I couldn’t, with my own introspection, establish that everyone does the same. People’s psychologies differ, and how pleasure is experienced in the moment doesn’t fully determine how one will relate to it. Whether one wants to dedicate one’s life (or, for altruists, at least the self-oriented portions of one's life) to pursuing pleasure depends on more than just what pleasure feels like.
Therefore, I think pleasure is only good in the weak sense. It’s not good in the strong sense.
Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure, and that people generally communicate about their own pleasurable states as good. Given a random person off the street, I'm willing to bet that after introspection they will suggest that they value pleasure in the strong sense. So while this may not be universally accepted, I still think it could hold weight.
Also, a symmetric statement can be said regarding suffering, which I don't think you'd accept. People who say "suffering is bad" claim that we can establish this by introspection about the nature of suffering.
From reading Tranquilism, I think you'd respond to these by saying that people confuse "pleasure is good" with an internal preference or craving for pleasure, while suffering is actually intrinsically bad. But taking an epistemically modest approach would require quite a bit of evidence for that, especially as part of the argument is that introspection may be flawed.
I'm curious as to how strongly you hold this position. (Personally, I'm totally confused here, but I lean toward the strong sense of "pleasure is good" while thinking that pleasure overall holds little moral weight.)
Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure
It's worth pointing out that this association isn't perfect. See  and  for some discussion. Tranquilism allows that if someone is in some moment neither drawn to (craving) (more) pleasurable experiences nor experiencing pleasure (or as much as they could be), this isn't worse than if they were experiencing (more) pleasure. If more pleasure is always better, then contentment is never good enough, but to be content is to be satisfied, to feel that it is good enough or not feel that it isn't good enough. Of course, this is in the moment, and not necessarily a reflective judgement.
I also approach pleasure vs suffering in a kind of conditional way, like an asymmetric person-affecting view, or "preference-affecting view":
I would say that something only matters if it matters (or will matter) to someone, and an absence of pleasure doesn't necessarily matter to someone who isn't experiencing pleasure, and certainly doesn't matter to someone who does not and will not exist, and so we have no inherent reason to promote pleasure. On the other hand, there's no suffering unless someone is experiencing it, and according to some definitions of suffering, it necessarily matters to the sufferer. (A bit more on this argument here, but applied to good and bad lives.)
I agree that pleasure is not intrinsically good (i.e. I also deny the strong claim). I think it's likely that experiencing the full spectrum of human emotions (happiness, sadness, anger, etc.) and facing challenges are good for personal growth and therefore improve well-being in the long run. However, I think that suffering is inherently bad, though I'm not sure what distinguishes suffering from displeasure.
[I’m an anti-realist because I think morality is underdetermined]
I often find myself explaining why anti-realism is different from nihilism / “anything goes.” I wrote lengthy posts in my sequence on moral anti-realism (posts 2 and 3) partly about this point. However, maybe the framing “anti-realism” is needlessly confusing because some people do associate it with nihilism / “anything goes.” Perhaps the best short explanation of my perspective goes as follows: I’m happy to concede that some moral facts exist (in a comparatively weak sense), but I think morality is underdetermined.
This means that, beyond the widespread agreement on some self-evident principles, expert opinions wouldn’t converge even if we had access to a superintelligent oracle. Multiple options will be defensible, and people will gravitate to different attractors in value space.
I think if you concede that some moral facts exist, it might be more accurate to call yourself a moral realist. The indeterminacy of morality could be a fundamental feature, allowing for many more acts to be ethically permissible (or no worse than other acts) than with a linear (complete) ranking. I think consequentialists are unusually prone to try to rank outcomes linearly.
I read this recently, which describes how moral indeterminacy can be accommodated within moral realism, although it was kind of long for what it had to say. I think experts (or ideal observers/judges) could converge on moral indeterminacy: they could agree that we can't know how to rank certain options, and further that there's no fact of the matter.
Thanks for bringing up this option! I don't agree with this framing for two reasons:
I think what I describe in the second bullet point will seem counterintuitive to many people because they think that if morality is underdetermined, your views on morality should be underdetermined, too. But that doesn't follow! I understand why people have the intuition that this should follow, but it really doesn't work that way when you look at it closely. I've been working on spelling out why.
[When thinking about what I value, should I take peer disagreement into account?]
Consider the question “What’s the best career for me?”
When we think about choosing careers, we don’t simply adopt the career choice of the smartest person we know or of the person who has thought the most about their career. Instead, we seek out people who have approached career choice with a similar overarching goal/framework (in my case, 80,000 Hours is a good fit), and we look toward the choices of people with similar personalities (in my case, I notice a stronger personality overlap with researchers than with managers, operations staff, or people earning to give).
When it comes to thinking about one’s values, many people take peer disagreement very seriously.
I think that can be wise, but it shouldn’t be done unthinkingly. I believe the quest to figure out one’s values shares strong similarities with the quest to figure out one’s ideal career. Before deferring to others in one’s deliberations, I recommend making sure that those others are asking the same questions (not everything that comes with the label “morality” is the same) and that they are psychologically similar in the ways that seem fundamental to what you care about as a person.
+1! This seems useful.
I also think EA / utilitarianism in general ends up framing "what should I value?" as a problem to be solved from a unilateralist god's-eye view. Whereas in practice, our values are noticeably shaped by the people we're with, be it the values your friend groups reinforce, the cultural norms you learned to imitate, or how you were educated and raised.
I'm not sure whether more unilateralism and trying to disentangle yourself from the pressures around you is always good, although it does seem compelling to me personally a lot of the time.
[Are underdetermined moral values problematic?]
If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?
I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.
On the other hand, there’s a sense in which underdetermined values are beneficial: they make it easier for the community to coordinate and pull things in the same direction.
[New candidate framing for existential risk reduction]
The default [edit: implicit] framing for reducing existential risk is something like this: "Currently, humans have control over what we want, but there's a risk that we would lose this control. For instance, transformative AI that's misaligned with what we'd want could prevent us from actualizing good futures."
I don't find this framing particularly compelling. I don't feel like people are particularly "in control of things." There are areas/domains where our control is growing, but there are also areas/domains where it is waning (e.g., cost disease; dysfunctional institutions). (Or, instead of "control waning," we can also think of misaligned forces taking away some of our control – for instance with filter bubbles and other polarizing forces reducing the sense that all people have a shared reality.)
The framing I find most compelling is the following:
"Humans aren't particularly in control of things, but there are areas where technological progress has given us surprisingly advanced capabilities, and every now and then, some groups of people manage to use those capabilities really well. If we want to reduce existential risks, we'd require almost god-like degrees of control over the future and the wisdom/foresight to use it to our advantage. AI risk, in particular, seems especially important from this perspective – for two reasons. (1) AI will likely be radically transformative. Since it's generally much easier to design good systems from scratch rather than make tweaks to existing systems, transformative AI (precisely because of its potential to be transformative) is our best chance to get in control of things. (2) If we fail to align AI, we won't be left in a position where we could attain control over things later."
Fwiw, (1) is more naturally phrased as an opportunity associated with AI than as a risk ("AI opportunity" vs. "AI risk"). And if so, you may want to use a term other than "existential risk reduction" for the concept you've identified.
A bit related to an opportunity+risk framing of AI: Artificial Intelligence as a Positive and Negative Factor in Global Risk.
"The default framing for reducing existential risk is something like this. "Currently, humans have control over what we want, but there's a risk that we would lose this control"Can you perhaps point to some examples?
To me it seems that the default framing is often focused on extinction risks, with non-extinction existential risks mentioned as a sort of secondary case. Under this framing, the issue of control isn't really mentioned; the focus is mostly on the distinction between survival and extinction.
Maybe you had specific writings (focusing on AI risk?) in mind though?
Good points. I should have written that the point about control is implicit. The default framing focuses on risks, as you say, not on making something happen that gives us more control than we currently have. I think there's a natural reading of the existential risk framings that implicitly says something like "current levels of control might be adequate if it weren't for destructive risks" or perhaps "there's a trend where control increases by default and things might go well unless some risk comes about." To be clear, that's by no means a necessary implication of any text on existential risks. It's just something that is under-discussed, and the lack of discussion suggests that some people might think that way.
The second part of my comment here is relevant for this thread's theme – it explains my position a bit better.
In discussions on the difficulty of aligning transformative AI, I've seen reference class arguments like "When engineers build and deploy things, it rarely turns out to be destructive."
I've always felt like this is pointing at the wrong reference class.
My above comment on framings explains why. I think the reference class for AI alignment difficulty should be more like: "When have the people who deployed a transformative technology correctly foreseen its long-term bad societal consequences and taken the right costly steps to mitigate them?"
(Examples could be: keeping a new technology secret; or Facebook, in an alternate history, setting up a governance structure where "our algorithm affects society poorly" would reliably receive a lot of sincere attention, even at management levels, throughout the company's existence.)
Admittedly, I'm kind of lumping together the alignment and coordination problems. Someone could have the view that "AI alignment," with a narrow definition of what counts as "aligned," is comparatively easy, but coordination could still be hard.
[Moral uncertainty and moral realism are in tension]
Is it ever epistemically warranted to have high confidence in moral realism and also be morally uncertain, not just about minor details of a specific normative-ethical theory, but across theories?
I think there's a tension there. One possible reply might be the following. Maybe we are confident in the existence of some moral facts, but multiple normative-ethical theories can accommodate them. Accordingly, we can be moral realists (because some moral facts exist) and be morally uncertain (because there are many theories to choose from that accommodate the little bits we think we know about moral reality).
However, what do we make of the possibility that moral realism could be true only in a very weak sense? For instance, maybe some moral facts exist, but most of morality is underdetermined. Similarly, maybe the true morality is some all-encompassing and complete theory, but humans might be forever epistemically closed off to it. If so, then, in practice, we could never go beyond the few moral facts we already think we know for sure.
Assuming a conception of moral realism that is action-relevant for effective altruism (e.g., because it predicts reasonable degrees of convergence among future philosophers, or makes other strong claims that EAs would be interested in), is it ever epistemically warranted to have high confidence in that, and be open-endedly morally uncertain?
Another way to ask this question: If we don't already know/see that a complete and all-encompassing theory explains many of the features related to folk discourse on morality, why would we assume that such a complete and all-encompassing theory exists in a for-us-accessible fashion? Even if there are, in some sense, "right answers" to moral questions, we need more evidence to conclude that morality is not vastly underdetermined.
For more detailed arguments on this point, see section 3 in this post.