Just finished Semple (2021), 'Good Enough for Government Work? Life-Evaluation and Public Policy,' which I found fascinating for its synthesis of philosophy, economics, and public policy, and its potential relevance to EA (in particular, improving institutional decision-making).
The premise of the paper is essentially, "Normative policy analysis—ascertaining what government should do—is not just a philosophical exercise. It is (or should be) an essential task for people working in government, as well as people outside government who care about what government does. Life-evaluationist welfare-consequentialism is a practical and workable approach."
Some things that are potentially EA-relevant:
For my 'Psychology of Negotiation' (PSYC 25700) class, I'm going to write up one/two-line summaries for research articles that feel plausibly relevant to policy, community-building, or really any interpersonal-heavy work. These are primarily informed by behavioral science.
Hopefully, this will also allow me to better recall these studies for my final.
I'm (still!!!) thinking about my BA thesis research question and I think my main uncertainty/decision point is what specific policy debate to investigate. I've narrowed it down to two so far - hopefully I don't expand - and really welcome thoughts.
Context: I am examining the relationship between narratives deployed by experts on Twitter and the Biden Administration's policymaking process re: COVID-19 vaccine diplomacy. Specifically, I want to examine a debate on an issue wherein EA-aligned experts have generally coalesced around one stance.
Motivating questions/insights:
The two debates below, including general thoughts
*edited for clarity - was in a rush when I posted!
These both seem like great options! Of the two, I think the first has more to play with: the epistemic vs. moral elements of the second are pretty clearly delineated, whereas debates about the first have those all jumbled up, which makes them more interesting/valuable to untangle. I don't totally understand your hesitation, so I'm afraid I can't offer much insight there, but with respect to long-term policymaking/shared beliefs, it does seem like the fault lines mapped onto fairly clear pro-free-market vs. pro-redistributive ideologies that drew the types of advocates one would have predicted given that divide.
*edit 3: After reading more on Epistemic Communities, I think I'm back where I started.
*edit 4: I am questioning, now, whether I need a framework of how experts influence policymaking at all ... Maybe I should conceptualize my actors more broadly but narrow the topic to, say, the use of evidence in narratives?
I really appreciate your response, Ian! I think it makes sense that the more convoluted status of the first debate would make it a more valuable question to investigate.
My hesitation was not worded accessibly or clearly - it was too grounded in the specific frameworks I'm struggling to apply - so let me reword: it doesn't seem accurate to claim that there was one expert consensus (i.e. primarily pro-/anti-waiver). Given that, I am not sure a) how to break down the category of 'expert' - although you provide one suggestion, which is helpful - and b) how strongly I can justify focusing on experts, given that there isn't a clear divide between "what experts think" and "what non-experts think."
Non-TL;DR:
My main concern with investigating the debate around the TRIPS waiver is that there doesn't seem to be a clear expert consensus. I'm not even sure there's a clear EA-aligned consensus, although the few EAs I saw speak on this (e.g. Rob Wiblin) seemed to favor donating over waiving IP (which seems like a common argument from Europe). Given that, I question how to break down the category of 'expert' and how strongly I can justify focusing on experts at all.
Suggestion: use an expert lens, but make the division you're looking at [experts connected to/with influence in the Biden administration] vs. ["outside" experts].
Rationale: The Biden administration thinks of and presents itself to the public as technocratic and guided by science, but, as with any administration, politics and access play a role as well. As you noted, the Biden administration did a clear about-face on this despite the lack of a clear consensus from experts in the public sphere. So why did that happen, and what role did expert influence play in driving it? Put another way, which experts was the administration listening to, and what does that suggest for how experts might be able to make change during the Biden administration's tenure?
Hmm! Yes, that's interesting - and aligns with the fact that many different policy influencers weighed in, ranging from former to current policymakers. Thank you very much for this!
I think something I'm worried about is how I can conceptualize [inside experts] vs. [outside experts] ... It seems like a potentially arbitrary divide and/or a very complex undertaking given the lack of transparency into the policy process (i.e. who actually wields influence and access to Biden and Katherine Tai, on this specific issue?).
It also complicates the investigation by adding in the element of access as a factor, rather than purely thinking about narrative strategies - and I very much want to focus on narratives. On one hand, I think that could be interesting - e.g. looking at narrative strategies across levels of access. On the other, I'm uncertain that looking at narrative strategies would add much compared to just analyzing the stances of actors within the sphere of influence.
What do you think of this alternate RQ: "How did pro/anti-waiver coalitions use evidence in their narratives?"
Moves away from the focus on experts but still gets to the scientific/epistemic component.
(I'm also wondering whether I am being overly concerned with theoretically justifying things!)
I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise, and you're letting that constrain your choices a lot. Maybe that will be a good choice if you're definitely going into academia as a political scientist after this. Otherwise, I would structure the approach around how research happens most naturally in the real world: you have a research question that would have concrete practical value if it were answered, and then you set out to answer it using whatever combination of theories and methods makes sense for the question.
Thanks! I'll take a break from thinking about the theory - ironically, I am fairly confident I don't want to go into academia.
Again, appreciate your thoughts on this. Hope I'll hear from you again if I post another Shortform about my thesis!
A big concern that's cropped up during my current work trial is whether I'm actually just not agentic or strategic enough, or lacking good enough judgment, to take on strategy roles at EA orgs.
I think part of this is driven by low self-confidence, but part of this is the very plausible intuition that not everyone can be in the heavy tail and maybe I am not in the heavy tail for strategy roles. And this feels bad, I guess, because part of me thinks "strategy roles" are the highest-status roles within the meta-EA space, and status is nice.
But not nice enough to sacrifice impact! It seems possible, though, that I actually could be good at strategy and I'm bottlenecked by insecurity (which leads me to defer to others & constantly seek help rather than being agentic).
My current solution is to flag this for my future manager and ensure we are trialling both strategy and operations work. This feels like a supportive way for me to see where my comparative advantage lies. If I hear, "man, you suck at strategy, but your ops work is pretty good!" then I would consider that a win!
My brain now wants to think about the scenario where I'm actually just bad at both. But then I'll have to take the advice I give my members: "Well, then you got really valuable information - you just aren't a great fit for these specific roles, so now you get to explore options which might be great fits instead!"
One approach I found really helpful in transitioning from asking a manager to making my own strategic decisions was going to my manager with a recommendation and asking for feedback on it (or, failing that, a clear description of the problem and any potential next steps I can think of, like ways to gain more information).
This gave me the confidence to learn how my organisation worked and know I had my manager's support for my solution, but pushed me to develop my own judgment.
Thanks, this is a good tip! Unfortunately, the current options I'm considering seem more hands-off than this (i.e., the expectation is that I would start with little oversight from a manager), but this might be a hidden upside because I'm forced to just try things. : )
Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"
edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role.
But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an individual, can take responsibility for) and world impact (how much good this creates for the world) converge. In this case, there's no such thing as "strategic enough" - if my comparative advantage doesn't lie in strategy, that doesn't mean I'm not "strategic enough" because I was never 'meant to' be in strategy anyway!
So the question isn't, "Am I strategic enough?" But rather, "Am I more suited for strategy-heavy roles or strategy-light roles?"
Optionality cost is a useful reminder that option value consists not only of minimising opportunity cost but also of increasing your options (which might require committing to an opportunity).
This line in particular feels very EA:
As Ami Vora writes, “It’s not prioritization until it hurts.”
I know that carbon offsets (and effective climate giving) are a fairly common topic of discussion, but I've yet to see any thoughts on the newly-launched Climate Vault. It seems like a novel take on offsetting: your funds go to purchasing cap-and-trade permits which will then be sold to fund carbon dioxide removal (CDR).
I like it because a) it uses (and potentially improves upon) a flawed government program in a beneficial way, and b) it lets me fund both the limitation of carbon emissions and their removal, unlike other offsets, which only do the latter.
However, I recognize that I have a blind spot because I respect Michael Greenstone. Some doubts:
If anyone has thoughts, would appreciate them!
"How do you convert a permit into CO2 removal using CDR technologies without selling them back into the compliance market – in effect negating the offset?
We will sell the permits back into the market, but only when we’re ready to use the proceeds to fund carbon removal projects equivalent to the number of permits we’re selling, or more. So, in effect, the permits going back onto the market are negated by the tons of carbon we are paying to remove."
Once credible CDR is cheap enough for this to work - it currently costs upwards of USD 100/t, with most approaches over USD 600 (cf. Stripe Climate), against current carbon prices of around USD 20 - the value of additional CDR tech support is pretty low, because the learning curve has already been brought down.
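To make the arithmetic concrete, here's a rough sketch using the ballpark figures above (the prices are my read of 2021-era markets, not Climate Vault's own numbers):

$$\text{tons removed per permit sold} \approx \frac{\text{permit price}}{\text{CDR cost}} \approx \frac{20}{600} \approx 0.03$$

So selling one permit returns a full ton of emissions allowance to the market but funds only ~0.03 tons of removal; the scheme only nets out once CDR costs fall to or below the permit price.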
Am I missing something?
It seems like a good way to buy allowances, which - when the cap is fixed (also addressed in the FAQ, though not 100% convincingly) - is better than buying most offsets, but it seems unlikely to work in the way intended.
Hmm okay! Thanks so much for this. So I suppose the main uncertainties for me are
Really appreciate you helping clarify this for me!
Realizing that what drove me to EA was largely wanting to "feel like I could help people" and not to "help the most beings." This leads me to, for example, really want to help as many individuals as I can to flourish (at the expense of selecting for the people who might be able to make the most impact)*.
This feels like a useful specification of my "A" side and how/why the "E" side is something I should work on!
*A more useful reframing of this is to put it into impact terms. Do I think the best way to make impact is to
(1) find the right contexts/problems wherein a given person can have an outsized impact
or (2) focus on specific people that I think have the highest chance of having an outsized impact?
I wonder if anyone has read these books here? https://www.theatlantic.com/books/archive/2022/04/social-change-books-lynn-hunt/629587/
In particular, 'Inventing Human Rights: A History' seems relevant to Moral Circle Expansion.
edit: I should've read the list fully! I've actually read The Honor Code. I didn't find it that impressive but I guess the general idea makes sense. If we can make effective altruism something to be proud of - something to aspire to for people outside the movement, including people who currently denigrate it as being too elitist/out-of-touch/etc. - then we stand a chance at moral revolution.
Two thoughts inspired by UChicago EA's discussion after Ben Todd's talk at EAG:
On #1: There has been a large-scale EA-themed debate tournament targeting debaters (mainly undergraduates I believe) organized by Dan Lahav from EA Israel, talked about here!
Very useful, thank you! Apparently they did a trial with high schoolers, so I've reached out : )
At work so have no mental space to read this carefully right now, but wonder if anyone has thoughts - specifically about whether there's any EA-relevant content: MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule. (vice.com)
These models predicted growth followed by collapse. The first part has been proven correct, but there is little evidence for the second. Acting like past observations of growth are evidence of future collapse seems like an unusual example of Goodman's New Riddle of Induction in the wild.
Thank you, so helpful!
To clarify - "little evidence" implies that you think observations of current conditions aligning with model predictions (e.g. "Previous studies that attempted to do this found that the model's worst-case scenarios accurately reflected real-world developments") are weak evidence?
Would it be useful to compile EA-relevant press?
Inspired by me seeing this Vice article on wet-bulb conditions (a seemingly unlikely route for climate change to become an existential risk): Scientists Studying Temperature at Which Humans Spontaneously Die With Increasing Urgency
If so, what/how? I don't think full-time monitoring makes sense (first rule of comms: do everything with a full comms strategy in mind!) but I wonder if a list or Airtable would still be useful for organizations to pull from or something...
I think David Nash does something similar with his EA Updates (here is the most recent one). While most of the links are focused on EA Forum and posts by EA/EA-adj orgs, he features occasional links from other venues.
My hope is that people who see EA-relevant press will post it here (even in Shortform!).
I also track a lot of blogs for the EA Newsletter and scan Twitter for any mention of effective altruism, which means I catch a lot of the most directly relevant media. But EA's domain is the entire world, so no one person will catch everything important. That's what the Forum is for :-)
I'm not sure whether you're picturing a project specific to stories about EA or one that covers many other topics. In the case of the former, others at CEA and I know about nearly everything (though we don't have it in a database; no one ever asks). In the case of the latter, the "database" in question would probably just be... Google? I'm having trouble picturing the scenario where an org needs to pull from a list of articles they wouldn't find otherwise. (But I'm open to being convinced!)