John_Walker

Research Assistant @ Blavatnik School of Government, Oxford University
19 karma · Working (0-5 years)

Bio

I'm a research assistant with the Centre for the Study of African Economies at the Blavatnik School of Government. My research interests are in computational econometrics and machine learning for causal inference with applications to development and environmental economics. 

In my spare time, I teach statistics through the Data, Economics and Development Policy Program at MIT and am putting together a MOOC focused on introductory EA ideas.

How others can help me

I'm currently uncertain about whether I should pursue a PhD in economics, or refocus on causal ML applications and understanding AI causal reasoning. If you have insights here I'd be very keen to chat!

How I can help others

I've done the circuit of economics research assistantships (1 at the University of Sydney, 2 at MIT and 1 at Oxford University), so I can talk people through the RA market and the PhD application process if this is something they are seriously considering. Please feel free to reach out to me here.

Comments (2)

Not sure if this is the best place for feedback; if not, please direct me elsewhere and I'm happy to delete my comment.

A few suggestions on the explanation of EV. While the examples are clear, I found the definition of expected value confusing. 

It is written as "expected value = likelihood of option x value of option", and "The expected value is the probability multiplied by the value of each outcome".

I read this as EV = p × v, a single probability times a single value, which doesn't capture the need to sum across outcomes.

Pitched at the same level of technicality, I think a clearer definition is: "The expected value of an uncertain decision is the sum across all outcomes of the value of each outcome multiplied by its probability." 

Or some other wording that captures that this is a weighted average. This properly implies the necessary summation across outcomes: EV = p₁ × v₁ + p₂ × v₂ + … + pₙ × vₙ.

It might also be worth:

  • Giving detail to the calculation of the expected value in "If you have a 50% chance of winning a coin flip for a $1 coin, the expected value is 50 cents". Because the losing outcome contributes zero, it isn't obvious that you need to sum across outcomes here, which is confusing if you don't see the application of the formula: EV = 0.5 × $1 + 0.5 × $0 = $0.50 (see the sketch after this list).
  • Giving EV an interpretation, e.g. "This has the interpretation of a long-run average, or what we would expect if we were able to repeat our plan many times." This would help to ground the motivation for using EV in the "what should you do" section.
  • Noting that this summation form only holds in the discrete case.
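To make the first two suggestions concrete, here is a minimal LaTeX sketch of the discrete definition and the coin-flip calculation (the notation is mine, not the post's):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Discrete expected value: a probability-weighted sum over all outcomes
\[
  \mathbb{E}[V] = \sum_{i} p_i \, v_i
\]

% Applied to the coin flip: a 50% chance of winning $1 and a 50% chance of winning nothing
\[
  \mathbb{E}[V] = 0.5 \times \$1 + 0.5 \times \$0 = \$0.50
\]

\end{document}
```

Either line makes the weighted-average reading explicit, and the second shows why the zero-valued outcome quietly drops out of the coin-flip example.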

Thanks for sharing this! It touched on a lot of the topics that I've recently found most challenging in thinking about the use of RCTs in development economics and offered some helpful perspectives. Below I've outlined a few points where I'd push back.

EAs creating impact through marginal grants

To me, it seems highly improbable that a new intervention is, in expected value terms:

  1. 5x+ as effective as cash transfers
  2. Plausibly scalable / a potential mega project

yet is still struggling to find funds in the current RCT funding landscape.

So I think it is unlikely that EA funding for RCTs will precipitate new, high-quality research that generates giving opportunities at the same marginal cost-benefit as GiveWell's current recommendations.

One might think that in the "marketplace" of development economics grants, funding tends to be allocated to the most promising programs on which we have the least information. In a heavily right-tailed impact area, the marginal grant is probably far less impactful than the average. For this reason, the ROI calculation looks very optimistic to me.

In fact, I think that, in benefit-cost terms, too many RCTs are already happening, and that if EA funds are held to their traditionally high ROI standard, there will be few opportunities to fund new RCT research. I'm also somewhat pessimistic that many interventions exist that are both 5x+ as effective as cash transfers and plausibly scalable but have not yet been found (the low-hanging fruit has probably been exhausted). However, if such interventions exist and the recipient couldn't have accessed a grant but for EA support, we should definitely spend money on them. I am just unconvinced that 1) we should treat this as an area for substantive EA engagement, and 2) that if we think like Bayesians interested in medium- to long-term impact, RCTs are necessarily cost-effective relative to other evidence-generating activities such as quasi-experiments or correlational evidence.

Of course, the RCT funding landscape is far from a perfect market. My personal experience (at least in the development economics space) is that RCTs tend to be led by relatively senior development economists who have the track record, partnerships and in-country experience to foster government buy-in and successfully manage an RCT. These same attributes mean that they are well placed to find or create funding opportunities with partner governments, their home government and NGOs, so the marginal EA grant to these researchers is probably barely impactful.

By contrast, I am more optimistic that grants to PhD students, early-career researchers, and researchers at small institutions or working independently might be impactful, as these researchers have fewer signals of their capacity to bring to the grant market (so they might have good ideas that go unfunded).

EAs creating impact through changing the research agenda

I am sympathetic to the argument that EA grants in this space might cause better research to happen. For example, I think that funding Kremer’s meta-analysis was probably worthwhile as it helped to normalise the use of non-RCT methodologies in the RCT-crowded development economics space. 

That said, I actually think that the literature is very responsive to critiques such as those raised about reporting cost-effectiveness and general equilibrium effects, and is rapidly self-correcting. For example, the norm in development economics RCTs is shifting towards expecting cost-effectiveness analyses (at least that is the case in the papers, workshops and review processes of which I am part). Studies on general equilibrium effects and spillovers are also gaining traction in the top 5 economics journals, for example this recent paper in Econometrica. Indeed, my personal view is that economists have gotten better at rigorously evaluating spillovers due to the robust critiques of epidemiologists (see the worm wars). For this reason, I'm not sure that EA should be allocating many funds to trying to push the development economics literature in a given direction, as it already seems quite responsive to internal critiques.

EA alignment

More importantly, I am pretty convinced that development economics is quite EA-aligned, and probably more aligned than the academic disciplines concerned with longtermist cause areas (most notably AGI). With Esther Duflo, Abhijit Banerjee and Michael Kremer winning the Nobel Prize in Economics, I think it is pretty clear that the field aspires to have a large impact on improving lives in the developing world through rigorously evaluated approaches (and the peer-review system for microeconometric evaluations seems to expect an extremely high degree of rigour). Hence, I'm not sure that EA has much room to create impact by using funds to re-align the field towards areas of interest to EAs, as it already seems pretty close to what we would hope for.

In fact, I think that the primary point of non-alignment is that development economics focuses far more on randomista development strategies than on macroeconomic development strategies. That is, relative to the EA median, it probably focuses too much on RCTs and rigorous micro-level evidence at the expense of evaluating macro policy, which is almost impossible to assess using RCTs. Therefore, I think there are probably more promising cases for research grants in the relatively neglected space of macroeconomic development policy than in new RCTs.