Cross-posted from: https://open.substack.com/pub/gamingthesystem/p/a-lot-of-ea-orientated-research-doesnt?r=9079y&utm_campaign=post&utm_medium=web 

NB: This post would be clearer if I gave specific examples, but I'm not going to call out specific organisations or individuals, to avoid making this post unnecessarily antagonistic. 

Summary: On the margin, more resources should be put towards action-guiding research instead of abstract research areas that don't have a clear path to impact. More resources should also be put towards communicating that research to decision-makers and ensuring that it actually gets used. 

Doing research that improves the world is really hard. Collectively, as a movement, I think EA does better than any other group. However, too many person-hours are going into research that doesn't seem appropriately focused on actually causing positive change in the world. Soon after the initial ChatGPT launch probably wasn't the right time for governments to regulate AI, but given the amount of funding that has gone into AI governance research, it seems like a bad sign that there were few (if any) viable AI governance proposals ready for policymakers to take off the shelf and implement. 

Research aimed at doing good can fall into two buckets (or somewhere in between):

  1. Fundamental research that improves our understanding about how to think about a problem or how to prioritise between cause areas
  2. Action-guiding research that analyses which path forward is best and comes up with a proposal

Feedback loops between research and impact are poor, so there is a risk of falling prey to motivated reasoning, as fundamental research can be more appealing for a couple of reasons:

  1. Culturally, EA seems to reward people for doing work that seems very clever and complicated, and sometimes this can be a not-terrible proxy for important research. But this isn't the same as doing work that actually moves the needle on the issues that matter. Academic research is far worse for this and rewards researchers for writing papers that sound clever (hence why so much academic writing is unnecessarily unintelligible), but EA shouldn't fall into the trap of conflating complexity with impact.
  2. People also enjoy discussing interesting ideas, and EAs in particular enjoy discussing abstract concepts. But intellectually stimulating work is not the same as impactful research, even if the research is looking into an important area.

Given that action-guiding research has a clearer path to impact, arguably the bar should be pretty high to focus on fundamental research over action-guiding research. If it's unlikely that a decision maker would look at the findings of a piece of research and change their actions as a result, then there should be a very strong alternative reason why the research is worthwhile. There is also a difference between research that you think should change the behaviour of decision makers, and what will actually influence them. While it might be clear to you that your research on some obscure form of decision theory has implications for the actions that key decision makers should take, if there is a negligible chance of them seeing this research or taking this on board then this research has very little value. 

This is fine if the theory of change for your research doesn't rely on the relevant people (e.g. policymakers) being convinced of your work, but most research does rely on important people actually reading the findings, understanding them, and being convinced that they should take a different action from the one they would have taken otherwise. This is especially true of research in areas like AI governance, where implementing the findings requires governments to act.

Doing this successfully doesn't just rely on doing action-guiding research; you also have to communicate it to the relevant people. Some groups do this very well, others do not. Fighting for the attention of politicians might not be glamorous work, but if you want legislative change it is what you have to do. It therefore seems odd to spend such a high proportion of time on research and then not put effort into making that research actionable for policymakers and communicating it to them.

Some counterarguments in favour of fundamental research:

  1. We are so far away from having recommendations for decision makers that we need to do fundamental research first, which will then let us work towards more action-guiding recommendations in the future. This is necessary in some areas, but the longer the causal chain to impact, the more you should discount the likelihood of it occurring.
  2. Fundamental research is more neglected in some areas, so you can have more impact by covering new ground than by competing for the attention of decision-makers. The counter-counterpoint is that there are plenty of areas where there just isn't much good action-guiding research, so there is a wealth of neglected, action-relevant research questions to choose from. 
  3. Fundamental research takes longer to pay off, but it can become relevant in the future, and by that point an individual who has focused on the area will be the expert who gets called upon by decision makers. This is a fair justification, but in these cases you should still prefer a research area that is likely to become mainstream.

Putting more resources into fundamental research made sense when EA cause areas were niche and weird, although I think funding and talent were still more skewed towards fundamental research than was optimal. But now that multiple cause areas have become more mainstream, decision makers are more likely to be receptive to research findings in these areas.

It seems like EA think tanks are becoming more savvy and gradually moving in the direction of action-guiding research and focusing on communicating to decision makers, especially in AI governance. There is some inertia here and I would argue groups have been too slow to respond. If you can’t clearly articulate why someone would look at your research and take a different set of actions, you probably shouldn’t be doing it.

Comments

I sympathise with your NB at the beginning, but to be honest, in the absence of specific examples or wider data, it's hard for me to ground this criticism or test its validity. Ironically, it's almost as if this post is too fundamental rather than action-guiding for me.

Doesn't mean you're wrong per se, but this post is almost more of a hypothesis than an argument.

I agree that in the absence of specific examples the criticism is hard to understand. But I would go further and argue that the NB at the beginning is fundamentally misguided and that well-meaning and constructive criticism of EA orgs or people should very rarely be obscured to make it seem less antagonistic.

Came here to comment this. It's the kind of paradigmatic criticism that Scott Alexander talks about, which everyone can nod and agree with when it's an abstraction.

Right now it's impossible to argue with this post: who doesn't want research to be better? Even positive examples with specific pointers to what they did well would help.

Thank you for making this interesting post. It's certainly something that pops up in forum discussions, so it's useful to see it laid out in a single post. Obviously without concrete examples it's hard to delve into the details, but I think it's worth engaging with the discussion on an, ironically, more abstract level.

I think a lot of this comes down to how individual people define 'impact', which you do mention in your post. For some, increasing academic knowledge of a niche topic is impact. Other people might perceive citations as impact. For others, publishing a research paper that only gets read by other EA orgs but increases their social standing, and therefore their likelihood of further funding or work, is impact. For some, career capital is the intended impact. Some people measure impact only by the frontline change it elicits. That seems to be the focus of your post, unless I am mistaken, so it sounds like your post boils down to 'EA-centric research doesn't cause real-world, measurable change often enough'.

If that is the measure of impact you think is important, I think your post has some merit. That's not to say the other definitions are any lesser, or deserve less attention, but I think you are correct that there's an 'impact gap' near the end of the research-to-change pipeline. 

I can only speak to AI governance, as that is my niche. As fortune would have it, my career is in AI governance within organisational change: my role is to enter private or public sector organisations, to a greater or lesser extent, and help create new AI governance and policy on either a project or org-wide basis. So my feedback/thoughts here come with that experience but also that bias. I'll also take the opportunity to point out that AI governance isn't just about lobbying politicians; there's lots of wider organisational work too, though I understand the oversight was likely word-count related.

Generally I think the problem you describe isn't so much one within EA as it is one within wider academia. During my PhD I was declined travel funding to present my research to government decision-makers at a government-sponsored event because it wasn't an 'academic conference' and therefore failed their 'impact' criteria. The previous year, the same fund had accepted my application to present a (hardly ground-breaking) poster at a 35-person conference. I was very upset at the time because I had to miss that meeting and the opportunity passed me by, and I was frustrated that they gave me money to attend a conference that changed nothing but didn't give me the money I needed to make a big impact a year later. It was only later that I realised they just wanted different outcomes than I did.

The problem there was that the university's definition of 'impact' differed from mine, so by their criteria, presenting a poster to 35 people at an academic conference was more impactful than meeting with government officials to show them my research. It's a handy example of the fact that impact maps to goals.

So I think what it boils down to is how much this concept of goal-related impact bleeds into EA.

There is also a difference between research that you think should change the behaviour of decision makers, and what will actually influence them. While it might be clear to you that your research on some obscure form of decision theory has implications for the actions that key decision makers should take, if there is a negligible chance of them seeing this research or taking this on board then this research has very little value.

This point features partly in a post I am currently writing for Draft Amnesty Week, but essentially I think you're correct: in my more 'frontline' AI governance work, I've found that anecdotally roughly 0% of decision-makers read academic research. Or know where it is published. Or how to access it. That's a real problem when it comes to using academic research as an influence lever. That's not to say the research is pointless; it's just that there are extra steps between research and impact that are woefully neglected. If end-user change is something that is important to you as a researcher, it would be understandably frustrating for this hurdle to reduce that impact.

This isn't an EA issue but a field issue. There's plenty of fantastic non-EA AI governance research which lands like a pin-drop, far from the ears of decision-makers, because it wasn't put in the right hands at the right time. The problem is many decision-makers where it counts (particularly in industry) get their knowledge from staff, consultants, dedicated third-party summary organisations, or field-relevant newsletters/conferences. Not directly from academia.

One caveat here is that some fields, like Law, have a much greater overlap between 'people reading/publishing' and 'decision-makers'. This is partly because publishing and work in many legal sectors are designed for impact in this way. So the above isn't always ironclad, but it largely tracks for general decision-making and AI governance. I find the EA orgs best at generating real-world impact are those in the legal/policy space, because of the larger-than-normal number of legal and policy researchers there, coupled with the fact that they are more likely to measure their success by policy change.

A further complicating factor that I think contributes to the way you feel is that, unfortunately, some AI governance research is undertaken and published by people who don't have much experience in large organisations. Perhaps they have spent their entire career in academia, or have only worked in start-ups, or arrived via other paths, but that's where you see different 'paths to impact' which don't translate well to larger-scale impact of the type you describe in your post. Again, the reason is that each of these spheres has its own definition of what constitutes 'impact', and it doesn't always translate well.

As a partial result of this, I've seen some really good AI governance ideas pitched really badly, and to the wrong gatekeeper. Knowing how to pitch research to an organisation is a skillset built through experience, and the modern academic pathway doesn't give people much opportunity to gain that experience. Personally, I just learned it by failing really hard a lot of times early in my career. For what it's worth, I'd 100% recommend that strategy if there are any early-career folks reading this.

I will disagree with you on one point here:

Soon after the initial ChatGPT launch probably wasn't the right time for governments to regulate AI, but given the amount of funding that has gone into AI governance research, it seems like a bad sign that there were few (if any) viable AI governance proposals ready for policymakers to take off the shelf and implement. 

I'll be pedantic and point out that governments already do regulate AI, just to different extents than some would like, and that off-the-shelf governance proposals don't really exist because of how law and policy work. So I'm not sure this is a good metric to use for your wider point. The law and policy of AI is literally my career, and I couldn't create a workable off-the-shelf policy simply because of how many factors need to be considered.

It seems like EA think tanks are becoming more savvy and gradually moving in the direction of action-guiding research and focusing on communicating to decision makers, especially in AI governance.

Taking a leaf from your vagueness book, I'll say that in my experience some of the EA or EA-adjacent AI governance orgs are really good at engaging external stakeholders, and some are less good. I say this as an outsider, because I don't work for, and have never worked for, an EA org, but I do follow their research. So take this with appropriate pinches of salt.

I think part of the disparity is that some orgs recruit people with experience in how internal government decision-making works, i.e. people who have worked in the public sector or have legal or policy backgrounds. Others don't. I think that largely reflects their goals. It's not random that some are good at this and some are not; it's just that some value it and some don't, so effort either gets invested in change impact or it doesn't.

If an EA research org defines 'impact' as increasing its research standing within EA, or the number of publications per year, or the number of conferences attended, then why would it put effort into creating organisational change? Likewise, I don't publish that much because publishing isn't directly related to how I measure my own impact. Neither approach is better; it just comes down to how goals are measured.

If, as I think your post details, your criticism is that EA research doesn't create frontline change often enough, then I think there are some relatively simple fixes.

EA research somewhat neglects involving external stakeholders, which I think links back to the issues you explore in your post. Stakeholder engagement can be integrated quite easily and well into AI governance research, as this example shows, and it's quite an easy (and often non-costly) methodology to pick up that can result in frontline impact.

Stakeholder-involved research must always be done carefully, so I don't blame EA funding orgs or think tanks for approaching it cautiously, but they need to cultivate the right talent for this kind of work and use it, because it's very important.

I think a solution would be to offer grants or groups for this specific kind of work. Even workshops might work; I'd volunteer some of my experience for that, if asked. Just something to give researchers who want the kind of impact you describe, but don't know how to achieve it, a head-start.

I think impact-centric conferences would also be a good idea. Theoretical researchers do fantastic work, and many of us more involved in the change side of things couldn't do our jobs without them, so creating a space where those groups can exchange ideas would be awesome. EAGs are good for that, I find. I often get a lot of 1-1s booked, and I get a lot from them too.

I think the point about AI governance is quite valid here, as is the progress of AI applications from OpenAI tools like GPTs, Sora, and upcoming multimodal applications. It is important to note that most of this exposure affects large groups of un-savvy individuals, both in the general workforce and among decision-makers. As someone with a significant background in video (what goes into making it and how it circulates once released), I can say there was sufficient time to implement basic structural provisions for generative media. It is, and was, obvious that unregulated use of massive datasets is in play, and only giants can try to sue giants; as of now, that's about it. There should have been strict rules in place before platforms like Sora were marketed. It is not hard to see that tech companies are not focused on AI alignment right now; new research is oriented towards making the most of a competitive setting to deliver products. That's just a fact, and it is sad to be excited about the future of it. Of course, we cannot blame the entire field of research for forecasting and making predictions. But there are regular mass layoffs in the name of 'future-proofing'. We must figure out how to move in the direction of action-guiding research fast.
