This is a link post for How Life Sciences Actually Work: Findings of a Year-Long Investigation. I added a special section on the relevance to the EA Community after the conclusion.
See discussion on Twitter, Reddit.
Summary: academia has a lot of problems, and it could work much better. However, these problems are not as catastrophic as an outside perspective would suggest. My (contrarian, I guess) intuition is that scientific progress in biology is not slowing down. Specific parts of academia that seem problematic: rigid career progression that punishes deviation; peer review; professors' need to constantly fundraise. Parts that seem to be less of a problem than I initially thought: short-termism; lack of funding for young scientists.
Introduction
In his Asymmetric Weapons Gone Bad (a), Scott Alexander notes that for some areas of inquiry, studying them a little bit leads you astray and only studying them a lot makes it possible to understand what's really going on:
Maybe with an unlimited amount of resources, our investigations would naturally converge onto the truth. Given infinite intelligence, wisdom, impartiality, education, domain knowledge, evidence to study, experiments to perform, and time to think it over, we would figure everything out.
But just because infinite resources will produce truth doesn’t mean that truth as a function of resources has to be monotonic. ...
Some hard questions might be epistemic traps – problems where the more you study them, the wronger you get, up to some inflection point that might be further than anybody has ever studied them before.
I think that this observation is a general one -- true for almost all areas of study -- and moreover lends itself to a much simpler explanation: almost everyone has an agenda. Almost no one has the exact right picture of what's going on in their mind.
Thus, when starting to study something and reading one or two books or talking to a couple of people, you will inevitably overload on their agenda, personal biases, personal ignorance, and all the other factors that do not lead to the correct model of the subject.
Then, if you read a few more books or talk to several more people, you will see some of the biases that influenced your model-building. However, this also will not be enough. When picking the books to read or people to talk to, your sample will be either too narrow (i.e. the people you talk to will be very similar to each other) or too broad (i.e. their experiences will be so discordant that it will be too difficult for you to cross-reference them and piece them into a coherent picture).
When I talked to the first 5 people about how life sciences research works, I had a perfect picture in my mind, fully in accordance with all the rumors about academia's sclerosis, risk-aversion, and short-termism that I'd been hearing for the last few years.
When I talked to 10 more people, I realized that something was off, but couldn't quite piece it all together.
In the end, I interviewed about 60 people involved with life sciences research -- mostly grad students, postdocs, and PIs (principal investigators, heads of laboratories), but also, for example, people from philanthropic organizations and venture funds investing in life sciences.
You could view this essay as a culmination of a year of my pursuit of applied progress studies (a). I feel like I'm finally able to converge on something, even though this something seems to be somewhat different from what almost any single person I talked to has as their model of what's going on in the field.
[Life] science is not slowing down
It's tempting to look at the negative side of academia -- bureaucratization, seeming risk-aversion, loss of freedom -- and conclude that science is therefore working poorly. This is a mistake. Yes, funding agencies are risk-averse; yes, academia now selects for things you probably don't want it to select for, like conformity and high conscientiousness; yes, an average scientist is not in academia for the love of science (and maybe the productivity of an average scientist is decreasing). [sidenote 1: I would argue that comparing an average scientist today and in 1973 is a bit like comparing an average high-school drop-out today and in 1973. (a)]
However, none of this means that science is stagnating or even slowing down. The pace of discovery in every area of biology I looked at is astounding. And not only is the pace astounding: many researchers are indeed working on the highest-expected-value projects they have, with few restrictions, taking in the smartest people they can find and throwing money at them (the way it works in biology is that PIs fundraise and manage/mentor, while graduate students and postdocs do the work).
If you look at Harvard or MIT or Stanford or many other universities -- they are all filled with amazing researchers working on great budgets, truly passionate about discovery and invention. HHMI (a) is actually giving people unrestricted grants; CZI (a) is funding people specifically to work on software (which is usually hard to get funded) and to work on their riskiest, usually unfundable ideas; and many other foundations are trying to correct the inefficiencies they see in resource allocation.
I think that the perception of stagnation in science -- and in biology specifically -- is basically fake news, driven by the technological hedonic treadmill and by nostalgia. We rapidly adapt to technological advances -- however big they are -- and we always idealize the past -- however terrible it was.
I mean -- we can just go to Wikipedia's 2018 in science (a) and see how much progress we made last year:
- first bionic hand with a sense of touch that can be worn outside a laboratory
- development of a new 3D bioprinting technique, which allows the more accurate printing of soft tissue organs, such as lungs
- a method through which the human innate immune system may possibly be trained to more efficiently respond to diseases and infections
- a new form of biomaterial based delivery system for therapeutic drugs, which only release their cargo under certain physiological conditions, thereby potentially reducing drug side-effects in patients
- an announcement of human clinical trials, that will encompass the use of CRISPR technology to modify the T cells of patients with multiple myeloma, sarcoma and melanoma cancers, to allow the cells to more effectively combat the cancers, the first of their kind trials in the US
- a blood test (or liquid biopsy) that can detect eight common cancer tumors early. The new test, based on cancer-related DNA and proteins found in the blood, produced 70% positive results in the tumor-types studied in 1005 patients
- a method of turning skin cells into stem cells, with the use of CRISPR
- the creation of two monkey clones for the first time
- a paper which presents possible evidence that naked mole-rats do not face increased mortality risk due to aging
Doesn't seem like much? Here's the kicker: this is not 2018. This is January 2018.
If you actually go and look at the major discoveries made in any single year in the first half of the 20th century and compare them to 2018, you'll probably conclude that the pace of scientific progress is only getting faster.
After a year of studying how life science research actually works and progresses, I'm way more optimistic about it.
Nothing works the way you would naively think it works (for better and for worse): 3 Examples
1. "NIH's risk-aversion makes it very hard to fund innovative science"
Observation: NIH is risk-averse to the point that even when it explicitly asks for radical proposals, nobody believes it will actually fund anything radical, so people don't even bother applying with high-risk research [sidenote 2: Open Philanthropy evaluating NIH Director's Transformative Research Award proposals (a): 'we thought many of the proposals we reviewed were similar to proposals in more typical RFPs in terms of their novelty and potential impact. In other words, we considered many of the submitted proposals to be a bit on the conventional side. This surprised us given the “transformative” premise and focus of the TRA program. We speculate that this may be due to the constraints within which applicants feel they must work to get through panel reviews.']
Naive conclusion: it's impossible to get high-risk projects funded by NIH, so scientists end up working on incremental science
Reality: NIH doesn't force scientists to adhere 100% to their submitted plans, so what ends up happening is that scientists apply for a grant with a project for which they already have supporting preliminary data (so that NIH doesn't consider it high-risk) and spend most of the funds on that project. The leftover funds can then be used to produce preliminary data for their high-risk ideas, making those ideas appear less risky and more easily fundable by NIH in the future. [sidenote 3: Sometimes this takes a more extreme form where researchers actually promise NIH good results from experiments they have already performed but not yet publicized.]
So, in reality, distortion is still present because you have to figure out rather arcane ways to get anything interesting funded, but the amount of distortion is way less than we would naively expect.
2. "NIH's biases make it very hard to fund methodological research"
Observation: NIH doesn't like to fund purely methodological studies (e.g. development of better software)
Naive conclusion: it's impossible to get funded for methods development by NIH
Reality: you can dress up methods development for NIH, e.g. by providing a concrete biological goal for which insufficient methods are the bottleneck and show NIH that you will be able to achieve concrete progress on something that matters to them using your better methods
Again, yes, doing this introduces obvious frictions and inefficiencies and skews what scientists work on towards things that are easy to dress up for NIH, rather than things that they believe are most important. But the amount of frictions and inefficiencies is way less than we would naively expect.
3. "NIH severely underfunds young researchers"
Observation:
the median age of first-time recipients of R01 grants, the most common and sought-after form of N.I.H. funding, is 42, while the median age of all recipients is 52. More people over 65 are funded with research grants than those under age 35. (NYT (a))
Naive conclusion: young scientists lack the resources to pursue their research; NIH should allocate more funds to investigators under the age of 35 or 40.
Reality: Many extremely capable people end up working in well-funded labs whose PIs are pretty hands-off, allowing their grad students and postdocs to work on whatever they want without really thinking about money. In statistics, this registers as "more money than ever goes to older/famous PIs" and "young investigators can't start their independent careers". On the ground, it means that until age 35 (i.e. when your creativity is at its highest) you are isolated from management, fundraising, and the endless administrative responsibilities bestowed on any tenure-track professor, and can focus 100% on doing science and publishing papers, while getting mentoring from your senior PI and being helped by all the infrastructure established labs have (administrative assistants, lab technicians, etc.).
One could argue that, due to all of these factors, the present funding structure benefits the production of original research by those most capable of it, and that letting more young people set up their own labs would mostly result in more people attending useless faculty meetings and writing grants instead of doing research.
When you see "X's [where X is a famous scientist] lab discovers Y", chances are it was Z -- X's grad student or postdoc -- who came up with the idea, did all the work (with the help of other students), and wrote the paper. X's contribution was probably in providing the research environment in which all of this was possible, providing money, and talking to Z every other week about the progress of the idea. I'm not saying that X's contribution is not important -- it usually is -- but press releases focused on the PI miss the fact that money is not actually spent directly by them, systematically biasing our perception of how funds are allocated.
In the end, yes, distortion is there. It is indeed difficult to become independent and establish your own personal research agenda if you're young, but the end negative effect on science done by young researchers is way less than we would naively expect.
Two side points
- When I write "naive conclusion", it usually means that it took me months of research to figure out why this is actually not true
- While NIH provides the majority of funding for life science research in the US, there are many foundations that are aware of its biases and that are actively trying to fund people and research that NIH misses, support grad students, postdocs, and early-career independent researchers, and so on.
If you're smart and driven, you'll find your way in
There are many labs in biology that have good funding and open, risk-loving PIs. This means that if you're very smart, ambitious, and driven, you'll likely be able to get in and work on cool stuff, even with a questionable background, although you might have to spend a year or so working as a Research Assistant to prove your worth. PIs are usually pretty open to taking on Research Assistants and very receptive to thoughtful, personalized cold emails (this applies to scientists you would consider famous as well).
I'm fairly confident that if someone capable of producing Nobel-level research decided to do a PhD in biology or neuroscience, they would be able to enter great programs with great advisers and funding in less than 2 years.
The problem here is that almost everybody for whom understanding the above would be useful is misinformed. People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor's experiments, and that it is only possible with a perfect undergrad GPA. Neither of these is true if you're truly capable, and as a result people base their decision to avoid grad school on misinformation.
Nobody cares if you're a genius
This may sound like I'm contradicting the previous section, but note that there I only wrote "get in", not "stay in".
In order to get in, you only have to convince one PI that you're worth taking a small chance on. In order to stay in, you have to convince many professors, the many study sections that will assess your grant applications, various charitable foundations, the university's hiring committee, the tenure panel, and so on, and so on.
Every one of these entities will be less open and less forgiving than the PI who might decide to take you in.
I know many brilliant researchers.
- some work in a field that is currently not fashionable
- some are not very likable or are bad at networking
- some are of unwelcome demographics
- some are not good at hyping up, explaining or putting the right spin on their research
- some are too passionate about their subject (Church (a), Woodward (a), Ramanujan (a), Lippmann (a))
- some are just not very good at anything else (Einstein (a))
- "Einstein took the entrance examinations for the Swiss Federal Polytechnic in Zürich ... He failed to reach the required standard in the general part of the examination, but obtained exceptional grades in physics and mathematics"
- some are too disagreeable
- some are bad at writing grants, selling, and fundraising
- some are too truth-seeking and are retaliated against because of it (case of Hellinga (a))
- some are not good at finishing their papers (Bohr (a))
- some are bad at academic politics
- some are recluses
- some got unlucky in their early research results or advisors
- some are bad at policing their speech to current moral fashions (1 (a), 2 (a))
- some work on research that is too novel and are therefore not understood (genetics (a), DNA Synthesis (a), public key cryptography (a), many "crackpots" (a))
- some are not good at pretending and molding themselves for every committee and every panel
- some are bad at dealing with bullshit and bureaucracies in general
- some have low self-esteem and critically undersell themselves
- some are good at coming up with stuff but are bad at formalizing it and putting it on paper (see next section)
A great many skills are required of a successful scientist in academia (a). This is what Eric Weinstein means when he talks about the perils of pursuit of excellence (a):
So genius and excellence are both worthwhile but they are distinct modalities, and not recognizing that it is a serious problem to take somebody in the genius idiom and to push them into a different idiom, which is to reduce their variance, is going to be very destructive and it’s going to keep us from founding the industries that will allow us to change paradigms and move forward.
Almost all biologists are solo founders. This is probably suboptimal
Many of the listed above ailments would be solved if the researcher in question had a champion or a co-founder who would complement them at whatever they're bad at.
In Jessica Livingston's (co-founder of Y Combinator) Founders at Work (a), Max Levchin (PayPal co-founder) says:
Try to have a good cofounder. I think it’s all about people, and, if you are doing it completely alone, it’s really hard. It’s not impossible, in particular if you are a loner and introverted type, but it’s still really hard ... I had run a company before PayPal, alone, and I thought it was fine. I could deal with it. But, you only can count on energy sources and support sources from yourself. There’s really no one else who you can go to and say, “Hey, this thing is going to fall apart any minute now. What the hell are we going to do?”
The thing that kept us going in the early days was the fact that Peter and I always knew that both of us would not be in a funk together.
YC itself is famous for its preference for teams over individuals. There should be a CEO (visionary, salesman, manager) and a CTO (builder and designer). In light of this, it's puzzling that universities seem virtually never to hire two people to run a lab jointly. There's a single PI who has to excel at being both the CEO and the CTO, and who moreover has to run the lab essentially alone -- everybody else in the lab will be an employee with a negligible amount of responsibility for the long-term future of the lab.
This is very weird and reminds me of Feynman and Dyson. Freeman Dyson:
[Feynman] was working on these problems of quantum electrodynamics, and he had done a great deal which was very beautiful, but which nobody else understood. And he loved to talk, and I loved to listen. So within six months I had pretty well mastered his language. And within one year, I actually was able to translate his ideas into mathematics, so it became more accessible to the world. And as a result, I became famous, but it all happened within about six months.
How many scientists never reach their potential because they fail to find their co-founder and because the way academia is structured today seems to be very unwelcoming of such arrangements?
There's insufficient space for people who just want to be researchers and not managers
Many people who always wanted to become scientists do not pursue, or leave, academia because they see how PIs work and realize that they do not want to just manage people and fundraise/write grants. This is a great tragedy. Very few labs have permanent Research Scientist positions, and for some reason the path "PhD-->Postdoc-->PI" is almost impossible to avoid (there are institutions that experiment with permanent pure-researcher positions (Broad, CZI, Wyss, Calico), but there seem to be very few of them).
There used to be a famous problem in large companies where, if you were a really good software engineer, you had to become a manager to continue advancing your career. These days, I hear, companies have introduced "Individual Contributor" tracks on which you can grow while remaining primarily a technical contributor. This is something that academia still has to figure out.
My intuition is that the biggest unnecessary attrition from academia happens due to the absence of an established "pure researcher" career track.
Peer review is a disaster
In The Double Helix (a), James Watson (co-discoverer of the structure of DNA along with Francis Crick) writes:
[I]t would take two or three years to set up a new research group primarily devoted to using X rays to look at the DNA structure. Moreover, such a decision would create an awkward personal situation. At this time molecular work on DNA in England was, for all practical purposes, the personal property of Maurice Wilkins, a bachelor who worked in London at King’s College.
Several people I talked to describe a similar dynamic and tell me that there are niches dominated by a particular research group that guards that niche almost as its fiefdom. It takes a lot of courage and determination to move in and try to upend the dominant methodologies and research directions of the field.
Peer review exacerbates this dynamic. Peer reviewers in your field are your competitors, who have not themselves solved the problem you claim to be able to solve. They have both a personal and a professional interest (especially if funding is limited) in giving low scores to competing teams' grant applications and in recommending rejection of their journal submissions. Further, since they're experts in the topic of your grant application, reviewers can reject your paper or application, lift your research ideas, and then pursue them themselves. This happens more frequently than you would expect (a).
Further still, committees reviewing grants have all sorts of weird interpersonal dynamics that make funding anything unconventional even more difficult than it would otherwise be, because, while on a committee, people are usually averse to publicly approving of anything that seems weird.
An anonymous redditor writes (a):
In my field virtually 100% of the papers in “top” journals come from the same 5-10 senior authors, and they can just about get away with murder.
The referees completely fail to understand ideas we've adapted to the meanest understanding, they display astonishing gaps in their knowledge, and lots of them can't (as my mother puts it) think their way out of a wet paper bag. Even if you discard these as mere dregs, far too many of the rest seem to miss the point, even points which we've especially labored to sharpen. Really good, valuable referee reports exist, but they are vanishingly rare.
Almost everyone I talked to shared a broadly negative sentiment about peer review.
Nobody agrees on whether big labs are good or bad
Big labs tend to have
- more cross-pollination of ideas in-house and less friction for collaboration
- more unrestricted funds and to therefore be more open to work on whatever is interesting, rather than a concrete project the PI has to finish to get their single grant extended
- this allows for more risk and exploration in big labs
- administrative assistants and more lab technicians, meaning that scientists have more time to do science
On the other hand, scientists are not always good managers. In a big lab, students frequently receive insufficient mentorship from the PI and have to be very independent and look for outside mentors and support (this is worsened by the fact that few PIs care to hire middle managers for big labs). The result is that people are often left to their own devices and sometimes get lost in their research, meaning that, strictly speaking, the amount of waste is increased.
People who have big labs continue to fight for more funding and feel that what they create is unique and must be protected (if you have a big lab, you probably have a lot of people trying to get in, so you feel there are always plenty of opportunities to grow), while people who are barely scraping by suggest strict limits on lab size (and other ways to make the distribution of money more equitable) and point out that big established labs sometimes continue to get funded almost by inertia. People have very strong feelings about this.
Many scientists seem to measure their professional success by the number and size of their grants and do everything they can to maximize their funding. My impression is that simply giving them much more money (like HHMI does) won't change how they spend their time much -- they'll just continue writing grants.
Senior scientists are bound by their students' incentives
As I noted before, in biology, PIs mostly manage people -- all the real work is done by grad students and postdocs. Grad students and postdocs have to graduate and to look for faculty jobs, meaning that however long the PI's horizon is, the people who do the work will be bound by their short-term need to publish. And given that grad students and postdocs typically make about 25-50% of their market rate, however passionate they are about science, they want to graduate and move on as fast as possible.
(This also applies to those who have fellowships, as they always have a pre-determined maximum length, usually 3-5 years)
Universities seem to maximize their profits, with good research being a side-effect
When I started investigating how biology works, I believed that universities spend their own money to run labs and enable research, because this is how it works in economics, where I come from. I soon learned this is not the case.
In biology, it's the scientist who brings money to the university. The university provides affiliation/credibility, space, and administrative assistance. Then, to pay for its costs, it takes a cut ("overhead") from every NIH grant the scientist gets -- usually about 1/3 of the size of the grant.
Because the university's cut is linear in grant size while its actual expenses are sublinear, pretty soon "overhead" turns into "profit" for the university.
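The "linear cut, sublinear costs" point can be sketched numerically. The ~1/3 overhead rate comes from the text above; the square-root cost curve and all dollar figures are invented purely for illustration:

```python
# Sketch of the overhead-vs-costs argument. The ~1/3 overhead rate is
# from the text; the sublinear (square-root) cost curve is a made-up
# assumption chosen only to illustrate the shape of the effect.

def overhead_cut(grant_dollars, rate=1 / 3):
    """University's cut: grows linearly with grant size."""
    return grant_dollars * rate

def actual_costs(grant_dollars):
    """Hypothetical sublinear admin costs (economies of scale)."""
    return 200 * grant_dollars ** 0.5

for grant in [100_000, 1_000_000, 10_000_000]:
    surplus = overhead_cut(grant) - actual_costs(grant)
    print(f"grant ${grant:>10,}: university surplus ${surplus:>12,.0f}")
```

Under these made-up numbers the university loses money servicing a small grant but earns a large surplus on big ones, which is the sense in which "overhead" soon becomes "profit".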
As a result, in hiring decisions, the amount of money a researcher is able to bring in sometimes effectively becomes the measure of the quality of their research.
One of the funnier side-effects of this is that now universities have not only reputational incentive to cover up fraudulent or questionable research by scientists but also a financial incentive to protect scientists who bring in a lot of money. [sidenote 4: Analogous to how journals these days are thinking much harder before retracting well-cited but "questionable" papers -- their impact factor depends on them, after all.]
The anonymous redditor from the section about peer review further notes (a):
In my department, the main standards [for getting tenure] are to a) win two RO1 research grants, then b) renew them both. If you’re bringing in hundreds of thousands in indirects every year, the tenure committee doesn’t much care whether you publish in Nature or Diabetes.
Also see The Uncharity of College: The Big Business Nobody Understands. (a)
Large parts of modern scientific literature are wrong
I am confident that somewhere between 10% and 50% of papers published in good journals are wrong, meaningless, or fraudulent. This figure is based on my expertise in parts of economics, psychology, neuroscience, and genetics. Unless you have invested significant time studying the subject matter, you will have very bad intuition about which papers are good and which are bad. See Replication crisis (a).
Further, my guess is that papers published in the top general journals (Cell, Nature, Science -- "CNS") are more wrong on average than papers in the top journals of specific fields. It seems that CNS chase hype and have rather lax standards on methodology. Plus, chasing hype and being interested only in "large update" papers means that we would expect more of these papers to be wrong just because of regression to the mean.
People I talked to say that the papers CNS publish from their fields of expertise would never get published in more specialized journals and that CNS casually disregard reviewers' criticisms of methodology.
This matches my observations, where sometimes catchy but bad papers that would be rejected on the grounds of poor methodology from a field journal get published in CNS.
A paper (a) published in Frontiers in Human Neuroscience finds that:
methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank
You should reflect on whether a typical study you hear about was selected more for its ability to propagate itself across researchers, news, and social media or for its sound methodology.
Sometimes, virtually entire swaths of the literature turn out to be meaningless. Recent examples from life sciences are candidate gene studies (a) and small-n cognitive neuroscience (a). [sidenote 5: A recent example not from life science is small-n social psychology (a).]
How is it possible that entire research fields turn out to be meaningless? Some people (a) blame the incentive structure of modern academia. I disagree.
Any area of study -- however meaningless -- can produce a body of established facts, practices, and experts. Newton studied (a) theology, alchemy, and physics. 25% of people in the US believe (a) in astrology and some of them go to astrology experts.
If one can convince other people to give them money to study something, they gain a financial motive to tell everybody else about the importance of the subject and to become an "expert" on it. If there's a group of "experts", they will all amplify each other's voices and try to legitimize whatever it is that they study. As a different redditor astutely notes (a):
[N]early every expert relies on the valuation of their expertise for money: therefore every expert has a strong case to oversell their expertise/the state of knowledge in their discipline
(as an aside -- the interaction between personal rivalries in the field and the common interest of its practitioners to inflate its perceived importance ought to produce some truly fascinating interpersonal dynamics)
Finally, some papers are just fraudulent. Elisabeth Bik looked (a) at more than 20,000 papers that contained a particular type of easily examinable picture published in good biology journals between 1995 and 2014 and found that 3.8% -- about 1/25 -- contained "problematic figures" and at least half of these had evidence of deliberate manipulation.
Here's what one of the VCs investing in life sciences I talked to told me:
If you see a picture, this is the best picture the authors had. If you see a statistic, there's probably something wrong with it. 5% of the companies we do due diligence on -- and we only do due diligence on excellent companies -- edit their pictures.
Raising money is very difficult even for famous scientists
Collectively, the scientists I talked to have interacted with more than 20 billionaires, but I'm not aware of any of them raising any significant amount of money as a result. I'm very surprised by this, because if I were a billionaire, I would probably fund at least one Boyden-like scientist in perpetuity, and it seems that there are many more billionaires interested in science than there are Boydens.
Perhaps this is the result of the widespread (and not entirely unfounded) perception that science is broken and that scientists are effectively conning society into giving them money, resulting in people believing that
- an average scientist is not worth giving more money to
- while famous scientists like Church and Boyden don't need financial support (even though, in reality, the reason they're well-funded is that they spend a ton of time fundraising, and not having to write grants every year would greatly increase their research output)
In an interview with Tyler Cowen, Boyden says (a):
For me, it became personal because when we proposed this expansion microscopy technology, where we blow up brain specimens and other specimens a hundred times in volume to map them, people thought it was nonsense. People were skeptical. People hated it. Nine out of my first ten grants that I wrote on it were rejected.
If it weren’t for the Open Philanthropy Project that heard about our struggles to get this project funded — through, again, a set of links that were, as far as I can tell, largely luck driven — maybe our group would have been out of business. But they came through and gave us a major gift, and that kept us going.
Conclusion
If you take away one thing from this essay, let it be this: academia has a lot of problems but it's less broken than it seems from the outside.
Everything is much more complicated than everyone thinks and it's very easy to try to improve something only to have second-order effects mess everything up. Because of all of this, I'm wary of proposing any drastic changes to how grants are distributed, how researchers are trained, and so on.
Instead, I have several smaller-scale recommendations:
- foundations should try to be more like venture funds and actively seek out researchers who need more money, instead of just responding to applications, and tailor their funding to individual scientists more
- foundations and NIH should experiment more with how they give and should not be afraid to spend significant resources on evaluation (i.e. hire economists and do good RCTs)
- for example, by implementing double blind application reviews or using golden tickets (a)
- or by trying to fund small teams (like Abudayyeh and Gootenberg (a))
- foundations and philanthropists with $195,000 of free money lying around should consider supporting Life Sciences Research Foundation. LSRF is a non-profit that has been awarding post-doctoral fellowships for more than 35 years. They have a stellar track record, but each year are only able to fund about half of the roughly 50 people they believe to be deserving of their fellowships. They depend entirely on external donors to fund individual fellowships. Their current donors include HHMI, Open Philanthropy, and Amgen.
- you might like the postdoc you end up supporting and become their supporter in the future
- foundations and philanthropists should not be afraid to commit themselves to specific scientists and fund them long-term
Finally, if you have upwards of $50,000 available for charitable giving, consider reaching out to a scientist you like and seeing if you can help them financially. They will be grateful for the support and will probably be happy to grab an occasional dinner with you and tell you all about the latest developments in their field.
Relevance to the EA Community
I'm not sure whether this needs to be justified, but trying to [safely] speed up scientific progress seems to be one of the highest-leverage activities in terms of improving global well-being. The EA community can help uniquely by e.g. setting up an EA Fund dedicated to funding science, providing short-term or long-term support to members of the community pursuing an academic career track, or identifying scientists outside the community whose goals are aligned with its own. As far as I'm aware, there are currently no EA organisations focused on funding science (Open Philanthropy is focused on science related to existential or global health risks), and while there are a lot of private foundations trying to correct the misallocation of resources, they end up overlooking quite a few scientists who don't fit their cleanly defined funding theses.
Acknowledgements
I would like to thank (in reverse alphabetic order) Andrew Zuckerman, Guy Wilson, Brian Timar, Sasha Targ, Nabeel Qureshi, Manjari Narayan, Adam Marblestone, Anastasia Kuptsova, Bonnie Kavoussi, William Eden.
Thanks to Emergent Ventures (a) for financial support.
Appendix
Risk-aversion might be overstated
I think that it might be the case that the degree of risk-aversion in academia is overstated. My research suggests that those who want to pursue high-risk research usually find ways to do it, while those who don't want to, blame the system.
Grad students and postdocs in particular are usually able to pursue 2-3 projects simultaneously and can typically work on 1 project with a high chance of success and 1-2 projects with a low chance of success. It does definitely seem that you cannot pursue one project full-time and have that project be high-risk, because, if it fails completely, you'll have no publications, and, without publications, you'll simply not be able to get a job and stay in academia.
The alternative explanation of my observations about risk-taking is survivorship bias: I only see those for whom taking risks panned out. I can't figure out how much survivorship bias is present here, which is why I do not make any firm conclusions about the degree of risk-aversion in academia.
Studying people and animals is really hard
My IRB Nightmare (a) by Scott Alexander:
I feel like a study that realistically could have been done by one person in a couple of hours got dragged out into hundreds of hours of paperwork hell for an entire team of miserable doctors. I think its scientific integrity was screwed up by stupid requirements like the one about breaking blinding, and the patients involved were put through unnecessary trouble by being forced to sign endless consent forms screaming to them about nonexistent risks.
I feel like I was dragged almost to the point of needing to be in a psychiatric hospital myself, while my colleagues who just used the bipolar screening test – without making the mistake of trying to check if it works – continue to do so without anybody questioning them or giving them the slightest bit of aggravation.
I feel like some scientists do amazingly crappy studies that couldn’t possibly prove anything, but get away with it because they have a well-funded team of clerks and secretaries who handle the paperwork for them. And that I, who was trying to do everything right, got ground down with so many pointless security-theater-style regulations that I’m never going to be able to do the research I would need to show they’re wrong.
See discussion (a) on Andrew Gelman's blog + IRB nightmares (a). Several people I talked to say very similar things about IRB.
More funding ideas
Funding undergrads
I know several cases of brilliant undergrads who did not pursue a scientific career because of external circumstances, but for whom giving, say, $50k would allow them to deal with those circumstances, fund a period of scientific exploration, and let them pursue the academic track.
High-quality clinical trials of generic medicines, nootropics, etc.
See An Iron Curtain Has Descended Upon Psychopharmacology (a) by Scott Alexander
Large-scale interventions
Mostly just fun thought experiments:
- forbid PIs from attaching their names to papers they didn't substantively contribute to and see how this changes their incentive structures
- e.g. in economics, advisors rarely put their name on their students' papers
Peer review
"So who decides which people get the grants? It's their peers, who are all working on exactly the same things that everybody is working on. And if you submit a proposal that says "I'm going to go off and work on this crazy idea, and maybe there's a one in a thousand chance that I'll discover some of the secrets of the universe, and a 99.9% chance that I'll come up with bubkes," you get turned down." (a)
"A researcher made up whole trials, his co-authors at the very least were complicit by putting their names on his papers without checking anything, peer review failed to spot numerous problems with the studies, journals failed to react to red flags raised after publication" (a)
Miscellanea
“while switching topics may be easy in theory, certain fields are jealously guarded by more established PIs” (a)
"A European Research Council report suggests 79% of projects they fund “achieved a major scientific advance”, & only 1% make no contribution. Also, that they fund mostly “high risk” work" (a)
Chargaff (a famous biochemist) about meeting future discoverers of DNA structure:
"Once upon a time I ran across an interesting RFA from the NIH that seemed highly targeted for one particular lab. Oh, don't get me wrong, many of them do seem this way. But this one was particularly.....specific. The funding was generous, we're talking BigMech territory here." (a)
"I suspect that peer review actually causes rather than mitigates many of the “troubling trends” recently identified by @zacharylipton and Jacob Steinhardt ... It’s very common for reviewers to read empirical papers and complain that there is no “theory”. But they don’t ask for theory to address any specific question. I think they are just looking for an easy reason to reject—-they skim and don’t see scary equations. … Reviewers seem to hate “science” papers, but it’s possible to sneak science in the door if add some token amount of new method engineering" (a)
“I spoke recently to a member of the panel that awarded me my first research grant on quantum computers (1985). He said it was a close-run thing. I asked if I'd have got it under today's criteria. He said: no chance; basically I could tick none of the boxes.” (a)
"Ioannidis and Trepanowski say that without RCTs, nutrition science can never credibly answer important questions about nutrition. That Harvard department's response is that RCTs are essentially impossible because you cannot get people to meaningfully change their diets over long periods of time. It seems clear to me that both sides are correct." (a)
90% of all claims about the problems with medical studies are wrong (a)
Very interesting post! I have worked in life science up to the postdoc level and think that it is generally a reasonable summary of how life sciences research works (disclosure: Guzey interviewed me for this study).
One question I have is how generalizable this description is geographically and across universities. Based on the universities/funders referenced, I'd assume you're thinking about Tier 1 research universities in the US. But did your interviewee demographics suggest this could be the situation more broadly?
A few other comments on some of the points:
Role of PIs
Agreed that senior PIs with large labs tend not to do very much bench work themselves. However, they aren't solely managing and writing grants - I think one of the most important things PIs do is knowledge synthesis through writing literature reviews. I haven't really met any postdocs with the depth and breadth of knowledge of their lab head, which allows the latter to both provide a high-level summary of their field in reviews and also propose new ways forward in their grants.
A counterpoint I've come across is in mixed labs run by a PI with a computational background, who has postdocs and PhDs doing lab work while he works on using their biological results for computational modelling. From my perspective, these types of labs seem to function quite well, as the PI usually relies on people coming into the lab being well trained in the biological assays they'll use, but then teaches them computational techniques that they end up using themselves by the end of their project.
Peer review
One of the big drawbacks of peer review is the hugely variable quality of reviews that are provided. As an example simply in terms of the level of detail provided, I have had comments of one paragraph and three pages for the same article.
I think a key reason for this is there isn't really any standardized format or set of expectations for reviews, nor is there much training or feedback for reviewers. One thought I've had is that paying peer-reviewers would allow journals to enforce both review consistency and quality - although publishers have such large profit margins that this could be feasible, they have no incentive to do so as long as scientists accept the status quo. In the absence of paid peer review, I think that disclosing reviewer names and comments helps prevent 'niche guarding' and encourages reviewers to provide a useful and honest review (eLife does this currently; not sure if any other journals do so).
Permanent researchers
Agreed that letting postdocs move into staff scientist/researcher positions would be helpful - this has been discussed a bit in the Nature and Science career sections over the last few years (such as here). I've usually heard from postdocs who moved into staff scientist or lab/facility manager positions that they wanted to stop relying on grants for their employment and to get some job stability. But some then later regretted the move after finding the positions didn't have many options for career advancement relative to the professor track. The staff scientist role is a relatively new academic position (although it has been around for a long time in government and private research labs) that doesn't yet have a lot of consistency between universities - it would probably help to have more discussion and even formalize the role's expectations before a lot of people move into it.
Solo founders
This is an interesting observation and I hadn't thought about the individual lab head model in this way. I'd actually like to take this a step further and say that academia has a habit of breaking up good pairs of biologists. How so? In a few cases, I've seen two senior postdocs, or a postdoc and a junior PI (so essentially two researchers quite closely matched in their level of experience and with complementary skills), work really well together and produce outstanding results over a few years, which will usually lead to one of the duo getting a permanent position. The other may be able to continue on as a postdoc for a while, but as their research speciality will overlap heavily with their colleague's field, and it's unlikely that the hiring/promoting institution will open another position in a similar area for a few years, the postdoc will probably have to move elsewhere to continue their career. Although the two may continue to collaborate, the second person to be hired often starts working on different topics to show their intellectual independence (although the new topics may be less impactful than what they were working on as a pair). I only know of a few cases where duos separated in this way and I haven't really followed their outcomes, but I'd assume that the productivity of both researchers declined afterwards. Allowing one to move into a staff researcher position would help in this respect.
Big labs vs. small labs
Another option is a cluster of small labs working on a similar theme (I was in one in Lund that worked on Vision, another in the department worked on Pheromones). This seems to be more common in Northern Europe where high salaries tend to limit the group sizes that are possible (often PI, 1-2 postdocs, 1-2 PhDs). Clusters seemed to have the benefits noted for larger labs, but meant there were a lot of PIs around to mentor students, and also allowed the cost of lab facilities and support staff to be shared.
Research niches
Territorial PIs seem quite common, and as noted, the publication/grant review process allows them to be quite effective at delaying/blocking and even stealing ideas that encroach on their topic. A link was recently posted here to an economics paper that even suggested new talent entering a field after the death of a gatekeeping PI could speed up research progress. If it seems that a gatekeeping PI is holding back research in an important field, I think that a confrontational grantmaking strategy could be used - whereby a grant agency offers to fund research on the topic but explicitly excludes the PI and his existing collaborators from applying and reviewing proposals.
Differing risk-aversion between PIs and students
Although a PI may seem risk-loving, he benefits from being able to diversify his risk across all of his students and may only need one to get a great result to keep the funding coming. He's unlikely to get all of his students working together on one hard problem, just like a student can't spend all his time on a high-risk problem.
I tend to think that developing the ability to judge a project's risk is an important skill during a PhD, and a good supervisor should be able to make sure the student has at least one 'safe' project that they can write up. Realistically, it is possible to recover during a postdoc from a PhD where nothing worked well, but it is a setback (particularly in applying for ECR fellowships).
I feel that postdocs are possibly where the highest risk projects get taken on at the individual level, both because they have the experience to pick an ambitious but achievable goal, and also because they want to publish something great to have a good chance at a faculty position.
Another comment about the failings of peer-review and convoluted ways to circumvent them. It's quite common that reviewers will suggest extra experiments, and often these can improve the quality of the paper.
However, a Professor in Cognitive Psychology once told me that reviewers in his field seem to feel obliged to suggest extra experiments and almost always do. Even if the experiments in the paper are already quite complete, the reviewer will usually suggest an unnecessary control or a tangential experiment. So this Professor's strategy to speed things up was to do, but then leave out, a key control experiment when he wrote up his papers. Reviewers would then almost always pick up on this and only request this additional experiment, and so then he could easily include it and resubmit quickly.
I think this description generally falls in line with what I've experienced and heard secondhand and is broadly true. However, there are some differences between my impression of it and yours. (But it sounds like you've collected more accounts, more systematically, and I've actually only gone up to the M.A. level in grad school, so I'm leaning towards trusting your aggregate)
I think we can get at better ways than peer review, but also, don't forget that people will sort of inevitably have Feelings about getting peer reviewed, especially if the review is unfavorable, and this might bias them to say that it's unfair or broken. I wouldn't expect peer review is particularly better or worse than what you'd expect from what is basically a group of people with some knowledge of a topic and some personal investment in the matter having a discussion - it can certainly be a space for pettiness, both by the reviewer and from the reviewed, as well as a space for legitimate discussion.
I think this is sometimes true, but I would not consider this the default state of affairs. I think some, but not all, grad students and postdocs can conceive of and execute a good project from start to finish (more so in top universities). However, I think most successful PIs are constantly running projects of their own as well. Moreover, a lot of grad students and postdocs are running projects that either the PI came up with, or independently created projects that are ultimately a small permutation within a larger framework that the PI came up with. I do think it sometimes happens that some people believe they are doing all the work and sort of forget the degree of training and underestimate how much the PI is behind the scenes.
My impression was actually that grant writing, management, and setting up infrastructure is the bulk of Doing Science, properly understood. (Whereas, I get the impression that this write up sort of frames it as some sort of side show to the Real Work of Doing Science). With "fundraising", the writer of the grant is the one who has to engage in the big picture thinking, make the pitch, and plan the details to a level of rigor sufficient to satisfy an external body. With "infrastructure", one must set up the lab protocols so that they're actually measuring what they are meant to. It's easy to do this wrong, and what's worse, it's easy to do this wrong and not even realize you are doing it wrong and have those mistakes make it all the way up to a nonsensical and wrong publication. I think there is a level of fairly deep expertise involved in setting up protocols. And "management" in this context also involves a lot of teaching people skills and concepts, including sometimes a fair bit of hand-holding during the process of publishing papers (students' first drafts aren't always great, even if the student is very good).
Very true in one sense - I agree that academia is very forgiving about credentials and gpa relative to other forms of post-graduate education, and people are definitely excited and responsive to being cold contacted by motivated students who will do their own projects. However, keep in mind that if you're planning to work on whatever you want, rather than your adviser's experiments, you will have more trouble fully utilizing the adviser's management/infrastructure/expertise and to a lesser extent grants.
For a unique and individual project, you might have to build some of your infrastructure on your own. This means things may take much longer and are more likely not to work the first few times - all of which is a wonderful learning experience, but this does not always align with the incentive of publishing papers and graduating quickly. I think some fields (especially the ones closer to math) have the sort of "pure researcher" track you have in mind, but it's rare in the social and biological sciences, in part because the most needed people are in fact those with scientific expertise who can train and manage a team and build infrastructure/protocol as well as fundraise and set an agenda - I think it would be tough to realistically delegate this to anyone who doesn't know the science.
(But - again, this is only my impression from doing a masters and from conversations I've had with other people. Getting a sense of a whole field isn't really easy and I imagine different regions and so on are very different.)
Thanks so much for the feedback! Especially the point about writing grants being real science. I completely agree and I should add this in the post -- planning and thinking in detail about your research and expectations in the process of writing a grant application is indeed very much science.
Added this quote to the Appendix:
This is really well written. I especially liked how impressed I was from 2018's progress, only to realize that was only January's work! Thanks for writing and posting this article.
Very enlightening and useful post for understanding not only life sciences, but other areas of science funding as well.
The raising money for famous scientists part seems at odds with some of the optimism in the early sections. Any further comment on this?
I'm optimistic because the impression I had was that everything is just terrible. What I ended up concluding is that things are okay but there are still a lot of problems. The fact that even famous scientists have troubles raising money for interesting projects is one such problem.
Just wanted to say, I appreciated this article immensely (both in topic choice and execution). I found it well-distilled, well-researched, and worth reading for the citations and other articles you linked alone. 10/10, will read again.
Thank you!
Thanks - I found this quite useful. But I think in order to evaluate the claim that the life sciences are not slowing down, it would be good to have some long time series. I think it would be productive to engage with the research that looks at productivity per researcher. This may be difficult to quantify fairly, because individual papers may have gotten less impactful. For instance, people have claimed that since per capita GDP has grown more slowly in recent decades in developed countries, and yet we have many more people doing research, productivity per researcher is much lower now. But if you look at specific capabilities, like genomes sequenced, you would say that productivity per person has gotten much higher because of the strong exponential growth.
I see this study was funded by Emergent Ventures (at least in part). I recently applied for funding from them for my own project to create a sort of 'grassroots' interdisciplinary think tank in a rural area where science and other academic subjects are scarce. I see it as a low-budget version of the Santa Fe Institute, Princeton's Institute for Advanced Study (where Einstein and Godel worked), or Canada's Perimeter Institute. But my project is not aimed at the 'cream of the crop' nor at the kind of fancy (or what I call 'gee whiz') science/arts done at places like MIT's Media Lab. It would have computers and the WWW, but likely not supercomputers, 3D printers, particle accelerators, or astronomical observatories.
I also have some of my own research projects (in biology/ecology/behavioral genetics, economics, applied math, logic etc.). I can relate to Ramanujan (whose biography I read and some of whose theorems I refer to in some of my much less advanced applied math research)---his results were handwritten, not published (he just sent letters to mathematicians, only one of whom responded---G H Hardy, one of the top ones of his era). I have found it impossible to really finish anything, or get it into publishable form (even my resume). (I did submit a few things various places but missed deadlines, had improper formatting etc.---could barely figure out how to get it into PDF document form.) I long ago decided to go my own way due to health and other issues, so I stayed out of PhD programs---probably a bad decision. I did get into one PhD program but got cold feet and didn't show up. (Not everyone is a Darwin, Ramanujan, Einstein, or J Barbour who can be independent until they get recognized.)
Also, my stuff in many ways is redundant, somewhat out of the mainstream in approach (I'm basically trying to find a very simple math formalism that can approximate what AI does with computers---ie a 'Fermi calculation', a back-of-the-envelope calculation which can sort of generically estimate what a supercomputer will find near exactly), and not as useful as other approaches which are in the literature.
As far as funding, I do think places like NSF, NIH, NIMH, SFI, AAAS, and even universities etc. should have programs and provide support--both financial and technical--for small projects like what I have proposed. Especially given the somewhat low level of cultural and scientific literacy in this country and the world. (The big institutions will build new libraries, labs, and arts places (not to mention sports facilities, and fund conference travel), while outside them it's a sort of intellectual desert (like the 'food deserts' which exist in areas I know---nothing but liquor, junk food, and convenience stores (eg 'family dollar')). It's no different than funding a local library and giving it some books and computers. It's also possible, given the large number of university graduates at all levels, that some of these could be similar to community health workers (as opposed to doctors) and provide advice and help to people working on what are basically 'citizen science' projects. (Citizen science already exists, but all of what I have seen is professional scientists in institutions asking for volunteers to work on their projects---they never ask if any citizen might want to work on their own project as well, usually because they are considered unqualified.)
The above article primarily focuses on biology (which I follow a bit---but I mostly focus on theoretical and mathematical biology---while biotech and molecular biology seem to be the biggest fields. PNAS (Proceedings of the Natl Academy of Science), if I recall from when I used to read it, had maybe 10-30 pages devoted to the fields I studied, while something like 1000 pages went to laboratory-based biology.)
My view is biology research is moving along at the same pace it always has; and same with physics (computer science moved faster, but that was because it was a new field theoretically and technologically).
I personally think some fields in biology are possibly overfunded and others are neglected. Some of it reminds me of Baroque architecture or current megaprojects in architecture (eg the new World Trade Center, mile-high skyscrapers in the Mideast, when there is squalor nearby).
I also read psychology and social science papers, and some history, philosophy, and political science. A lot of these to me seem to be a bit like the work of smart elementary school kids--not particularly profound or important (eg there are already, say, 50 books on Lincoln or Reagan but someone writes another one). Psychology papers often seem to deal with very trivial things--eg someone observes human shopping behavior in a store and comes up with a theory for that.
I don't think a lot of academic work is much more profound than what I read on non-academic blogs, or even hear in conversation--it's just published in professional journals, and leads to credentials and cred.
While the highly technical research (eg mathematical) is not trivial, I sometimes think some of it is redundant, and also when people learn a few techniques they can publish many papers on the same theme, while neglecting approaches based on other techniques and ideas. (It's also known that it's common practice for people to break up one paper into 4 smaller ones, and publish them in different places--which makes them hard to find at times.) But it's hard for me to judge (due to lack of competence) whether a technical paper is just an exercise at a much more advanced level but analogous to all the papers produced by a 6th grade class.
I actually sort of support a UBI or basic income (which can be phrased as a 'Guaranteed Job' as in the Green New Deal). So everyone, for example, can get funded to do scientific research, art, etc. like a professional, but at a basic level. If it turns out they produce something of high value, then they get paid for that. I read a lot of papers and see books by academics which do not appear to be of higher value than what I do (and some appear to have very low value, unless they are viewed as poetry, creative writing, theology, or mythology).
Also, I think some people who do not get funded nor accepted into academic programs are rejected (as noted above) partly because people find them too eccentric, different, and/or unpleasant. I remember when I was checking into some programs, I was turned off because I went there to discuss science, and all the people were talking about who is going to some conference, or how some (possibly trivial) paper they published is a 'hit' and 'talk of the town' (eg they counted how many plastic bags are in the sea, a necessary chore but not too interesting, or how their fallacious paper on social biology or psychology is generating a lot of discussion and might even lead to a book contract--because it repeats popular myths). Also, if your research interests (or paradigm, and even social and political views) conflict with the ones held by prominent people in various institutions, in general you may as well hit the door. It's not like Galileo or Darwin were recognized and canonized as Saints.
Also, especially in my academic field, a lot of research is torture both mentally and physically (ie in my case, endless debugging of computer programs, which were on interesting problems, but basically discussing the theory of why we were even writing these programs was not encouraged. It was like being told you have to fight a war without being allowed to discuss why one is fighting the war. They say we can discuss that when we have time---after we win. This is also discouraged because if you discuss the theory behind the war or the computer program, you may realize that it is not the optimal approach.)
p.s. I just read the article (and comments) in more detail. It's 'spot on', though it primarily appears to be focused on lab researchers in biology. (It's also a fairly long and detailed article with many references.) The field of biology I was in is much smaller, though I think just about every biology department in a research university has a theoretical/mathematical biologist, or a few---especially in ecology and genetics. However, a lot of people have never even heard of this area, and some biologists think it mostly irrelevant.
While I cannot claim to be a genius (and almost certainly am not one), every single 'bad habit' or 'trait' described in the section 'nobody cares if you are a genius' applies to me. Sometimes it only takes me one sentence to get 'downvoted' (or banned) by the people I'm around. Sometimes this is because I grew up around, and often am around, people who speak 'dialect' (or non-PC speech, though my values are basically PC), and I say something that others find offensive. Also, among some academics (and musicians, since that is another of my interests), if you mention the name of a scientist who is a sort of arch-rival of the scientist you are talking to, that's often the end of the conversation (the same is true in music: you had better not say you like some musician that the musician you are talking to hates).
Last, to a certain extent it's possible my own 'research program' is partly meaningless or useless. I know some projects I worked on but never published (in biology and economics) were basically correct, and the problems were generating many confused papers at the time (and actually still do); many people have since published papers making my points to a large degree (and sometimes with more technical detail; my math skills are not what they should be for me to say what I want to say). My current project (which is on multiobjective optimization) may be a 'tangent' to the main work in that area (most of which involves algorithms), so it may just be a curiosity. (I've seen papers in math and physics with titles like 'my failed proof of the Riemann conjecture', or of Fermat's last theorem, or the four-color problem. There are whole books on such failed proofs.)
Another example of 'teams' (e.g. Feynman and Dyson) is Einstein, who needed someone expert in differential geometry, and found one. Ramanujan needed someone who knew how to turn his notes into accepted mathematics and could deal with bureaucracy (he found G. H. Hardy to help write it up).
I need someone who knows how to do some basic computer work (e.g. show me how to use Google Docs, create PDFs, etc.) and ideally someone who knows a bit more number theory and computer programming than me (I learned C++, but Python/R/NetLogo may be more relevant, and I'm too lazy and incompetent to learn them). While theoretical biology projects were mathematical, eventually you had to put them on a computer. My current project is like that.
From an EA view, one is really talking about 'transaction costs' and 'barriers and bounds to rationality' (the title of a book by D. Foley and P. Albin). If one were a good mathematician, this could be phrased in those terms (Foley has affiliations with SFI, and a few people there are familiar with some of the math formalism required).
While I applied to Tyler Cowen's grant program (visions) and read some of his blog, papers, and other things from the Mercatus Center (one person there collaborates with a person at SFI whom I contacted but got no reply), I think my politics means they would never fund me. (Same with Templeton, and even some more 'left-leaning' organizations like IPS and ones in economics. They do fund redundant, incomplete, second-rate work so long as the people have credentials and can pack their books and papers with a lot of data (numbers) that are basically meaningless to anyone who is not fluent in things like the masses of all the atomic elements and elementary particles, or the exchange rates between dollars, pounds, euros, yen, and bitcoin.)
If I were organized, one project I have is to write a reply to a paper from Mercatus.