note: I think this applies much less, or even not at all, in domains where you’re getting tight feedback on your models and have to take actions based on them which you’re then evaluated on.
I think there’s a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work that have theoretical models that are intuitively appealing and have anecdotal evidence or non-systematic observational evidence to support them. The sorts of things I’m thinking about are:
- The evaporative cooling model of communities
- My friend’s argument that community builders shouldn't spend [edit: most of] their time talking to people they consider less sharp than them because it’ll harm their epistemics
- The claim that the current EA community is selecting for uncritical people
- Asking people explicitly if they’re altruistic will just select for people who are good liars (from a person doing selection for admittance to an EA thing)
- The toxoplasma of rage
- Max Tegmark’s model of nuclear war
- John Wentworth’s post on takeoff speeds
I think this is a really bad epistemology for thinking about social phenomena.
Here are some examples of arguments I could make that we know are wrong but seem reasonable based on arguments some people find intuitive and observational evidence:
- Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers.
- Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup, so if their costs increase because the price of loans goes up, firms will increase prices, which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he’d charge more.
- Letting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals, because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try to filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive, but I thought it was worth including because of how common selection-based arguments are in the EA and rationality communities. I'm also not taking aim at arguments that are intuitively obvious, but rather at arguments that those making them find intuitively appealing, even if they're counterintuitive in some way. I.e. some people think that adverse selection is a common and powerful force even though adverse selection is a counterintuitive concept.]
- Cash transfers increase poverty, or at least are unlikely to reduce it more than in-kind transfers or job training. We know that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol, meaning they’re more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects for people who make poor financial decisions, so giving people cash gives them greater ability to take out bad loans because they have more collateral.
- Opening up a country to immigration increases the unemployment of native-born workers. If there are more workers in a country then it becomes harder to find a job so unemployment goes up.
- Building more houses increases house prices. The value of housing is driven by agglomeration effects. When you build more housing, agglomeration effects increase, increasing the value of the housing, thereby increasing house prices. House building also drives low-income people out of neighbourhoods. When you see new housing being built in big cities, it’s often expensive flats that low-income people won’t be able to afford. Therefore, if you don’t have restrictions on the ability to build housing, low-income people won’t be able to live in cities anymore.
Many people find these arguments very intuitively appealing - it was the consensus opinion amongst economists until the 2000s that having a minimum wage did increase unemployment. But we know that all of these arguments are wrong. I think all of the arguments I listed as examples of intuitively appealing arguments made in the EA and rationality communities have much less evidence behind them than the claim that having a minimum wage increases unemployment. The evidence for the minimum wage increasing unemployment was both theoretical - standard supply and demand, a very successful theory, says that this is what happens - and statistical - you can do regressions showing minimum wages are associated with higher unemployment. But it turned out to be wrong, because social science is really hard and our intuitions are often wrong.
I'm pretty sceptical of macroeconomic theory. I think we mostly don't understand how inflation works, DSGE models (the forefront of macroeconomic theory) mostly don't have very good predictive power, and we don't really understand how economic growth works, for instance. So even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data, I'll shrug and think it's probably wrong.
We have thousands of macro datapoints and tens (?) of millions of micro datapoints, and macro models are actively used by commercial and central banks, so they get actual feedback on their predictions - and they're still not very good.
This is a standard of evidence way way higher than what’s used to evaluate a lot of the intuitive ideas that people have in EA, especially about community building.
All the examples I gave of intuitively appealing ideas are probably (>80%) wrong, and all come from economics - all but one from microeconomics. This is in part because my training is as an economist, so economics examples are what come to mind. I think it’s probably also because it takes the rigour of modern microeconomics - large datasets with high-quality causal inference - to establish with confidence that ideas are wrong, and even so I have something like 15% credence that any minimum wage meaningfully increases unemployment. It's often intractable to do high-quality causal inference for the questions in which EAs are interested, but this means that we should have much more uncertainty about our models, rather than lowering the standards of evidence we need to believe something.
My argument is that if we have these quite high levels of uncertainty even for the question of whether or not having a minimum wage increases unemployment, maybe the social science question which has had the most empirical firepower thrown at it, we should be way, way more sceptical of intuitive observational models of social phenomena we come up with.
This comment is both in response to this post and in part to a previous comment thread (linked below), as the continued discussion seemed more relevant here than in the evaporative cooling model post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D
To start out:
For this post/general:
What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely or largely without questioning them, as opposed to being aware that all models have their limitations and that this should influence how they are applied. And of course 'people' is in itself a broad category, with some people being more or less questioning/deferential, or more or less likely to jump to conclusions. What I am reading here is a suggestion of 'we should listen less to these models without question', without knowing who is doing that and how frequently to begin with.
Out of the examples given, the minimum wage one was strong (given that there was a lot of debate about this) and I would count the immigration one as a valid example (people again have argued this, but often in a very politically charged way, such that how intuitive it is depends on the political opinions of the person reading), but many of the other ones seemed less intuitive or did not follow, perhaps to the point of being a straw man.
I do believe you may be able to convince some people of any one of those arguments and make it be intuitive to them, if the population you are looking at is, for example, a typical person on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.
There does appear to be a fair bit of deferral within EA, and some people do accept the thoughts of certain people within the community without doing much of their own evaluation (but given this is getting quite long, I'll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative reasoning, not qualitative reasoning or accepting social models blindly. In the case of 'evaporative cooling', that EA Forum post read more like 'this may be/I think it is likely to be the case', not 'I have complete and strong belief that this is the case'.
"even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I'll shrug and eh probably wrong." Read it first, I hope. Because that sounds like more of a soldier than a scout mindset, to use the EA terminology.
That a model does not apply in every situation does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the same way as you can model the laws of physics: human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well - and perhaps not even then. But generally, trying to understand how a population in general could react should be done - after all, if you actually want to implement change, it is populations that you need to convince.
I agree with 'do not assume these models are right on the outset', that makes sense. But I also think it is unhelpful and potentially harmful to go in with the strong assumption that the model will be wrong, without knowing much about it. Because not being open to potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives of people with relevant expertise (and different to that of many people within EA) will not be heard.
What I think Nathan Beard is trying to say is that EAs/LWers give way too much credence to models that are intuitively plausible but not systematically tested, and generally assume way too much usefulness of an average social science concept or paper, let alone an intuition.
And given just how hard it is to make a useful social science model in economics, arguably one of the most well-evidenced sciences, I think this is the case.
And I think this critique is basically right, but I think it's still worth funding, as long as we drastically lower our expectations of the average usefulness of the social sciences.