This is a special post for quick takes by FJehn. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Are the potato famine and the revolutions of 1848 an example for the fragility of the modern world?

Recently I came across the potato famine and how it contributed to or even caused the revolutions of 1848. I wondered if this is a good example of how cascading failures lead from a natural event to an agricultural crisis, to an economic crisis, to a financial crisis, and finally to a political crisis.

 So what happened?

In the 19th century potatoes became a staple crop in Europe, because they were easy to plant and harvest, cheap, and filled you up quite nicely. However, there were very few varieties at the time, and this made them vulnerable to disease. In 1845 a new potato disease (late blight) spread all over Europe and destroyed much of the yearly harvest. This was especially a problem in Ireland (because people there relied almost exclusively on potatoes), but most parts of Central Europe were at least somewhat affected. This basically left Europe without potatoes until new varieties could be developed.

In 1846 bad weather also hurt the wheat and rye harvest. This led to rising prices all over Central Europe, as all major food crops now had considerably lower yields. The food shortages forced people to slaughter most of their livestock, as they did not have any feed for it. But because so many people slaughtered their animals at the same time, meat prices plummeted (though they were still way too high for poor people).

This agricultural crisis led to an economic crisis, as everybody had to spend most of their money on food. Therefore, there wasn't anything left over to buy other consumer goods. This in turn increased unemployment considerably, as many people in the consumer goods industry lost their jobs. This was a problem especially in cities, as many people had moved there in the preceding decades and could not find any jobs to sustain themselves.

So the agricultural crisis of 1845 and 1846 was followed by an economic crisis in 1846 and 1847, and then by a financial crisis in 1847. The financial crisis was mainly driven by the bursting of a bubble around building railroads. In the 1830s and 1840s many railroad projects were started, but most were crap. The bubble burst in 1847 after states started to raise interest rates to consolidate their finances during the economic crisis. In addition, the food crisis diverted funds away from the railroads, which exposed the fact that most of the projects could only continue if they received more money continuously. When this did not happen, they crashed, and with them everyone who had invested their money in them. This again led to more unemployment as the railroad companies closed, and due to a lack of available loans many smaller businesses went bankrupt, making even more people lose their jobs.

So in 1848 you had a crashed economy, a debt crisis, lingering famine and massive unemployment. Many people all over Europe had faced several years of fear, hardship and poverty. They looked for someone to blame, and this brought many of them into politics. Finally, in 1848, we see revolutions in most states of Central Europe. Some succeeded (France), while others failed (Germany). Still, it seems like a new potato disease basically started a chain of events that led to a drastic change of the political landscape in Central Europe.

This quick take was mainly inspired by the Revolutions podcast: https://thehistoryofrome.typepad.com/revolutions_podcast/2017/08/707-the-hungry-forties-.html

Knowledge is fractal. Every time I wander into a new field of knowledge I am fascinated that it has its own tales, language, heroes, secrets and traditions. However, it does not stop at that scale. Every field has its own subfields, and in each of them we find exactly the same thing. You could spend your whole life trying to understand a field and still be completely surprised by what you find in its subfields if you venture out. And there you would just find more subfields of the subfield you started exploring. It seemingly never ends. Personally, I like this, as it means I will never run out of fascinating things to find, but I wonder how long humanity can continue to amass knowledge before we get lost in the depths of our own fractal. Or will we just find new ways to cope with that?

Is it an important research topic to explore the availability of flammable materials in major NATO cities to assess the effects of nuclear war?

Today I read "Examining the Climate Effects of a Regional Nuclear Weapons Exchange Using a Multiscale Atmospheric Modeling Approach". It models the effect of a regional nuclear war between Pakistan and India. One quote stood out to me:

"The assumed 16 g cm−2 fuel loading and 100% burn rate for the fire is actually uncertain, and in fact, Reisner et al. (2018) assume only ∼1 g cm−2 fuel loading. Reisner et al. (2018) points out that Indian and Pakistani cities are built of concrete, and there fore, firestorms that erupted in fuel-rich Hiroshima and Hamburg would not occur. Our simulations, using 1 g cm−2, cause no global radiative forcing, because the BC emitted into the lower and middle troposphere is quickly removed by EAM."

This means the effects of a nuclear war are mainly determined by how bad the firestorms become, and this in turn is determined by how much flammable material is available in the bombed cities. However, the plausible range for this parameter seems to span everything from "there is too little fuel to cause a nuclear winter" to "nuclear winter is basically certain". This seems like a pretty big research gap to me.
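To get a feel for how much the fuel-loading assumption matters, here is a quick back-of-the-envelope sketch. The two fuel loadings (16 g/cm² and ~1 g/cm²) come from the papers quoted above; the burned city area and 100% burn rate are illustrative assumptions of mine, not values I am taking from either study.

```python
# Rough scale of the fuel-loading uncertainty discussed above.
# Fuel loadings are from the quoted papers; the city area and burn
# rate are illustrative assumptions for this sketch only.

def burned_fuel_megatons(fuel_loading_g_per_cm2, area_km2, burn_rate=1.0):
    """Total fuel burned in megatons (1 Mt = 1e12 g)."""
    cm2_per_km2 = 1e10  # 1 km^2 = 10^10 cm^2
    grams = fuel_loading_g_per_cm2 * cm2_per_km2 * area_km2 * burn_rate
    return grams / 1e12

area = 100  # assumed burned urban area in km^2 (hypothetical)
high = burned_fuel_megatons(16, area)  # 16 g/cm^2 assumption
low = burned_fuel_megatons(1, area)   # Reisner et al. (2018) assumption

print(high, low)  # 16.0 Mt vs 1.0 Mt of fuel burned
```

Holding everything else fixed, the two published fuel loadings differ by a factor of 16 in total fuel burned, which is why the same scenario can land anywhere between "no global radiative forcing" and a full nuclear winter.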
