This is a special post for quick takes by KR. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
KR

Thought experiment for longtermism: if you were alive in 1920 trying to have the largest possible impact today, would the ideas you came up with without the benefit of hindsight still have an effect today?

I find this a useful intuition pump in general. If someone says "X will happen in 50 years," I imagine myself looking at 2020 from 1970, asking how many predictions of that sort, made then, would have turned out accurate. The world in 50 years is going to be at least as hard for us to imagine (hopefully harder, given exponential growth) as the world of today would have been from 1970. What did we know? What did we completely miss? What kinds of systematic mistakes might we be making?

I may have misunderstood your question, so there's a chance that this is a tangential answer.

I think one mistake humans make is overconfidence in specific long-term predictions. Specific would mean like predicting when a particular technology will arrive, when we will hit 3 degrees of warming, when we will hit 11 billion population, etc.

I think the capacity of even smart humans to predict with reasonable accuracy (e.g. >50%) when a specific event will occur is somewhat limited; I would estimate a horizon of around 20-40 years from when they are living.
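As a rough, purely illustrative way to see why such a horizon might exist (a toy model of my own, not anything claimed in the thread): if a specific prediction has to stay on track through many roughly independent year-by-year contingencies, even high per-year reliability compounds into low overall accuracy over long horizons.

```python
# Toy model (an illustrative assumption, not something from the thread):
# suppose a specific long-range prediction must survive a chain of independent
# yearly contingencies, each of which it survives with the same probability.
def on_track_probability(per_year_reliability: float, years: int) -> float:
    """Probability the prediction is still on track after `years` years."""
    return per_year_reliability ** years

for per_year in (0.99, 0.98, 0.95):
    for horizon in (20, 40, 50):
        p = on_track_probability(per_year, horizon)
        print(f"per-year reliability {per_year:.2f}, horizon {horizon:2d}y -> {p:.2f}")

# With 0.98 per-year reliability the prediction stays above 50% out to roughly
# 30-35 years and falls to ~0.36 by year 50, which is one way to rationalise
# a 20-40 year horizon for >50% accuracy.
```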

You ask: "if you were alive in 1920 trying to have the largest possible impact today" what would you do? I would acknowledge that I cannot (with reasonable accuracy) predict the thing that will "the largest possible impact in 2020" (which is a very specific thing to predict) and go with broad-based interventions (which is a more sure-shot answer) like improving international relations, promoting moral values, promoting education, promoting democracy, promoting economic growth, etc (these are sub-optimal answers; but they're probably the best I could do).

I'd be interested to see a list of what kinds of systematic mistakes previous attempts at long-term forecasting made.

Also, I think that many longtermists (e.g. me) think it's much more plausible to successfully influence the long-run future now than in the 1920s, because of the hinge of history argument.

KR

My understanding of the hinge of history argument is that the current time has more leverage than either the past or future. Even if that's true, it doesn't necessarily mean that it's any more obvious what needs to be done to influence the future.

If I believed that e.g. AI is obviously the most important lever right now, and thought I knew which direction to push that lever, I would ask myself "using the same reasoning, which levers would I have been trying to push, and in which direction, in 1920?" As far as I can tell this question is pretty agnostic about how easy it is to push these levers around, just about which ones you would want to be pushing.

KR

An argument in favor of slow takeoff scenarios being generally safer is that we will get to see and experiment with precursor AIs before they become capable of causing x-risks. But even if the behavior of these precursor AIs is predictive of the superhuman AI's, our ability to make use of that depends on how seriously their potential dangers are taken. A society confident that there is no danger in increasing the capabilities of the machine that has been successfully running its electrical grid gains much less of an advantage from a slow takeoff (as opposed to the classic hard takeoff) than one aware of its potential dangers.

Personally, I would expect a shift in attitudes towards AI as it becomes obviously more capable than humans in many domains. However, whether this shift involves being more careful or instead abdicating decisions to the AI entirely seems unclear to me. The way I play chess with a much stronger opponent is very different from how I play chess with a weaker or equally matched one. With the stronger opponent I am far more likely to expect obvious-looking blunders to actually be a set-up, for instance, and to spend more time trying to figure out what advantage they might gain from them. On the other hand, I never bother to check my calculator's math by hand, because the odds that it's wrong are far lower than the chance that I will mess up somewhere in my arithmetic. If someone came up with an AI calculator that gave occasional subtly wrong answers, I certainly wouldn't notice.

Taking advantage of the benefits of a slow takeoff also requires institutions capable of noticing and preventing problems. In a fast takeoff scenario, it is much easier for a single, relatively small project to unilaterally take off; this is, essentially, a gamble on that particular team's capabilities. In a slow takeoff, it will be rapidly obvious that some project(s) are trending in that direction, which makes it more likely that, if a project seems unsafe, there will be time to impose external controls on it. How much of an advantage this is depends on how much you trust whichever institutions would be needed to impose those controls.

Humanity's track record in this respect seems to me to be mixed. Some historical precedents for cooperation (or lack thereof) in controlling dangerous technologies and their side-effects are the Asilomar Conference, nuclear proliferation treaties, and various pollution agreements. Asilomar, which seems to me the most successful of these, involved a relatively small scientific field voluntarily adhering to some limits on potentially dangerous research until more information could be gathered. Nuclear proliferation treaties reduce the cost of a zero-sum arms race, but it isn't clear to me that they significantly reduced the risk of nuclear war. Pollution regulations have had very mixed results, with some major successes (e.g. acid rain) but on the whole failing to avert massive global change. Somewhat closer to home, the response to Covid-19 hasn't been particularly encouraging. It is unclear to me which, if any, of these present a fair comparison, but our track record in cooperating seems decidedly mixed.

I found this interesting, and I think it would be worth expanding into a full post if you felt like it! 

I don't think you'd need more content: just a few more paragraph breaks, maybe a brief summary, and maybe a few questions to guide responses. If you have questions you'd want readers to tackle, consider including them as comments after the post.

KR

Thanks! I ended up expanding it significantly and posting the full version here.

KR

EA-style discussion about AI seems to dismiss out of hand the possibility that AI might be sentient. I can’t find an example, but the possibility seems generally scoffed at in the same tone people dismiss Skynet and killer robot scenarios. Bostrom’s simulation hypothesis, however, is broadly accepted as at the very least an interestingly plausible argument.

These two stances seem entirely incompatible - if silicon can create a whole world inside of which are sentient minds, why can't it just create the minds without the framing device? It is possible that sentience does not emerge unless you very precisely mimic natural (or "natural") evolutionary pressures, but this seems unlikely. It's likewise possible that something about the process by which we expect to create AI doesn't allow for sentience, but in that case I think the burden of proof is on the people making that argument to identify this feature and explain their reasoning.

The strongest argument I can think of off the top of my head is that, if we expect future AI created by something resembling modern machine learning methods to have a chance at sentience, we should likewise expect, say, worm-equivalent AIs to have it too. Is C. elegans sentient? Is OpenWorm? If you answered yes to the first and no to the second, what is OpenWorm missing that C. elegans has?

Buck

I think there are many examples of EAs thinking about the possibility that AI might be sentient by default. Some examples I can think of off the top of my head:

I don't think people are disputing that it would be theoretically possible for AIs to be conscious; I think they're making the claim that the AI systems we find won't be.

KR

Thanks for the links - I googled briefly before writing this to check my memory and couldn't find anything. I think what formed my impression was that even in very detailed conversations and writing on AI, there was by default no mention or implicit acknowledgement of the possibility, as far as I could tell. On reflection, I'm not sure I would expect there to be even if people did think it was likely, though.

Many years ago, Eliezer Yudkowsky shared a short story I wrote (related to AI sentience) with his Facebook followers. The story isn't great -- I bring it up here only as an example of people being interested in these questions.
