This is a special post for quick takes by Cipolla. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

There should be more talk around here about concentration of power, decoupling from the masses, and social fragmentation.

Disruptive technologies widen the economic inequality gap both within and between countries.

  • New technologies, like automation or farming, quickly benefit a small number of people, essentially the owners
    • As a new technology emerges, the general population sees wage stagnation and job displacement, and only gets some benefit from the new technology after a long time
  • Each new disruptive technology tends to directly benefit fewer and fewer people, while a larger share of the population has less and less to offer
    • While it is true that new technologies often increase the benefits for everybody, e.g. more food, the gap between people remains humongous (e.g. food quality differs enormously), and whoever owns the food can steer the choices of the many
  • At some point, the few will no longer need the general population to reach certain goals, and this can go on and on, especially with an exponential increase in technology (consider how tech companies now basically dwarf most others; the manufacturing companies of the past century never reached the power of today's tech companies)
  • Coupled with the fact that people in the most developed countries are becoming more and more individualistic, pervasive technology will make it very easy for powerful players to steer humanity

 

What are the consequences of this? I do not see many people discussing it on the forum.

While it is true that AI itself might doom humanity, I consider human dynamics to be a greater threat (if only because investment in AI increases the likelihood that this small group ends up owning the world, which in turn may increase the likelihood of AI doom).

 

This is a better-defined problem than the usual AI safety talk. I think it needs to be addressed as soon as possible through public discourse. In my opinion, we need to focus on the right things.

It might also have some implicit benefits for the usual AI safety work (and, to be honest, I do not see current AI research operating at the level of research in math or fundamental physics, where first-principles approaches and taking your time to think deeply about problems are preferred, but that is a topic for another day).

I have noticed that the most successful people I meet at work, in the sense of advancing their careers and publishing papers, have a certain belief in themselves. What is striking is that, no matter their age or career stage, they seem to take their future success, and where they are headed, for granted.

 

I have also noticed this is something that people from non-working-class backgrounds manage to do.

 

Second point. They are good at finishing projects and delivering results on time.

 

I noticed that this seems largely independent of how smart someone is.

 

While I am very good at single tasks, I have always struggled with long-term academic performance. I know this is true for some other people too.

 

What kind of knowledge/mentality am I missing? Because I feel stuck.

Practice is helpful. Is there a way you can repeatedly practice finishing projects? Having the right tools/frameworks is also helpful. Maybe reading about personal productivity and breaking large tasks down into smaller pieces would help? I also find kanban boards to be very helpful; you can set one up in a program like Asana, or you can do it on your wall with sticky notes.

Perhaps you could describe a bit more how your failures have happened with longer-term efforts? That might allow people to give you more tailored recommendations.

I am not sure why https://80000hours.org/ suggests hedge funds or banking as potential career paths if you want to have a positive impact on the world (short or long term).

See for example here https://80000hours.org/career-reviews/front-office-finance/ or https://80000hours.org/career-reviews/trading-in-quantitative-hedge-funds/ .

If I understand correctly, the main rationale is:

  • we need to fund "good causes"
  • such jobs are high-earning -> potentially more earnings to give
  • plus, advantages for personal life/career

I agree that these are very alluring and mentally stimulating jobs (I too would like to have fun while earning lots of money in a short amount of time). Still, I am not sure about their net positive impact.

Even if mathematicians/physicists working in hedge funds/banks do not cause direct market crashes (an assumption), there is big potential for harm: someone very rich becomes even richer.

There is no reason to expect these extremely wealthy individuals/institutions to be aligned with EA goals or with doing good.

It just increases wealth inequality and skews government-level policy; we already know such people have great power to influence regulators.

In practice, the assumption above fails, and the reality is scarier.

The links above do mention possible societal harms, but then why keep the pages up?

Just my two cents.

In my opinion, the good done via donations significantly outweighs any marginal impact from making rich people richer (you can also filter for hedge funds where you agree more with the politics of the top people). Why do you think the harms you highlight outweigh the donations?

A more cynical defense is that someone is going to do the hedge fund job anyway, and the downsides of that job existing will be basically the same (maybe a tiny bit less if the replacement were ever so slightly less qualified).

On the other hand, the replacement in the role of "wealthy person" will likely spend little on charitable endeavors, and what they do spend will often go to less effective ones.

So one could argue that a job could be net negative on the whole -- and yet that the most positive thing a person could do is fill it, to at least extract the positives it offers effectively! (Not opining here on whether hedge funds are net positive or negative, too far out of my knowledge base).

Hi @Neel Nanda, @Jason, I am sure this has been discussed several times in the community. I was actually checking that website because I am at a moment in my life where I would like to pursue something I really like, and I was looking for ways to do this.

Now.

Let's say.

Average hedge fund guy ~ ten billion dollars per year in donations (say 10^5 such people, each earning $10^6 per year and donating 10%; see the arithmetic spelled out below). These donations mostly go to developing countries, as they would affect more people in need (globally).
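Spelled out, the back-of-envelope arithmetic behind that ten-billion figure (all numbers are illustrative assumptions, not data):

$$\underbrace{10^5}_{\text{donors}} \times \underbrace{0.10}_{\text{share donated}} \times \underbrace{\$10^6/\text{year}}_{\text{earnings each}} = \$10^{10}/\text{year} = \$10 \text{ billion per year}$$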

Investor ~ an order of magnitude more than the above in profits (even if the exact figure is off). This is not marginal. Those profits mostly go into investing more, shaping governments, etc.

Assumption 1: from profits alone, the typical investor has an "exponential" advantage in shaping governments and the world in general.

Assumption 2: there are two competing aims here. First, you want societal stability. Second, you want appreciating assets. The two have to be in balance; focusing on one can negatively affect the other.

Assumption 3: in recent decades there has been too much focus on a one-dimensional metric, the size of the piggy bank, instead of organic growth (the hedge fund guy focuses on organic growth too, but that 10% is becoming ever bigger in their eyes).

Assumption 4: giving too much power, in this case bought with money, to a handful of people is very dangerous. No reason to have them aligned with "good" goals. Literally no reason.

In my opinion, all of these points imply a higher chance of unintended(?) consequences.

One obvious example is profits going to an oil company with a business in Africa and no interest in making sure that the money reaches the locals.

Another one, a bit scarier: Western societal instability (particularly relevant for the UK and US).

An increased likelihood of instability in Western countries implies more poverty and social problems. More poverty -> people are less willing to donate (I am assuming most individual donations do not come just from hedge fund folks) -> millions of brothers and sisters in less developed countries count on those donations.

If this happens, your average Joe quant's work will have harmed not just a Western country but also his own good cause (since instability in otherwise stable Western countries makes EA goals harder to reach).

I think quant jobs are cool and have nice perks. I just wonder how much thought the 80,000 Hours folks put in before recommending those jobs, given that their usual audience is young ambitious people and experienced science researchers.

Perhaps I misunderstood the goal of the 80,000 Hours website, and maybe they mean impact in general, not necessarily "good" impact.

I don't want to imply that you should go work for a hedge fund, only that I suspect 80K has likely thought about it. So figuring out why they may have reached the conclusion they did should hopefully help you to evaluate the pros and cons and make a decision that aligns best with your values. 

The take on medical careers in this post is the flipside of the cynical defense I described, and maybe it is easier to see the flipside. Basically, the argument is that if you decide to go to med school, you replace someone who would otherwise have become a physician. So the counterfactual difference in the world is limited to the difference between the quality of the medical services you would provide and the quality your replacement would have provided.

I am less utilitarian than the average person here, and would personally not work for an organization as harmful as you believe hedge funds to be. (I never had any interest in working for one, so never took the time to investigate their net social impact.)

Not a proper quick take, and perhaps off-topic on this forum. But given that I know some people here are into health, I will give it a shot.

I would be very grateful if someone could point me to some excellent doctors in Europe, a website, or some sort of diet that could help improve my health.

I have been having some health issues:

  • getting sick frequently, feeling feverish most of the time
  • difficulty breathing, sometimes with pain when I am also sick
    • It turns out I have recently developed asthma too. Initially a couple of doctors thought it was anxiety, and after a year and a half I really had to push to get a proper test.
  • a burning sensation in the left side of my chest, ever since I got the COVID vaccine
  • slight hypertension
  • and, most importantly, continuous head confusion (similar to when I used to get a fever) for the past two years! It sometimes comes with tingling in my head, and in my hands and feet too, and sometimes a headache.
  • I am fit, and try to work out when I can.[1]

 

None of the doctors I have met has a clue, and they do not seem interested in solving the problem or investigating it.[2] I am not even looking for a cure at this point. Just a diagnosis.

I am not rich, so my budget is limited. But life has become so difficult.

  1. ^

With my breathing problems I can run a 10 km at around 4:40 min/km, but I am suddenly not at the top of my game (previously 4:10 min/km).

  2. ^

If it is not something easily solved with ibuprofen or similar, they quickly give up. I strongly suspect that doctors are not trained well, so they may all be effectively incompetent at dealing with situations that the usual medicines they prescribe do not solve. I plan to write a post one day, as methods of rationality and AI might help with diagnosis.
