This is a special post for quick takes by ABishop. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to alignment. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about on other types of platforms?

You might be interested in Building Human Values into Recommender Systems: An Interdisciplinary Synthesis as well as Jonathan Stray's other work on alignment and beneficence of recommender systems.

Since around 2017, there has been a lot of public interest in how YouTube's recommendation algorithms may affect individuals and society negatively. Governments, think tanks, the press, and other institutions have pressured YouTube to adjust its recommendations. You could think of this as our world's (indirect and corrupted) way of trying to instill humanity's values into YouTube's algorithms.

I believe this sort of thing doesn't get much attention from EAs because there's not really a strong case for it being a global priority in the same way that existential risk from AI is.

I would like to estimate how effective free hugs are. Can anyone help me?

Haha. Well, I guess I would first ask effective at what? Effective at giving people additional years of healthy & fulfilling life? Effective at creating new friendships? Effective at making people smile?

I haven't studied it at all, but my hypothesis is that it is the kind of intervention similar to "awareness building," except that it doesn't have any call to action (such as a donation). So it is probably effective at giving people a nice experience for a few seconds, and maybe at improving their mood for a period of time, but it probably doesn't have longer-lasting effects. From a cursory glance at Google Scholar, it looks like there hasn't been much research on free hugs.

Hmm, I'm a little confused. If I cook a meal for someone, it doesn't seem to mean much. But if no one is cooking for someone, it is a serious problem and we need to help. Of course, I'm not sure if we're suffering from that kind of "skinship hunger."

I'd also re-focus on effective at what? What is the goal or objective of these free hugs? Once you know that, then you can more easily estimate how effective free hugs are compared to other interventions.

Using the analogy of hunger, here is one way that I am currently thinking about it: giving a willing stranger a hug is like giving a willing stranger a candy bar; they get some nourishment, but if they are chronically food insecure this won't solve that longer-term problem. It won't help them get regular/consistent access to meals that they can afford. So in that sense it is like a band-aid: it is treating the symptom, but it is not addressing the cause.

If someone is suffering from a consistent and pervasive lack of human touch, such as "skinship hunger," a hug might feel nice for a few seconds, but when the hug is finished that person's situation (lacking human touch) remains unchanged. I suppose you could create some kind of program in which they spend 60 minutes with a professional cuddler every week, but I honestly don't see that as being cost competitive if the goal is to get QALYs at the best price.

But if you just want to estimate it, then you could put together a simple Fermi estimate: what are the costs of giving free hugs, what are the benefits, and how much value do you place on each of those?
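A Fermi estimate like that can be sketched in a few lines. Every number below is an illustrative assumption I made up for the sketch, not a research finding; the point is only to show the shape of the calculation (volunteer time as the cost, minutes of improved mood as the benefit).

```python
# Minimal Fermi sketch of the cost-effectiveness of a free-hugs session.
# All inputs are illustrative assumptions, not measured values.

hours_volunteered = 2              # assumed length of one session
value_of_time_usd = 15             # assumed opportunity cost per volunteer-hour
hugs_per_hour = 30                 # assumed uptake from passers-by
mood_boost_minutes = 10            # assumed duration of improved mood per hug
value_per_mood_minute_usd = 0.02   # assumed value of one minute of improved mood

cost = hours_volunteered * value_of_time_usd
benefit = (hours_volunteered * hugs_per_hour
           * mood_boost_minutes * value_per_mood_minute_usd)

print(f"cost: ${cost:.2f}, benefit: ${benefit:.2f}, ratio: {benefit / cost:.2f}")
```

Swapping in your own inputs (or distributions instead of point values) is the whole exercise; the conclusion is only as good as the assumptions.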

It is like a seed: basic trust and support are provided, and it is doubtful whether long-term, indefinite provision is necessary. Wouldn't it be similar to UBI? I don't know, because there is no research. I believe you are begging the question: I can't agree or disagree with the claim that it soon returns to its initial state without any long-term effects. As for the estimate, I'm not sure; I can't think of a good measure yet. I might need a psychologist to help me. Perhaps an estimate of mental health or well-being, though I have doubts about QALYs or DALYs; still, as an initial measure they seem reasonable. Alternatively, it could be expressed as pain relief or social support. I confess I had no intention of doing any serious research; I was simply asking for ideas. It's more a question of whether it's worth it.

Do you believe that altruism actually makes people happy? Peter Singer's book argues that people become happier by behaving altruistically, and psychoanalysis also classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people pleasers. In-depth research data on this is desperately needed.

Good question, and one I also think about!

After only a few months of being deeply into EA, I already realise that discussing it with non-EA people makes me emotional, since I "cannot understand" why they are not easily convinced of it as well. How can something so logical not be followed by everyone, at least by donating? I think there is a danger of becoming pathetic if you don't reflect on this and stay aware that you cannot convince everybody.

On the other hand, EA is already having a big impact on how I donate and how I act in my job, so in this regard I do feel much more impactful, which certainly makes me happier and more relaxed in other parts of my life as my ambitions have shifted. Does that make any sense?

I would also be interested in research on this if anyone has any!

Thoughts on a project or research auction: it is very cumbersome to apply for funds one by one from Open Phil or EA Funds. Wouldn't it be better for a major EA organization to auction off the opportunity to participate in a project and let others buy it? It would be similar to a tournament, but you would be able to sell many more projects at a lower price and reduce the resources wasted on having many people compete for the same project.
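One possible reading of this proposal is a sealed-bid auction in which each project opportunity is awarded to the single highest bidder, so only one team invests effort per project. The sketch below is a hypothetical illustration of that reading; the team names, project names, and bid amounts are all invented, and the real mechanism (pricing rule, what "buying" a project entails) would need to be specified.

```python
# Hypothetical sketch: first-price sealed-bid allocation of project
# opportunities. Each project goes to its highest bidder, so at most
# one team works on each project. All names and numbers are invented.

projects = ["forecasting-study", "welfare-survey", "policy-brief"]

# Sealed bids: bidder -> {project: amount offered for the rights}
bids = {
    "team_a": {"forecasting-study": 500, "welfare-survey": 200},
    "team_b": {"forecasting-study": 300, "policy-brief": 400},
    "team_c": {"welfare-survey": 350},
}

def allocate(projects, bids):
    """Award each project to its highest bidder (first-price, sealed-bid)."""
    winners = {}
    for project in projects:
        offers = [(amounts[project], bidder)
                  for bidder, amounts in bids.items()
                  if project in amounts]
        if offers:  # skip projects nobody bid on
            amount, bidder = max(offers)
            winners[project] = (bidder, amount)
    return winners

print(allocate(projects, bids))
```

Note this only shows the allocation step; it says nothing about how reserve prices are set or how winners are held accountable for delivering the project, which is where most of the design difficulty would lie.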

I think this requires more elaboration on how exactly the suggested system is supposed to work.

I wrote the post

I read a similar nuance in one of Brian Caplan's articles: a utilitarian would create a society that favors neurotic people. If this problem doesn't need to be solved, why not? And if it does need to be solved, how should we solve it?

I assume the argument is that neurotic people suffer more when they don't get resources, so resources should go to more neurotic people first?

I think that's correct in an abstract sense but wrong in practice for at least two reasons:

  1. Utilitarianism says you should work on the biggest problems first. Right now the biggest problems are (roughly) global poverty, farm animal welfare, and x-risk.
  2. A policy of helping neurotic people encourages people to act more neurotic and even to make themselves more neurotic, which is net negative, and therefore bad according to utilitarianism. Properly-implemented utilitarianism needs to consider incentives.

1. If pain is somehow an essential part of consciousness or well-being, then even if the x-risk is resolved, the s-risk may be a more serious problem.
2. Neuroticism is to some extent hereditary. Incentives can solve some problems, but not all.

I am planning to write a post about happiness guilt. I think many EAs would have it. Can you share resources or personal experiences?

"Detach the grim-o-meter" comes to mind. I think that post helped me a little bit.
