Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

New & upvoted


Quick takes

Two sources of human misalignment that may resist a long reflection: malevolence and ideological fanaticism

(Alternative title: Some bad human values may resist idealization[1])

The values of some humans, even if idealized (e.g., during some form of long reflection), may be incompatible with an excellent future. Thus, solving AI alignment will not necessarily lead to utopia. Others have raised similar concerns before.[2] Joe Carlsmith puts it especially well in the post “An even deeper atheism”:

> “And now, of course, the question arises: how different, exactly, are human hearts from each other? And in particular: are they sufficiently different that, when they foom, and even "on reflection," they don't end up pointing in exactly the same direction? After all, Yudkowsky said, above, that in order for the future to be non-trivially "of worth," human hearts have to be in the driver's seat. But even setting aside the insult, here, to the dolphins, bonobos, nearest grabby aliens, and so on – still, that's only to specify a necessary condition. Presumably, though, it's not a sufficient condition? Presumably some human hearts would be bad drivers, too? Like, I dunno, Stalin?”

What makes human hearts bad?

What, exactly, makes some human hearts bad drivers? If we better understood what makes hearts go bad, perhaps we could figure out how to make bad hearts good, or at least learn how to prevent hearts from going bad. It would also allow us to better spot potentially bad hearts and coordinate our efforts to prevent them from taking the driving seat. As of now, I’m most worried about malevolent personality traits and fanatical ideologies.[3]

Malevolence: dangerous personality traits

Some human hearts may be corrupted due to elevated malevolent traits like psychopathy, sadism, narcissism, Machiavellianism, or spitefulness.

Ideological fanaticism: dangerous belief systems

There are many suitable definitions of “ideological fanaticism”. Whatever definition we use, it should describe ideologies that have caused immense harm historically, such as fascism (Germany under Hitler, Italy under Mussolini), (extreme) communism (the Soviet Union under Stalin, China under Mao), religious fundamentalism (ISIS, the Inquisition), and most cults. See this footnote[4] for a preliminary list of defining characteristics.

Malevolence and fanaticism seem especially dangerous

Of course, there are other factors that could corrupt our hearts or driving ability: for example, cognitive biases, limited cognitive ability, philosophical confusions, or plain old selfishness.[5] I’m most concerned about malevolence and ideological fanaticism for two reasons.

Deliberately resisting reflection and idealization

First, malevolence—if reflectively endorsed[6]—and fanatical ideologies deliberately resist being changed and would thus plausibly resist idealization even during a long reflection. The most central characteristic of fanatical ideologies is arguably that they explicitly forbid criticism, questioning, and belief change, and view doubters and disagreement as evil.

Putting positive value on creating harm

Second, malevolence and ideological fanaticism would not only result in the future not being as good as it possibly could be—they might actively steer the future in bad directions and, for instance, result in astronomical amounts of suffering.

The preferences of malevolent humans (e.g., sadists) may be such that they intrinsically enjoy inflicting suffering on others. Similarly, many fanatical ideologies sympathize with excessive retributivism and often demonize the outgroup. Enabled by future technology, preferences for inflicting suffering on the outgroup may result in enormous disvalue—cf. concentration camps, the Gulag, or hell[7].

In the future, I hope to write more about all of this, especially long-term risks from ideological fanaticism.

Thanks to Pablo and Ruairi for comments and valuable discussions.

1. ^ “Human misalignment” is arguably a confusing (and perhaps confused) term. But it sounds more sophisticated than “bad human values”.

2. ^ For example, Matthew Barnett in “AI alignment shouldn't be conflated with AI moral achievement”, Geoffrey Miller in “AI alignment with humans... but with which humans?”, and lc in “Aligned AI is dual use technology”. Pablo Stafforini has called this the “third alignment problem”. And of course, Yudkowsky’s concept of CEV is meant to address these issues.

3. ^ These factors may not be clearly separable. Some humans may be more attracted to fanatical ideologies due to their psychological traits, and malevolent humans often lead fanatical ideologies. Also, believing and following a fanatical ideology may not be good for your heart.

4. ^ Below are some typical characteristics (I’m no expert in this area):
  • Unquestioning belief, absolute certainty, and rigid adherence. The principles and beliefs of the ideology are seen as absolute truth, and questioning or critical examination is forbidden.
  • Inflexibility and refusal to compromise.
  • Intolerance and hostility towards dissent. Anyone who disagrees with or challenges the ideology is seen as evil; as enemies, traitors, or heretics.
  • Ingroup superiority and outgroup demonization. The in-group is viewed as superior, chosen, or enlightened. The out-group is often demonized and blamed for the world's problems.
  • Authoritarianism. Fanatical ideologies often endorse (or even require) a strong, centralized authority to enforce their principles and suppress opposition, potentially culminating in dictatorship or totalitarianism.
  • Militancy and willingness to use violence.
  • Utopian vision. Many fanatical ideologies are driven by a vision of a perfect future or afterlife which can only be achieved through strict adherence to the ideology. This utopian vision often justifies extreme measures in the present.
  • Use of propaganda and censorship.

5. ^ For example, Barnett argues that future technology will be primarily used to satisfy economic consumption (aka selfish desires). That even seems plausible to me; however, I’m not that concerned about this causing huge amounts of future suffering (at least compared to other s-risks). It seems to me that most humans place non-trivial value on the welfare of (neutral) others such as animals. Right now, this preference (for most people) isn’t strong enough to outweigh the selfish benefits of eating meat. However, I’m relatively hopeful that future technology would make such tradeoffs much less costly.

6. ^ Some people (how many?) with elevated malevolent traits don’t reflectively endorse their malevolent urges and would change them if they could. However, some of them do reflectively endorse their malevolent preferences and view empathy as weakness.

7. ^ Some quotes from famous Christian theologians:

Thomas Aquinas: "the blessed will rejoice in the punishment of the wicked." "In order that the happiness of the saints may be more delightful to them and that they may render more copious thanks to God for it, they are allowed to see perfectly the sufferings of the damned."

Samuel Hopkins: "Should the fire of this eternal punishment cease, it would in a great measure obscure the light of heaven, and put an end to a great part of the happiness and glory of the blessed."

Jonathan Edwards: "The sight of hell torments will exalt the happiness of the saints forever."
Linch · 2d
My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.
2. Takeoff speeds (from the perspective of the State) are relatively slow.
3. Timelines are moderate to long (after 2030, say).

If what I say is broadly correct, I think this has some underrated downstream implications. For example, we may currently be overestimating the role of the values or institutional processes of labs, or the value of getting gov'ts to intervene (since the default outcome is that they'd intervene anyway). Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good). More speculatively, we may also be underestimating the value of making sure 2-3 are true (if you share my belief that gov't actors will broadly be more responsible than the existing corporate actors). Happy to elaborate if this is interesting.
Mini EA Forum Update

We’ve updated our new user onboarding flow! You can see more details in GitHub here. In addition to making it way prettier, we’re trying out adding some optional steps, including:

  1. You can select topics you’re interested in, to make your frontpage more relevant to you.
     • You can also click the “Customize feed” button on the frontpage - see details here.
  2. You can choose some authors to subscribe to. You will be notified when an author you are subscribed to publishes a post.
     • You can also subscribe from any user’s profile page.
  3. You’re prompted to fill in some profile information to give other users context on who you are.
     • You can also edit your profile here.

I hope that these additional optional steps help new users get more out of the Forum. We will continue to iterate on this flow based on usage and feedback - feel free to reply to this quick take with your thoughts!
Y-Combinator wants to fund Mechanistic Interpretability startups "Understanding model behavior is very challenging, but we believe that in contexts where trust is paramount it is essential for an AI model to be interpretable. Its responses need to be explainable. For society to reap the full benefits of AI, more work needs to be done on explainable AI. We are interested in funding people building new interpretable models or tools to explain the output of existing models." Link https://www.ycombinator.com/rfs (Scroll to 12) What they look for in startup founders https://www.ycombinator.com/library/64-what-makes-great-founders-stand-out
I have written 7 emails to 7 politicians aiming to meet them to discuss AI Safety, and already have 2 meetings. Normally, I'd put this kind of post on Twitter, but I'm not on Twitter, so it is here instead. I just want people to know that if you're worried about AI Safety, believe more government engagement is a good thing, and can hold a decent conversation (i.e. you understand the issue and are a good verbal/written communicator), then this could be an underrated path to high impact. Another great thing about it is that you can choose how many emails to send and how many meetings to have, so it can be done on the side of a "day job".

Popular comments

Recent discussion

Cynthia Schuck-Paim; Wladimir J. Alonso; Cian Hamilton (Welfare Footprint Project) 

Overview

In assessing animal welfare, it would be immensely beneficial to rely on a cardinal metric that captures the overall affective experience of sentient beings over a period of interest or lifetime. We believe that the concept of Cumulative Pain (or Pleasure, for positive affective states), as adopted in the Welfare Footprint framework, aligns closely with this ideal. It quantifies the time spent in various intensities of pain and has proven operationally useful, providing actionable insights for guiding cost-effective interventions aimed at reducing animal suffering.
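A minimal sketch of what the Cumulative Pain bookkeeping might look like in code. The four intensity labels follow the categories used in the Welfare Footprint framework, but the episode data, the helper names, and the exchange-rate weights below are invented for illustration; this is not the project's actual tooling.

```python
from dataclasses import dataclass

# Illustrative intensity categories; durations and weights below are made up.
CATEGORIES = ("annoying", "hurtful", "disabling", "excruciating")

@dataclass
class PainEpisode:
    category: str   # one of CATEGORIES
    hours: float    # time spent at that intensity

def cumulative_pain(episodes):
    """Sum time-in-pain per intensity category (the Cumulative Pain profile)."""
    totals = {c: 0.0 for c in CATEGORIES}
    for ep in episodes:
        totals[ep.category] += ep.hours
    return totals

def collapse_to_single_number(totals, weights):
    """Collapsing the profile into one 'suffering' score requires exchange
    rates between categories -- exactly the step the post says is unsettled."""
    return sum(weights[c] * totals[c] for c in CATEGORIES)

# Hypothetical example: one animal's lifetime pain under two scenarios.
baseline = [PainEpisode("hurtful", 300.0), PainEpisode("disabling", 40.0)]
reform   = [PainEpisode("hurtful", 120.0), PainEpisode("disabling", 10.0)]

illustrative_weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10000}

for label, data in (("baseline", baseline), ("reform", reform)):
    profile = cumulative_pain(data)
    print(label, profile, collapse_to_single_number(profile, illustrative_weights))
```

The first step (the per-category profile) is the part the framework treats as operational today; the final weighted collapse is the part the next paragraph flags as not yet resolved.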

However, it does not yet offer a unified metric of suffering, as it measures time spent in four categories of pain intensity. While we anticipate this complexity will persist for some time—given the current challenges in equating...

Continue reading

In this video, we describe a speculative but promising idea to boost economic growth: giving cities the ability to write some of their own laws and design their own institutions, independently from their parent countries.   Jackson Wagner, the first author of this ...

Continue reading

I think that's an interesting and very open-minded reply. But I think the problem with the proposed model in practice isn't just that competition limits the ability to coordinate to prevent negative externalities; it's that the specific type of competition that "charter cities" are designed to stimulate (making undeveloped land in a poor country with severe governance problems into an attractive opportunity for business investment) is naturally geared towards lower taxes than elsewhere, because low taxes are one of the few things such cities can credibly commit to. Pretty ... (read more)

As an active member from Mexico, partially based there, and the director of an organization in Latin America, I would like to initiate a discussion on the factors to consider when establishing coworking spaces in LMICs, particularly in Mexico. I am convinced of the...

Continue reading

> However, given the process of the recent coworking space proposal in Mexico

This link just points directly back to this post - what did you have in mind?

Sign up for the Forum's email digest
You'll get a weekly email with the best posts from the past week. The Forum team selects the posts to feature based on personal preference and Forum popularity, and also adds some announcements and a classic post.

This is the latest in a nominally three-monthly series of posts advertising EA infrastructure projects that struggle to gain and maintain awareness (see the original advertising post for more on the rationale).

I italicise organisations added since the previous post was originally...

Continue reading
Amber Dawn · 2h
Thanks for the shout-out! I just want to add that I also offer writing coaching, for those who want to learn how to make their own writing clearer and more effective. 

No worries :) I've added 'and writing coach' to the OP, so it will also show up if I C&P the copy for future such advertising posts.

SummaryBot · 2h
Executive summary: A post advertising free or discounted services and resources aimed at effective altruists, as well as organizations in the EA community that urgently need funding.

Key points:

  1. Lists free/subsidized coworking spaces, accommodations, professional services, coaching, tools, and financial support available to EAs.
  2. Includes organizations like CEEALAR, EA Poland, and AI Safety Camp that urgently need donations to continue operating.
  3. Relaxed criteria to include some fee-based services and resources specifically aimed at helping individual EAs.
  4. Requests readers spread awareness of these organizations and strongly vote on the post.
  5. Invites suggestions for additional organizations to include that provide valuable services, face funding issues, or deserve more exposure in the EA community.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

TLDR: My wife and I tried to help three children with severe illness cost-effectively. We paid for their medical care, followed up on what happened and reflected on the process.

"Hearing about her death hurt badly – as it should. But not only had I done what I could...

Continue reading

Thank you for writing and doing this. It moved me to read how inhumanly strange and absurd it is to put cold calculations next to your warm personal connection with them. To me it means that I discovered a new privilege of living in a rich country, where we can abstract donating behaviour from the decision of who gets the help.

I kind of look at the EA principles as a good way of sharpening my intuitions about impact in a far-away country. But for you it's different: as a medical professional, your intuition of where the most good can be done will be spo... (read more)

Ulrik Horn · 13h
I have for a while wanted something like a list of "personal" interventions likely to be ~GiveWell cost-effective. This means that when I or people in my social circle encounter someone struggling to pay for e.g. fixing a blocked aortic valve, I can feel somewhat confident about donating money directly to the afflicted person. I imagine many of us EAs know several people who live or travel in poor countries and every now and then encounter people in desperate situations. I myself am somewhat frequently contacted by friends about paying for someone's surgery or school fees.

Basically, I understand our current EA global health initiatives to require a high density of people to help. But this requirement might be bypassed if instead we have networks of people we trust who can identify people in situations that are likely to be cost-effective to support financially. I agree with other commenters that this is not scalable, but it might give EAs better standing in their respective social circles and could have other positive effects. This is almost like a personal re-granting program.

I love your post - I think it goes a long way towards understanding whether such a list might be useful. And, like the other commenters, I feel admiration and sadness after reading your piece.
Karthik Tadepalli · 13h
I love this, thank you for pushing the frontiers of doing good!

This Friday, Probably Good is hosting a virtual conversation and Q&A with Alec Stapp, co-founder of the Institute for Progress (IFP). This event is part of a series of “career conversations” designed to give our readers and advisees a chance to interact with and learn from experts across high impact career paths. 

Event Details:

  • Topic: Impactful Careers in US Policy
  • Date & Time: Feb 23 at 3 pm ET 
  • Guest Speaker: Alec Stapp, Institute for Progress
  • RSVP here (Don’t forget to add the event to your calendar after you RSVP)

About the webinar 

In the first half, we'll engage Alec with prepared questions around IFP’s founding story, the paradox of tractability in DC and career advice for different functional paths within the policy world. 

The second half of the webinar will be audience Q&A, where attendees can ask Alec their questions directly. If you wish...

Continue reading

In: Journal of Moral Philosophy
Author: Gary David O’Brien
Online Publication Date: 19 Jan 2024
License: Creative Commons Attribution 4.0 International


Abstract

Longtermism is the view that positively influencing the long-term future is one of the key moral priorities of our...

Continue reading

Thank you so much for this article! (BTW: Are Gary David O’Brien and BrownHairedEevee the same person? If not, also thanks to the latter for sharing it.)

I was wondering for years why longtermists seemingly didn't really account for (non-human) animals, since it seems quite obvious that in a lot (if not most) of the possible futures, humanity will be nothing but an insignificant minority of all the sentient beings. In fact, my personal motivation for caring about X-risks is mostly due to the instrumental value humanity might have in regards to non-hu... (read more)

Thank you to the many MEARO leaders who provided feedback and inspiration for this post, directly and through their work in the space.

Introduction

In the following post, I will introduce MEAROs—Meta EA Regional Organizations—a new term for a long-established segment of the...

Continue reading

Executive summary: The post introduces Meta Effective Altruism Regional Organizations (MEAROs), which support EA within geographical regions, and argues they are currently underutilized despite significant potential for impact.

Key points:

  1. MEAROs promote EA within specific regions through translation, media, events, advising, infrastructure, and more.
  2. MEAROs provide unique value but are early-stage, small, and underfunded, reaching just a fraction of potential.
  3. With more funding, MEAROs could greatly expand current functions and pioneer new ones like research,
... (read more)
Ulrik Horn · 6h
Has it been considered to use MEAROs strategically to address EA causes? I am thinking, e.g.:

  • EA Taiwan for semiconductor work on AI safety
  • EA in the Global South for local talent pipelines where interventions are delivered (I think EA Nigeria has posted about this)
  • Probably other causes that have particular geographical focal points, e.g. EA China or similar

I am sure people have thought of this, but it struck me as a potential use of MEAROs that naively seems high impact. Excellent post, it offers a really nice overview!
Chris Leong · 18h
I’d imagine the natural functions of city and national groups to vary substantially.

Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!

We’d like to help people in EA find impactful work, so we’ve set up this thread, and another called Who's hiring? (we did this last in 2022[2]).

Consider...

Continue reading

TLDR: AI and moral psychology researcher looking for full-time or contract-based work.

Skills & background: I'm currently running studies on moral judgments of AI at the Uehiro Centre. I'm interested in perceptions of AI and related judgments, such as those of risk (e.g., how do perceptions of risk from AI differ from perceptions of risk from nuclear weapons?), the potential harms from AI misuse (and the variety of misuses that will emerge), and how our moral circle will expand towards digital beings. I have a background in experimental psychology and 3+ years of ex... (read more)