Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

New & upvoted


Quick takes

I haven't had time to read all the discourse about Manifest (which I attended), but it does highlight a broader issue about EA that I think is poorly understood: different EAs will necessarily have ideological convictions that are inconsistent with one another.

That is, some people will feel their effective altruist convictions motivate them to work on building artificial intelligence at OpenAI or Anthropic; others will think those companies are destroying the world. Some will try to save lives by distributing medicines; others will think the people those medicines save eat enough tortured animals to make the world worse off overall. Some will think liberal communities should exclude people who champion the existence of racial differences in intelligence; others will think excluding people for their views is profoundly harmful and illiberal.

I'd argue that the early history of effective altruism (i.e. the last 10-15 years) has generally been one of centralization around purist goals: there are central institutions that effective altruism revolves around, and specific causes and ideas treated as the most correct form of effective altruism. I'm personally much more a proponent of liberal, member-first effective altruism than purist, cause-first EA. I'm not sure which of those options the Manifest example supports, but I do think it's indicative of the broader reality that, for a number of issues, people on each side can believe the most effective altruist thing to do is to defeat the other.
If you’re seeing things on the forum right now that boggle your mind, you’re not alone. Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania. If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum. I’m very glad CEA handles its events differently.
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing. It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes. I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on which choices repel which kinds of people, and whether that's worth it. EDIT: This is not a solemn vow forswearing EA forever. If things change I would be more than happy to join again.
Have your EA conflicts on... THE FORUM!

In general, I think it's much better to first attempt to have a community conflict internally before I have it externally. This doesn't really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, eg the Bostrom stuff, fallout around the FTX stuff, Nonlinear stuff, now this Manifest stuff.

Why do I think this?
* If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on Twitter is like jumping straight to a shooting war.
* Many external locations (eg Twitter, the press) have very skewed norms/incentives compared to the forum, and so many parties can feel like they are the victim. I find that when multiple parties feel they are weaker and victimised, that is likely to cause escalation.
* Many spaces have less affordance for editing comments, seeing who agrees with whom, or having a respected mutual party say "woah, hold up there".
* It is hard to say "I will abide by the community sentiment" if I have already started the discussion elsewhere in order to shame people. And if I don't intend to abide by the community sentiment, why am I trying to manage a community conflict in the first place? I might as well just jump straight to shaming.
* It is hard to say "I am open to changing my mind" if I have set up the conflict in a way that leads to shaming if the other person doesn't change theirs. It's like holding a gun to someone's head and saying that this is just a friendly discussion.
* I desire reconciliation. I have hurt people in this community and been hurt by them, in both cases to the point of tears and sleepless nights. But still I would prefer reconciliation and growth over an escalating conflict.
* Conflict is often negative-sum, so let's try to have it be as little negative-sum as possible.
* Probably a good chunk of it is church norms, centred around 1 Corinthians 6[2]. I don't really endorse this, but I think it's good to be clear why I think things.

Personal examples:
* Last year I didn't like that Hanania was a main speaker at Manifest (iirc), so I went to their Discord and said so. I then made some votes. The median user agreed with me and so Hanania didn't speak. I doubt you heard about this, because I did it on the Manifold Discord; I hardly tweeted about it or anything. This, and the fact that I said I wouldn't, created a safe space to have the discussion, and I largely got what I wanted.

You might think this comment is directed at a specific person, but I bet you are wrong. I dislike this behaviour when it is done by at least 3 different parties that I can think of.

1. ^
2. ^ If any of you has a dispute with another, do you dare to take it before the ungodly for judgment instead of before the Lord's people? 2 Or do you not know that the Lord's people will judge the world? And if you are to judge the world, are you not competent to judge trivial cases? 3 Do you not know that we will judge angels? How much more the things of this life! 4 Therefore, if you have disputes about such matters, do you ask for a ruling from those whose way of life is scorned in the church? 5 I say this to shame you. Is it possible that there is nobody among you wise enough to judge a dispute between believers? 6 But instead, one brother takes another to court—and this in front of unbelievers! 7 The very fact that you have lawsuits among you means you have been completely defeated already. Why not rather be wronged? Why not rather be cheated?
AI Safety Needs To Get Serious About Chinese Political Culture

I worry that Leopold Aschenbrenner's "China will use AI to install a global dystopia" take is based on crudely analogising the CCP to the USSR, or perhaps even to American cultural imperialism / expansionism, and isn't based on an even superficially informed analysis of either how China is currently actually thinking about AI, or what China's long-term political goals or values are.

I'm no more of an expert myself, but my impression is that China is much more interested in its own national security interests and its own ideological notions of the ethnic Chinese people and Chinese territory, so that beyond e.g. Taiwan there isn't an interest in global domination except to the extent that it prevents them being threatened by other expansionist powers. This, or a number of other heuristics / judgements / perspectives, could change substantially how we think about whether China would race for AGI, and/or be receptive to an argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but they have a lot to lose too, just like the West.

Currently, this is a pretty superficial impression of mine, so I don't think it would be fair to write an article yet. I need to do my homework first:
* I need to actually read Leopold's own writing about this, instead of forming impressions based on summaries of it.
* I've been recommended to look into what CSET and Brian Tse have written about China.
* Perhaps there are other things I should hear about this; feel free to make recommendations.

Alternatively, as always, I'd be really happy for someone who's already done the homework to write about this, particularly anyone with specific expertise in Chinese political culture or international relations. Even if I write the article, all it'll really be able to be is an appeal to listen to experts in the field, or for one or more of those experts to step forward and give us some principles to spread for how to think clearly and accurately about this topic. I think having even undergrad-level, textbook-mainstream summaries of China's political mission and beliefs posted on the Forum could end up being really valuable if it puts those ideas more in the cultural and intellectual background of AI safety people in general.

This seems like a really crucial question that inevitably takes a central role in our overall strategy, and Leopold's take isn't the only one I'm worried about. I think people are already pushing national security concerns about China to the US Government in an effort to push e.g. stronger cybersecurity controls or export controls on AI. I think that's a noble end, but if the China angle becomes inappropriately charged we're really risking causing more harm than good.

(For the avoidance of doubt, I think the Chinese government is inhumane, and that all undemocratic governments are fundamentally illegitimate. I think exporting democracy and freedom to the world is a good thing, so I'm not against cultural expansionism per se. Nevertheless, assuming China wants to do it when they don't could be a really serious mistake.)

Popular comments

Recent discussion


Disclaimer

I currently have a roughly 400-day streak on Manifold Markets (though lately I only spend a minute or two a day on it) and have no particular vendetta against it. I also use Metaculus. I’m reasonably well-ranked on both but have not been paid by either platform...

Continue reading

You're right, and there were anyone-created prediction markets before Manifold, like Augur. I misspoke. The real new, unintuitive thing was markets anyone could create and resolve themselves, rather than deferring to a central committee or court system. I think this level of self-sovereignty is genuinely hard to think of. It's not enough to be a crypto fan who likes cypherpunk vibes; one has to be the kind of person who thinks about free banking, or who gets the antifragile advantages that street merchants on rugs have over shopping malls.

although it's ... (read more)

Robin
By my accounts, you have implicitly agreed that all of 1-6 used to be issues, but 2-4 are currently not issues and 5 now needs the phrase "negative equity" deleted. I'm still making mana by reading the news, and I see that you've halved that claim. You're right that whalebait is less profitable, and I now need to actually search for free mana to find the free mana markets. The fact that I can still do this and then throw all my savings into it means that we should expect exponential growth of mana at some risk-free rate (depending on the saturation of these markets), which is then the comparison point for determining investment skill. In practice there are most likely better things to do with it, and also I can't be bothered.

I recognise the benefit of inflation in countering historic wealth inequality, and will remark that it's effectively a wealth tax. It unfortunately coincided with the other changes, which make it harder and less rewarding to donate and worsen the time-value problem, triggering my general disengagement with the site. I agree that loans never fixed this problem, but they mitigated it partially.

The difference between this and Metaculus sock puppets is that there's no reward for making them there. The virtual currencies can't be translated into real-world gain, and only one "reward" depends on other people, so making bad predictions with your sock puppets doesn't make you look that much better if people look at multiple metrics.

Similarly, by requiring currency to express a belief, Manifold structurally limits engagement on questions with no positive resolution possibility. It's cost-free to predict extinction on Metaculus, but on Manifold, even with perfect foresight (or the guarantee that the market will be N/A'd later), you still sacrifice the time value of your mana to warn people of a risk. This problem is unique to prediction markets: they make it costly (but potentially remunerated) to express your beliefs. The other problem
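To make that "comparison point" concrete, here is a minimal sketch (the rate and balances are hypothetical placeholders, not Manifold's actual numbers): compound a starting balance at the assumed risk-free "free mana" rate, and count only growth above that curve as evidence of investment skill.

```python
# Minimal sketch: a compound-growth baseline for judging trading skill.
# The daily_rate and balances below are hypothetical, not Manifold's actual numbers.

def risk_free_baseline(start_mana: float, daily_rate: float, days: int) -> float:
    """Mana you'd expect from only harvesting risk-free 'free mana' each day."""
    return start_mana * (1 + daily_rate) ** days

def excess_return(actual_mana: float, start_mana: float, daily_rate: float, days: int) -> float:
    """Growth attributable to skill: actual balance minus the risk-free baseline."""
    return actual_mana - risk_free_baseline(start_mana, daily_rate, days)

if __name__ == "__main__":
    baseline = risk_free_baseline(start_mana=1_000, daily_rate=0.002, days=365)
    print(f"Baseline after a year: {baseline:.0f} mana")
    print(f"Excess over baseline:  {excess_return(1_500, 1_000, 0.002, 365):.0f} mana")
```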
Ben Millwood
Yeah, the idea that self-resolution and insider trading don't require central regulation to manage does seem more like a novelty; that's fair.

This is a retrospective of the AIADM 2024 Conference, Retreat, and Co-working in London.

Tl;dr: ~130 people joined together over the span of three days to learn, connect, and make progress towards making AI safe for nonhumans.

Attendees from the onsite AI, Animals
...
Continue reading

Glad you enjoyed it and sad you weren't able to attend the retreat.

Tbh, I was also quite tired after EAG and skipped out on some after-conference events, which was quite suboptimal. Next year, I'm thinking about doing it before EAG and giving folks 1-2 days of rest before EAG starts.

Constance Li
Hey Austin, thanks for reading this so thoroughly, making the suggestion to put it up on Manifund, and generously offering to contribute to retroactive funding. This seems like a great idea and I just made a grant request page. :)
BrownHairedEevee
Thanks for everything you've done, Austin! I'm especially grateful to the Manifold community for having raised $1,203 for Shrimp Welfare Project (to date); it's been one of the most popular charities on the platform.

I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.

Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have...

Continue reading

I think there is a strong case for work on making deals with AIs and investigating what preferences AIs have (if any) as a way of mitigating AI takeover risk. Paying AIs to reveal their misalignment, and potentially to work for us and prevent AI takeover, seems like a very promising intervention.

This work is strongly connected to digital minds work.

Further, I think there is a substantial chance that AI moral patienthood becomes a huge issue in coming years, and thus it is good to ensure the field has better views and interventions.

Ryan Greenblatt
Some quick takes on this from me: I agree with 2 and 3, but it's worth noting that "post-AGI" might be "2 years after AGI, while there is a crazy singularity ongoing and vast numbers of digital minds". I think, as stated, (1) seems about 75% likely to me, which is not hugely reassuring. Further, I think there is a critical time you're not highlighting: a time when AGI exists but humans are still (potentially) in control and society looks similar to now.

This session’s exercise is about doing some personal reflection. There are no right or wrong answers here; instead, this is an opportunity for you to take some time and think about your ethical values and beliefs.

What does it mean to be a good ancestor? (10 mins.) ...

Continue reading

Who doesn't want to be a good ancestor, given the choice? Indeed, the answer would be yes for almost all of us. But what it means to be a good ancestor may be a vague idea to think about, as most of us don't want to think beyond a very predictable near future. Also, we are not trained to think along a distant future timeline. I think it requires practice, and the common person is not bothered to take on that extra stress and strain on their brain.

A few traits which I think are prerequisites for being a good ancestor:

  1. Well Informed - information guides and shape our th

... (read more)
Mjreard commented on Why I'm leaving

This is a story of growing apart.

I was excited when I first discovered Effective Altruism. A community that takes responsibility seriously, wants to help, and uses reason and science to do so efficiently. I saw impressive ideas and projects aimed at valuing, protecting,...

Continue reading

I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions. 

I think the value of posts principally derives from their saying something new and concrete, and this post failed to do that. Anonymity contributed to this, because at least knowing that person X with history Y held these views might have been new and useful.

freedomandutility
I think this is a really important point. My “public sphere” of EA has very little longtermism just because of who I happen to follow / what I happen to read.
GideonF
There seems to be this belief that arthropod welfare is some ridiculous idea only justified by extreme utilitarian calculations, and that loads of EA animal welfare money goes to it at the expense of many other things, and this just seems really wrong to me.

Firstly, arthropods hardly get any money at all; they are possibly the most neglected, and certainly amongst the most neglected, areas of animal welfare.

Secondly, the argument for arthropod welfare is essentially exactly the same as your classic antispeciesist arguments: there aren't morally relevant differences between arthropods and other animals that justify not equally considering their interests (or, if you want to be non-utilitarian, not equally considering them). Insects can feel pain (or certainly, the evidence is probably strong enough that they would pass the bar of sentience under UK law) and have other sentient experiences, so why would we not care about their welfare? Indeed, non-utilitarian philosophers also take this idea seriously: Christine Korsgaard, one of the most prominent Kantian philosophers today, sees insects as part of the circle of animals that are under moral consideration, and Nussbaum's capabilities approach is restricted to sentient animals, and I think we have good reason to think insects are sentient as well. Many insects seem to have potentially rich inner lives, and have things that go well and badly for them, things they strive to do, feelings of pain, etc. What principled reason could we give for their exclusion that wouldn't be objectionably speciesist?

Also, all arthropod welfare work at present is about farmed animals; those farmed animals just happen to be arthropods!

Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel like I come close to understanding all the implications, so I could end up...

Continue reading

He seems to believe:

  1. a relatively "low" x-risk from Superintelligence (his 2023 post gives 5%)
  2. that whoever controls large capital pre-Superintelligence can still use this capital post-ASI (he states so in the Dwarkesh interview)

Meta

This article should be accessible to AI non-experts, and it may turn out that AI experts already think like this, in which case it's mostly for non-experts. I'm not much of an "AI insider" as such, and as usual for me, I have weaknesses in literature search and familiarity with existing work. I'd appreciate comments about which of the points below have already been discussed, and especially which have already been refuted :)

Thanks to Egg Syntax, Nina Rimsky, and David Mears for some comments on the draft. Thanks Dane Sherburn for sending me a link to When discussing AI risks, talk about capabilities, not intelligence, which discusses similar themes.

I plan to post this to the EA Forum first, wait and see if people like it, and then, if they do, cross-post it to LW and/or the Alignment Forum.

Link preview image is this by Possessed Photography on Unsplash.

OK let's get to the point

I think people treat...

Continue reading

Summary

For pandemics that aren’t ‘stealth’ pandemics (particularly globally catastrophic pandemics):

  1. Reason 1: Not All 'Detections' Are Made Equal: there can be significant variation in the level of information and certainty provided by different detection modalities (e.
...
Continue reading

Excellent post; I did not read it carefully enough to evaluate many of the details, but these are all things we are concerned with at the Nucleic Acid Observatory, and I think your three "Reasons" are a great breakdown of the core issues.

JWS commented on Progress Studies Vs EA

Just thought I’d post this here to make sure that y’all see it:

https://asteriskmag.com/issues/06/the-ea-progress-studies-war-is-here-and-its-a-constructive-dialogue

It was also posted on Marginal Revolution, a rationalist-adjacent economics blog:

https://marginalrevolution...

Continue reading

I wish Clara had pushed Jason in this interview for more specific detail about what EA is and what his issues with it are. I think he's kind of attacking an enemy of his own making (linking @jasoncrawford so he can correct me). For example:

  • He presents a potted version of EA history, which Clara pushes back on/corrects, and Jason never acknowledges that. But to me he was using that timeline as part of the case for 'EA was good but has since gone off track' or 'EA has underlying epistemic problems which lead it to go off track'
  • He seems to connect EA
... (read more)