Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes
Animal Justice Appreciation Note: Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
Marcus Daniell appreciation note: @Marcus Daniell, co-founder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he wins; he has raised $28,000 this way so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
harfe · 4d
FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
An alternate stance on moderation (from @Habryka). This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here, in that (I guess) it moderates individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons. I found it thought-provoking and I'd recommend reading it.

> Thanks for making this post!
>
> One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is a pretty opposite approach to the EA Forum, which favours bans.

> Things that seem most important to bring up in terms of moderation philosophy:
>
> Moderation on LessWrong does not depend on effort
>
> "Another thing I've noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what's been written here, trying to apply Bayes' rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty."
>
> Just because someone is genuinely trying to contribute to LessWrong does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don't fit well within that culture and those standards.
>
> In making rate-limiting decisions like this I don't pay much attention to whether the user in question is "genuinely trying" to contribute to LW; I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing.
>
> Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren't even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.

I sense this is quite different to the EA Forum too. I can't imagine a mod saying "I don't pay much attention to whether the user in question is 'genuinely trying'". I find this honesty pretty stark; it feels like a thing moderators aren't allowed to say: "We don't like the quality of your comments and we don't think you can improve."

> Signal to Noise ratio is important
>
> Thomas and Elizabeth pointed this out already, but just because someone's comments don't seem actively bad doesn't mean I don't want to limit their ability to contribute. We do a lot of things on LW to improve the signal-to-noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the mean of what we remove looks not actively harmful.
>
> We of course also do other things than remove some of the lower-signal content to improve the signal-to-noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.
>
> Old users are owed explanations, new users are (mostly) not
>
> I think if you've been around for a while on LessWrong, and I decide to rate-limit you, then it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new, and haven't invested a lot in the site, then I think I owe you relatively little.
>
> I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent, or are users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don't think we owe them much of an explanation. LessWrong is a walled garden.
>
> You do not by default have the right to be here, and I don't want to, and cannot, accept the burden of explaining to everyone who wants to be here but whom I don't want here why I am making my decisions. As such, a moderation principle that we've been aspiring to for quite a while is to let new users know as early as possible if we think them being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don't invest in something that will end up being taken away from you.
>
> Feedback helps a bit, especially if you are young, but usually doesn't
>
> Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things.
>
> I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn't positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but alas it does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don't really think "give people specific and detailed feedback" is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.
>
> I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or are a non-native English speaker who is still learning the language. People really do get a lot better at communication between the ages of 14-22, and people's English does get substantially better over time, and this helps with all kinds of communication issues.

Again this is very blunt, but I'm not sure it's wrong.

> We consider legibility, but it's only a relatively small input into our moderation decisions
>
> It is valuable, and a precious public good, to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that, for example, modern courts aim to be legible.
>
> As such, we don't have laws. If anything we have something like case law, which gets established as individual moderation disputes arise and which we then use as guidelines for future decisions; but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
>
> I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming decisions in order to keep LessWrong the precious walled garden that it is.
>
> I try really hard not to build an ideological echo chamber
>
> When making moderation decisions, it's always at the top of my mind whether I am tempted to make a decision one way or another because they disagree with me on some object-level issue. I try pretty hard not to let that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me than for people who agree with me. I think this is reflected in the decisions above.
>
> I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to...
>
> * argue from authority,
> * don't like speaking in probabilistic terms,
> * aren't comfortable holding multiple conflicting models in your head at the same time,
> * or are averse to breaking things down into mechanistic and reductionist terms,
>
> then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn't exhaustive; in general the LW sequences are the best pointer towards the epistemological foundations of the site).

It feels cringe to read that, basically, if I don't get the Sequences LessWrong might rate-limit me. But it is good to be open about it. I don't think the EA Forum's core philosophy is as easily expressed.

> If you see me or other LW moderators fail to judge people on epistemological principles, but instead see us directly rate-limiting or banning users on the basis of object-level opinions that, even if they seem wrong, seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW as extending only to enforcing what seems to me the shared epistemological foundation of LW, not to enforcing my own object-level beliefs on the participants of this site.
>
> Now some more comments on the object level:
>
> I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site.
>
> Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).
>
> Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it's worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.
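The "1 comment per 3 days" rate limit quoted above is essentially a sliding-window counter. A minimal sketch of that mechanism (not LessWrong's actual implementation; the class name and parameter values are my own assumptions):

```python
import time
from collections import defaultdict, deque

class CommentRateLimiter:
    """Allow at most `limit` comments per `window_seconds` per user.
    Unlike a ban, a rate-limited user can still post, just less often."""

    def __init__(self, limit: int = 1, window_seconds: float = 3 * 24 * 3600):
        self.limit = limit
        self.window = window_seconds
        # user_id -> timestamps of their recent comments
        self.history = defaultdict(deque)

    def allow_comment(self, user_id: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        timestamps = self.history[user_id]
        # Drop timestamps that have fallen outside the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) < self.limit:
            timestamps.append(now)
            return True
        return False

# "1 comment per 3 days", as in the quoted example:
limiter = CommentRateLimiter(limit=1, window_seconds=3 * 24 * 3600)
assert limiter.allow_comment("alice", now=0.0)            # first comment allowed
assert not limiter.allow_comment("alice", now=3600.0)     # an hour later: limited
assert limiter.allow_comment("alice", now=4 * 24 * 3600)  # 4 days later: allowed
```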
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:

* Board swaps. Yeah, maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
* Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
* More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
* We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
* Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
* OpenPhil is hiring more internally.

Non-changes:

* Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see FTX and the OpenAI crisis).
* Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him.
* Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future.

Popular comments

Recent discussion

You can give me anonymous feedback about anything you want here.

Summary

  • Interventions in the effective altruism community are usually assessed under two different frameworks: existential risk mitigation and near-term welfare improvement.
    • It looks like 2 distinct
...

By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population.

I guess you are thinking that the period of 1 year I mention above is one over which there is a catastrophe, i.e. a large reduction in population. However, I meant a random unconditioned year. I have now updated "period of 1 year" to "any period of 1 year (e.g. a calendar year)". Population has been growing, so my ratio between the initial and final population will have a high chance of being lower than 1.
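To restate that in symbols (a sketch of my own, using the Pi and Pf notation for initial and final population that is confirmed further down):

```latex
% Ratio between the initial and final population over any period of 1 year
% (e.g. a calendar year):
R = \frac{P_i}{P_f}, \qquad
\begin{cases}
R < 1, & \text{typical year (population grew),}\\
R \gg 1, & \text{catastrophe (large population loss).}
\end{cases}
```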

Vasco Grilo · 1h
Hi @MichaelStJules, I am tagging you because I have updated the following sentence:

> If there is a period longer than 1 year over which population decreases, the power laws describing the ratio between the initial and final population of each of the years following the 1st could have different tail indices, with lower tail indices for years in which there is a larger population loss.

I do not think the duration of the period is too relevant for my overall point. For short and long catastrophes, I expect the PDF of the ratio between the initial and final population to decay faster than the benefits of saving a life grow, such that the expected value density of the cost-effectiveness decreases with the severity of the catastrophe (at least under my assumption that the cost to save a life does not depend on the severity of the catastrophe).

I see! Yes, both Pi and Pf are population sizes at a given point in time.
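A hedged sketch of why that follows, in symbols (my formalisation of the sentence above, not formulas from the post; the tail index α and benefit exponent β are assumed forms):

```latex
% If the ratio R = P_i / P_f has a power-law tail with tail index \alpha,
%   \Pr(R > r) \propto r^{-\alpha} \implies f(r) \propto r^{-(\alpha + 1)},
% and the benefits of saving a life grow with severity as B(r) \propto r^{\beta}
% while the cost to save a life is constant, then the expected value density
% of the cost-effectiveness at severity r scales as
\frac{B(r)\, f(r)}{\text{cost}} \propto r^{\beta - \alpha - 1},
% which decreases in r whenever \beta < \alpha + 1, i.e. whenever the PDF
% decays faster than the benefits grow.
```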
Vasco Grilo · 1h
To clarify, my estimates are supposed to account for unknown unknowns. Otherwise, they would be many orders of magnitude lower. I found the "Unfortunately" funny!

Makes sense. We may even have both cases in the same tail distribution. The tail distribution of annual war deaths as a fraction of the global population is characteristic of a power law from 0.001 % to 0.01 %, then it seems to have a dragon king from around 0.01 % to 0.1 %, and then it decreases much faster than predicted by a power law. Since the tail distribution can decay both slower and faster than a power law, I feel like this is still a decent assumption.

I agree we cannot rule out dragon kings (flatter sections of the tail distribution), but this is not enough for saving lives in catastrophes to be more valuable than in normal times. At least for annual war deaths as a fraction of the global population, the tail distribution still ends up decaying faster than a power law despite the presence of a dragon king, so the expected value density of the cost-effectiveness of saving lives is still lower for larger wars (at least given my assumption that the cost to save a life does not vary with the severity of the catastrophe). I concluded the same holds for the famine deaths caused by the climatic effects of nuclear war.

One could argue we should not only put decent weight on the existence of dragon kings, but also on the possibility that they will make the expected value density of saving lives higher than in normal times. However, this would be assuming the conclusion.
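A small numeric sketch of that check (illustrative, made-up numbers, not the actual war-deaths data; it assumes benefits of saving a life scale linearly with severity and a constant cost to save a life):

```python
import numpy as np

# Illustrative tail: power law at small severities, a flatter "dragon king"
# stretch in the middle, then faster-than-power-law (exponential) decay.
def tail_prob(s):
    """P(annual deaths as a fraction of global population > s). Made-up numbers."""
    if s <= 1e-4:
        return 1e-2 * (s / 1e-5) ** -1.5                       # power-law section
    if s <= 1e-3:
        return tail_prob(1e-4) * (s / 1e-4) ** -0.5            # dragon king: flatter
    return tail_prob(1e-3) * np.exp(-(s - 1e-3) / 1e-4)        # fast decay

s = np.logspace(-5, -2, 400)
T = np.array([tail_prob(x) for x in s])
pdf = -np.gradient(T, s)  # f(s) = -dT/ds

# Benefits assumed linear in severity s, cost constant, so EV density ~ s * f(s).
ev_density = s * pdf

# Despite the dragon-king bump, the EV density in the far tail is far below
# its value at small severities:
print(ev_density[0], ev_density[-1])
```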

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
JWS · 2h

Sorry Oli, but what is up with this (and your following) comment?

From what I've read from you,[1] you seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind, high-integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to toe the fine line between independence and managing university politics, but ... (read more)

Nisan · 12h
I'd love it if you could comment on which concrete actions were harmful. (I donated to CSER a long time ago and then didn't pay attention to what they were doing, so I'm curious.)
carrickflynn · 13h
Sean is one of the under-sung heroes who helped build FHI and kept it alive. He did this by--among other things--careful and difficult relationship management with the faculty. I had to engage in this work too, and it was less like being between a rock and a hard place and more like being between a belt grinder and another, bigger belt grinder.

One can disagree about apportioning the blame for this relationship--and in my mind, I divide it differently than Sean--but after his four years of first-hand experience, my response to Sean is to take his view seriously, listen, and consider it. (And to give it weight even against my 3.5 years of first-hand experience!)

As a tangent, respectfully listening to people's views and expressing gratitude--and avoiding unnecessary blame--was a core part of what allowed ops and admin staff to keep FHI alive for so long against hostile social dynamics. As per Anders' comment posted by Pablo here, it might be useful for extending EA's productive legacy as well.

Sean, thank you so much for all you did for FHI.

This might be one of the best pieces of introductory content to the concepts of effective giving that GWWC has produced in recent years!

I hit the streets of London to engage with everyday people about their views on charity, giving back, and where they thought they stood...


(I have not watched the video fully.) I agree with you.

Multiple things can be true at the same time:

  1. People who live in global poverty are very poor.
  2. Many people in developed countries are among the richest 1-10 percent globally and don't realize that they are comparatively rich.
  3. If these people donate a bit, they can help extremely poor people a lot.
  4. Living in relative poverty in rich countries is hard, even if people are globally "rich". (I don't have experience with that myself, but I have consumed a bit of media on relative poverty in my own and ne
... (read more)
yanni kyriacos posted a Quick Take 6h ago

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödrön, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

  • Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
  • Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:

  • Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
  • Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
  • Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy:

Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:

  • Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
  • Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
  • Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are...


Summary

  • I think the following will tend to maximise the cost-effectiveness of saving a human life:
    • Accounting solely for the benefits to the person saved, saving human lives in countries with low, but not too low, real gross domestic product (real GDP) per
...
nathan98000 · 9h
The concept of self-esteem has a somewhat checkered history in psychology. Here, an influential review paper finds that self-esteem leads people to speak up more in groups and to feel happier. But it fails to show consistent benefits in other areas of life, such as educational/occupational performance or reduced violence. And it may have detrimental effects, such as risky behavior in teens.

Thanks for the comment and the link to the review paper! 

I think most people, including researchers, don't have a good handle on what self-esteem is, or at least what truly raises or lowers it - I would expect the effect of praise to be weak, but the effect of promoting responsibility for one's emotions and actions to be strong. The closest to my views on self-esteem that I've found so far are those in N. Branden's "Six Pillars of Self-Esteem" - the six pillars are living consciously, self-acceptance, self-responsibility, self-assertiveness, living pu... (read more)

TL;DR

Healthier Hens (HH) aims to improve cage-free hen welfare, focusing on key issues such as keel bone fractures (KBFs). In the last 6 months, we’ve conducted a vet training in Kenya, found a 42% KBF prevalence, and are exploring alternative promising interventions in...


Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?

Feel free to!


Last updated: April 18, 2024

This is a reading list on the long reflection and the closely related, more recently coined notions of ASI governance, reflective governance, and grand challenges.

I claim that this area outscores regular AI safety on importance[...


I think collections like this are helpful, but it's misleading to say it presents the "frontier of publicly available knowledge."

Taking just the first section on moral truth as an example, it seems like a huge overstatement to say this collection of podcasts and forum posts gets people to the frontier of this subject. Philosophers have spent a long time on this, writing thousands of papers. And at a glance, it seems like all of OP's linked resources don't even intend to give an overview of the literature on meta-ethics. They instead present their own pers... (read more)

Open Philanthropy commissioned a report from Stefan Dercon on economic growth as the main driver of poverty reduction. In the report, Dercon highlights a set of overlooked policies that can help boost economic growth in developing countries, as well as key reasons ...


Thanks for sharing. I found it very interesting, and I thought the focus on elite incentives and creating pro-growth coalitions is very promising. Having said that, I am less convinced by some of the specific policies being highlighted.

One theme I noticed running through many of the proposed policies is strengthening the power of the state. I think in some cases this can make a lot of sense - maybe El Salvador's crackdown on the gangs is good at reducing violence - but I would have thought it was worthwhile to consider the downsides to this strategy. Gover... (read more)


My definition of “capitalism” is:

An economy with capital markets (in addition to markets in goods and services).

Most of my friends and acquaintances generally don’t have a precise definition of “capitalism”, but use the word to mean something like:

The

...

And OP discusses market socialist systems which allow capital markets but not private capital!

This isn’t a petty distinction. It allows the definer to claim all of the benefits of markets and dodge the more negative effects of private ownership, casting centralised price controls as inherent to anti-capitalist systems. And in the worst cases (not here) it allows people to motte-and-bailey their way out of the devastating effects of wealth inequality by claiming that ‘capitalism’ actually just means markets.

I mention all this because I see this definition a... (read more)