
One of the benefits of the EA community is that it acts as a social technology in which altruistic actions are high status: earning-to-give, pledging, and not eating animals are all venerated to varying degrees.

Pledgers have coordinated to add the orange square emoji to their EA Forum profile names (and sometimes to their Twitter bios). I like this, as it both helps create an environment where one might sometimes be forced to think "wow, lots of pledgers here, should I be doing that too?" and singles out those deserving of our respect.

Part of me wonders if 'we' should go further in leveraging this: bestowing small status markers on those who make a particularly altruistic sacrifice.

Unfortunately, there is no kidney emoji, so perhaps those who donate a kidney will need to settle for the kidney bean emoji (🫘). This might seem ridiculous (I am half joking about the kidney beans), but creating neat little ways for those who behave altruistically to reap the status reward might ever so slightly encourage others to collect on the bounty (i.e. donate a kidney or save a drowning child), as well as reward those who have already done the good thing.

Austin Chen (co-founder of Manifold) shared some thoughts on the EA community during a recent interview with a former EA [see transcript here].

Austin Chen: Whether SBF was committing fraud or not, or something like that: that is how I thought of it at the time and still think of it now. And to unpack that a bit more, I thought, from a decision theory standpoint, how should the EA movement itself try and learn from something like this, or plan on moving forward? And my assessment at the time was that, probably in response to this, people would be a lot more scared, a lot more like the weak Doge image (there's the Buff Shiba and the weak Shiba), and I think the FTX glory days were like the strong "we're gonna do everything, we're gonna save the world." And I thought at the time, and also think now, that the way EA has drifted is more towards the "oh, we're not gonna do a PR pitch, we're not going to try and make waves, we're gonna protect our reputation." I viewed that as a huge error at the time, and still do now.

[00:26:01] Austin Chen: And then another thing that was just an interesting decision theory thing: I think if you abandon people who have, as I viewed it, earnestly tried to support you and your movement, if you just turn tail on them and say, "oh yeah, no, actually SBF was not acting according to our principles, we do not endorse this, and we give up on all of the stuff that he did,"

[00:26:25] Austin Chen: then when some future billionaire is looking at your movement, what will they think? And maybe that's the most winning-ish mindset: you're trying to engage with this not just on the merits right here, but on how this will affect future game-theoretic decisions about who works with you.

I think it is good to deter unscrupulous ultra-high-net-worth individuals from engaging with EA. I think it was good that various 'EA thought leaders' came out as they did and said something along the lines of "we do not endorse this. Don't commit fraud."

I'm not entirely sure what Chen thinks the alternative could or should have been. Defend SBF's motives? Say nothing? Either of those approaches seems like the kind of optics-maxing he critiques as a 'huge error' (only instead of aiming to 'broadly protect reputation', it's 'protect reputation in the eyes of billionaires').

It's worth pointing out that Chen has a slightly more sympathetic view of Sam than I do:

[00:27:08] Austin Chen: ... if I feel like, in cases when things go badly, they will just not stand by me.

[00:27:14] Elizabeth: Things didn't just go badly. He did them badly and hurt a lot of people, which was not known at the time, I think.

[00:27:23] Austin Chen: I don't know, it's still debatable. I don't know how much we should go into the object level here, but everyone made their money back. Yes, some things broke well for that to be able to happen, but all the people got 120% of the value of the crypto holdings at the time, which is not...

Tagging @Austin, since his comments are the main focus of this quick take.

Richard Ngo has a selection of open questions in his recent post. One question that caught my eye:

How much censorship is the EA forum doing (e.g. of thought experiments) and why?

I originally created this account to share a thought experiment I suspected might be a little too 'out there' for the moderation team. Indeed, it was briefly held back and didn't appear in the comment section for a while (it does now). It was, admittedly, a slightly confrontational point, and I don't begrudge the moderation team for censoring it. They were patient and transparent in explaining why it was briefly held back. You can read the comment and probably guess correctly why it was flagged.

Still, I am curious to hear of other cases like this. My guess is that in most of them, the average forum reader would side with the moderation team.

LessWrong publishes most of their rejected posts and comments on a separate webpage. I say 'most' as I suspect infohazards are censored from that list. I would be interested to hear the EA Forum moderation team's thoughts on this approach, and whether it's something they've considered, should they read this and have time to respond.[1]

  1. ^

    Creating such a page would also allow them to collect on Ngo's bounty, since it would answer both how much censorship they do and (assuming they attach moderation notes) why.

Hi! I just want to start by clarifying that a user’s first post/comment doesn’t go up immediately; it is held while our facilitators/moderators check for spam or a clear norm violation (such as posting flame bait or clear trolling). Ideally this process takes no more than a day, though we currently don’t have anyone checking new users outside of approximately US Eastern Time business hours.

However, some content (like your first comment) requires additional back-and-forth internally (such as checking with moderators) and/or with the new user. This process involves various non-obvious judgement calls, which is what caused the long delay between your submitting the comment and us reaching out to you (plus the fact that many people were out over the winter holidays). In the case of your comment, we asked you to edit it, and you didn’t respond to us or edit the comment for over a week; then our facilitator felt bad for keeping you in the queue for so long, so they approved your comment.

We currently do not use the rejected content feature that LW uses. Instead, almost all[1] of the content that would have been rejected under their system ends up appearing on the rest of our site, and we currently rely mostly on users voting to make content more or less visible (for example, karma affects where a post is displayed on the Frontpage). I plan to seriously consider soon whether we should start using the rejected content feature here; if so, then I expect that we’ll have the same page set up.

I think that, if we had been using the rejected content feature, the right move would have been for us to reject your comment instead of approving it.

  1. ^

    My guess is that there are edge cases, but in practice we keep our queue clear, so my understanding is that users are typically not in limbo for more than a few days. Things like spam are not rejected — accounts that post spam are banned.

Hello!

Thanks for taking the time to respond thoroughly! I sincerely appreciate that. 

I can't quite remember when I read the message from the facilitator, but my memory is that it was after the comment was restored (feel free to check on your end, if that's possible). I was slightly bummed out that a comment which took some effort to write was rejected, and I wasn't super motivated to respond in its defence.

At the time, I was aware that the metaphor was abrasive, but hoped I had sanded off the edges by adding a disclaimer at the start. It can be difficult to balance 'writing the thing I honestly believe' with 'not upsetting anybody or making them uncomfortable 100% of the time when discussing moral issues.' I did hum and haw over whether I should post it, but ultimately decided that most people wouldn't be upset by the metaphor, or would even agree with its accuracy (given that the meat and dairy industries are both rife with animal sexual abuse). Seeing as it was interpreted as flame bait/trolling, I somewhat regret posting it.

Also: am I able to ask why you would have rejected it? I.e., do you believe I was trolling or flame baiting? I won't be insulted either way, but would find it useful to know, going forward, how I should better write my comments.

Two final notes:

• I am pleased to hear you are considering a rejected content feature. 

• I used the word 'censorship' in my original quick take and want to underscore that I don't think it's intrinsically bad to censor. I.e., the moderation team should be doing some level of censorship (and I suspect most forum users would agree).
