
Raemon


FYI, "fag" is a pretty central example of a slur in America, imo.

It gets used and normalized in some edgy subcultures, but I think that's sort of like how the n-word gets used in some subcultures. (When I was growing up, at least, it was probably in the top 5 'worst' words to say, at least weighted by 'anyone ever actually said them'.)

There’s also a thing where ‘retarded’ went from ‘not that bad’ to ‘particularly bad in some circles’, although I’m not sure how that played out since it was ‘after my time’.

All of this is sort of anti-inductive and evolving and makes sense to not be very obvious to a foreigner.

nggrzcgf

is... that rot13'd for a reason? (it seemed innocuous to me)

I work for Habryka, so my opinion here should be discounted. (For what it's worth, I think I have disagreed with some of his other comments this week, and I think your post did update me on some other things, which I'm planning to write up.) But re:

incorrectly predicted what journalists would think of your investigative process, after which we collaborated on a hypothetical to ask journalists, all of whom disagreed with your decision.

this seems egregiously inaccurate to me. Two of the three journalists said some flavor of "it's complicated" on the topic of whether to delay publication, for very similar reasons to what Habryka has mentioned. It seems like at best it was a wash, and it felt pretty weird in the OP that you wrote them up as if they supported your thesis.

What’s wrong with “make a specific targeted suggestion for a specific person to do the thing, with an argument for why this is better than whatever else the person is doing?”, like Linch suggests?

This can still be hard, but I think the difficulty lives in the territory, and is an achievable goal for someone who follows EA Forum and pays attention to what organizations do what.

It seemed useful to dig into "what actually are the useful takeaways here?", to try and prompt some more action-oriented discussion.

The particular problems Elizabeth is arguing for avoiding:

  • Active suppression of inconvenient questions
  • Ignore the arguments people are actually making
  • Frame control / strong implications not defended / fuzziness
  • Sound and fury, signifying no substantial disagreement
  • Bad sources, badly handled
  • Ignoring known falsehoods until they're a PR problem

I left off "Taxing Facebook" because it feels like the wrong name (since it's not really platform specific). I think the particular behavior she was commenting on there was something like "persistently bringing up your pet issue whenever a related topic comes up."

Many of the behaviors here are noteworthy in that a single instance of them isn't necessarily that bad. It can be reasonable to bring up your pet issue once or twice, but if there's a whole crowd of people who end up doing it every single time, it becomes costly enough to tax conversation and have systematic effects.

"Replying to an article as if it made a claim it didn't really make" is likewise something that's merely annoying if it comes up once, but adds up to a major systemic cost when either one person or a crowd of people do it over and over.

I'm not actually sure what to do about this, but it seemed like a useful frame for thinking about the problem.

Is your concrete suggestion/ask "get rid of the karma requirement?"

Quick note: I don't think there's anything wrong with asking "are you an English speaker?" for this reason; I'm just kinda surprised that that seemed like a crux in this particular case. Their argument seemed cogent, even if you disagreed with it.

The comments/arguments about the community health team mostly make me think something more like "it should change its name" than "it should be disbanded." I think it's good to have a default whisper network to report things to and surreptitiously check in with, even if they don't really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.

Just maintaining the network is probably a fair chunk of work.

That said – I think one problem is that the comm-health team has multiple roles. I'm honestly not sure I understand all the roles they consider themselves to have taken on. But it seems likely to me that at least some of those roles are "try to help individuals" and at least some of those roles are more like "protect the ecosystem as whole" and "protect the interests of CEA in particular", and those might come into conflict with the "help individuals" one. And it's hard to tell from the outside how those tradeoffs get made.

I know a person who maintained a whisper network in a local community, who I'd overall trust more than CEA in that role, because basically their only motivation was "I want to help my friends and have my community locally be safe." And in some sense this is more trustworthy than "also, I want to help the world as a whole flourish", because there's fewer ways for them to end up conflicted or weighing multiple tradeoffs.

But, I don't think the solution can necessarily be "well, give the Whisper Network Maintenance role to less ambitious people, so that their motives are pure", because, well, less ambitious people don't have as high a profile and a newcomer won't know where to find them. 

In my mind this adds up to "it makes sense for CEA to keep a public node of a whisper network running, but it should be clearer about its limitations, and they should be upfront that there are some limits on what people can/should trust/expect from it." (And, ideally, there should maybe be a couple of different overlapping networks, so in situations where people don't trust CEA, they have alternatives. I.e., Healthy Competition is good, etc.)

But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.

I think it's true that frame control (or, manipulation in general) tends to be designed to make it hard to expose, but, I think the actual issue here is more like "manipulation is generally harder to expose than it is to execute, so, people trying to expose manipulation have to do a lot of disproportionate work."

Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of "evaluations."

Like, it is pretty expensive to "vet" things. 

But if an org has practices that lead to people getting hurt (whether intentionally or not), and it's reasonably likely that those practices will eventually come to light, the org is more likely to proactively put effort into avoiding that sort of outcome.
