

The scientific proposition is "are there racial genetic differences related to intelligence", right, not "is racism [morally] right"?

I find it odd how much such things seem to be conflated; if I learned that Jews have an IQ an average of 5 points lower than non-Jews, I would... still think the Holocaust and violence towards and harassment of Jews were abhorrent and horrible? I don't think I'd update much, or at all, towards thinking they were less horrible. Or if you could visually identify people whose mothers had drunk alcohol during pregnancy, and they were statistically a bit less intelligent (as I understand them to be), enslaving them, genociding them, or subjecting them to Jim Crow-style laws would seem approximately as bad as doing the same to some group that's slightly more intelligent on average.

I agree with:

> if you want to make a widget that's 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.

> if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including being accurate about things that are controversial.

> many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.

A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. "finding EA cause X" or "thinking about whether newly invented systems of government will work well". This seems more true if I imagine my AI alignment researcher as someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic, more likely to require correct contrarianism in a variety of domains. But I think most AI alignment researchers are more in the former category, and will be increasingly so.

Two points: 

(1) I don't think "we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits" or "all drugs should be legal and ideally available for free from the state" are the most popular political positions in the US, nor close to them, even for D-voters.

(2) Your original question was about supporting things (e.g. Lysenkoism) and publicly associating with them, not about what people "genuinely believe".

But yes, per my earlier point: if you told me, for example, "there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let's say virology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a virology-related breakthrough the most", I would say the difference between them is small, i.e. these differences are only weakly correlated with the odds of their lab making a breakthrough and don't have much explanatory power. And, assuming you meant "support" rather than "genuinely believe", and cutting the two bullets I claim aren't even majority positions among, for example, D-voters, I'd say B>A>C, but barely.

[not trying to take a position on the whole issue at hand in this post here] I think I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical seeming one who didn't. And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists. Claims without justification, mostly because I find it helpful to articulate my beliefs aloud for myself: 

  • I don't think people generally having correct beliefs on irrelevant social issues is very correlated with having correct beliefs on their area of expertise
  • I think in most cases, having unpopular and unconventional beliefs is wrong (most contrarians are not correct contrarians) 
  • A bunch of unpopular and unconventional things are true, so to be maximally correct you have to be a correct contrarian
  • Some people aren't really able to entertain unpopular and unconventional ideas at all, which is very anticorrelated with the ability to have important insights and make huge contributions to a field 
  • But lots of people have very domain-specific ability to have unpopular and unconventional ideas while not having/not trusting/not saying those ideas in other domains.
  • A large subset of the above are both top-tier in terms of ground-breaking insights in their domain of expertise, and put off by groups that are maximally open to unpopular and unconventional beliefs (which are often shitty and costly to associate with)
  • I think people who are top-tier in terms of ability to have ground-breaking insights in their domain disproportionately like discussing unpopular and unconventional beliefs from many different domains, but I don't know if, among people who are top-tier in terms of ground-breaking insights in a given domain, the majority prefer to be in more or less domain-agnostically-edgy crowds. 

(1) I agree that if your timelines are super short, like <2 years, it's probably not worth it. I have a bunch of probability mass on longer timelines, though some on really short ones.

Re (2), my sense is some employees have already had some of this effect (and many don't, but some do). I think board members are terrible candidates for changing org culture: they have unrelated full-time jobs, they don't work from the office, they have different backgrounds, and most people don't have cause to interact with them much. People who are full-time, who work together with people all day every day and know the context, etc., seem more likely to be effective at this (and indeed, I think they have been, to some extent, in some cases).

Re (3), it seems like a bunch of OAI people have blown the whistle on bad behavior already, so the track record is pretty great, and I think their doing that has been super valuable. And one whistleblower seems to do much more good than several converts do harm. I agree it can be terrible for some people's mental health, and people should take care of themselves.

Re (4), um, this is the EA Forum; we care about how good the money is. Besides crypto, I don't think there are many ways for many of the relevant people to make similar amounts of money on similar timeframes. Actually, I think working at a lab early on was an effective way to make money. A bunch of safety-concerned people, for example, have equity worth several million to tens of millions of dollars, more than I think they could have easily earned elsewhere, and some are now billionaires on paper. And if AI has the transformative impact on the economy we expect, that equity could be worth far more (and its being worth more is correlated with it being needed more, so it's extra valuable); we are talking about the most valuable/powerful industry the world has ever known here, hard to beat that for making money. I don't think that makes it okay to lead large AI labs, but for joining early, especially doing capabilities work that doesn't push the riskiest capabilities along much, I don't think it's obvious.

I agree that there are various risks related to staying too long, rationalizing, being greedy, etc., and in most cases I wouldn't advise a safety-concerned person to do capabilities work. But I think you're being substantially too intense about the risk of speeding up AI relative to the benefits of seeing what's happening on the inside, which seem like they've already been very substantial.

Yes. I think most people working on capabilities at leading labs are confused or callous (or something similar, like greedy or delusional), but definitely not all. And personally, I very much hope there are many safety-concerned people working on capabilities at big labs, and am concerned about the most safety-concerned people feeling the most pressure to leave, leading to evaporative cooling.

Reasons to work on capabilities at a large lab:

  • To build career capital of the kind that will allow you to have a positive impact later. E.g. to be offered relevant positions in government
  • To positively influence the culture of capabilities teams or leadership at labs. 
  • To be willing and able to whistleblow bad situations (e.g. seeing emerging dangerous capabilities in new models, the non-disparagement stuff). 
  • [maybe] to earn to give (especially if you don't think you're contributing to core capabilities)

To be clear, I expect achieving the above to be infeasible for most people, and it's important for people to not delude themselves into thinking they're having a positive impact to keep enjoying a lucrative, exciting job. But I definitely think there are people for whom the above is feasible and extremely important. 

Another way to phrase the question is "is it good for all safety-concerned people to shun capabilities teams, given (as seems to be the case) that those teams will continue to exist and make progress by default?" And for me the strong answer is "no". Which is totally consistent with wanting labs to pause and thinking that just contributing to capabilities (on frontier models) is in expectation extremely destructive.

I'm confused by this post. Sam Altman isn't an EA, afaik, and hasn't claimed to be, and afaik no relatively in-the-know EAs thought he was, or even in recent years thought he was particularly trustworthy, though I'd agree that many have updated negatively over the last year or two.

> But a substantial number of EAs spent the next couple of weeks or months making excuses not to call a spade a spade, or an amoral serial liar an amoral serial liar. This continued even after we knew he'd A) committed massive fraud, B) used that money to buy himself a $222 million house, and C) referred to ethics as a "dumb reputation game" in an interview with Kelsey Piper.
>
> This wasn't because they thought the fraud was good; everyone was clear that SBF was very bad. It's because a surprisingly big number of people can't identify a psychopath. I'd like to offer a lesson on how to tell. If someone walks up to you and says "I'm a psychopath", they're probably a psychopath.

Very few EAs that I know did that (I'd like to see stats; of the dozens of EAs I know, none publicly or to my knowledge did such a thing, except, if I remember right, Austin Chen in an article I now can't find). And for people who did defend Sam, I don't know why you'd assume the issue is them not being able to identify psychopaths, as opposed to being confused about the crimes SBF committed and believing they were the result of a misunderstanding or something like that.

I think the Ibrahim Prize was created partly to "bribe" (incentivize) heads of state in Africa into being good leaders and respecting term limits. IIRC it's the biggest prize for an individual in the world.

Sounds like it's also below the Zurich minimum wage (not totally sure if that minimum wage is currently in effect) and similar to the London "living wage" (which isn't a legal requirement).

I currently work at a large EA-ish org that allows me to fully expense EAG travel, and I (like some of the other commenters) am pretty strongly in the "prefer hub" camp. Like lots of EAs, I try to intensely optimize my time, and I'd prefer to optimize for work and play separately (so I would prefer to focus on work when going to EAGs, then separately take vacations optimized for being fun for me, e.g. by being in a place that's a great fit for me and my primary partner). I am happy to travel occasionally if there's a strong impact justification, but I don't want CEA trying to influence me to travel for fun at a location and time it picks. In my experience, EAs in general are more intense about their time and possibly less into travel than most people in academia.

Even if you assume everyone would go, I don't think it's a clear win. I think a lot of professionals in the space place a lot of value on an hour of their labor; if they value it at $100/hour (i.e. equivalent to $200k/year in donations), and you make them travel e.g. 12 hours roundtrip to get to a conference location, and that affects 300 attendees who would otherwise have had reasonable in-city daily commutes, that's $360k-equivalent added (though in reality I agree many just wouldn't go, and some would also do the vacation thing so this would funge against hours they'd spend traveling for vacation anyway). Then additionally, you have EA orgs paying the travel costs themselves, which maybe looks better for CEA but is the same to EA funders (though maybe some people can also expense it to non-EA orgs?). If the orgs are paying $1000 per person (let's say $400 on travel, $450 for 3 nights of hotel rooms at $150/night, the rest for meals and other incidental expenses) and 300 more people need to travel than otherwise would if the EAG were in a hub area, that's another $300k.
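The back-of-the-envelope numbers above can be reproduced in a quick sketch (all figures are the illustrative assumptions from the comment, not real data):

```python
# Rough cost comparison for holding EAG outside a hub city.
# Every number here is an illustrative assumption, not a measured figure.

hourly_value = 100        # $/hour attendees are assumed to value their time at
extra_travel_hours = 12   # extra round-trip travel hours for a non-hub location
extra_travelers = 300     # attendees who would otherwise have short commutes

# Time cost of the added travel, in donation-equivalent dollars
time_cost = hourly_value * extra_travel_hours * extra_travelers
print(f"Time cost: ${time_cost:,}")

# Direct expenses per extra traveler: flights, 3 hotel nights, the rest for
# meals and incidentals, summing to the $1000/person assumption
travel = 400
hotel = 3 * 150
meals_etc = 1000 - travel - hotel
per_person = travel + hotel + meals_etc
direct_cost = per_person * extra_travelers
print(f"Direct cost: ${direct_cost:,}")

print(f"Total added cost: ${time_cost + direct_cost:,}")
```

Under these assumptions the added cost comes to $360k in time plus $300k in direct expenses, before accounting for the offsets mentioned (people who wouldn't have gone anyway, vacation funging, lower hotel prices in cheaper cities).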

Also, CEA staff probably benefit from specialist knowledge of cities they often run EAGs in, so either they are stuck in the same non-hub city repeatedly, or they probably suffer costs of trying to run conferences in cities they aren't used to. 

These costs would be partly counterbalanced, in addition to the event being less expensive for CEA, by it being less expensive for the people who would need to travel either way to get to the EAG (lower hotel and meal costs in lower cost-of-living cities), so I don't think it's an obvious call.
