huw

1170 karmaJoined Working (0-5 years)Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments
164

It seems like some of the biggest proponents of SB 1047 are Hollywood actors & writers (ex. Mark Ruffalo)—you might remember them from last year’s strike.

I think that the AI Safety movement has a big opportunity to partner with organised labour the way the animal welfare side of EA partnered with vegans. These are massive organisations with a lot of weight and mainstream power; if we can find ways to work with them, it's a big shortcut to building serious groundswell rather than going it alone.

See also Yanni’s work with voice actors in Australia—more of this!

Just to home in on a single point—I have found the 'EA fundamentally depends on uncomfortable conversations' point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend—for example, most people here don't want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming.

When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or around how far we should expand our moral circles. I think EA would have survived broadly intact if other kinds of discomfort had been lightly moderated (it may even have expanded).

I'm not keen to take a stance on whether this post should or shouldn't be allowed on the forum, but I am curious to hear if and where you would draw this line :)

I downvoted this post, so I want to explain why. I don't think this post actually adds much to the forum, or to EA more generally. You have mostly just found a strawperson to beat up on, and I don't think many of your rebuttals are high quality, nor do they engage with her in good faith (to use a rat term I loathe, you are in 'soldier mindset').

I can't really see a benefit to doing so; demarcating our 'opponents' only serves to cut us off from them, and to make us 'intellectually incurious' about why they might feel that way or how we might change their minds. This does, over time, make things harder for us—funders start turning their noses up at EAs, policymakers don't want to listen, influential people in industry can write us off as unserious.

There are numerous other potential versions of this post. It could have been a thought-provoking critique of Peter Singer for engaging in debate theatre. It could have tried to steelperson her arguments. It could have even tried to trace the intellectual lineage of those arguments to understand why she has ended up with this particular inconsistent set of them! All of those would have been useful for understanding why people hate us, and how we can make them hate us less. I am not a fan of this trend of cheerleading against our haters, and I worry about the consequences of the broader environment it has fostered and continues to foster :(

I guess one thing worth noting here is that they raised from a16z, whose leaders are notoriously critical of AI safety. Not sure how they square that circle, but I doubt it involves their investors having changed their perspectives on that issue.

I do wonder if the naïveté with which the OpenAI board coup was approached is a result of this. It did not sound like something organised by people who were used to operating in a highly political, cut-throat environment, and they seemed surprised when it turned out that that was exactly the environment they were in.

Yeah, when I was younger I successfully represented ~100 of my colleagues in an informal pay dispute at a software company. It's really, really hard to prove retaliation, but I found myself on the receiving end of a few very intimidating meetings with HR over trivial internal comments (where, when other people had made similar comments, they had not received this treatment). I was also told that there wasn't enough budget to promote me or even give me 'exceeds expectations' in my performance reviews, when colleagues in other teams had no issue. Even if this wasn't retaliation, speaking out gave me a paranoia that lasted the remainder of my time there and led me to hold my tongue in the future.

I'm the sort of person that tries to stick up for people when I see them getting fucked over, and perhaps the average EA also has this strength of will. But I agree with Yanni that whether this 'infiltration' approach works depends on it being one of your primary goals in joining the company, and on having a personality with very strong will & resilience. I don't think that it works as a nice side-effect or valuable bonus in someone's personal calculation to join such a firm.

One factor missing from this post is the distribution of skill. Attracting the most skilled locals away from your country is likely to cost it almost all of the individuals who are orders of magnitude more valuable than average. I am no fan of Great Person Theory, but categorically removing an entire band of workers from the country during their prime productive years all but ensures that the benefits of their work don't accrue back home. Additionally, the top people in a given career would presumably be there anyway, and the actual counterfactual increases are in the lower tail of the skill distribution; this is borne out by the Filipino nurses example you cite, which notes that average nurse quality declined.

Consider the Indian software industry that you mention. How much stronger would it have been if the likes of Satya Nadella and Sundar Pichai had not found it overwhelmingly valuable to migrate to the U.S.? Not just them, but nearly the entire top X% of their graduating classes? If they had started businesses there, what proportion of the Indian software industry would be working for local companies instead of sending their surplus value overseas? How much more tax revenue would their governments raise?

(Also, it would be remiss not to mention that Nadella and Pichai probably don't send 10–15% of their salary back to India; and in fact, if they were interested in developing the Indian software economy, they'd choose to pay a bit more than merely ~20% of the U.S. compensation for the same graduate-level jobs)

If, as you note, the brain drain calculus depends on remittances, increased workforce, returning home, and broader contributions to or investment in the local economy, the impact of losing the top X% of skilled individuals may very well tip those scales. So I think it's unreasonable to claim that the policy concern is "completely misplaced" and that countries should, as a rule, encourage their skilled people to seek the best opportunities—this is very strong language!

What do you think about candidates who might not be 'culturally EA' or come from an EA background (i.e. know what EA is, have previous affiliation, consume EA content), but who would otherwise be good at running a cost-effective charity? (ex. How important is it to have them? How upset would you be if you got a cohort of 100% culturally EA people? Do you worry if the recruitment process selects against them?)
