Nick K.

It's interesting to claim that money stops being an incentive for people after a certain fixed amount well below $1 million/year.

Where is this claim being made? I think the suggestion was that someone found it desirable to reduce the financial incentive gradient for EY taking any particular public stance, not anything like the vastly general statement you're suggesting.

Thanks for this comment!
I think your arguments about your own motivated reasoning are somewhat moot, since they read more as an explanation that your behavior and public-facing communication aren't outright deception (which seems right!). As I see it, motivated reasoning is to a large extent about deceiving yourself and maintaining a coherent self-narrative, so it's perfectly plausible that one is willing to pay a substantial cost to maintain it. (Speaking generally; I'm not very interested in discussing whether you're doing it in particular.)

I think this misses the point: the financial gain comes from being central to ideas around AI in the first place. Given that baseline, being on the doomer side tends to carry a huge financial opportunity cost.
At the very least it's unclear, and I think anyone claiming that being a doomer is financially profitable should make a strong argument for it.

We should stick to the original point that raised the question about the salary:

  • Is $600K a lot of money for most people, and does EY hurt his cause by accepting this much? (Perhaps, but that wasn't the original issue.)
  • Does EY earning $600K mean he benefits substantially from maintaining his position on AI safety? E.g., if he were more pro-AI-development, would that hurt him financially? (Very unlikely IMO, and that was the context Thomas was responding to.)

To entertain the counterfactual, you could imagine a Yudkowsky endorsement (say, with the narrative that Zuck talked to him, admitted he went about it all wrong, and is finally taking the issue seriously) raising Meta AI from "nobody serious wants to work there, and they can only get talent by paying exorbitant prices" to "they finally have access to serious talent and can reach a critical mass of people doing serious work". That would arguably be more valuable than whatever they're doing now.

I think your answer to the question of how much an endorsement would be worth mostly depends on specific intuitions that I imagine Kulveit holds for good reasons but most people don't, so it's a bit hard to argue about. It also doesn't help that in every case other than Anthropic, and maybe DeepMind, it would require some weird hypotheticals to even entertain the possibility.

This doesn't seem like a reasonable way to operationalize it. An endorsement would create much less value for the company if it were clear that the endorser was being paid for it. And I highly doubt Amodei would be in a position to admit that they'd want such an endorsement, even if it did benefit them.

I mentioned Karpathy only as someone reasonable who repeatedly points out the lack of online learning and seems to have (somewhat) longer timelines because of it. This is based solely on my general impression. I agree the stated probabilities seem wildly overconfident.

I agree that comment may go too far in claiming "bad faith", but the article does have a pretty tedious undertone of having found some crazy gotcha that everyone else is ignoring. (I'd agree that it gets at a crux, and that some reasonable people, e.g. Karpathy, would align more with the OP here.)

What have they done, or what are they planning to do, that seems worth supporting?
