That 80% of people were already donating in one way or another surprised me as I read it - kind of cool to hear. Do you think there's merit, for people delivering similar talks/initiatives in their own corporate environments, in asking that question really early on and having different versions of the content they could lean towards depending on the responses?
I massively agree with this: if people are still determined to work within OpenAI, they should probably have a proven track record of strong emotional and moral resilience within hostile organisations/settings, and strong enough interpersonal skills to build internal networks of aligned parties willing to speak up at key times.
I mean this with genuine care, but most machine learning engineers (and comp sci folks in general) do not fit this bill. I realise that's a generalisation, but it should be somewhat self-evident that the type of person who has spent most of their life in academia, enjoys hours of solo computer time and loves complex mathematics probably isn't simultaneously a charismatic, relationally dynamic political operator.
Commitments eroding under pressure strikes me as such a clear gap in the thinking around AI; we assume/hope that governance proposals, even those that make it into law, are actually going to be followed, measured or evaluated accurately. Yet how much leverage really exists over multi-billion-dollar conglomerates and/or authoritarian-leaning governments who are willing to bend or ignore rules as they see fit?
Somewhere between this, the lawsuits, Gulf state funding and partnering with Palantir, one surely has to wonder whether Anthropic are still 'the good guys of AI'.
Ooo interesting - this comment got almost exclusively agreement 2-3 days ago, then almost exclusively disagreement after Anthropic's response to Hegseth. I guess that response is a potentially promising indicator; that is, Anthropic does have some hard limits. I do wonder, though, if it is a strong enough signal to negate the others.
"You stepped on a nematode"