"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Beyond this statement, co-founder Daniel Gross's views on AI safety are unclear at best. A year ago he wrote the essay "The Climate Justice of AI Safety", and he's appeared on the Stratechery podcast a few times and spoken about AI safety once or twice. In this space, he's best known as an investor, including in Leopold Aschenbrenner's fund.
I think it would be good for Daniel Gross and Daniel Levy to clarify their positions on AI safety, and what exactly 'commercial pressures' means here (do they only care about short-term pressure, and do they intend to profit immensely from AGI?).
(Disclosure: I received a ~$10k AI-related grant from Daniel in 2019.)