Today Sundar Pichai, CEO of Google, announced the merger of Google's two AI teams (Brain and DeepMind) into Google DeepMind. Some quotes:
"Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI."
"...our most critical and strategic technical projects related to AI, the first of which will be a series of powerful, multimodal AI models."
(I'll let you draw your own conclusions, and I'll share mine in a comment.)
Another possibility is that they actually think anything smart enough to be existentially dangerous is still a long way off, and that statements seeming to imply the contrary are really a kind of disguised commercial hype.
Or they might think that safety is relatively easy: so long as you care about it a decent amount and take reasonable known precautions, you're effectively guaranteed to be fine. I.e., the risk is under 0.01%, not 10%. (Yes, that is probably still bad on expected-value grounds, but most people don't think that way; and on person-affecting views, where transformative AI would massively boost lifespans, it might be a deal most people would take.)