Today Sundar Pichai, CEO of Google, announced the merger of Google's two AI teams (Brain and DeepMind), into Google DeepMind. Some quotes:

"Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI."

"...our most critical and strategic technical projects related to AI, the first of which will be a series of powerful, multimodal AI models."

(I'll let you draw your own conclusions/opinions, and share mine in a comment.)


 


Here's Demis' announcement.

  • "Now, we live in a time in which AI research and technology is advancing exponentially."
  • "We announced some changes that will accelerate our progress in AI."
  • "By creating Google DeepMind, I believe we can get to that future faster."
  • "safely and responsibly"
  • "safely and responsibly"
  • "in a bold and responsible way"

[1] To be fair, it's hard to infer underlying reality from PR-speak. I too would want to be put in charge of one of the biggest AI research labs if I thought that research lab was going to exist anyway. But his emphasis on "faster" and "accelerate" does make me uncertain about how concerned with safety he is.

Yes, to some extent there is the thought "this will exist anyway" and "we're in a race that I can't stop", but at some point someone very high up needs to turn their steering wheel. They say they are worried, but actions speak louder than words. Take the financial/legal/reputational hit and just quit! Make a big public show of it. Pull us back from the brink.

Or maybe I'm looking at it the wrong way. Maybe they think x-risk is "only" 10% likely, and they are willing to gamble the lives of 8 billion people for a shot at utopia that is 90% likely to succeed. In which case, I think they should be shut down, immediately. Where is their democratic mandate to do that!?

Another option is that they actually think that anything smart enough to be existentially dangerous is still a long way away, and statements that seem to imply the contrary are actually a kind of disguised commercial hype. 

Or they might think that safety is relatively easy, and that so long as you care about it a decent amount and take reasonable known precautions, you're effectively guaranteed to be fine. I.e. risk is under 0.01%, not 10%. (Yes, that is probably still bad on expected value grounds, but most people don't think like that, and on person-affecting views where transformative AI would massively boost lifespans, it might actually be a deal most people would take.)
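To make the expected-value point concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the original comment) contrasting the two risk estimates discussed above, using a rough world population of 8 billion:

```python
POPULATION = 8_000_000_000  # rough world population, as in the comment above

def expected_deaths(p_doom: float, population: int = POPULATION) -> float:
    """Expected lives lost if extinction occurs with probability p_doom."""
    return p_doom * population

# The two estimates contrasted in the thread:
high_risk = expected_deaths(0.10)    # 10% x-risk
low_risk = expected_deaths(0.0001)   # 0.01% x-risk
print(f"{high_risk:,.0f} vs {low_risk:,.0f} expected deaths")
```

Even the "safety is easy" 0.01% estimate still implies hundreds of thousands of expected deaths, which is why the comment concedes it is "probably still bad on expected value grounds."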

> anything smart enough to be existentially dangerous is still a long way away

I don't think this is really a tenable position any more, post GPT-4 and AutoGPT. See e.g. Connor Leahy explaining that LLMs are basically "general cognition engines" and will scale to full AGI in a generation or two (and with the addition of various plugins etc. to aid "System 2"-type thinking, which are now freely being offered by the AutoGPT enthusiasts and OpenAI). If this isn't clear now, it will be in a few months once Google DeepMind releases the next version of its multimodal (text, images, video, robotics) AI.

Some experts still seem to hold it, e.g. Yann LeCun: https://twitter.com/ylecun/status/1621805604900585472  Whether or not they in fact have good reason to think this, it's surely evidence that people at DeepMind could be thinking this way too.

I think multimodal models kind of make his points about text moot. GPT-4 is already text + images (making "LLM" a misnomer).

I'm looking for insights on the potential regulatory implications this could have, especially in relation to the UK's AI regulation policies.

  1. Given that DeepMind was a UK-based subsidiary of Alphabet Inc., does the UK still have the jurisdiction to regulate it after the merger with Google Brain? 
  2. On the other hand, what is the weight of the US regulation on DeepMind?

I appreciate any insights or resources you can share on this matter. I understand this is a complex issue, and I'm keen to understand it from various perspectives.

My quick initial research:
The UK's influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company's origin. This control stems from DeepMind's location in the UK (jurisdiction principle), which mandates its compliance with the country's stringent data protection laws such as the UK GDPR. Additionally, the UK's Information Commissioner's Office (ICO) has shown it can enforce these regulations, as exemplified by a ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust. The UK government's interest in AI regulation and DeepMind's work with sensitive healthcare data further subjects the company to UK regulatory oversight.

However, the recent fusion of DeepMind with Google Brain, an American entity, may reduce the UK's direct regulatory influence. Despite this, the UK can still impact DeepMind's operations via its general AI policy, procurement decisions, and data protection laws. Moreover, voices like Matt Clifford, the founder and CEO of Entrepreneur First, suggest a push for greater UK sovereign control over AI, which could influence future policy decisions affecting companies like DeepMind.

I think this further acceleration toward the precipice of AGI is tragic news. The 7 mentions of the words "responsible" and "responsibly" echo hollow as corporate safetywashing. I think the time for gentle behind-the-scenes persuasion of these undemocratic companies toward safeguarding humanity is over. We need a global moratorium on AGI asap. And that will require public campaigns and political lobbying on an unprecedented scale, to get governments to step up and regulate at record speed. Who's in?

I am. Which organization is lobbying on that, because I'd be happy to join?

Great! There is campaignforaisafety.org so far. And a few of us from EA and "AI Notkilleveryoneism" Twitter are discussing things too; a lot of latent energy that is still coalescing. Think we need a few new dedicated orgs/projects focusing on this single issue. Will DM. 
