I’m listing some of the important and/or EA-related events that happened in 2023. Consider adding more in the comments!
A companion post collects research and other "content" highlights from 2023. (That post features content; the one you're reading summarizes news.)
Also, the monthly EA Newsletter covered many of the events collected here and was the starting point for this list. If you’re subscribed, we’d really love your feedback.
Skip to:
- News related to different causes
- AI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and more
- Global health & development: new vaccines, modified mosquitoes, threatened programs, and ongoing trends
- Animal welfare: political reforms and alternative proteins
- Updates in causes besides AI safety, global health, and animal welfare
- Concluding notes
Other notes:
- There might be errors in what I wrote (I'll appreciate corrections!).
- Omissions! I avoided overtly political events (I think they’re covered sufficiently elsewhere) and didn’t focus on scientific breakthroughs. Even beyond that, I haven’t tried to be exhaustive, and I’d love to collect more important events from 2023. Please suggest things to add.
- I’d love to see reflections on 2023 events.
- What surprised you? What seemed important but now feels like it might have been overblown? What are the impacts of some of these events?
- And I’d love to see forecasts about what we should expect for 2024 and beyond.
- I put stars ⭐ next to some content and news that seemed particularly important, although I didn’t use this consistently.
- More context on how and why I made this: I wanted to collect “important stuff from 2023” to reflect on the year, and realized that one of the resources I have is one I run — the monthly EA Newsletter. So I started compiling what was meant to be a quick doc-turned-post (by pulling out events from the Newsletter’s archives, occasionally updating them or looking into them a bit more). Things kind of ballooned as I worked on this post. (Now there are two posts that aren’t short; see the companion, which is less focused on news and more focused on "content.")
AI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and more
See also featured content on AI safety.
0. GPT-4 and other models, changes at AI companies, and other news in AI (not necessarily safety)
Before we get to AI safety or AI policy developments, here are some relevant changes for AI development in 2023:
- ⭐ New models: OpenAI launched GPT-4 in mid-March (alongside announcements from Google, Anthropic, and more). Also around this time (February/March), Google released Bard, Meta released Llama, and Microsoft released Bing/Sydney (which was impressive and weird/scary).
- Model use, financial impacts, and training trends: more people started using AI models, and developers got API access to various models. Advanced AI chips continued to improve, and the amount of compute used for training grew while also being used more efficiently.
- Improvements in models: We started seeing pretty powerful multimodal models (models that can process audio, video, images — not just text), including GPT-4 and Gemini. Context windows grew longer. Forecasters on Metaculus seem to increasingly expect human-AI parity on selected tasks by 2040.
- Changes in leading AI companies: Google merged Brain and DeepMind into one team, Amazon invested in Anthropic, Microsoft deepened its partnership with OpenAI, Meta partnered with Hugging Face, a number of new companies launched, and OpenAI CEO Sam Altman was fired and then reinstated (more on that).
- Other news: Generative AI companies are increasingly being sued for copyright infringement (e.g. artists suing the makers of AI art tools, and The New York Times suing OpenAI). And there were many moments this year when AI “breakthroughs” and other news drew awe and attention that seemed disproportionate to the developments’ actual importance.
1. Policy/governance: new regulations, policymakers take AI risk seriously, government investments into AI safety
These seem like really important changes.
- ⭐ Major jurisdictions rolled out measures aimed at reducing catastrophic AI risks:
- US: In October, President Biden issued an executive order on “safe, secure, and trustworthy” AI (summary/analysis, full order, more discussion), requiring reporting systems, safety precautions at bio labs, and more.[1]
- Also in October, the US tightened its 2022 export controls on advanced AI chips and semiconductor manufacturing equipment, making it harder for Chinese companies to access advanced chips.[2]
- China: central government regulators issued measures governing generative AI services that provide content to the public in China (a draft was released in April, and the finalized measures went into effect in August). The measures make “AI service providers” liable for generated content (if it contains illegal content or protected personal information, it has to be taken down and the underlying issue fixed and reported) and regulate training data. They also require certain kinds of providers to pass a security assessment by a regulator.
- EU: EU policymakers reached a provisional agreement on the AI Act, which was first proposed in 2021 and cycled through negotiations and revisions for most of 2023 (in part due to lobbying from AI companies and less safety-oriented governments). It will go into effect after a transitional period during which the EU Commission is seeking voluntary commitments from companies to start implementing the Act’s requirements. (See a post about the Act from 2021.)
- UK: The UK hasn’t passed regulation related to AI safety, although it hosted the AI Safety Summit (discussed below) and started an advisory AI Safety Institute.
- More generally, AI safety started being discussed and taken a lot more seriously by policymakers:
- US: NIST released an AI Risk Management Framework, the White House met with AI lab leaders and secured voluntary commitments from some of the top AI companies (May-July), and the Senate held a series of “AI Insight Forums,” three of which — in July, October, December — focused significantly on catastrophic AI risk (others were more focused on topics like bias, privacy, and IP/copyright). (See also this explainer of different proposals from August.)
- This difference in White House press briefings a few months apart is a striking illustration of how perceptions changed between March and June (full press briefings + timestamps in footnote[3]). As described here, another striking example of the shift was Senator Schumer asking attendees at an AI Insight Forum for their “p(doom)”.
- UK: In November, the UK hosted the AI Safety Summit, gathering political and tech leaders to discuss risks from advances in AI and how to manage them (see a debrief and discussion of the results).
- Governments are also investing in AI safety in other ways. Several countries established national AI safety institutes. The US’s NSF (in partnership with Open Philanthropy) announced $20M in available grants and a request for proposals for empirical AI safety research (deadline 16 January!).
2. Scientists and others shared thoughts on AI safety and signed statements on existential risk from AI
- ⭐ Key AI scientists started writing more on AI existential risk and the need for safety-oriented work.
- This includes two of three[4] “godfathers of deep learning,” Yoshua Bengio and Geoffrey Hinton (who left his job at Google to speak about this more freely). (Industry leaders have also shared views on AI risk: see e.g. podcast interviews with Dario Amodei, Mustafa Suleyman, and Sam Altman.)
- High-profile people outside the field of AI also shared AI safety concerns and thoughts in 2023. Examples include President Barack Obama (here's an interview), Bill Gates, and Nate Silver. (See more.)
- Statements:
- Two weeks after the launch of GPT-4, the Future of Life Institute released a letter asking for a six-month pause in “giant AI experiments.” It was signed by Elon Musk, Steve Wozniak, and many AI experts and public figures.
- ⭐ About two months later, hundreds of key executives, researchers, and public figures signed the simpler CAIS “Statement on AI Risk”, which was covered by many media outlets, including The New York Times (front page), The Guardian, BBC News, and more.
- See US public perception of the statement.[5] Signatories include Turing Award winners, AI company executives like Demis Hassabis, Sam Altman, and Mustafa Suleyman, and others, including Bill Gates, Ray Kurzweil, and Vitalik Buterin.
- In October, high-profile scientists published ⭐ a “consensus paper” (arXiv, policy supplement), outlining risks from upcoming AI systems and proposing priorities for AI R&D and governance. Signatories include Andrew Yao, Stuart Russell, and more.
- Also in October, prominent Chinese, US, UK, and European scientists signed a statement on a joint strategy for AI risk mitigation.
3. Public discourse included discussions of AI safety
- The public paid attention to AI risk:
- Media outlets covered AI risk. Notable content includes the following pieces (all of these are paywalled):
- ⭐ Ian Hogarth's “We must slow down the race to God-like AI” in the Financial Times
- Ezra Klein writing “This Changes Everything” in The New York Times
- “How AI Progress Can Coexist With Safety and Democracy” by Yoshua Bengio and Daniel Privitera in Time, and this Time piece by two researchers.
- (More coverage can be found here.)
- Broad-audience and introductory AI safety content was also featured in some large outlets:
- Ajeya Cotra went on Freakonomics.
- In TED Talks, Eliezer Yudkowsky asked whether superintelligent AI will end the world and Liv Boeree talked about the dark side of competition in AI.
- Yoshua Bengio outlined how rogue AIs may arise.
4. There were other developments in AI safety as a field
- Industry support[6]: OpenAI, Google, Anthropic, and Microsoft announced a Frontier Model Forum to make progress on AI safety, then shared an update about a $10M AI Safety Fund. OpenAI also announced $10M in Superalignment Fast Grants and grants for research into Agentic AI Systems.
- ARC Evals (now METR) worked with OpenAI and Anthropic to evaluate GPT-4 and Claude (pre-release). More generally, safety evaluations seem to have taken off as a field.
- The 2023 Expert Survey on Progress in AI from AI Impacts came out (2,778 participants from six top AI venues). The expected time to human-level performance dropped by 1-5 decades since the 2022 survey, and the median respondent put a 5% or higher chance on advanced AI leading to human extinction or similarly bad outcomes (a third to a half of participants gave 10% or more).
- New organizations working on AI safety and alignment have been announced, and there’s been a lot of research, which I will not cover here (please add highlights in the comments if you want, though!).
For more on 2023, consider checking: the CAIS AI Safety Newsletter, AI Explained, Zvi’s series on AI, or newsletters on this list.
Global health & development: new vaccines, modified mosquitoes, threatened programs, and ongoing trends
See also highlights from content and research about global health and development.
1. Progress on malaria vaccines & other ways to fight mosquito-transmitted diseases
- ⭐ The new R21/Matrix-M malaria vaccine is extremely promising (~68%-75% efficacy), and it recently cleared an important WHO hurdle on the way to deployment.
- In April, Ghana and Nigeria approved the vaccine and the Serum Institute of India prepared to manufacture over 100 million doses per year.
- For most of the year, however, other countries with high rates of malaria weren’t rolling the vaccine out (and production wasn’t ramping up), in part because[7] the WHO hadn’t “prequalified” it yet. Alex Tabarrok, Peter Singer, and others wrote about the significant costs of delay, suggesting that the WHO didn’t seem to be treating malaria as the emergency it was.
- On December 21, the WHO announced that the new vaccine had prequalification status. (1Day Sooner’s Josh Morrison reflected on how to evaluate the effects of their advocacy efforts. See also why GiveWell funded the rollout of the first malaria vaccine, from earlier in the year, and a recent discussion of whether the vaccines are actually more cost-effective than other malaria programs.)
- ⭐ The World Mosquito Program is fighting dengue fever by releasing millions of special mosquitoes. The mosquitoes are infected with Wolbachia bacteria, which blocks transmission of dengue (and related diseases, including Zika and yellow fever); see Saloni Dattani for more.
2. Successes and other important & noticeable events related to global health and development
- ⭐ Prevalence of lead in turmeric dropped significantly after researchers collaborated with charities and the Bangladesh Food Safety Authority on interventions like monitoring and education campaigns. This work seems highly cost-effective and was supported by GiveWell. Lead exposure is extremely harmful.
- A new, potentially very effective tuberculosis vaccine has entered late-phase trials thanks to $550 million in funding from Wellcome and the Bill & Melinda Gates Foundation. The vaccine could save millions of lives. (Recent related Vox coverage.)
- PEPFAR (a US HIV/AIDS program that has been a huge success) is at risk, as its reauthorization stalled in Congress.
- Charity Entrepreneurship launched new charities working on reducing antimicrobial resistance, improving women’s health, reducing tobacco use, and more.
- GiveDirectly shared that around $1M was stolen from them in the DRC in 2022, along with related updates. This is around 0.8% of the $144M GiveDirectly helped transfer globally in 2022. Kelsey Piper discussed the events in Vox.
- Relatedly, the first results from the world’s biggest basic income experiment in Kenya are in.
See some more events in this newsletter.
3. Very important things continued to happen
⭐ I love “What happens on the average day” by @rosehadshar, which emphasizes the way "news" (and/or what gets covered) can diverge from the things that are really important. So here’s a brief outline of some global-health-related things that kept happening:
- 134 million babies were born and 61 million people died (about half from heart disease or cancers, about 1/7 from infectious diseases). About 8 billion people lived for another year.
- Extrapolating from partial information and trends, I’d guess that about 25% of people (almost 2 billion) didn’t have access to safe drinking water, around 40% of people (about 3 billion) didn’t have access to clean cooking fuels, and about 700 million people didn’t have access to electricity (although more people got electricity this year). Around 2 billion people lived in countries where same-sex sexual acts are illegal. Almost 300,000 women died in or soon after childbirth. Literacy rates kept rising. Poverty probably kept falling. Fewer people died from particulate air pollution (because we used less coal). More people worked in services and fewer worked in agriculture than in 2022. Global GDP per capita kept growing. (A rough sanity check of these headcount figures is sketched after this list.)
- There were 5 major armed conflicts (>10K combat-related deaths in the past year), and 15 more conflicts that caused at least 1000 deaths each.
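For the headcount estimates in the list above, here’s a minimal sanity-check sketch of the underlying share-of-population arithmetic. The ~8.1 billion world population and the listed shares are assumptions taken from my rough estimates above, not precise data.

```python
# Minimal sanity check of the rough headcount estimates above.
# Assumption: world population of roughly 8.1 billion in 2023; the shares are
# this post's rough estimates, not precise figures.
WORLD_POPULATION = 8.1e9

rough_shares = {
    "without safe drinking water": 0.25,
    "without clean cooking fuels": 0.40,
}

for label, share in rough_shares.items():
    print(f"{label}: ~{share * WORLD_POPULATION / 1e9:.1f} billion people")
# without safe drinking water: ~2.0 billion people
# without clean cooking fuels: ~3.2 billion people
```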
Ongoing philanthropic projects kept delivering:
- The Against Malaria Foundation distributed ~90 million nets, expected to protect 160 million people. “The impact of these nets is expected to be, ± 20%, 40,000 deaths prevented, 20 million cases of malaria averted and a US$2.2 billion improvement in local economy (12x the funds applied). When people are ill they cannot farm, drive, teach – function, so the improvement in health leads to economic as well as humanitarian benefits.” (A rough back-of-the-envelope reading of these figures is sketched below.)
- Helen Keller International distributed over 63 million capsules of vitamin A via a program that seems highly cost-effective.
- You can see more compiled here or in GiveWell’s updates and recommendations.
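As mentioned in the AMF bullet above, here’s a rough back-of-the-envelope reading of the quoted figures. This is my own arithmetic on the numbers in the quote (the quote’s ±20% caveat applies throughout), not AMF’s or GiveWell’s official cost-effectiveness analysis.

```python
# Back-of-the-envelope reading of the AMF figures quoted above.
# Inputs come from the quote; the derived per-unit numbers are my own rough
# arithmetic, not official cost-effectiveness estimates.
economic_benefit_usd = 2.2e9   # "US$2.2 billion improvement in local economy"
benefit_multiple = 12          # "12x the funds applied"
deaths_prevented = 40_000
cases_averted = 20_000_000

funds_applied = economic_benefit_usd / benefit_multiple
print(f"Implied funds applied: ~${funds_applied / 1e6:.0f}M")                          # ~$183M
print(f"Implied cost per death prevented: ~${funds_applied / deaths_prevented:,.0f}")  # ~$4,583
print(f"Implied cost per case averted: ~${funds_applied / cases_averted:.0f}")         # ~$9
```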
For more important trends/things-that-happen, check Our World in Data, Rose’s post, Wikipedia’s Current events.[8]
Animal welfare: political reforms and alternative proteins
See also featured content and research on animal welfare.
1. Policies protecting animals: wins and losses
- ⭐ EU: The EU was on track to phase out cages for farmed pigs and egg-laying hens (and more: see a related recommendation from the European Food Safety Authority to ban cages). Unfortunately, the EU Commission seems to have dropped the promised animal welfare reforms (related thread).
- ⭐ US: In an unexpected ruling, the US Supreme Court upheld California’s Proposition 12 (Vox), which sets minimum space requirements for animals and bans cages for egg-laying hens (cost-effectiveness).
- Prop 12 and other important U.S. animal welfare bills might still be threatened by the EATS Act, which prohibits state governments from setting standards on the production of agricultural products imported from other states. (U.S. citizens can get in touch with their legislators about this.)
2. Alternative proteins & plant-based food: supported by many countries, cleared for sale in US, banned in Italy
- ⭐ Lab-grown meat was cleared for sale in the United States.
- Italy banned cultivated meat.
- Denmark, India, the UK, Germany, and other countries invested in alternative proteins/plant-based food. (See more on this and other highlights from GFI.)
- Plant-based meat sales seem to have stagnated in the US.
3. Other important developments: the first ever octopus farm, Peter Singer’s Animal Liberation Now, bird flu
- News of a plan for the world's first octopus farm caused concern and outcry.
- Relatedly, the Aquatic Life Institute’s certification tool ranks aquaculture certifiers based on the quality of their welfare requirements; the 2023 update includes a prohibition on octopus farming. (More on invertebrate welfare.)
- Peter Singer published Animal Liberation Now and gave a TED Talk.
- Millions of farmed birds are being killed in an extremely inhumane way after a flu outbreak in the US (which seems to be ongoing).
4. Very important things continued to happen
Around 900,000 cows and 3.8 million pigs were slaughtered every day. Around 440 billion shrimp were killed on farms in 2023. Almost all livestock animals in the US lived their lives on factory farms (globally around three quarters of farmed land animals are factory farmed). And the world is on track to eat almost a trillion chickens in the next decade.
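To put the daily figures above on an annual footing, here’s a trivial annualization sketch; the daily rates are this post’s approximations, so the totals are only rough.

```python
# Rough annualization of the approximate daily slaughter figures above
# (my own arithmetic on the post's approximate rates; actual totals vary by source).
cows_per_day = 900_000
pigs_per_day = 3_800_000
days = 365

print(f"Cows slaughtered per year: ~{cows_per_day * days:,}")  # ~328,500,000 (roughly 330 million)
print(f"Pigs slaughtered per year: ~{pigs_per_day * days:,}")  # ~1,387,000,000 (roughly 1.4 billion)
```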
Explore more on animal welfare here and in Lewis Bollard’s newsletter, where he recently shared some wins for farmed animals from 2023.
Updates in causes besides AI safety, global health, and animal welfare
See also featured content/research on topics that don't fit into the causes above.
- After two years, USAID shut down DEEP VZN, a controversial virus-hunting program aimed at stopping the next pandemic before it starts, which some (including Kevin Esvelt) worried would end up causing a pandemic rather than preventing one.
- Transmissibility developments in the H5N1 bird flu caused some concern and discussion about the potential danger and the odds that H5N1 would be worse than COVID-19.
- 2023 was “the hottest year ever recorded.” Coal probably made up a smaller share of global electricity production, but coal use grew overall. The cost of energy from renewable sources probably kept falling, and more energy came from renewables.
- A global catastrophic risks law was approved in the United States.
Concluding notes
Please suggest additions to the list[9] (or give other feedback), share your thoughts on the EA Newsletter if you have any, and consider reflecting on these events! I'd also love to see (and in some cases work on) related projects:
I viewed this in large part as an exercise, and would like to do more along these lines, for example:
- seeing how much forecasts on important questions have changed;
- identifying my biggest areas of confusion about what was important in 2023 and trying to list and resolve some cruxes;
- deliberately choosing a list of questions to forecast for 2024 and trying to forecast them;
- looking back on the “events” of 2023, checking for events that surprised me, and thinking about where I should question whatever led to false expectations;
- seeking out information about my blindspots;
- choosing a subset of “events” that seem particularly important one way or another and trying to actually evaluate how, and to what extent, they were impactful;
- and more.
I’d also be excited about a more "meta-EA version" of this kind of collection, tracking important events and wins for EA-related people and groups. (The current list probably already skews a bit in this direction, but I'd like to see a reflection that includes things like a shift in conversation, discussion of whether and in what way we might be in Third Wave EA, etc.)
I probably won’t get to most of the above ideas, but I’ll likely work on some of them, although in many cases I expect I won’t bother to clean things up and publish them. Let me know if you have thoughts on what’s more or less useful!
- ^
See more in this tracker from CSET.
- ^
See a Twitter thread summarizing the revised controls, and this analysis of the 2022 controls.
- ^
Before: 43:10-45:39 in this briefing: https://www.youtube.com/live/65ja2C7Qbno?si=neHTbDRXpQnqe64w
After: 56:08-57:20 in this briefing: https://www.youtube.com/live/bFMW3OjgsuY?si=_DhGMqtFKGhOTPBf
- ^
Yann LeCun famously disagrees with them on AI risk.
- ^
AI Impacts also has a long list of US public opinion survey results from different sources.
You can also explore results from an international survey of public opinion towards AI safety, which finds that there’s some variation between countries but agreement on some questions, like the importance of testing.
- ^
Note that I'm worried about safety-washing, and in some of these cases I'm particularly unsure about what the risk/safety implications of these initiatives are.
- ^
The WHO’s “prequalification” of vaccines is important for organizations like GAVI and UNICEF to start procuring and deploying vaccines to lower- and middle-income countries.
- ^
Also: Dollar Street, Worldometer, and The Base Rate Times.
- ^
I don’t think this list even remotely covers the important things that happened in 2023. There are some obvious or predictable blindspots (e.g. I deliberately didn’t focus on scientific/knowledge developments or political changes) and lacking information (lagging metrics, events/changes that are difficult to measure, issues where we’ve passed some kind of point of no return or an inflection point but haven’t landed at the next stable equilibrium, etc.) — and I’m also just missing a bunch of stuff.