
tl;dr: Cases I found against OpenAI. All are US-based. The first ten focus on copyright.
 

Coders
 1. Joseph Saveri Firm:  overview, complaint

Writers
 2. Joseph Saveri Firm:  overview, complaint
 3. Authors Guild & Alter:  overview, complaint
 4. Nicholas Gage:  overview & complaint

YouTubers
 5. Millette: overview, complaint

Media
 6. New York Times:  overview, complaint
 7. Intercept Media:  overview, complaint
 8. Raw Story & Alternet:  overview, complaint
 9. Denver Post & seven others:  overview, complaint
 10. Center for Investigative Reporting: overview, complaint

Privacy
11. Clarkson Firm:  overview, complaint
12. Glancy Firm:  overview, complaint

Libel
13. Mark Walters:  overview, complaint

Mission betrayal
14. Elon Musk:  overview, complaint
15. Tony Trupia:  overview, complaint


That last lawsuit, filed by a friend of mine, has stalled. A few cases have been partially dismissed.
Also, a cybersecurity expert filed a complaint with the Polish DPA (technically not a lawsuit).
For lawsuits filed against other AI companies, see this running list.

Most legal actions right now focus on data rights. In the future, I expect many more legal actions focused on workers' rights, product liability, and environmental regulations.


If you are interested in funding legal actions outside the US:

  • Three projects I'm collaborating on with creatives, coders, and lawyers.
  • Legal Priorities was almost funded last year to research promising legal directions.
  • European Guild for AI Regulation is making headway but is seriously underfunded.
  • A UK firm wants to sue for workplace malpractice during ChatGPT development. 
     

Folks to follow for legal insights:

  • Luiza Jarovsky, an academic who posts AI court cases and privacy compliance tips
  • Margot Kaminski, an academic who posts about harm-based legal approaches
  • Aaron Moss, a copyright attorney who posts sharp analysis of which suits suck
  • Andres Guadamuz, an academic who posts analysis with a techno-positive bent
  • Neil Turkewitz, a recording industry veteran who posts on law in support of artists
  • Alex Champandard, an ML researcher who revealed CSAM in the largest image dataset
  • Trevor Baylis, a creative professional experienced in suing and winning
     

Manifold also has prediction markets on these cases.


Have you been looking into legal actions? If so, I'm curious to hear your thoughts.

Comments



Thanks for making the list, Remmelt!

Not sure how important this one is, but Air Canada recently had to comply with a refund policy made up by its own chatbot.

Thanks! It's also a good example of the many complaints now being prepared by individuals.

Obvious point, but it would be neat for someone to write forecasting questions for each one, if there's an easy way of doing so.

Workers' rights usually fall under the umbrella of systematic violations of rights, a term most often associated with human rights. We can use similar pointers and forecast questions/solutions. Some would overlap with data mining and fair use, which are hardly enforced. It is not very hard for an average company to see the pivots created by OpenAI's crisis-management team. OpenAI research leads say their recent model is trained on a combination of publicly available data and data that OpenAI has licensed, but that they can't go into much detail on it.

The last part is no easy feat for anyone to dive into. That conversation came out less than two days ago and seemed quite intentional. We can safely assume that this is going to be the new norm for addressing lawsuits; it is admissible in formal proceedings, after all. It is also worth noting that statements like "in some ways, we really see modeling reality as the first step to be able to transcend it" are meticulously placed at the end. I don't think anyone would want to take them on and get stuck in an expensive limbo beyond their control, one that OpenAI can afford.

Actually, it looks like there is another lawsuit, this one filed outside the US.

A class-action privacy lawsuit filed in Israel back in April 2023.

Wondering if this is still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit
