Generative AI continued to be a hot topic for privacy-related litigation in 2024. In the US, companies using and deploying this technology found themselves subject to lawsuits under various state and federal theories of liability, including state wiretapping laws and laws prohibiting unfair or deceptive acts or practices. Regulators in the US (such as the Federal Trade Commission and state attorneys general) also brought enforcement actions against AI companies under their authority to enforce consumer protection laws.
This article highlights notable theories of liability used in 2024 to challenge AI use, development, and deployment, as well as recent cases and enforcement actions in which those theories have been tested. To stay up to date on these developments, please subscribe to the WilmerHale Privacy and Cybersecurity Blog.
State Wiretapping Laws
California courts saw several lawsuits related to generative AI brought under the state’s wiretapping law, the California Invasion of Privacy Act (CIPA). Section 631(a) of CIPA creates liability for four categories of activity: (1) intentional wiretapping of any telegraph or telephone wire, (2) willfully attempting to read or learn the contents of a communication that is in transit over a wire, (3) using, or attempting to use or to communicate, information obtained as a result of engaging in either of the previous two activities, and (4) aiding and abetting any person or persons to cause violations of the first three categories.1
In recent years, plaintiffs have increasingly brought lawsuits alleging CIPA § 631(a) violations based on “AI eavesdropping.” Many businesses use AI-powered chatbots provided by third-party service providers, rather than human customer service agents, to perform tasks such as answering customer service phone calls and helping customers with returns and exchanges. AI eavesdropping lawsuits allege that the chatbots’ AI technology intercepts and records communications between customers and the chatbots without customer consent, and that the AI provider then uses those communications to train its AI tools, in violation of CIPA.2 Often, these lawsuits also contain claims alleging violations of the California Unfair Competition Law (UCL),3 the California Constitution, or torts such as intrusion upon seclusion.
Most of these cases have been voluntarily dismissed by the plaintiff or dismissed pursuant to a joint stipulation.4 But Ambriz v. Google, LLC5 recently survived a motion to dismiss. The court dismissed the initial complaint with leave to amend in June 2024, finding that the CIPA claim was barred by § 631(b), which exempts from liability “[a]ny public utility, or telephone company, engaged in the business of providing communications services and facilities, or to the officers, employees or agents thereof, where the acts otherwise prohibited herein are for the purpose of construction, maintenance, conduct or operation of the services and facilities of the public utility or telephone company.”6 Specifically, the court found that because the defendant, Google, was acting as the agent of Verizon, a telephone company within the meaning of § 631(b), Google was exempt from liability.7

The plaintiffs subsequently filed an amended complaint, removing the § 631(a) claim against Verizon, and Google again filed a motion to dismiss. In its motion, Google argued that the claim failed under the “party exemption” because Google was a party to the conversations and therefore not an “eavesdropper” that could be held liable under CIPA, and that the plaintiffs had failed to adequately allege conduct by a “person,” the use of a “telegraph or telephone wire,” or that the contents of the communications had been intercepted while “in transit.”8 In February 2025, the court rejected these arguments and denied the motion to dismiss.9 It concluded that the party exemption did not apply; rather, Google was a third party that listened to communications between the plaintiffs and the customer service centers. Moreover, contrary to Google’s argument, Google did not merely provide a “software tool,” as it was capable of using the collected user data to improve its AI models.10 The court further found that the plaintiffs adequately alleged the requisite elements of a § 631(a) claim, including conduct by a “person,” the interception of communications “in transit,” and the use of a “telegraph or telephone wire.”11
Consumer Protection Laws
Privacy concerns related to the use of generative AI have also led to consumer-protection lawsuits brought by government entities such as the Federal Trade Commission (FTC) and state attorneys general, as well as by private parties. These actions have generally been premised on companies’ allegedly false or misleading statements about the accuracy of their generative AI products.
For example, in September 2024, as part of a law enforcement sweep called “Operation AI Comply,”12 the FTC announced five cases against companies whose statements about their AI products allegedly constituted deceptive or unfair conduct that harmed consumers in violation of FTC Act § 5(a),13 the FTC Business Opportunity Rule,14 and the Consumer Review Fairness Act.15 One of these companies, for instance, claimed that it offered an AI service that could supplant the expertise of a human lawyer,16 and another promised consumers guaranteed income if they invested in a “surefire” AI-powered business opportunity.17 In three of the cases, proceedings are ongoing in district court;18 in the other two, the FTC has filed a decision and order indicating the execution of an agreement containing a consent order.19
State attorneys general have also shown a willingness to tackle generative AI concerns under state consumer protection laws. The Texas AG sued Pieces Technologies, Inc. (“Pieces”), alleging that, in violation of Texas’s Deceptive Trade Practices Act, Pieces made false, misleading, or deceptive representations about the accuracy of the outputs of its generative AI products, which four major Texas hospitals used to obtain summaries of patients’ conditions and treatment plans.20 In September 2024, the Texas AG announced a settlement agreement under which Pieces agreed to provide clear and conspicuous disclosures in marketing and advertising its products for five years; to be permanently enjoined from making false, misleading, or unsubstantiated representations about its products or services; and to disclose any harmful or potentially harmful uses or misuses of its products or services.
Finally, private parties have also brought lawsuits under consumer protection laws, based on the theory that a company’s use of personal data to train AI models was deceptive or unlawful. For example, in Dinerstein v. Google, LLC,21 the plaintiffs brought a class action against Google, the University of Chicago, and its Medical Center, alleging that the University delivered several years of anonymized patient medical records to Google, enabling Google to train its AI models and develop software that could anticipate patients’ healthcare needs. The case involved a claim under the Illinois Consumer Fraud and Deceptive Business Practices Act, as well as, inter alia, a privacy claim for intrusion upon seclusion. The district court had dismissed the fraud claim for lack of standing and the remaining claims for failure to state a claim. The Seventh Circuit affirmed, concluding that the plaintiffs lacked cognizable injury for any of their claims.22
Other Privacy Laws
2024 also saw a rise in lawsuits alleging that personal data had been used to train generative AI models in violation of privacy laws. In the United States, much of that litigation has been concentrated in the Northern District of California. To date, only one of those cases, A.T. v. OpenAI L.P.,23 has produced a written decision from the court. The plaintiffs alleged that the company’s use of personal data to train its AI models violated federal laws, namely the Electronic Communications Privacy Act24 and the Computer Fraud and Abuse Act;25 California law, including CIPA § 631 and the UCL, as well as the torts of negligence, invasion of privacy, intrusion upon seclusion, larceny and receipt of stolen property, conversion, and unjust enrichment; and New York General Business Law. In May 2024, the court granted the defendants’ motion to dismiss, explaining that the complaint contained “swaths of unnecessary and distracting allegations making it nearly impossible to determine the adequacy of the plaintiffs’ legal claims.”26
Similar cases—alleging similar sets of facts and theories of liability—have been either voluntarily dismissed or referred to private alternative dispute resolution.27 Given that these cases generally are in their early stages and have not been resolved by the courts, it is not yet clear whether any of these theories of liability will ultimately stick.
1. Cal. Penal Code § 631(a).
2. See Licea v. Old Navy, LLC, 5:22-cv-1413 (C.D. Cal.); Jones v. Peloton Interactive, Inc., 3:23-cv-1082 (S.D. Cal.); Paulino v. Navy Federal Credit Union, 3:24-cv-3298 (N.D. Cal.).
3. Cal. Bus. & Prof. Code § 17200 et seq.
4. See, e.g., Order Granting Joint Motion to Voluntarily Dismiss [ECF No. 30], Jones v. Peloton Interactive, Inc., 3:23-cv-1082 (S.D. Cal. Oct. 1, 2024), ECF No. 31; Plaintiff’s Notice of Voluntary Dismissal Without Prejudice, Paulino v. Navy Federal Credit Union, 3:24-cv-3298 (N.D. Cal. Sept. 19, 2024), ECF No. 35.
5. 3:23-cv-5437 (N.D. Cal.).
6. Cal. Penal Code § 631(b)(1).
7. Order Granting Motion to Dismiss, Ambriz v. Google, LLC, 3:23-cv-5437 (N.D. Cal. June 20, 2024), ECF No. 37.
8. Motion to Dismiss Plaintiffs’ Consolidated Class Action Complaint, Ambriz v. Google, LLC, 3:23-cv-5437 (N.D. Cal. Nov. 12, 2024), ECF No. 47.
9. Order Denying Defendant’s Motion to Dismiss, Ambriz v. Google, LLC, 3:23-cv-5437 (N.D. Cal. Feb. 10, 2025), ECF No. 56.
10. Id. at 4-7.
11. Id. at 7-9.
12. FTC Announces Crackdown on Deceptive AI Claims and Schemes, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes (Sept. 25, 2024).
13. 15 U.S.C. § 45(a).
14. 16 C.F.R. pt. 437.
15. 15 U.S.C. § 45b.
16. Complaint, In the Matter of DoNotPay, Inc., File No. 232-3042, https://www.ftc.gov/system/files/ftc_gov/pdf/DoNotPayInc-Complaint.pdf.
17. Complaint, Fed. Trade Comm’n v. TheFBAMachine Inc., 2:24-cv-06635 (D.N.J. June 3, 2024).
18. See Fed. Trade Comm’n v. Ascend Capventures Inc., 2:24-cv-7660 (C.D. Cal.); Fed. Trade Comm’n v. Empire Holdings Grp. LLC, 2:24-cv-4949 (E.D. Pa.); Fed. Trade Comm’n v. TheFBAMachine Inc., 2:24-cv-6635 (D.N.J.).
19. See Decision and Order, In the Matter of Rytr LLC, File No. 232-3052, Dkt. No. C-4806, https://www.ftc.gov/system/files/ftc_gov/pdf/2323052c4806finalorder.pdf; Decision and Order, In the Matter of DoNotPay, Inc., File No. 232-3042, https://www.ftc.gov/system/files/ftc_gov/pdf/DoNotPayInc-D%26O.pdf.
20. State of Texas v. Pieces Technologies Inc., DC-24-13476 (Tex.).
21. 73 F.4th 502 (7th Cir. 2023).
22. Id. at 516, 522-23.
23. No. 3:23-cv-4557 (N.D. Cal.).
24. 18 U.S.C. § 2510 et seq.
25. 18 U.S.C. § 1030.
26. Order Granting Motions to Dismiss at 1, Cousart v. OpenAI LP, No. 3:23-cv-4557 (N.D. Cal. May 24, 2024), ECF No. 78. Although the court gave the plaintiffs leave to amend, the plaintiffs filed a notice of intent not to amend and have not filed a notice of appeal, effectively ending the suit. Plaintiffs’ Notice of Intent Not to Amend First Amended Complaint, Cousart v. OpenAI LP, No. 3:23-cv-4557 (N.D. Cal. June 14, 2024), ECF No. 82.
27. See, e.g., P.M. v. OpenAI LP, No. 3:23-cv-3199 (N.D. Cal.); J.L. v. Alphabet Inc., No. 5:23-cv-3440 (N.D. Cal.).