California AG Issues AI Advisories

On January 13, 2025, the California AG’s Office (“AGO”) issued two legal advisories regarding the application of existing California law to AI generally as well as the use of AI specifically in healthcare.

Readers of our blog will find the overall point of these legal advisories familiar, as it mirrors the approach taken by the FTC and other federal agencies under the previous administration, including in the Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, which warned that existing laws apply to AI and that federal agencies were resolved to vigorously use their authorities to protect individual rights. This kind of guidance and enforcement signaling may be particularly significant now that the federal government appears to be pulling back on some of these enforcement-related actions. The California advisories serve as a reminder that even if the aggressive approach to AI regulation and enforcement seen at the federal level is rolled back under a second Trump Administration, numerous state laws still apply to AI and may be aggressively enforced by state regulators.

These advisories, coming just a few months after the state passed some significant AI bills, reaffirm California’s intent to be active in the field of AI regulation and provide helpful insight into the enforcement priorities of the California Attorney General.

These advisories are only one example of how companies using and developing AI need to account for state law and state attorney general enforcement as part of their overall risk analysis. And while California is certainly a leader on these issues, it is by no means the only state focusing on AI concerns. The Texas AG’s office, for example, recently announced a settlement with a healthcare AI company over what it deemed misleading or deceptive claims, as well as a public investigative sweep into a number of companies for alleged violations relating to children’s privacy, including how some companies used AI tools to interact with minors. It seems likely that AI issues will only gain more attention at the state level, especially as states continue to pass more legislation on the topic.

In this post, we summarize key takeaways from the AGO’s advisories. To stay updated on the latest developments in California privacy law, please subscribe to the WilmerHale Privacy and Cybersecurity Blog.

Key Takeaways

1. Broad applicability of California’s existing law to AI. The advisories assert that California’s existing laws are more than capable of addressing AI misuse, and that the emergence of new AI laws will only bolster these established protections. They cite the intentionally “broad, sweeping language” of California’s Unfair Competition Law (UCL) as an example of how the state has drafted laws to protect against “familiar forms of fraud and deception as well as new, creative, and cutting-edge forms of unlawful, unfair, and misleading behavior.” Illustrations of potentially unlawful uses of AI include using AI to impersonate real people; creating deepfakes, chatbots, or voice clones that say or do things likely to deceive; and making false claims about AI’s capabilities. The advisory also notes that businesses may incur liability when they know or should have known that AI products they supply will be used to violate the law. Notably, California’s UCL treats a practice as “unlawful” and “independently actionable” if, in connection with business activity, it violates federal law or another state’s law.

The AGO also references applicable California data privacy laws, including the California Consumer Privacy Act, the California Invasion of Privacy Act, the Student Online Personal Information Protection Act, and the Confidentiality of Medical Information Act, noting that “data is the bedrock underlying the massive growth in AI” and that the codified constitutional right to privacy is a foundational component of the state’s approach to protecting its consumers’ personal information. The health care advisory, in particular, notes that California’s medical privacy laws can be more stringent in certain areas than federal laws like the Health Insurance Portability and Accountability Act, so health care entities should be vigilant in ensuring that their training data, inputs, and outputs adequately protect Californians’ right to medical privacy.

2. New California laws aimed at AI. The general advisory lists several laws with specific applicability to AI, AI-generated technologies, and AI developers that took effect on January 1, 2025.

  • AB 2013 requires AI developers to disclose information about their training data, including a summary of the datasets used, on or before January 1, 2026. Notably, the law may require retrospective documentation, given that training sets are developed over time. (Civ. Code, § 3110 et seq.)
  • AB 2905 requires disclosure of AI-generated telemarketing calls. (Pub. Util. Code, § 2874.)
  • SB 942 places obligations on AI developers to make identifying AI-generated content easier for consumers by offering to place visible markings or other detection features on content. (Bus. & Prof. Code, § 22757 et seq.)
  • SB 1120 requires health care insurers to ensure that any AI tools that make decisions about healthcare services and insurance claims are actively supervised by licensed physicians. (Health & Saf. Code, § 1367.01; Ins. Code, § 10123.135.)

3. Protecting against AI-enabled discrimination and bias in health care facilities. Health care facilities have seen a proliferation of AI integration across the industry, ranging from AI-enabled scheduling tools to systems that assist in making diagnoses. The AGO’s health care advisory spotlights the risks created by AI trained on data that reflects existing bias, and the health inequities and harms to patient autonomy and privacy that may result. Because AI systems are complex and most patients are not aware of when or how AI has been used in connection with their health care, the advisory places the responsibility for understanding how AI systems are trained, how they receive information, and how they generate outputs on healthcare-related entities. Health care entities are advised to test, validate, and audit AI systems to ensure that providers use them safely and ethically. Notably, the advisory addresses a broader audience than healthcare providers, insurers, and researchers; vendors, developers, and investors are also expected to take on the responsibilities it describes. The AGO also stresses the importance of transparency, so that patients can be informed about whether their information is being used to train AI or whether AI has influenced decisions affecting their health and health care.

The AGO included examples of AI use in healthcare systems that may be unlawful, such as:

  • Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides licensed physicians
  • Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications
  • Determining patient access to healthcare using AI or other automated decisionmaking systems that make predictions based on patients’ past healthcare claims data, such that patients or groups with a history of limited access to healthcare are denied services on that basis while patients or groups with robust past access receive enhanced services
  • Double-booking a patient’s appointment or creating other administrative barriers because AI or other automated decisionmaking systems predict that a patient is the “type of person” more likely to miss an appointment
  • Conducting cost/benefit analysis of medical treatments for patients with disabilities using AI or other automated decisionmaking systems that are based on stereotypes that undervalue the lives of people with disabilities
