The use of generative artificial intelligence (“AI”) by businesses and consumers has increased dramatically over the past several months—driven in large part by the enormous visibility of ChatGPT, the improved performance of other tools, and wider overall commercial availability. It is clear that existing legal regimes in the United States regulate the technology only at the margins, and there have been increasing calls to regulate it comprehensively. Although Senator Chuck Schumer (D-NY) is working on a framework to regulate AI at the national level, that framework remains high-level and principles-based, without the specifics that would need to be part of a legislative proposal. At the same time, regulators are clearly watching the development and use of these tools even without specific legal requirements in place.
Enter California, where lawmakers recently introduced A.B. 331, which sets out a framework for regulating “Automated Decision Tools.” Some of the requirements reflect technical protections and practices laid out in the White House’s Blueprint for an AI Bill of Rights, released in October 2022, including ensuring safe and effective systems, algorithmic discrimination protections, notice and explanation, and human alternatives. Although the bill is early in the legislative process, analyzing it gives us a sense of how lawmakers may approach regulating this technology going forward. If the bill progresses through the California legislature, it has the potential to influence how other states approach the issue. It may also influence the California Privacy Protection Agency’s rulemaking on automated decision-making (ADM) systems under the California Privacy Rights Act (CPRA).
The California bill is focused on certain types of AI tools—those “specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions.” Consequential decisions are those that would affect certain enumerated individual rights and opportunities, such as employment, education, housing, health care or health insurance, and financial services. The bill is directed at those that use the tools to make consequential decisions (“deployers”), though it would exempt very small deployers, and at those that create such tools (“developers”). This reflects a common concern of regulators—the potentially discriminatory impact of AI tools on “important” decisions.
An overview of the bill’s requirements is below:
Impact Assessments. Both deployers and developers must perform impact assessments. Similar in spirit to privacy impact assessments, these assessments are meant to provide transparency around the use of the tool, encourage a thoughtful approach to its use, and ensure that sufficient guardrails are implemented before the tool is used.
Among other things, the impact assessments must disclose what personal information will be processed, explain the tool’s purpose, and provide a summary of the tool’s outputs and how they are used to make a consequential decision. The assessments must also provide an analysis of the potential adverse impacts on certain protected classifications (e.g., sex, race, ethnicity, age, and disability); a description of the safeguards or measures that have been or will be implemented to address reasonably foreseeable risks of algorithmic discrimination; and a description of how a human will use or monitor the tool, as well as how the tool has been or will be evaluated for validity or relevance.
Impact assessments must be provided to the Civil Rights Department within 60 days of completion. The Civil Rights Department may bring an administrative enforcement action against any developer or deployer that does not submit an impact assessment and may seek up to $10,000 per violation. Although the bill does not appear to give the Civil Rights Department the ability to enforce whether an impact assessment substantively complies with the proposed law, it does allow the Department to share impact assessments with other state entities “as appropriate,” which could allow it to share an assessment with the California Attorney General or other public attorneys.
Notice. At the time a tool is being used to make a consequential decision, deployers must notify affected individuals why and how the tool is being used and provide a plain-language description of the tool, including a disclosure of any human components and an explanation of how any automated component is used to inform a consequential decision.
Opt-Out Requests. The bill provides a mechanism for individuals to request that they not be subject to fully automated decision-making and, instead, be subject to an alternative selection process or accommodation where technically feasible.
Developer Disclosure. The bill also imposes disclosure obligations on developers. They must make available a statement that explains the tool’s intended use and known limitations (including any reasonably foreseeable risks of algorithmic discrimination from its intended use), as well as the type of data used to program or train the tool. The statement must also include an explanation of how the tool was evaluated for validity and explainability.
Governance. Those developing and using automated decision tools must have a governance program that contains reasonable administrative and technical safeguards to address the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of the tool. The bill includes a number of factors to be considered as part of the reasonableness determination, including the intended use of the tool and the technical feasibility and cost of available means to address the associated risks. The governance requirements bear a resemblance to information security and privacy governance programs: the program must be overseen by at least one person; policies, practices, and procedures must be reviewed annually; and existing administrative and technical safeguards must be evaluated and reasonably adjusted in light of changes.
Enforcement. The bill permits the California Attorney General and other public attorneys in the state to bring a civil action against a developer or deployer for injunctive relief, declaratory relief, and reasonable attorney’s fees and litigation costs, though there is an opportunity to cure.
Private Right of Action. The bill includes a private right of action against deployers for uses that result in algorithmic discrimination, though the plaintiff bears the burden of demonstrating that the deployer’s use of the automated decision tool resulted in algorithmic discrimination that caused the plaintiff actual harm.
Whether this particular legislation progresses in any meaningful way remains to be seen (and it will almost certainly change significantly as it works its way through the legislative process), but we are certain to see more proposals at the state level that seek to regulate AI. We regularly help our clients navigate how to develop and implement AI despite the current lack of specific legal requirements and clear guidance, and we are happy to do the same for you as you consider how to make AI work for your organization.