Texas Attorney General’s Office Reaches Settlement with AI Company Over Deceptive Claims

WilmerHale Privacy and Cybersecurity Law Blog

On September 18, the Texas Attorney General (AG) announced a settlement agreement with Pieces Technologies, Inc. (“Pieces”), a Dallas-based healthcare artificial intelligence (AI) research and development firm, resolving allegations that the company made false, misleading or deceptive claims about the accuracy of its healthcare AI products. According to the Texas AG’s Office, the company inaccurately advertised its AI products and potentially harmed patients as a result. The settlement agreement imposes disclosure requirements in connection with the marketing and advertising of Pieces’ products and services, prohibits misrepresentations (including misrepresentations about the independence of endorsers or reviewers of a product or service), and imposes documentation obligations concerning potentially harmful uses of its products or services.

This settlement is notable for a few reasons. First, it reinforces the Texas AG Office’s position as a regulator that AI companies should pay attention to when it comes to data-related issues. This is the latest in a series of privacy-related actions by that office, and it seems likely that Texas will become even more active in the future. Second, it adds state AGs to the list of regulators that AI companies must consider as they develop and deploy their products (in addition to the Federal Trade Commission (FTC), which has been quite active on these issues). Finally, it highlights the fact that the relevant regulatory enforcement risk in the AI space (as is the case with privacy enforcement actions overall) will be greater for companies whose products are intended to capture sensitive information as part of their development.

In this article, we summarize the agreement entered into by Pieces and identify key takeaways from the settlement. To stay up to date on notable state privacy law developments, please subscribe to the WilmerHale Privacy and Cybersecurity Law Blog.

I. Summary of the Complaint

The complaint alleges Pieces made false, misleading or deceptive representations concerning a series of metrics and benchmarks purporting to show that the outputs of its generative AI products were highly accurate. The Texas AG contended that Pieces’ representations violated the Deceptive Trade Practices Act (DTPA), Tex. Bus. & Com. Code §§ 17.41-.63. Pieces develops healthcare AI products for use by inpatient healthcare facilities. Its product offerings include autonomous, AI-generated clinical documentation. According to the agreement, Pieces’ products “are meant to be relied on by physicians and other medical staff to assist them with treating their patients.” In marketing its products, Pieces represented that it had minimal “hallucination[s],” a term describing instances in which a generative AI product produces output that is incorrect or misleading. Further statements made to prospective customers claimed that its products were “highly accurate,” with a “severe hallucination rate” of “<.001%” and “<1 per 100,000.” These claims led four major Texas hospitals to provide their patients’ healthcare data to Pieces to obtain AI outputs consisting of summaries of patients’ conditions and treatment plans.

The Texas AG maintains that Pieces’ advertisements were inaccurate and may have deceived healthcare teams about the accuracy and safety of its services in order to induce business. Further, the Texas AG explained that by making allegedly inaccurate statements about its products while using patients’ real-time health data, Pieces put the public interest at risk.

II. Stipulated Agreement 

Under the stipulated agreement, Pieces agrees to the following:

Clear and Conspicuous Disclosures in Marketing and Advertising

Pieces must provide clear and conspicuous disclosures in marketing and advertising its products and services for five years. Disclosure is required whenever Pieces makes any direct or indirect statement regarding a metric, benchmark or similar measurement describing the outputs of its generative AI products. In such cases, Pieces must disclose both the meaning and definition of the metric, benchmark or measurement and the method, procedure or other process used to calculate it. The agreement permits an alternative to this type of disclosure if Pieces engages an independent, third-party auditor whose findings support Pieces’ marketing or advertising claims.

Permanent Injunction of False, Misleading or Unsubstantiated Product or Service Representations

Pieces is prohibited from making misrepresentations concerning a product or service with respect to its accuracy; testing methodologies; monitoring procedures; the definitions or meaning of product or service metrics; and data usage. The agreement also prohibits misleading consumers or users regarding the accuracy, functionality, purpose or any other feature of its products, and omitting financial or similar arrangements with individuals participating in product or service endorsements.

A Requirement to Disclose Any Potentially Harmful Uses or Misuses of Products or Services

The agreement requires Pieces to disclose any known harmful or potentially harmful uses or misuses of its products or services to consumers. Pieces must also document the type of data and/or models used to train its products and services; explain the intended purpose of its products and services; disclose any known, or reasonably knowable, limitations of its products or services, including risks to patients and healthcare providers, such as the risk of physical or financial injury resulting from a product’s or service’s inaccurate output; disclose potential misuses that could increase the risk of inaccurate outputs or of harm; and provide users with information to prevent misuse of the product or service.

III. Key Takeaways

1. AI companies that process sensitive data should be especially careful of regulatory scrutiny.

It is no surprise that this enforcement action came against a company that was processing sensitive health data about consumers. Sensitive data (including health data) has been an area of focus for regulators with regard to privacy-related enforcement actions over the past few years. It seems likely that AI companies that process sensitive data as part of developing and deploying their products will be a particular area of focus for regulators.

2. State AGs also care about AI.

The FTC has been quite active on AI issues over the past few years, both through its enforcement actions and its guidance documents. This enforcement action by the Texas AG’s Office shows that AI companies also have state AGs to contend with. In some ways, states may be trickier for AI companies given that a number of them have comprehensive state privacy laws that create affirmative obligations for companies in terms of how they are permitted to use personal data (which may impact what data is available for AI training).

3. Regulators seek transparency, reliability and accountability of AI tools that have access to personal data.

Companies advertising and marketing the accuracy of AI products should expect increased regulatory oversight. Regulators are likely to focus on AI companies that have access to high-risk personal data. Where healthcare data, including highly sensitive personal information, is at stake, AI companies handling that data should be truthful and accurate in, and accountable for, their representations. At the same time, companies that give AI tool providers access to personal data should thoroughly vet those providers to ensure they comply with applicable standards of practice.
