On February 27, the Federal Trade Commission (FTC) released a blog post advising companies to monitor the claims they make about their use of artificial intelligence (AI). According to the agency, companies relying (or purporting to rely) on AI can get into trouble with the FTC by exaggerating claims about their products, overpromising and underdelivering, and failing to account for reasonably foreseeable risks to consumers. The blog post indicates that the agency is closely watching how the use of this technology develops and that it will not hesitate to use its enforcement authority to penalize conduct it views as unfair or deceptive.
The timing of the FTC’s blog post is not accidental: Over the past few months, AI has become a buzzword for companies across industries, including technology, medicine, finance, and media, and has garnered significant media attention as a result. While companies have long used AI to improve their products and services, the technology has recently been popularized by chatbots and other tools that are bringing AI systems to the mainstream. These systems rely on novel technologies and require large amounts of data to train models to make human-like decisions, which means that the use of AI potentially raises consumer protection and data privacy concerns (among other risks, which you can read more about here).
This is where the FTC comes in. The agency has routinely warned companies about its concerns regarding AI, particularly as they relate to racial and other forms of discrimination. In its most recent guidance, the FTC focused specifically on advertising and advised companies to be transparent about how their AI products work and what the technology can actually do. For instance, according to the FTC, computers have not yet proven that they can reliably predict human behavior, so unsubstantiated advertising claims of that nature could be considered deceptive. Likewise, assertions that an AI-powered product performs better than a non-AI product must be proven. Companies utilizing AI in their products should understand exactly how that technology works and advertise to consumers accordingly, or else risk an FTC enforcement action.
The FTC has also focused on the use of AI technology as it relates to consumer data privacy. Not only do AI systems collect large amounts of data to continually improve their decision-making algorithms, but data collected by means of AI or used for AI purposes also raises concerns around informed consent and data collection practices. Additionally, because companies use large amounts of data to fuel their AI systems, they might retain that data longer than originally disclosed to users, or use it for other purposes that were neither disclosed nor consented to. The FTC has previously brought enforcement actions against companies for using their AI systems in a manner inconsistent with their privacy obligations and, as part of the resulting settlements, required those companies to destroy the underlying algorithms used to develop their AI systems.