On Thursday, January 25, the Federal Trade Commission’s (FTC) Office of Technology hosted the FTC Tech Summit to discuss key developments in artificial intelligence (AI). The FTC brought together thought leaders from across the AI landscape to consider how to foster a fair and inclusive AI marketplace given the rapid development of large language models and generative AI. The summit included remarks from Chair Lina Khan, Commissioners Rebecca Slaughter and Alvaro Bedoya, Chief Technology Officer Stephanie Nguyen, and Directors Henry Liu and Samuel Levine from the Bureaus of Competition and Consumer Protection, respectively. The event also included panel discussions on the role of chips and cloud infrastructure in the development of AI, the function of data in AI technologies and models, and AI consumer applications.
The summit further showcased the FTC’s interest in curtailing the risks and harms posed by AI to consumers. The FTC has been the most active federal regulator on AI issues to date. In addition to recently bringing the first enforcement action against a company for using AI in an allegedly biased and unfair manner, the FTC has issued guidance warning companies about the risks of AI-driven bias and discrimination, deceptive trade practices, and the consequences of using copyrighted work to train AI models. Khan made it clear in her remarks that “[t]here is no AI exemption from the laws on the books,” and that the FTC is looking closely at how corporations are using AI in allegedly anticompetitive ways and to deceive consumers.
This post summarizes key takeaways from the event’s remarks and panel discussions. For more updates and analyses of current developments in AI, data privacy and cybersecurity, please subscribe to the WilmerHale Privacy and Cybersecurity Law blog.
Key Takeaways
- The FTC is looking for ways to leverage its existing authority to prevent AI harms. During the event, FTC commissioners and staff indicated that the FTC will use its existing enforcement authority to minimize harm in the AI marketplace. According to Slaughter, the best way to stay on top of the rapidly evolving AI market is “by using the full panoply of the FTC’s tools.” This includes the FTC’s proactive use of its consumer protection authority under Section 5 of the FTC Act—for instance, by requiring that companies using AI models provide notice to consumers, and that companies using AI models trained with unlawfully acquired data delete both the models and the underlying data (as was required by the FTC’s recent settlement with Rite Aid). The FTC has already authorized the use of compulsory process in nonpublic investigations involving products and services that use or claim to be produced using AI. The commissioners recounted how the FTC’s inaction during the emergence of technologies like adtech and social media caused a variety of present-day harms, and they emphasized that the agency is keenly aware of these lessons.
- The FTC is still getting its arms around the technology and its implications for consumers. Slaughter and Bedoya explained that the FTC is prepared to exercise its Section 6(b) authority in this area, which empowers the commission to require entities to provide information about their business practices. Such information, according to the commissioners, would further inform the agency’s understanding of the AI landscape and future rules governing AI development. In fact, on the same day as the summit, the FTC issued Section 6(b) orders to five companies, requiring them to provide information regarding recent investments and partnerships involving generative AI companies and cloud service providers.
- The FTC remains focused on the potential for AI to facilitate discrimination and bias. The commissioners emphasized that the FTC, along with other regulatory agencies, must commit to curbing the possible discriminatory harms of AI, a core feature of the Biden-Harris administration’s AI agenda. Bedoya explained the importance of knowing what data is used to train and develop AI systems. He highlighted a recent settlement between the FTC and Rite Aid, which, according to the FTC, had been identifying suspected shoplifters by using facial recognition software that disproportionately misidentified minorities. Levine stated that firms should either undertake efforts to mitigate the discriminatory effects of their AI tools or stop using those tools altogether.
- The FTC will look to allocate liability away from users. According to Khan, the FTC is committed to pinpointing the firms whose activities are driving market concentration and unlawful use of data. She referenced the FTC’s recent enforcement sweep of robocall companies, which focused on upstream entities that had been enabling unlawful telemarketing. These comments are consistent with other recent statements we have heard from the FTC indicating that the companies making and deploying AI should be held responsible for downstream consumer harms.
- The FTC is concerned with market concentration at the lower levels of the “tech stack.” Though the commissioners did not directly address perceived market concentration at the chip and cloud layers of the AI production “stack,” panelists—particularly those on the panel devoted to the role of chips and cloud infrastructure in the development of AI—expressed concerns that this market concentration can and will hinder AI innovation and cause consumer harms. Dominant entities at these base layers may prefer vertically integrated product lines, which can lead to increased prices and reduced quality. Panelists stressed that customers must be able to move freely between vendors at all levels of the stack if innovation and competition are to be encouraged.
- Financial services regulators are also increasingly eyeing AI regulation. According to Atur Desai, a Consumer Financial Protection Bureau (CFPB) lawyer who participated as a panelist and spoke in his individual capacity, the CFPB has already issued two circulars announcing that companies relying on complex algorithms must provide specific and accurate explanations for denying applications. He explained that the CFPB, like other agencies, is prioritizing capacity-building around AI and appears poised to apply extant consumer financial laws where appropriate.
Associates Josh M. Feinzig and Tim J. Kolankowski contributed to this blog post.