Year in Review: The Top Ten US Data Privacy Developments from 2024

2024 was another year of substantial legislative and regulatory advances in data protection law at the international, federal, and state levels. Artificial intelligence (AI) regulation continued to hurtle forward, with key advancements such as the adoption of the European Union’s Artificial Intelligence Act (“EU AI Act”) and a growing number of state AI bills in the US. The Federal Trade Commission (FTC) continued to flex its enforcement authority over data privacy and cybersecurity violations, paying special attention to claims companies made about their AI capabilities and to other uses of AI the agency alleged to be “unfair.” Protecting sensitive data was a main priority for both the FTC and the Department of Justice (DOJ): the former focused on data brokers’ collection of genetic data, consumer web data, and location data, while the latter focused on national security and issues related to US data transfers to “countries of concern.” Meanwhile, state legislatures, regulators, and attorneys general (AGs) continued to churn out comprehensive privacy laws, promulgate rules, and pursue enforcement actions.

Below we have summarized (in no particular order) the top ten data privacy developments of the past year. Companies should understand the key shifts and trends from 2024 in relation to their existing compliance obligations and anticipate potential legislative and regulatory changes in 2025 and beyond. Indeed, we have already seen some of these trends continue in the new year, with states proposing new privacy laws and federal regulators taking steps to reinforce their enforcement authority.

We will continue tracking all these developments in this new year and providing analysis on the compliance changes and policy updates in our Privacy and Cybersecurity Law blog, which you can subscribe to here.

1. A Continued Surge in State Comprehensive Data Privacy Laws

2023 witnessed a flurry of state legislative activity around privacy, and 2024 was no different. We started the year with twelve state “comprehensive” data privacy laws and ended it with nineteen. (Keep in mind that while the term “comprehensive” is routinely used in connection with these laws, not all of them are truly comprehensive, given the large volume of coverage exceptions they contain.)

New Jersey was the first state to act, passing its comprehensive data privacy law in January. As the spring flowers bloomed, so did privacy legislation in New Hampshire, Kentucky, Nebraska, Maryland, Minnesota, and Rhode Island. The new state laws all expanded consumer rights, such as access, deletion, and portability of personal data, and required companies to conduct data protection assessments addressing the risks associated with activities such as targeted advertising and profiling, sales of personal data, and processing of sensitive data. Like the FTC, state legislatures were increasingly focused on sensitive data and included specific provisions governing its processing.

One state stood out by deviating from previous state privacy models. In contrast with the other state comprehensive laws, Maryland’s Online Data Privacy Act, modeled after the Washington Privacy Act (which never passed in Washington itself), will impose stricter data minimization standards on companies and will prohibit the sale of sensitive data outright, without any option for consumers to opt in or out. It also will establish an anti-discrimination protection that prohibits data controllers from collecting, processing, or transferring personal data or publicly available data in ways that result in discrimination or unequal provision of goods or services based on protected characteristics. Provisions like these will likely be clarified by the state’s regulators once the law goes into effect.

Companies should note the differing timelines for when these laws take effect. The laws in New Jersey, New Hampshire, and Nebraska will take effect as early as January 2025, while Minnesota’s will begin enforcement later in the year. The Kentucky and Rhode Island laws will go into effect in January 2026. Standing out from the pack once again, Maryland adopted a two-tier compliance timeline: its law will go into effect on October 1, 2025, but enforcement actions on processing activities will not start until April 1, 2026.

Notably, Vermont’s bill, which would have been the first state comprehensive privacy law to include a private right of action for privacy-related violations, was vetoed by the governor, who believed the bill “created an unnecessary and avoidable level of risk.”

2. Notable Developments in State Privacy Law

Aside from comprehensive state privacy laws, significant changes favoring defendant companies were made to Illinois’s landmark Biometric Information Privacy Act (BIPA) and to the scope of Massachusetts’s wiretapping law. Illinois’s legislature approved a bill curtailing the damages available to BIPA plaintiffs by clarifying that a BIPA violation occurs only upon the initial unconsented collection of biometric data, not upon each subsequent collection. Although this amendment went into effect in August 2024, it does not apply retroactively, so violations that occurred before August will follow the law’s prior approach to damages and accrual. Thus, the steady stream of BIPA cases with potentially large damages awards will likely not dry up any time soon. Separately, a decision by the Supreme Judicial Court of Massachusetts narrowing the scope of the state’s wiretapping law proved a major win for companies on the receiving end of privacy class action lawsuits related to their use of pixels and other tracking technologies.

The California Privacy Protection Agency (CPPA) also made some important moves this year, issuing its first-ever enforcement advisory, which reaffirmed data minimization as a “foundational principle in the CCPA” and reminded businesses that data minimization should inform all of their data processing activities. The Agency also completed a rulemaking process, including a notice-and-comment period and a public hearing, to establish additional data broker regulations. Although those regulations did not take effect until January 1, 2025, the Agency kicked off its regulatory focus on data brokers with a public investigative sweep of data brokers’ compliance in the fall of 2024.

Washington’s My Health My Data Act (“MHMDA”) went into effect on March 31, 2024. The MHMDA, the first comprehensive state law to protect “consumer health data” outside the scope of HIPAA, is enforceable by the Washington AG and through a private right of action, with violations treated as violations of Washington’s consumer protection law. Violations can lead to civil penalties of up to $7,500 each, and the private right of action could create class action risks for companies. So far, no enforcement actions or public litigation have been brought under the law, but companies should continue to carefully evaluate their compliance practices.

3. State AGs as Data Privacy and AI Regulators

Over the past few years, state AGs have carved out their place as privacy regulators, and in 2024, many state AGs made it clear that they also care about AI. In September 2024, the Texas AG announced a settlement agreement with Pieces Technology, Inc., a Dallas-based healthcare AI research and development firm, resolving allegations that the company made false or misleading claims about its AI products. The agreement also highlighted another issue of increasing focus for state AGs: the protection of consumers’ sensitive data. A report released by Connecticut’s AG early in 2024 spotlighted sensitive data, including biometric, genetic, and precise geolocation data, as a key area of enforcement focus and discussed the need for data brokers and companies that handle sensitive data to ensure that their processing activities comply with relevant legal requirements. The AG’s report also stressed the importance of privacy policies and disclosures for consumer protection, serving as a reminder that privacy policies are more than a pro forma exercise.

California’s AG echoed this point about privacy policies in a settlement agreement with DoorDash. The settlement emphasized the need for companies to clearly and comprehensively describe their data-sharing practices in their privacy policies. The agreement also showcased the California Consumer Privacy Act’s (CCPA) broad conception of a “sale” of personal information, which can include a disclosure of personal information “to a third party for monetary or other valuable consideration” (emphasis added). The complaint did not allege that DoorDash received direct monetary compensation in exchange for disclosing consumer personal information; rather, it alleged that the company received the “benefit of advertising to potential new customers,” which qualified as valuable consideration. This settlement should prompt businesses to assess whether any of their disclosures of personal information constitute “sales” within the CCPA’s definition.

4. AI Regulation and Best Practices Continue to Evolve

Beyond regulators, AI continued to be a growing focus for legislatures and policymakers in the US and around the world. The ever-evolving approaches to AI regulation reflected the rapid development of the technology, with some jurisdictions taking major regulatory steps this year.

In the US, the early regulation of AI appears to be following a “patchwork” pattern similar to that of data privacy, with contributions at both the state and federal levels. Colorado became the first state to enact comprehensive AI legislation with its “Colorado Artificial Intelligence Act,” which applies to developers and deployers of “high-risk” AI systems doing business in Colorado. The Act requires regulated entities to exercise “reasonable care” to avoid algorithmic discrimination, develop risk management programs, and ensure transparency with consumers when high-risk AI is in use. California also continued to be a leader in the AI regulatory space, enacting three new AI laws this year: the California AI Transparency Act, the Generative AI: Training Data Transparency Act, and the Health Care Services: Artificial Intelligence Act. These laws center on AI transparency and establish key requirements such as disclosures, AI detection tools for users, and other measures that signal when content has been generated or altered by AI. These state AI laws are set to take effect in 2026.

At the federal level, the National Institute of Standards and Technology (“NIST”) released new, nonbinding guidance documents and software in response to President Biden’s Executive Order on AI. The guidance provided organizations with best practices to help improve the “safety, security, and trustworthiness” of their AI systems, and the releases included risk mitigation guidelines for developers of generative AI and dual-use foundation models, software for testing how AI systems respond to adversarial attacks, and a plan for global coordination in developing international AI standards.

In July, the European Commission, US Department of Justice, US Federal Trade Commission, and UK Competition and Markets Authority released a joint statement highlighting the benefits and risks of AI for competition, innovation, and consumers. The statement, which focused on generative AI foundation models and other AI products, outlined key principles for protecting competition in the AI ecosystem, such as fair dealing, interoperability of AI products, and consumer choice. Notably, the statement did not articulate how these principles would operate within each jurisdiction’s antitrust framework, but it generally supported the themes and risk-based approach found in the EU AI Act.

5. The European Union AI Act Goes Live

Speaking of the EU AI Act, the European Parliament adopted the law on March 13, 2024, marking a significant development for companies conducting business in the European Union. The EU AI Act, considered the world’s first comprehensive legal framework for AI, introduced a risk-based approach to AI regulation and set out rules on data quality, transparency, human oversight, and accountability. The Act defines AI by focusing on the autonomy and inference capabilities of AI systems and applies to a broad range of entities, with extraterritorial coverage of entities located outside the EU that meet specified criteria.

The Act classifies AI systems according to risk (unacceptable, high, limited, and minimal) and imposes requirements accordingly. Unacceptable-risk systems are prohibited; high-risk systems are subject to extensive requirements; limited-risk systems are subject to transparency obligations (with specific obligations for deployers and providers); and minimal-risk systems do not trigger any obligations. The Act also prohibits AI practices that materially distort people’s behavior (in a manner that leads to physical or psychological harm) or threaten core ideals of democratic societies. The rules on prohibited AI systems become applicable on February 2, 2025.

The Act’s regulation of generative AI (referred to as general-purpose AI, or “GPAI,” models) departs from the risk-based approach described above and instead imposes specific requirements on providers of these models. Additional obligations apply to providers of GPAI models with “systemic risk,” meaning models that could have a significant impact on the EU market due to reasonably foreseeable negative effects on public health, safety, security, or fundamental rights. These obligations include conducting model evaluations, implementing risk mitigation protocols, reporting serious incidents, and establishing adequate cybersecurity measures. Alongside these extensive regulations are measures in support of innovation, intended to help businesses explore and experiment with AI under regulatory supervision.

6. Continued Expansion of FTC Enforcement Activities

The year opened with back-to-back FTC enforcement actions against data brokers processing consumer location data, signaling that the protection of sensitive data would be an enforcement priority for the agency. Through enforcement actions and other published guidance, the FTC specified and expanded what it considers to be “sensitive data,” building out a list that now includes location data, genetic data, and consumer web browsing data.

The FTC issued its first-ever prohibition on the use, sale, and disclosure of sensitive location data against X-Mode Social and Outlogic (“X-Mode”), a location data broker, and a few days later announced a similar action against InMarket Media, another data broker, for its allegedly illegal collection, use, and processing of consumer location data. While the cases primarily focused on different areas (X-Mode on misrepresentations, InMarket on transparency and notice-and-consent obligations), both actions highlighted the need for companies to actively oversee third-party data collection practices, provide clear and accurate disclosures regarding the collection of sensitive location data, ensure that uses of location data are commensurate with the potential risks of harm, and obtain informed consent from consumers.

The start of the year also featured three enforcement actions against sellers of genetic testing products, which stressed the importance of securing biometric and genetic information, ensuring the accuracy of product claims, maintaining adequate privacy protocols, and obtaining consent for the use and disclosure of genetic data. As stated in the press release announcing the actions, the FTC intended to put companies that collect or store genetic data “on notice that the FTC expects security in line with the sensitivity of the data.” This principle carried through in a later enforcement action against Avast, which made clear that web browsing data revealing highly sensitive information about a consumer may also be considered sensitive data. Notably, the action against UK-based Avast serves as a reminder to multinational companies that their data practices outside the US can still fall within the FTC’s enforcement authority.

The FTC also took a special interest in AI this year, alongside its data privacy and cybersecurity enforcement activities. The agency kicked off the year with an enforcement action against Rite Aid, alleging, among other charges, that the company caused consumer harm through the deployment of facial recognition technology in its stores. This action marked the first time the agency claimed that a company’s use of AI was “unfair” and provided an early example of the FTC’s developing approach to enforcing consumer protection in the age of AI. The agency also hosted a tech summit on AI, at which speakers explored various ways to leverage the FTC’s authority against harmful uses of AI and reaffirmed existing guidance that current anti-discrimination laws apply to new technologies such as AI. Indeed, a few weeks after the Rite Aid enforcement action, FTC Chair Lina Khan remarked that “[t]here is no AI exemption from the laws on the books.”

The agency continued to crack down on alleged unfair and deceptive business practices through an initiative titled “Operation AI Comply,” which focused on potentially false or exaggerated claims about an AI product’s offerings or abilities. As part of this operation, the agency announced five enforcement actions against companies that allegedly made false promises to consumers, failed to deliver as promised, and caused their subscribers financial losses. For example, one company promised an “AI lawyer” that would allow consumers to sue for assault on their own, while another promised AI-powered investment solutions that could make millions.

Finally, the FTC also remained active in adjacent areas such as cybersecurity, bringing claims against companies for misrepresentations about, and inadequacies in, their data security practices. These enforcement actions generally highlighted the importance of appropriate data retention policies, security safeguards, and timely and accurate data breach notifications.

7. An Active Year for HIPAA Enforcement

The U.S. Department of Health and Human Services Office for Civil Rights (“HHS OCR”) spotlighted data privacy and cybersecurity practices as priorities this year, bringing persistent enforcement actions and imposing higher fines on health care entities for HIPAA-related issues, including ransomware, phishing, impermissible access to electronic protected health information, and impermissible disclosure of reproductive health information. In total, the agency brought 22 HIPAA enforcement actions (the second-highest annual total in OCR’s history) and collected $9.9 million in settlements and civil penalties. OCR also launched its Risk Analysis Initiative, urging health care entities to safeguard protected health information (“PHI”) by conducting thorough risk assessments that evaluate the confidentiality, integrity, and availability of PHI and thereby reduce the overall risk of cyber incidents. In 2024, the agency brought two enforcement actions under this new initiative, and we expect this trend to continue into the new year. Notably, the most recent enforcement action, against Holy Redeemer, a HIPAA-covered entity that disclosed a female patient’s PHI, reaffirms OCR’s interest in privacy protections for reproductive health information.

8. New Restrictions on International Data Transfers

The DOJ entered the scene as a new regulator of sensitive data this year with the release of its Notice of Proposed Rulemaking (NPRM) regarding transfers of bulk US sensitive personal data or government-related data to “countries of concern,” namely China, Russia, Iran, North Korea, Venezuela, and Cuba. The proposed rule defines six categories of “sensitive personal data”: (1) covered personal identifiers, (2) precise geolocation data, (3) biometric identifiers, (4) human genomic data, (5) personal health data, and (6) personal financial data. It also prohibits foreign transactions that would give entities from countries of concern access to specified volumes of data within these six categories. Despite the rule’s relatively narrow scope, it will impose additional organizational responsibilities on companies engaged in vendor agreements, employment agreements, and investment agreements that qualify as “restricted transactions.” The DOJ issued the final rule on December 27, 2024, and it will go into effect on April 8, 2025.

On April 24, 2024, President Biden signed into law H.R. 815, which includes the Protecting Americans’ Data from Foreign Adversaries Act (“PADFA” or “the Act”). The Act generally prohibits data brokers from selling, licensing, transferring, disclosing, trading, or providing access to “personally identifiable sensitive data” of Americans to foreign adversaries, namely China, Russia, Iran, and North Korea, or to entities controlled by a foreign adversary. Although the DOJ rule and the PADFA share a common purpose, the PADFA focuses more on categories of data than on categories of transactions. The Act covers sixteen categories of “sensitive data,” including biometric information, precise geolocation information, and genetic information. The Act went into effect on June 23, 2024, and violations will be enforced by the FTC. Collectively, the DOJ rule and the PADFA reinforce a clear message from the federal government to foreign adversaries: Americans’ private information is not for sale.

9. Children’s Privacy as an Enforcement Priority

Like national security, children’s privacy tends to garner support from both sides of the aisle, and it continues to generate regulatory interest at both the state and federal levels. Early in the year, the FTC published proposed modifications to the Children’s Online Privacy Protection Rule (the “COPPA Rule”) that would require operators to “obtain verifiable parental consent before any collection, use, or disclosure of personal information from children.” Other notable proposed changes included new data security requirements, restrictions on “nudging,” changes to safe harbor programs and notice requirements, and an expanded definition of the “personal information” covered by the COPPA Rule to include biometric information.

At the state level, California’s AG announced a settlement with Tilting Point Media, highlighting how crucial it is that companies processing children’s data do so with the appropriate consents and authorizations, such as parental consent (for users under the age of 13) and affirmative opt-in authorization (for users between the ages of 13 and 16). The agreement also reiterated the importance of data minimization (collecting only what is reasonably necessary for the child to participate in the activity) and accurate privacy policy disclosures. This agreement was the first in a series of three enforcement actions involving children’s data, demonstrating California’s interest in this area of privacy protection.

10. Another Attempt at Federal Comprehensive Privacy Legislation Falls Flat

Democratic and Republican representatives joined forces this year to revive efforts to pass a comprehensive data privacy law at the federal level. The bill, the American Privacy Rights Act of 2024 (APRA), faced early opposition and ultimately fizzled out after the House Energy and Commerce Committee cancelled a markup session in June, failing to advance any further than its predecessor, the American Data Privacy and Protection Act (ADPPA). The APRA was modeled after the ADPPA and shared similar provisions establishing a private right of action for individuals and preemption of state laws (with a small carveout for remedies under Illinois’s biometric and genetic privacy laws and the California Privacy Rights Act). The APRA, however, contained stricter data minimization requirements for covered entities and service providers; established direct obligations for data brokers and large data holders; empowered the FTC and state AGs to enforce its provisions; and created rights for consumers to opt out of certain data transfers, targeted advertising, covered algorithms, and AI-driven decisions. Despite its short lifespan, some of the APRA’s new provisions could still influence future federal privacy legislation.

 
