
Amid AI Crackdown, FTC Takes Aggressive New Stance on Unfairness and Means-and-Instrumentalities Liability | Kelley Drye & Warren LLP

Much has already been said and written about the FTC’s recent enforcement initiative, called “Operation AI Comply.” The coordinated sweep announced last month involved five separate FTC enforcement actions against companies that use, or claim to use, AI tools to improve consumer goods and services. For example, as part of the sweep, the FTC targeted a company called DoNotPay that claimed its “AI Lawyer” service could replace a human lawyer and “replace the $200 billion legal industry with artificial intelligence”—claims that we, as KDW lawyers, were glad to see proved unsubstantiated. The sweep also involved enforcement actions against three business opportunity providers who claimed their AI tools could help clients generate passive income through online storefronts.

While the headlines and press releases focused on the role of AI in each case, the theories of liability in the DoNotPay and business opportunity cases are fairly straightforward and consistent with the FTC’s historical approach to deception. But a closer reading of the fifth enforcement action, against Rytr, reveals a surprising new stance on liability that any company providing services to third parties (whether using AI or not) should take a close look at. Specifically, the FTC’s complaint against Rytr suggests that companies can be held liable under the FTC Act for failing to foresee how their goods and services could be used by third parties to commit deception.

By way of background, Rytr sold an AI-powered writing assistant that allowed customers to generate written content in various forms, including emails, product descriptions, blogs, and articles. One of Rytr’s use cases, tagged “Testimonial & Review,” allowed customers to generate written content for consumer reviews based on keyword and tone selections. The FTC alleged that Rytr violated the FTC Act by offering its “Testimonial & Review” use case because it could be used to generate fake reviews that mislead consumers deciding whether to buy the product or service described. Several aspects of the Rytr complaint stand out:

  • The principal count of the complaint is a “means and instrumentalities” count alleging that Rytr furnished its users with the “means to generate written content for consumer reviews that is false and misleading.” This is unusual: “means and instrumentalities” (M&I) counts are typically pleaded in conjunction with one or more counts alleging an underlying Section 5, rule, or statutory violation by the party that actually committed the deception. In this case, for example, we would have expected an allegation that third-party users posted fake reviews generated by Rytr to mislead consumers, but the complaint contains no such allegation.
  • Not only are there no allegations that fake reviews were used to deceive consumers, but the complaint presents no evidence that such fake reviews were ever used at all. The Commission relies instead on inputs and outputs created by its own researchers, as well as evidence that, over time, certain Rytr users generated hundreds or thousands of reviews. Importantly, there is no allegation that these generated reviews were ever published or that they deceived consumers in a material way.
  • The complaint also includes a novel unfairness count alleging that Rytr “offered a service intended to rapidly generate unlimited content for consumer reviews and created false and misleading written content for consumer reviews.” But again, there is no evidence in the complaint that these false or misleading reviews were actually used to deceive consumers, which raises questions about how the complaint satisfies the “substantial injury” prong of the unfairness test.

Notably, Rytr is the only case within the FTC’s AI enforcement sweep that was not approved unanimously; it passed on a 3-2 vote along party lines. The two Republican commissioners issued separate dissenting statements (and joined each other’s statements) expressing concern that the mere possibility that Rytr’s tool could be used to create false or misleading customer reviews is not in itself a violation of Section 5. (As we have discussed here, dissenting statements have been common of late, and there’s often a lot to unpack.)

Specifically, Commissioner Ferguson’s dissenting statement stated that “…”

In this sense, the Rytr action is noteworthy not because of its connection to artificial intelligence, but because of its approach of holding a company directly liable, on a means-and-instrumentalities theory, for potential misuse of its product by third parties. Many companies and industries offer tools and services that could hypothetically be used either to improve consumer experiences or to perpetrate fraud. The Rytr case creates uncertainty about how the agency may attribute liability to such service providers in the future.

In short, the FTC’s “Operation AI Comply” initiative offers a handful of clear, AI-related compliance points that shouldn’t come as a surprise to regular readers of this blog: Be aware of the limits of your AI tools, make sure their performance claims are fully substantiated, do not overstate the involvement of AI in your products and services, and generally expect AI claims to receive the same (or perhaps greater) scrutiny as other advertising claims.

The bigger story, however, is the agency’s attempt to expand the scope of M&I liability to non-deceptive products and services based on how they might be used by third parties. While the Rytr matter ended with a consent agreement, it will be interesting to see what happens if a similar complaint proceeds to litigation in the future.
