FTC Weighs in on Threats to Competition from Artificial Intelligence in Comment to U.S. Copyright Office
The Federal Trade Commission (FTC) recently submitted a comment to the U.S. Copyright Office in response to its “Notice of Inquiry” in the Federal Register examining copyright issues related to artificial intelligence (AI). The FTC’s comment focuses largely on potential threats to competition from AI and potential unfair copyright practices involving AI. It further suggests that certain actions relating to AI may violate Section 5 of the Federal Trade Commission Act, even if the actions are otherwise consistent with copyright law.
FTC Concerns
Focusing on potential competitive threats from AI, the submission argues that “the rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms.” According to the FTC, many of these incumbents are vertically integrated and “control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data.” In turn, the comment cautions that such companies may have an incentive “to unlawfully entrench their market positions in AI and related markets, including digital content markets.” In other words, the FTC is concerned that if large, vertically integrated incumbents control all of the inputs for AI, they may also control the AI’s decision-making, including decisions that might exclude or disadvantage competitors.
The FTC comment acknowledges that “[m]any large technology firms possess vast financial resources that enable them to indemnify the users of their generative AI tools,” but cautions that training an AI tool on protected content without the creator’s consent, or selling output generated from such a tool, may “constitute an unfair method of competition or an unfair or deceptive practice, especially when the copyright violation deceives consumers.” The FTC goes on to warn that it “will vigorously use the full range of its authorities to protect Americans from deceptive and unfair conduct” and notes that “[t]he FTC is empowered under Section 5 of the FTC Act to protect the public against unfair methods of competition, including when powerful firms unfairly use AI technologies in a manner that tends to harm competitive conditions.”
The FTC also emphasizes the potential for both consumers and creators to be harmed when authorship does not match consumer expectations, a mismatch that AI practices can exacerbate. For example, according to the FTC, “deepfakes” (AI-generated music, videos, text, or images that have similar, but not identical, characteristics to the underlying source content) may constitute a deceptive practice or an unfair method of competition.
Other threats to competition underscored by the FTC comment include the possibility that AI tools could be “used to facilitate collusive behavior” that unfairly inflates prices, causes price discrimination, or manipulates output. These types of AI compliance issues have been on the FTC’s radar for some time. As far back as 2017, the FTC was raising concerns over purported “algorithmic collusion,” the concept that adoption of AI-powered pricing algorithms across competitors can constitute an anticompetitive agreement to set artificially inflated prices.
Other Recent Actions
While the comment to the U.S. Copyright Office raises more questions than it answers, recent actions signal an increased willingness on the FTC’s part to pursue competition and consumer protection issues in AI. On November 21, the FTC authorized the use of civil investigative demands (CIDs) in nonpublic investigations involving AI-driven products and services. CIDs are compulsory and function like subpoenas, allowing for the collection of documents, information and testimony. According to the FTC, the authorization “streamlines” the FTC’s ability to issue CIDs. In practice, the resolution allows a single commissioner, instead of a majority of sitting commissioners, to approve compulsory process requests in any investigation within the scope of the resolution for the next 10 years. What practical effect this resolution will have remains to be seen. However, businesses engaged in conduct that may fall within the resolution’s scope should be aware that FTC staff now have an expedited ability to carry out compulsory process requests, which is likely to increase the number and scope of investigations conducted by the FTC.
The FTC also recently revealed that it has collected more than 100 public comments on AI’s impact on the US$576 billion cloud computing industry. Most of these comments addressed issues specifically related to competition, including software licensing practices, egress fees, and minimum spend contracts. With respect to minimum spend contracts, some commenters expressed concerns about the power of large incumbent technology firms, pointing out that certain provisions in cloud computing contracts incentivize customers to consolidate their use of cloud services with a single provider. Similarly, several commenters noted that egress fees assessed for moving data in or out of specific cloud environments could discourage customers from using multiple cloud providers or switching from one cloud environment to another.
FTC Chair Lina Khan was explicit about the agency’s focus on AI during a November 2 speaking engagement at Stanford University, noting “[t]he FTC is firing on all cylinders” with respect to AI. Khan reiterated that there “is no exemption from the laws on the books” for AI, and that the FTC will be “clear-eyed in ensuring that claims of innovation are not used as cover for lawbreaking.”
Impact
AI is rapidly evolving, and questions of law and liability certainly remain. But as federal scrutiny increases, companies and executives should be aware of the antitrust risks posed by incorporating AI into their businesses and take steps to ensure that such use complies with the antitrust laws. For example, companies should ensure that their AI practices do not unreasonably foreclose rivals, create unfair or coercive power asymmetries, facilitate collusion, or otherwise tend to harm competitive conditions.