This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. As noted below, some of these developments provide industry with the opportunity for participation and comment.
Federal Legislative Developments
AI remained a focal point for Congress this quarter. Multiple bills proposing to regulate AI were introduced, covering issues such as antitrust, transparency, and training data, and House leadership created a bipartisan task force to address AI regulation.
- Antitrust: Some bills introduced this quarter relate to the potential impact of AI on competition. For example, in January, Senator Klobuchar (D-MN) introduced the Preventing Algorithmic Collusion Act of 2024 (S. 3686). The Act would create a presumption that a defendant entered into an agreement, contract, or conspiracy in restraint of trade in violation of the antitrust laws if the defendant: (i) distributed a pricing algorithm to two or more persons with the intent that the pricing algorithm be used to set or recommend a price or (ii) used a pricing algorithm to set or recommend a price or commercial term of a product or service and the pricing algorithm was used by another person to set or recommend a price. The Act also would require companies using algorithms to set prices to provide transparency and would prohibit the use of “nonpublic competitor data” to train any pricing algorithm.
- Transparency: Other bills focus on transparency requirements for AI. For instance, in March, Representative Eshoo (D-CA-16), along with three bipartisan co-sponsors, introduced the Protecting Consumers from Deceptive AI Act (H.R. 7766). The Act would direct the National Institute of Standards and Technology (“NIST”) to facilitate the development of standards for identifying and labeling AI-generated content, including through technical measures such as provenance metadata, watermarking, and digital fingerprinting. The Act also would require generative AI developers to include machine-readable disclosures within audio or visual content generated by their AI applications. Providers of covered online platforms would have to implement the disclosures to label AI-generated content.
- Consent for Training Data: Legislative proposals also focus on consent for use of training data. For example, Senators Welch (D-VT) and Luján (D-NM) introduced the Artificial Intelligence Consumer Opt-in, Notification, Standards, and Ethical Norms for Training Act, or the “AI CONSENT Act” (S. 3975). The Act would require entities to receive an individual’s express informed consent before using “covered data” (defined broadly) to train an AI system.
- AI Task Force: This quarter, House Speaker Mike Johnson (R-LA-4) and Minority Leader Hakeem Jeffries (D-NY-8) announced the establishment of a bipartisan Task Force on AI. Speaker Johnson and Leader Jeffries have each appointed 12 members to the Task Force. Among other things, the Task Force will produce a comprehensive report that will include: (i) guiding principles; (ii) forward-looking recommendations; and (iii) bipartisan policy proposals.
Federal Regulatory Developments
- National Science Foundation (“NSF”): The NSF announced the launch of the National AI Research Resource (“NAIRR”), a two-year pilot program that will support AI researchers and aid innovation. NSF will partner with 10 other federal agencies as well as 25 private sector, nonprofit, and philanthropic organizations to power AI research and inform the design of the full NAIRR ecosystem over time. Specifically, the NAIRR pilot will support research to advance safe, secure, and trustworthy AI, as well as the application of AI to challenges in healthcare and environmental and infrastructure sustainability. The NAIRR launch meets a goal outlined in the White House’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“EO”), which directs NSF to launch a pilot for the NAIRR.
- Department of Commerce: The Department of Commerce published a proposed rule to require providers and foreign resellers of U.S. Infrastructure-as-a-Service products to, among other things, notify the Department of Commerce when a foreign person transacts with that provider or reseller to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity. The AI provisions of the proposed rule stem from mandates in the EO on AI. Comments are due by April 29, 2024.
- Federal Communications Commission (“FCC”): The FCC released a declaratory ruling stating that under the Telephone Consumer Protection Act (“TCPA”), telemarketing calls using an artificial or prerecorded voice simulated or generated through AI technology can be made only with the prior express written consent of the called party unless an exemption applies. The declaration followed the submission of reply comments supporting the change by the Attorneys General of 25 states and the District of Columbia. Further, the FCC announced that it will relaunch the Consumer Advisory Committee (“CAC”) to focus on emerging AI technologies and consumer privacy issues.
- Federal Trade Commission (“FTC”): The FTC issued a supplemental Notice of Proposed Rulemaking (“NPRM”) that would amend the Rule on Impersonation of Government and Business (“Impersonation Rule”) to prohibit the impersonation of individuals using AI and extend liability for violations of the Impersonation Rule. Comments are due by April 30, 2024. Additionally, the FTC published a blog post warning AI companies that it may be unfair and deceptive to quietly change their terms of service to adopt more permissive data practices, such as using consumers’ data for AI training, without adequate notice to consumers.
- White House Office of Management and Budget (“OMB”): OMB issued its first government-wide policy memorandum on deploying AI in the federal government and managing its risks. The memorandum establishes requirements and guidance for federal agencies that aim to strengthen AI governance, advance responsible AI innovation, and manage AI risks, especially those risks that affect the rights and safety of the public. For example, the memorandum requires agencies to implement minimum governance procedures for certain rights-impacting and safety-impacting AI use cases.
- U.S. Patent and Trademark Office (“USPTO”): The USPTO published guidance stating that while AI systems and other “non-natural persons” cannot be listed as inventors in patent applications, the use of an AI system by a natural person does not preclude that person from qualifying as an inventor. The person using the AI must, however, have contributed significantly to the invention; simply overseeing an AI system’s creation is not sufficient. Those seeking patents must disclose if AI was used during the invention process. In conjunction with the guidance, the USPTO issued examples to illustrate the application of the guidance in specific situations. Comments on the guidance and examples are due by May 13, 2024.
AI Litigation Developments
Plaintiffs continue to test theories in lawsuits against companies developing AI models, with a number of suits focused on copyright infringement and related claims. The defendants in the copyright cases have responded by arguing, among other things, that the plaintiffs failed to plead facts establishing that models were trained on materials covered by copyright registrations, failed to support claims that the model is both an infringing “copy” and “derivative” of each registered work on which it was allegedly trained, and failed to identify copyright management information (“CMI”) that the defendants allegedly altered or removed. 2024 Q1 litigation developments include, for example:
- New Copyright Complaints:
- On March 8, a group of book authors brought a direct copyright infringement claim against Nvidia, alleging that Nvidia copied and used their copyright-protected works to train its NeMo Megatron series of LLMs. Nazemian et al. v. Nvidia Corp., 24-cv-1454 (N.D. Cal.). The same day, the authors also brought a copyright infringement suit against MosaicML for direct infringement and Databricks, Inc. for vicarious infringement concerning the training of Mosaic’s MPT LLM series, including MPT-7B and MPT-30B. O’Nan et al. v. Databricks Inc. et al., 3:24-cv-01451 (N.D. Cal.).
- On February 28, two suits were filed by news media organizations against OpenAI, alleging that OpenAI violated the Digital Millennium Copyright Act by training the ChatGPT LLM with copies of their works from which copyright management information had been removed. Raw Story Media, AlterNet Media v. OpenAI, et al., 24-cv-1514 (S.D.N.Y.); The Intercept Media, Inc. v. OpenAI, Inc. et al., 1:24-cv-01515 (S.D.N.Y.). The Intercept also named Microsoft as a defendant.
- On January 5, a class action complaint was filed by journalists and authors of nonfiction works against Microsoft and OpenAI alleging that the companies unlawfully reproduced their copyrighted works for the purpose of training their LLMs and ChatGPT. Basbanes v. Microsoft, 1:24-cv-84 (S.D.N.Y.). The Basbanes suit has since been consolidated with Authors Guild, et al., v. Open AI Inc., et al., 23-cv-08292 (S.D.N.Y.) and Alter, et al., v. Open AI Inc., et al., 23-cv-10211 (S.D.N.Y.).
- Responses in New York Times Case: On February 26, OpenAI filed a motion to dismiss in The New York Times Company v. Microsoft et al., 1:23-cv-11195 (S.D.N.Y.), arguing, among other things, that NYT failed to allege that OpenAI had actual knowledge of specific acts of infringement for the purposes of contributory copyright liability and that NYT failed to identify the CMI that OpenAI allegedly removed. On March 3, Microsoft filed a partial motion to dismiss, arguing, among other things, that NYT failed to state a claim for contributory infringement because it did not allege an underlying direct infringement by end users and that NYT cannot allege Microsoft’s actual knowledge of (or willful blindness to) any act of direct infringement. On March 11 and March 18, NYT responded to both motions to dismiss, making procedural arguments and contending, among other things, that OpenAI had the requisite knowledge because NYT had informed OpenAI of the alleged infringement. Though fair use arguments are not being litigated at this stage, both parties have discussed fair use case law in their briefing.
- Copyright Management Information (“CMI”) Dismissals in GitHub Case: On January 3, the court in Doe v. GitHub, 22-cv-6823 (N.D. Cal.) issued its second decision on a motion to dismiss, granting in part and denying in part the motion as to six of the plaintiffs’ eight claims. The court found that some of the plaintiffs sufficiently alleged Article III standing to seek damages based on amended allegations that included examples of their code that were output by the Copilot coding tool. The court also found that certain state law claims were preempted by the Copyright Act and dismissed them with prejudice. It dismissed the plaintiffs’ CMI claims under the Digital Millennium Copyright Act with leave to amend, holding that such claims lie only when CMI is removed or altered from an identical copy of a copyrighted work; the amended complaint identified only examples of outputs alleged to be modifications of copyrighted code, not identical copies. On January 25, the plaintiffs filed a second amended complaint, re-alleging the CMI claims and bringing two breach of contract claims for open-source license violations and selling licensed materials in violation of GitHub’s policies. On February 28, defendants Microsoft and GitHub moved to dismiss again, arguing that the plaintiffs still failed to plead that CMI was removed from identical copies of the plaintiffs’ works.
- Response to Amended Complaint in Google Case: On January 5, the plaintiffs in Leovy v. Google LLC, 3:23-cv-03440 (N.D. Cal.) amended their complaint to name the previously anonymous plaintiffs, allege different causes of action, and plead additional allegations concerning Google’s alleged violations of the plaintiffs’ rights under property, privacy, and copyright law, among other things. On February 9, the defendant moved to dismiss the plaintiffs’ amended complaint with prejudice. With respect to the plaintiffs’ web scraping claims, the defendant argued, “outside copyright law (including its protection for fair use), there is no general right to control publicly available information.” The defendant also argued that the plaintiffs’ direct copyright infringement claims based on generative AI output should be dismissed because the plaintiffs pled that “Bard’s output necessarily infringes the copyrights in all the works Bard trained on” without providing any examples of a “substantially similar” infringing output. The motion did not argue for dismissal of the direct copyright infringement claim based on the training process. With respect to the plaintiffs’ negligence claims, the defendant argued that the plaintiffs failed to adequately allege that it owed them a duty of care and that the economic loss rule otherwise barred a negligence claim.
- Dismissals and Consolidation in N.D. Cal. Litigation: On February 12, in a consolidated opinion, the court granted the defendants’ motions to dismiss the claims for vicarious infringement, violation of the Digital Millennium Copyright Act, and negligence in Tremblay et al. v. OpenAI, Inc. et al., 3:23-cv-03223 (N.D. Cal.) and the related case Silverman et al. v. OpenAI, Inc. et al., 23-cv-03416 (N.D. Cal.), with leave to amend. The court also dismissed the plaintiffs’ unjust enrichment claim with prejudice but allowed the unfair competition claim to proceed. On February 16, Tremblay was consolidated with Silverman and Chabon v. OpenAI, et al., 23-cv-04625 (N.D. Cal.). On March 13, the plaintiffs filed a first consolidated amended complaint (under the new caption, “In Re ChatGPT Litigation”), narrowing the case to two counts: direct copyright infringement and violation of California’s Unfair Competition Law.
- Right of Publicity Complaint: On January 25, representatives of comedian George Carlin’s estate filed suit in Main Sequence, Ltd. et al. v. Dudesy, LLC, 24-cv-711 (C.D. Cal.), alleging that the defendants, by training an AI model to mimic Carlin’s stand-up performances and by publishing the allegedly AI-created “George Carlin Special,” have unlawfully used Carlin’s name, image, and likeness without consent, in addition to infringing copyrighted Carlin materials. The complaint expresses some uncertainty as to whether the “George Carlin Special” was produced using a generative AI model or involved a human-written script paired with assistive tools such as an AI voice generator. The plaintiffs allege that in either case, Carlin’s image and likeness were unlawfully used and his reputation harmed.
Connected and Automated Vehicle Developments
- Autonomous Vehicle Accessibility Act: On January 30, Representatives Greg Stanton (D-AZ) and Brian Mast (R-FL), members of the House Transportation and Infrastructure Committee, introduced the bipartisan Autonomous Vehicle Accessibility Act (H.R. 7126). The Act is intended to help people with disabilities better access the mobility and independence benefits of ride-hail CAVs, such as by: (1) prohibiting states from issuing motor vehicle operator’s licenses in a manner that prevents a qualified individual with an ADA disability from riding as a passenger in a vehicle equipped with an automated driving system that is operating in fully autonomous mode; and (2) requiring the Secretary of Transportation to conduct an accessible infrastructure study to determine the best practices for public transportation infrastructure to be modified to improve the ability of Americans with disabilities to find, access, and use ride-hail autonomous vehicles. The bill was referred to the Subcommittee on Highways and Transit on February 12, 2024.
- Focus on Data Privacy Practices of Vehicle Manufacturers: On February 27, Senator Markey (D-MA) sent a letter to the FTC asking the FTC to investigate the data privacy practices of car manufacturers. Senator Markey noted that the responses automakers provided to his late 2023 inquiry “gave [him] little comfort” and that the companies’ “ambiguity and evasiveness calls out for the investigatory powers of the FTC.” The letter “urge[s] the [FTC] to use the full force of its authorities to investigate the automakers’ privacy practices and take all necessary enforcement actions to ensure that consumer privacy is protected.”
- Continued Attention on Connectivity and Domestic Violence: As we reported in our last update, the FCC has taken steps to increase its understanding of certain safety issues implicated by connected vehicles with respect to the potential for wireless connectivity and location data to negatively impact partners in abusive relationships. Continuing this focus, on February 28, the FCC issued a press release reporting that Chairwoman Rosenworcel circulated a Notice of Proposed Rulemaking regarding how the agency can leverage existing law to ensure that car manufacturers and wireless service providers “understand the full impact of the connectivity tools in new vehicles and how these applications can be used to stalk, harass, and intimidate.” If adopted, the NPRM “would seek comment on the types and frequency of use of connected car services that are available in the marketplace today.” Among other things, the NPRM would ask if changes to the FCC’s rules implementing the Safe Connections Act are needed to address the impact of connected car services on domestic abuse survivors. It also would seek comment on what steps connected car services can proactively take to protect survivors from the misuse of such services.
Privacy
With respect to privacy, a number of states kicked off the new year with new privacy laws and the FTC continued to bring enforcement actions related to companies’ privacy practices.
- New State Privacy Laws: Legislatures in New Jersey, New Hampshire, and Kentucky passed new data privacy laws that largely resemble the approaches taken under existing privacy frameworks in the U.S. Each chamber of Maryland’s legislature has also passed a comprehensive privacy bill, and the two chambers are working to reconcile differences between their versions. Additionally, Nebraska enacted a genetic privacy law regulating direct-to-consumer (“DTC”) genetic testing companies. The law is one of a flurry of DTC genetic testing bills introduced in several states since the beginning of 2024, following the enactment of several DTC genetic testing laws in 2023.
- FTC Consent Orders: The FTC recently announced proposed consent orders with Outlogic and InMarket Media related to the use of precise geolocation data. Both companies collect location data using software development kits (“SDKs”) installed in first- and third-party apps, among other data sources. According to the FTC’s complaints, Outlogic sold this data to third parties (including in a manner that revealed consumers’ visits to sensitive locations) without obtaining adequate consent, and InMarket used this data to facilitate targeted advertising without notifying consumers that their location data would be used for that purpose. In both cases, the FTC alleged that these acts and practices constituted unfair and/or deceptive acts or practices under Section 5 of the FTC Act.
Cybersecurity
Federal cybersecurity regulators had a busy start to 2024, setting in motion a number of new proposed rules and cybersecurity standards that, if implemented, would redefine the landscape of federal cybersecurity regulation in the years ahead.
- Critical Infrastructure Broadly Defined: The U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) published a proposed rule to implement the cyber incident reporting requirements for critical infrastructure entities from the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”). Notably, the proposed rule broadly defines critical infrastructure entities (pursuant to Presidential Policy Directive 21) across the 16 critical infrastructure sectors. In total, CISA estimates that over 300,000 entities would be covered by the rule. CIRCIA has two cyber incident reporting requirements for covered critical infrastructure entities: a 24-hour requirement to report ransomware payments and a 72-hour requirement to report covered cyber incidents to CISA. Under CIRCIA, the final rule must be published by September 2025.
- Cybersecurity Framework 2.0: The U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework. The new version incorporates some significant updates to the Framework including: expanded application (i.e., broad application regardless of cybersecurity program maturity); a new “govern” function (i.e., whether an organization’s cybersecurity risk management strategy, expectations, and policy are established, communicated, and monitored); increased focus on cybersecurity supply chain risk management (e.g., whether an organization performs due diligence on potential suppliers and monitors the relationship through the technology or service life cycle); and new reference tools.
- Federal Cybersecurity Enforcement Action: The U.S. Department of Health and Human Services Office of Civil Rights announced that it had settled a cybersecurity investigation with Montefiore Medical Center, a non-profit hospital system based in New York City, for $4.75 million.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs.