President Biden Issues Wide-Ranging Executive Order on Artificial Intelligence
On 30 October 2023, President Biden issued a long-awaited executive order (EO) on artificial intelligence (AI), accompanied by a one-page fact sheet from the White House. The document is the longest substantive EO issued by this administration.
- The EO builds on the Biden administration’s prior AI initiatives, including the Blueprint for an AI Bill of Rights and the voluntary commitments made by leading AI companies.
- President Biden invoked the authorities in the Defense Production Act to require companies that develop certain high-performance AI models to (1) notify the government of their development efforts, and (2) report on the results of so-called “red team” tests of the AI systems.
- The EO includes numerous directives aimed at a variety of federal agencies to initiate their own rulemakings on AI. As these agencies issue proposed rules over the next year or so, they will create a broad network of AI regulations in a variety of different sectors.
- The EO also includes several provisions designed to turbocharge the domestic AI industry, such as providing additional funding for AI research and development and revising the visa system to facilitate bringing AI experts to the United States.
At a high level, the EO seeks to expand on the administration’s prior efforts to establish a framework for AI development and innovation. While the EO is primarily directed at federal agencies, it does not focus exclusively on government use of AI. The EO states that the administration intends to publish a separate national security memorandum, which will specifically address national security regulations designed to prevent foreign nations and nonstate actors from accessing and exploiting AI systems. The EO has several provisions that will create risks and opportunities for private sector companies, some of which are outlined below.
Regulating AI Development and Requiring Disclosures
The EO invokes the Defense Production Act, a law passed during the Korean War in 1950 and used most notably during the COVID-19 crisis, which enables the president to direct US development of certain technologies critical for national defense. On that authority, the EO imposes reporting requirements on companies developing, or planning to develop, potential “dual-use foundation models.” These are AI models that are “trained on broad data, generally [use] self-supervision, [contain] at least tens of billions of parameters, [are] applicable across a wide range of contexts, and exhibit, or could easily be modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, [or] national public health and safety.” This provision expands on, and makes mandatory, the voluntary commitments made by leading AI companies earlier this year.

Within 90 days of the date of the EO, companies must (1) notify the US Department of Commerce (Commerce) when training, developing, or producing these AI models; (2) report on the ownership and possession of the model weights of certain AI models; and (3) submit to Commerce, on an ongoing basis, the results of the models’ training and adversarial testing analysis (red teaming) and any actions taken to meet safety objectives. This reporting on AI red-team testing results will be based on guidance to be issued by the National Institute of Standards and Technology (NIST) within 270 days of the date of the EO. Until then, reporting must include the results of any red-team testing related to “lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives.”
Companies, individuals, or organizations acquiring, developing, or possessing large-scale computing clusters must also follow certain reporting requirements, including on the existence and location of the clusters.
Commerce, in consultation with other agencies, must also define and regularly update the set of technical conditions for models and computing clusters that would be subject to these reporting requirements. Until those conditions are defined, the requirements cover models that exceed a certain threshold for computing power, as well as computing clusters composed of machines that collectively exceed a certain computing capacity and are housed within a single data center.
Building AI Regulatory Framework
The EO also sets out numerous directives that require federal agencies to draft and promulgate regulations in their respective fields relating to use of AI. These agency directives expand on the administration’s prior efforts to establish a framework for AI development, most notably the Blueprint for an AI Bill of Rights, issued by the White House last year, and will create a vast network of AI regulations over the coming months. Key directives include the following:
- The EO directs NIST to establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems.
- The EO directs Commerce to promulgate various regulations that will require infrastructure as a service (IaaS) providers to report when foreign persons transact with US-based IaaS providers to train large AI models.
- The EO directs the US Department of the Treasury (Treasury) to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- The EO directs Commerce to undertake a regulatory review and ultimately promulgate regulations that will require watermarking and content provenance standards to ensure that all content issued by the US government can be authenticated as AI-generated or human-generated.
- The EO also directs the Federal Acquisition Regulatory Council to require government contractors to adhere to the same watermarking and content provenance standards.
- The EO directs the US Department of Labor (Labor) to develop guidance addressing the use of AI to surveil employees in the workplace.
- The EO directs the Consumer Financial Protection Bureau and the US Department of Housing and Urban Development to develop regulations preventing discriminatory uses of AI in a variety of housing-related applications, such as digital advertising and credit analysis.
The EO only indirectly addresses the myriad data privacy issues that arise in the context of AI models, particularly generative AI and large language models (LLMs). Instead, the EO focuses on reinforcing the arguments for a renewed push for a federal data privacy law. The EO highlights the administration’s view that Congress must take action on data privacy standards, and it reiterates the administration’s call for Congress to pass federal data privacy legislation, which has been under consideration on Capitol Hill for years. The administration’s view is echoed in statements from Congressional leadership indicating a preference for passing privacy legislation before advancing AI legislation.
The EO includes several provisions that relate to cybersecurity. The reporting requirements for developers of certain AI systems, discussed above, require disclosure of the cybersecurity protections that developers take to assure the integrity of the training process and to protect model weights. As noted above, Treasury must issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks. The EO also requires an annual assessment of risks to critical infrastructure arising from AI. The EO directs the US Department of Health and Human Services to create a task force and issue a strategic plan that will, among other things, examine the cybersecurity risks that AI poses to health care. The EO also directs the US Office of Management and Budget to issue guidance to agencies on best practices relating to cybersecurity and AI.
Turbocharging the US AI Industry
The EO also seeks to promote innovation and competition in the US AI industry by attracting talent to the United States, funding research and development for new technology, and expanding investments in AI-related education. The EO directs the US Department of State (State) and the US Department of Homeland Security (DHS) to take appropriate steps to attract and retain talent in AI and other critical and emerging technologies to ensure that the “companies and technologies of the future are made in America.” Of the 13 directives aimed at modernizing and streamlining the visa process for noncitizens working on, studying, or conducting research in AI, the most consequential are those that direct the agencies to initiate, or consider initiating, a rulemaking process. Within 120 days of the date of the EO, State must consider initiating a rulemaking to establish new criteria to designate countries and skills on the J-1 Exchange Visitor Skills List as it relates to the visa’s two-year home-country physical presence requirement after completion of the exchange visitor program. State must also consider implementing a domestic visa renewal program and, within 180 days of the date of the EO, must consider initiating a rulemaking to expand the renewal program to include academic J-1 research scholars and F-1 science, technology, engineering, and mathematics (STEM) students. In the same time frame, DHS must continue its ongoing rulemaking process to modernize the H-1B program to improve its usage by “experts in AI and other critical and emerging technologies,” as well as consider initiating a rulemaking to enhance the process for noncitizens, “including [these] experts … and their spouses, dependents, and children, to adjust their status to lawful permanent resident.” The EO also directs Labor to publish a request for information within 45 days asking for input on AI and other STEM-related occupations that lack qualified workers.
The EO also indicates an intention to fund research and development for new technology and expand investments in AI-related education. Within 150 days of the date of the EO, the National Science Foundation (NSF) shall fund and launch at least one AI-focused Regional Innovation Engine (NSF Engine) in the United States. Each NSF Engine is eligible for up to US$160 million in funding. The EO also directs the NSF to establish at least four more National AI Research Institutes within 540 days of the date of the order.
Advancing Equity and Civil Rights
Building on the administration’s prior efforts to establish a values framework for AI development, this EO reaffirms the role that AI policies must play in advancing civil rights, civil liberties, equity, and justice. Specifically, the EO includes directives aimed at ensuring compliance with federal laws and “holding those deploying and developing AI accountable to standards that protect against unlawful discrimination and abuse.” The EO promotes the protection of civil rights related to government benefits, as well as in the labor, housing, and consumer financial markets.
For over a decade, the firm has supported clients developing AI and has a bipartisan team of professionals, including former members of Congress, advising companies as they navigate this rapidly developing policy and regulatory landscape. The firm's Public Policy and Law practice group will continue to actively monitor developments in the AI policy debate.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.