On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”). The Order is the most comprehensive federal policy on AI to date and covers a wide range of topics. It sets new standards for AI safety and security, addresses how AI developments could impact individuals’ privacy and civil rights, discusses how the U.S. can continue to be a leader in AI innovation and competition, and much more. The Order closely follows the Biden administration’s July 21, 2023, announcement that seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) had voluntarily agreed to place more guardrails around the development and deployment of AI. The Order has many implications for companies that are developing and deploying AI systems:
First, the Order will require companies developing any dual-use “foundation model” (a model capable of threatening national security in addition to having conventional civilian uses) to notify the federal government of the following: (i) training, development, and production activities, including the physical and cybersecurity protections taken to assure the integrity of the training process against sophisticated threats; (ii) ownership and possession of the model weights[1] of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; (iii) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing; and (iv) a description of any associated measures the company has taken to meet safety objectives. The Order requires companies to provide this information on an ongoing basis, though it is presently unclear how frequently these updates must be provided. This reporting requirement will likely help the government identify and mitigate the potential risks posed by dual-use foundation models[2].
Second, the Order calls on Congress to pass data privacy legislation to protect Americans, and it directs federal agencies to evaluate how they collect and use commercially available information. Together, these measures would impact a wide range of businesses, especially data brokers.
Third, it could impact cloud computing more generally by requiring companies that acquire, develop, or possess a potential large-scale computing cluster capable of training AI at certain rates to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster. Further, the Order makes clear that even though the particular disclosures remain undefined for now, the requirement applies to: (i) any model that was trained using a quantity of computing power greater than 10²⁶ (one hundred septillion) integer or floating-point operations, or trained primarily on biological sequence data using a quantity of computing power greater than 10²³ (one hundred sextillion) integer or floating-point operations; or (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10²⁰ (one hundred quintillion) integer or floating-point operations per second for training AI.
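For a rough sense of the scale these thresholds capture, a common back-of-envelope estimate puts dense transformer training compute at roughly 6 × (parameters) × (training tokens) floating-point operations. The sketch below uses that rule of thumb with purely hypothetical model sizes; neither the estimate nor the example figures come from the Order itself.

```python
# Back-of-envelope check against the Order's model-reporting thresholds.
# Assumption: training compute ~ 6 * parameters * tokens, a common rule
# of thumb for dense transformers, not a formula from the Order.

GENERAL_THRESHOLD = 1e26  # 10^26 total operations (general models)
BIO_THRESHOLD = 1e23      # 10^23 total operations (models trained primarily
                          # on biological sequence data)

def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6 * params * tokens

def reporting_triggered(params: float, tokens: float, bio: bool = False) -> bool:
    """Compare estimated training compute against the applicable threshold."""
    threshold = BIO_THRESHOLD if bio else GENERAL_THRESHOLD
    return training_flops(params, tokens) > threshold

# Hypothetical 70B-parameter model trained on 2 trillion tokens:
# 6 * 7e10 * 2e12 = 8.4e23 operations -- well under the 1e26 threshold.
print(training_flops(70e9, 2e12))       # 8.4e+23
print(reporting_triggered(70e9, 2e12))  # False
```

Note that the model thresholds in (i) are totals over an entire training run, while the cluster threshold in (ii) is a rate (operations per second), so the two figures are not directly comparable.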
Finally, the Order will require U.S. infrastructure-as-a-service providers (“IaaS Providers”) to notify the Secretary of Commerce when a foreign person transacts with that IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity. It will also block the foreign resale of U.S.-developed IaaS to “foreign persons” (as defined under ITAR) absent reporting on the particulars of the transaction. This requirement is likely intended to help the federal government track the development of large AI models outside the United States and protect the U.S. from foreign cyberattacks.
Key Takeaways
- Companies developing dual-use foundation models will be required to report certain information to the federal government, including training and development activities and the results of red-team testing.
- Businesses, especially data brokers, will likely have to update their data privacy practices if Congress passes the federal privacy legislation the Order calls for.
- Companies acquiring, developing, or possessing a large-scale computing cluster that meets the Order’s technical thresholds will be required to report this information to the federal government.
- IaaS Providers will be required to report transactions where a foreign buyer is contracting for AI model-training services.
[1] A “model weight” is a numerical parameter within an AI model that helps determine the model’s outputs in response to inputs. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/#:~:text=(u)%20The%20term%20%E2%80%9Cmodel,outputs%20in%20response%20to%20inputs.
[2] Content generation risks creating disinformation at scale that could sway public opinion on particular topics. President Biden shared an example in which he heard a deepfake recording of his own voice and asked, “when the hell did I say that?” https://www.dailymail.co.uk/news/article-12690155/Biden-reveals-time-heard-deepfake-voice-asks-hell-did-say-President-compares-AI-science-fiction-new-czar-Kamala-laughs.html