In Greek mythology, Sisyphus was punished by Hades for cheating death (twice): he was condemned to roll an immense boulder up a hill, only for it to roll back down every time it neared the top. AI stakeholders know the feeling. Attempting to keep pace with the downpour of artificial intelligence-related regulation, guidance, rules and requirements emerging over the past two years feels like a mythical challenge.

At any point in time, there are 50 U.S. states, five inhabited territories, the White House, a federal district, a dozen federal agencies, a hundred-odd state agencies and a couple thousand municipalities all tackling the same question: what are the rules for a safe, legal and generally non-evil deployment of artificial intelligence tools?

Different regulators have come up with different answers to that question. What have they focused on so far?

Preventing Bias

AI models are only as good as the data they are trained on. Developers and deployers of AI tools should develop, document, and implement anti-bias protocols to prevent biased training and validation data from contaminating outputs. Examples include:

  • The Colorado Artificial Intelligence Act requires developers and deployers of a “high-risk artificial intelligence system” to use “reasonable care” to avoid algorithmic discrimination.
  • Illinois’ Artificial Intelligence Video Interview Act requires employers that perform artificial intelligence analysis of video interviews to track how the race and ethnicity of job applicants are handled by the AI tool. This data must be submitted to the Illinois Department of Commerce and Economic Opportunity, which will determine whether there is racial bias in the employer’s use of artificial intelligence.

Rights Relating to Automated Decision-Making and Profiling

Found in state comprehensive privacy laws like the CCPA, these regulations generally require data protection assessments for high-risk automated processing of consumer personal data and give consumers the right to opt out of automated processing that produces legal effects or effects of similar significance. Examples include:

  • The Oregon CPA requires businesses to honor consumers’ right to opt out of automated processing of personal data if the processing produces legal effects or effects of similar significance. It also requires data protection assessments for automated processing of personal data if the processing presents a reasonably foreseeable risk of unfair or harmful treatment of the consumer.
  • The California Privacy Protection Agency is in the process of issuing new CCPA regulations governing consumer access, opt-out, and appeal rights with respect to businesses’ use of automated decision-making technology, as well as requirements for mandatory risk assessments and cybersecurity audits.

Controlling Deepfake/Synthetic Media

Regulatory focus has been on three primary areas: non-consensual sexual imagery, non-consensual use of a person’s likeness or voice for commercial purposes, and political misinformation. Generally, the first two are criminalized or give victims a private civil cause of action, while the third must be labeled as manipulated. Examples include:

  • Hawaii makes it a class C felony to create, disclose, or threaten to disclose sexual deepfakes “with intent to substantially harm the depicted person with respect to that person’s health, safety, business, calling, career, education, financial condition, reputation, or personal relationships, or as an act of revenge or retribution.”
  • Washington provides a private cause of action for political candidates against creators or sponsors of synthetic media in “electioneering communications.” However, it is an affirmative defense if the electioneering communication includes a disclosure stating: “This (image/video/audio) has been manipulated.”

Requiring Bot Transparency

These regulations generally prohibit the use of artificial intelligence to impersonate a human being without disclosure. Examples include:

  • California’s Bot Disclosure Law outlaws the use of an AI-driven bot to impersonate a human being with the intent to knowingly deceive another person about the content of the communication in order to incentivize a purchase or sale of goods or services or to influence a vote in an election. However, disclosing the bot’s artificial identity through a clear, conspicuous, and reasonably designed notice prevents liability.
  • Utah’s Artificial Intelligence Policy Act requires certain deployers of generative artificial intelligence, if asked or prompted by the person with whom it interacts, to clearly and conspicuously disclose that the person is interacting with generative artificial intelligence and not a human.

Prohibiting Deceptive Uses and Statements

While we still await formal federal legislation or agency rulemaking directly regulating artificial intelligence, the FTC’s use of its existing Section 5 powers provides guidance on non-deceptive uses of AI. Examples include:

  • The FTC brought an enforcement action against an online dating service that created chatbots posing as other human users of the service in order to induce potential customers to sign up.
  • The FTC has warned that it may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.

Sectoral Regulations

In the insurance industry specifically, the National Association of Insurance Commissioners has released a model bulletin, adopted by many U.S. states, requiring insurers to develop programs to ensure the responsible use of AI systems. These programs must include “processes and procedures providing notice to impacted consumers that AI systems are in use and provide access to appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are being used.” Examples include:

  • The Connecticut Insurance Department has adopted the NAIC model bulletin and further requires domestic insurers to complete an artificial intelligence certification on or before September 1, 2024, and annually thereafter.

Governmental Use

Many states have passed laws requiring the creation of task forces, while others require specific safeguards for their own state and local governmental use of AI systems or ban certain high-risk governmental uses of AI altogether. Examples include:

  • The White House’s Executive Order 14110 created 150 AI-related requirements for federal agencies, guided by eight core policies and principles.
  • Pennsylvania’s 4 Pa. Code § 7.993 establishes standards for the design, development, procurement, and deployment of generative AI technology by governmental agencies.

So, what are the general principles connecting the current state of AI regulation?

  1. Transparency is key:
    • Be transparent and don’t deceive consumers about how you use AI tools;
    • Be transparent about how consumer data is used to train or validate AI tools;
    • Be transparent about how AI decision-making impacts a consumer; and
    • Be transparent about your use and disclosure of AI-generated content (including chatbot interactions).
  2. If your AI tools make or support significant decisions impacting consumers, have explicitly documented (and followed) policies, procedures, and training for ensuring that the tools are working as intended, evaluating bias, and identifying adverse impacts to consumers. Focus on any use of sensitive data (like biometric information) and impact on protected rights (like housing) or protected classes (like racial identity), in both AI tool inputs and outputs.
  3. Secure your AI tools from malicious interference. If you are deploying a tool from a third-party provider, be sure to include it in your cybersecurity audits and assessments.
  4. Develop and offer mechanisms for consumers to opt out of decisions made by, or using, AI tools.

Need help understanding how AI regulation may affect you or your business? Dykema’s Artificial Intelligence and Innovation practice is here to assist. You can review Dykema’s 2024 Report on Legal Trends in Artificial Intelligence here.