Colorado just became the first U.S. state to pass a law (Senate Bill 24-205, "SB 24-205" or the "CAIA") regulating consumer harms arising out of artificial intelligence ("AI"). While the CAIA will not go into effect until February 2026, it is part of a growing trend in the U.S., including, most notably, the White House's guidance on "Algorithmic Discrimination Protections" published at the end of 2023.

Continue Reading Colorado’s Artificial Intelligence Act (CAIA) – The First U.S. State Law Regulating Consumer Harms Arising Out of AI

The American Data Privacy and Protection Act (ADPPA), proposed in 2022, is no more. The relay race of proposed federal privacy legislation has now entered its final leg with the American Privacy Rights Act (APRA).

Continue Reading Federal Privacy Legislation Is Inching Toward the Finish Line With the American Privacy Rights Act

That whistling sound you hear may not be an old-school newspaper whistling past a graveyard—it may well be an AI industry-killing asteroid. On December 27, 2023, the New York Times filed a groundbreaking suit against OpenAI and Microsoft. The Times alleged copyright infringement, vicarious copyright infringement, contributory copyright infringement, violations of the Digital Millennium Copyright Act's prohibition on removing copyright management information, unfair competition, and trademark dilution. The 69-page, 204-paragraph complaint, filed in the Southern District of New York, alleges, among many other things, that:

Continue Reading Will the New York Times Take Down Large Language Models?

The Department of Justice recently announced a “disruption campaign” against the Blackcat ransomware group (aka ALPHV or Noberus), including seizing the group’s darknet website and releasing a decryption tool for victim entities to recover their systems.

Continue Reading ALPHV/Blackcat Ransomware Group Announces New Rule: No Rules…Anything, Anywhere

On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the "Order"). The Order is the most comprehensive federal policy on AI to date and covers a wide range of topics. It sets new standards for AI safety and security, addresses how AI developments could impact individuals' privacy and civil rights, discusses how the U.S. can continue to be a leader in AI innovation and competition, and much more. The Order closely follows the Biden administration's July 21, 2023, announcement that seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) voluntarily agreed with the administration to place more guardrails around the development and deployment of AI. The Order has many implications for companies that are developing and deploying AI systems:

Continue Reading Biden’s Executive Order and Its Possible Effects on Companies Developing and Deploying AI Systems

The Big Apple now demands big commitments from financial institutions regarding cybersecurity practices. Yesterday, the New York State Department of Financial Services ("NYDFS") adopted its second set of amendments to its 2017 "Cybersecurity Requirements For Financial Services Companies" ("Amended Cybersecurity Regulation"), with some amendments immediately going into effect. The law requires "covered entities," including but not limited to financial institutions and insurance providers authorized to conduct business in New York, to implement and maintain a cybersecurity program, to report cybersecurity events, and to annually certify their compliance with the law. The Amended Cybersecurity Regulation now requires:

Continue Reading Security State of Mind: Amendments to NYDFS’s Cybersecurity Regulation Go Live

The SEC has been on a cybersecurity tear in 2023, instituting new rules on disclosures of cybersecurity events and threat assessments. But not wanting to let go of the past, it brought suit on October 30 in the Southern District of New York against SolarWinds and its Chief Information Security Officer, Timothy Brown. The SEC based the action on what it saw as mismatches between SolarWinds' public disclosures and what the SEC found in its investigation. The case is a first in many ways: the first cybersecurity-related SEC case alleging intentional concealment, the first in which internal controls figure prominently, and the first in which the SEC has brought an action against a CISO personally. This has been blown up in the data security media to suggest that CISO is somehow the most dangerous position in a corporation. In reality, this is not IT Armageddon, but there are some practical lessons.

Continue Reading SEC Enforcement Against SolarWinds and Its CISO: Time to Freak Out?

Many businesses think their websites, like a spacecraft following Newton’s laws of motion, should just keep going once established. What may be reasonable in deep space is not particularly safe in the galaxy of data privacy, which is choked with debris, asteroids, and radiation. This fall is as good a time as any to make sure your electronic presence is still on course—especially as more states come online with new laws and regulations in 2024. Consider three questions:

Continue Reading Start Your Website Spring Cleaning – This Fall

This year, the Cook County docket has seen an influx of class action claims seeking redress under an older Illinois privacy statute, the Genetic Information Privacy Act (GIPA), no doubt due to the statute's extreme statutory damages provisions. GIPA, enacted in 1998, provides a private right of action and permits recovery of actual damages or statutory damages of $2,500 per negligent violation and $15,000 per intentional or reckless violation of the statute. The potential for massive awards has clearly caught the eye of the plaintiffs' bar. Indeed, despite sporadic filings over the past decade, nearly 30 cases have been brought under GIPA in 2023 in Cook County alone, the majority of them filed in the last two months.

Continue Reading Employers Beware – New Life for an Old Statute: Cook County Class Action Litigation Under the Genetic Information Privacy Act

In a very short time, AI has evolved from an abstract idea into a practical tool, and the law must now account for its use. AI as a concept began in the 1950s, when the well-known mathematician and scientist Alan Turing conceptualized using computers to simulate intelligent behavior and critical thinking. However, even though labs developed checkers and chess programs in the 1950s and rudimentary chatbots by the 1960s, hardware and software constraints made AI inaccessible to most people until the 2000s, when developers began to integrate deep learning into AI applications. Today, cell phones, computers, and other intelligent machines perform complicated functions that once inhabited only human imagination and (science) fiction. For example, map applications use AI to help drivers efficiently navigate traffic; social media applications use AI in facial recognition functions; digital devices use AI for voice recognition commands; and cars are increasingly self-driving with the help of AI. In addition, businesses use AI to predict consumer trends, monitor employees, and make important financial decisions such as approving loans and setting customers' insurance policies. The potential applications of AI are still being realized, and the possibilities seem endless.

Continue Reading An Overview of AI