In a very short time, AI has evolved from an abstract idea into a practical tool, and that evolution demands legal thinking that can account for its use. AI as a concept began in the 1950s, when the mathematician and computer scientist Alan Turing proposed using computers to simulate intelligent behavior and critical thinking. But even though labs developed checkers and chess programs in the 1950s and rudimentary chatbots by the 1960s, hardware and software constraints kept AI inaccessible to most people until the 2000s, when developers began to integrate deep learning into AI applications. Today, cell phones, computers, and other intelligent machines perform complicated functions that once existed only in human imagination and science fiction. For example, map applications use AI to help drivers navigate traffic efficiently; social media applications use AI for facial recognition; digital devices use AI for voice commands; and cars are increasingly self-driving with the help of AI. In addition, businesses use AI to predict consumer trends, monitor employees, and make important financial decisions such as approving loans and setting customers’ insurance policies. The potential applications of AI are still being realized, and the possibilities seem endless.
Now, businesses are exploring new ways to use AI to automate processes and boost profitability, but these visions are often clouded by confusion over terminology. AI is not a single technology, nor does it have a single use. Broadly, it is the science and engineering of using computer programs and other intelligent machines to simulate human problem-solving and decision-making. AI has evolved from systems dependent on deduction to systems primarily dependent on data. Early systems were built on a concept called “reasoning as search” – essentially using deduction to travel a predetermined maze of possibilities. That approach hit a wall once it became clear that the possibilities in real-world problems branch exponentially.
Modern AI products rely instead on “learning sets” of data and come to recognize patterns within the data to perform a task. These break down into three broad categories, which we will discuss in order of increasing complexity.
First, basic machine learning ingests specific, labeled data and uses defined algorithms to solve problems and make decisions; it is frequently used to classify data points. The difference between machine learning and older, “reasoning as search” AI is that machine learning algorithms evolve as the machine receives more data, creating a “neural network” that is constantly learning, loosely analogous to a human brain, with an input layer, a processing layer, and an output layer. Machine learning produces a model particular to the type of data and the task at hand. It is predictable and transparent, but it requires conscientious training, and its models are not very portable between uses.
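For readers curious what “algorithms evolving as the machine receives more data” looks like in practice, the sketch below shows one of the simplest possible machine learning models – a single artificial neuron trained on labeled data points. The data, labels, and numbers are invented for illustration; no real product works at this small a scale.

```python
# Hypothetical labeled training data: each point is a pair of feature
# values with a 0/1 label (think "deny"/"approve" in a loan example).
# Points above the line x + y = 1 are labeled 1; points below, 0.
training_data = [
    ((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0),
    ((0.7, 0.9), 1), ((0.3, 0.3), 0), ((0.8, 0.6), 1),
]

def predict(weights, bias, x):
    # A single artificial "neuron": a weighted sum of the inputs plus a
    # bias, thresholded to produce a yes/no decision.
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total + bias > 0 else 0

def train(data, epochs=20, lr=0.1):
    # Each labeled example nudges the weights toward fewer errors. This is
    # the sense in which the model "evolves as it receives more data".
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            error = label - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

weights, bias = train(training_data)
print(predict(weights, bias, (0.95, 0.9)))  # prints 1 (a clear "class 1" point)
print(predict(weights, bias, (0.05, 0.1)))  # prints 0 (a clear "class 0" point)
```

The resulting model is transparent – the learned weights can be inspected directly – but, as the article notes, it is specific to this data and task and would have to be retrained for any other use.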
Second, deep learning is an elaboration of machine learning that increases the autonomy of the modeling, allows the system to accept data that is more “raw,” and builds neural networks with more layers between input and output. Deep learning systems can ingest unlabeled, unstructured data in raw form, process it, weigh the characteristics of the data points, and use evolving algorithms to solve problems and make decisions. As in basic machine learning, the process is reinforced by feedback. Deep learning is less transparent to the end user because the multiple processing layers make it difficult to reverse-engineer exactly how the system weighs data and reaches decisions.
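The transparency point can be made concrete. The toy network below stacks the same weighted-sum idea into several layers between input and output; the weight values are hypothetical fixed numbers (not learned) chosen only to show the data flow. Because every input influences the output only indirectly, through every intermediate layer, no single weight explains the final score – which is precisely why real deep models are hard to reverse-engineer.

```python
import math

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix):
    # Each row of weights produces one neuron's output in the next layer.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_matrix]

# Hypothetical weights for two hidden layers and one output layer.
hidden1 = [[0.5, -0.2], [0.3, 0.8]]
hidden2 = [[1.0, -1.0], [-0.5, 0.5]]
output  = [[0.7, 0.7]]

def forward(x):
    # Pass the input through every layer in turn.
    for w in (hidden1, hidden2, output):
        x = layer(x, w)
    return x[0]

score = forward([0.9, 0.1])
print(round(score, 3))  # a single score between 0 and 1
```

Even in this miniature example, attributing the final score to either of the two inputs requires unwinding every intermediate layer; real systems have millions of weights across many more layers.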
Finally, and most recently, there is generative AI. Generative AI combines deep learning with natural language processing, typically over extremely large data sets. Natural language processing is the ability of computers and AI applications to identify patterns and relationships in text, images, and spoken words in a way that approximates human comprehension. Generative AI uses this capability to learn to produce outputs that predict human expression. As a result, it allows computers to generate new text, images, music, and video – and to interact with humans. Recently developed chatbots like ChatGPT have sent shockwaves through the public imagination, not only for their uncanny replication of human communication but also for high-profile glitches and the exposure of sensitive information.
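At its core, the “predict human expression” task is next-word prediction. The sketch below is a deliberately crude stand-in for a language model: it counts which word follows which in a tiny invented corpus, then generates text by repeatedly choosing the most frequent successor. Real generative systems do this with deep neural networks over vastly larger data sets, but the underlying task is the same – and the example also hints at why such systems can only echo their training data.

```python
from collections import Counter, defaultdict

# A tiny invented training corpus.
corpus = ("the court held that the contract was valid and that the "
          "contract was enforceable").split()

# Learn bigram statistics: for each word, how often each successor appears.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=5):
    # Repeatedly predict the most common next word.
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

A model like this can only recombine what it has seen, which is one intuition behind the copyright and fair-use questions discussed below: the fluency of the output is a statistical echo of the training data.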
As the use of AI becomes more pervasive in our society, so do its legal implications. These issues involve the particulars of training, use, results, and disclosures – and implicate nearly every aspect of the law, from data privacy to contracts to torts to civil rights. Generative AI, in particular, has provoked novel discussions of whether abstracting data sets is “fair use” – and more fundamentally, who or what can hold a copyright.* Businesses should be mindful – even in what look like non-technological contracts – of the benefits, risks, and implications of using AI.
*No part of this article was written by AI.
This article was also authored by Summer Associate Josh Kluzak.