How the Utah Artificial Intelligence Policy Act Impacts Health Professionals

Overview


On March 13, 2024, Utah Governor Spencer Cox signed S.B. 149, the Artificial Intelligence Policy Act (the AI Act), into law. Effective May 1, 2024, the AI Act amends Utah consumer protection and privacy laws to require disclosure of artificial intelligence (AI) use to consumers in certain circumstances. Interestingly, the AI Act takes a bifurcated approach to the disclosure requirement: it holds businesses and individuals at large to one standard, and regulated occupations, including healthcare professionals, to another. The AI Act does not require individuals’ consent or directly regulate how generative AI is used once its use is disclosed to patients.

KEY TAKEAWAYS

  • While Utah’s AI Act requires disclosure of AI use to general consumers only if asked, the law requires regulated professionals (such as physicians) to prominently disclose the use of AI in advance.
  • The AI Act’s definition of “generative AI” is vaguely and broadly worded and may include solutions beyond those typically considered generative AI.
  • The Office of Artificial Intelligence Policy’s AI learning laboratory program allows AI developers and deployers to benefit from regulatory mitigation, including waived civil fines, during pilot testing.

Alongside nascent efforts at the federal level to consider regulatory approaches to the development and deployment of AI, an increasing number of state regulatory bodies, including legislatures and professional boards, are entering the fray. Utah is the latest state to endeavor to craft a regulatory response to AI. While many states have enacted laws that affect the deployment of AI tools indirectly, such as more robust data privacy requirements, few have specifically addressed the deployment of AI tools, and none have specifically focused on the deployment of generative AI tools.

In Depth


BIFURCATED AI USE DISCLOSURE REQUIREMENTS

Commercial Use of AI Under Utah Law

The AI Act requires anyone who deploys generative AI to interact with a person, in a manner regulated under Utah consumer protection and privacy laws, to disclose the use of AI clearly and conspicuously, but only if asked or prompted by the consumer. The AI will therefore need to be trained to confirm that it is AI when asked, and AI deployers will otherwise need to respond to consumers’ requests for information. Furthermore, AI deployers likely cannot bury the disclosure in terms and conditions because the law requires clear and conspicuous disclosure.
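To make the on-request obligation concrete, the sketch below shows one way a deployer might route identity questions to a fixed, conspicuous confirmation rather than to the model. This is a hypothetical illustration, not legal guidance: the pattern list, the `respond` wrapper and the disclosure wording are all invented for this example.

```python
import re

# Hypothetical phrasings a consumer might use to ask whether they are
# interacting with AI; a production system would use a more robust classifier.
AI_IDENTITY_PATTERNS = [
    r"\bare you (an? )?(ai|bot|robot)\b",
    r"\bam i (talking|chatting|speaking) (to|with) (an? )?(ai|bot|human|person)\b",
    r"\bis this (an? )?(ai|bot|human|person)\b",
]

DISCLOSURE = (
    "Yes, you are interacting with an artificial intelligence system, "
    "not a human."
)

def asks_about_ai(message: str) -> bool:
    """Return True if the consumer appears to be asking whether this is AI."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in AI_IDENTITY_PATTERNS)

def respond(message: str, generate_reply) -> str:
    """Answer identity questions with a fixed disclosure; otherwise defer to the model."""
    if asks_about_ai(message):
        return DISCLOSURE
    return generate_reply(message)

if __name__ == "__main__":
    model = lambda m: f"[model reply to: {m}]"  # stand-in for a real generative model
    print(respond("Are you an AI?", model))        # -> the fixed disclosure
    print(respond("What are your hours?", model))  # -> the model's reply
```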

Health Professional Use of AI Under Utah Law

In contrast, the AI Act requires regulated professionals, such as physicians, to “prominently” disclose the use of generative AI in advance of its use. The disclosure should be provided verbally at the start of an oral exchange or conversation, and through electronic messaging before a written exchange.
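For a written exchange, the advance-disclosure pattern might look like the sketch below, which delivers a prominent notice before the first AI-generated message in a session. This is illustrative only: the class and message names are invented, and disclosing once per session is just one reading of the statute (as discussed below, the AI Act does not settle how often disclosure is required).

```python
AI_DISCLOSURE = (
    "NOTICE: Portions of this conversation are generated by an artificial "
    "intelligence tool used by your healthcare provider."
)

class PatientChatSession:
    """Delivers a prominent AI-use disclosure before any generative output."""

    def __init__(self, send_message):
        # send_message: callable that pushes a message to the patient,
        # e.g., via a patient portal or secure messaging service.
        self.send_message = send_message
        self.disclosed = False

    def send_ai_reply(self, reply: str) -> None:
        # Disclose once, in advance of the first AI-generated message.
        if not self.disclosed:
            self.send_message(AI_DISCLOSURE)
            self.disclosed = True
        self.send_message(reply)

if __name__ == "__main__":
    session = PatientChatSession(print)
    session.send_ai_reply("Your lab results are ready for review.")
    session.send_ai_reply("Would you like to schedule a follow-up?")
```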

The AI Act leaves the practitioner to determine what the disclosure should include and how much detail to provide. The AI Act is also unclear on how often disclosure is required (i.e., whether disclosure should precede each instance of generative AI use, or only the first instance of use with respect to a particular patient or other consumer by a particular regulated professional or the organization with which the professional is associated). Finally, while the legislation is clear that these disclosure obligations apply where generative AI is used while providing the regulated service (e.g., the practice of medicine), it is often unclear when professional services begin and end. For example, regulated healthcare professionals may struggle to determine whether they are engaging in licensed activity in circumstances such as care coordination, record management, data analysis and other broad-based activities that are increasingly common in value-based care environments.

While the emphasis on public transparency has benefits, laws such as the AI Act present challenges to professionals seeking to achieve meaningful transparency. For example, many patients of regulated healthcare professionals will not understand how sophisticated AI technology works. Accordingly, if a patient has a question about the AI solution and the professional does not know the answer, the relationship between the professional and patient could be undermined. Further, healthcare professionals do not routinely disclose all of the technology to be used during a course of treatment, and the AI Act will likely prompt patients to ask questions about this technology in particular. The rationale for treating generative AI technology differently than, for example, an MRI is not clear.

Which Approach Applies to You?

As a result of the bifurcated approach to governing AI disclosure requirements, AI deployers will need to evaluate which requirement applies to them. The law distinguishes between (a) “regulated occupations,” which include healthcare professionals, and (b) all others who “use, prompt, or otherwise cause generative artificial intelligence to interact with a [consumer] ….” The definition of “regulated occupations” includes all occupations regulated and licensed by the Utah Department of Commerce. This definition covers licensed individuals such as physicians, but it is not clear whether the disclosure requirement extends to nonlicensed employees or co-workers acting on behalf of the professional. The second prong of the disclosure portion of the AI Act has broader application, which can produce seemingly inconsistent results. For example, in the value-based care space, companies may use AI to help guide patients in making triage decisions. Such a use would be subject only to the broader requirement to disclose the use of AI if the consumer asks. However, when a physician uses AI in a similar manner to interact with a patient, help the patient understand his or her symptoms and offer diagnostic input, such use would be subject to the regulated occupation requirement to disclose the AI use at the outset. While these cases are very similar on their face, the AI Act treats them rather differently.
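The bifurcation can be summarized as a simple decision function. The sketch below is an illustrative simplification of the statutory analysis, not a compliance tool: each boolean input (whether a tool meets the generative AI definition, whether the deployer is in a regulated occupation, whether the AI is used within the regulated service) is itself a legal judgment.

```python
from dataclasses import dataclass
from enum import Enum

class DisclosureStandard(Enum):
    PROMINENT_ADVANCE = "Disclose prominently, in advance of any generative AI use"
    ON_REQUEST = "Disclose clearly and conspicuously, but only if the consumer asks"
    NONE = "No disclosure obligation under the AI Act's disclosure provisions"

@dataclass
class Interaction:
    uses_generative_ai: bool        # does the tool meet the statutory definition?
    regulated_occupation: bool      # e.g., a physician licensed by the Utah Department of Commerce
    within_regulated_service: bool  # is the AI used while providing the regulated service?

def applicable_standard(i: Interaction) -> DisclosureStandard:
    """Simplified mapping of the AI Act's bifurcated disclosure requirements."""
    if not i.uses_generative_ai:
        return DisclosureStandard.NONE
    if i.regulated_occupation and i.within_regulated_service:
        return DisclosureStandard.PROMINENT_ADVANCE
    return DisclosureStandard.ON_REQUEST

# A physician using generative AI for patient triage vs. a non-regulated company
# offering a similar triage tool directly to consumers:
print(applicable_standard(Interaction(True, True, True)).value)
print(applicable_standard(Interaction(True, False, False)).value)
```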

DEFINITION OF GENERATIVE AI

The AI Act defines “generative AI” as “an artificial system that: is trained on data; interacts with a person using text, audio, or visual communication; and generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.” While the definition includes some ambiguous terms, it contains the essential elements of what is commonly understood to be generative AI. Nonetheless, it does involve some confusing and perhaps poorly considered aspects.

For example, the third element of the definition sets a pseudo-objective standard by measuring outputs against those “created by a human.” Given the breadth of the communication methods humans use, this would likely capture every output imaginable, from a simple text message to graphs, charts and artistic expression. The other elements are likewise generic enough that the definition could capture just about any automated system developed through machine learning. Thus, while the AI Act implicitly (and correctly) acknowledges that generative AI presents more risks and potential sensitivities for consumers than nongenerative AI, the differentiating definition may be hard to implement and may inadvertently capture AI tools that do not raise the kinds of concerns the AI Act is trying to address.
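For deployers mapping a particular tool against the statutory language, the three elements can be treated as a checklist, as in the sketch below. This is an illustrative sketch only: whether each element is satisfied is a legal judgment, not a boolean, and the field names are our paraphrase of the statute.

```python
from dataclasses import dataclass

@dataclass
class GenerativeAIDefinitionTest:
    """The three elements of the AI Act's 'generative AI' definition."""
    trained_on_data: bool                  # "is trained on data"
    interacts_text_audio_or_visual: bool   # "interacts with a person using text, audio, or visual communication"
    nonscripted_humanlike_outputs: bool    # "generates non-scripted outputs similar to outputs created
                                           # by a human, with limited or no human oversight"

    def within_definition(self) -> bool:
        # All three statutory elements must be present.
        return (self.trained_on_data
                and self.interacts_text_audio_or_visual
                and self.nonscripted_humanlike_outputs)

# As the discussion above suggests, even a non-chat machine learning tool that
# messages users may plausibly satisfy all three elements.
tool = GenerativeAIDefinitionTest(True, True, True)
print(tool.within_definition())  # True
```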

ENFORCEMENT AND LIABILITY

The AI Act states that when the use of generative AI is not preceded by proper disclosure, the use violates Utah consumer protection laws, which can result in civil penalties of up to $5,000 per violation in enforcement actions brought by the Utah Attorney General. In such cases, AI deployers, as applicable, are liable for the violation and subject to the corresponding penalties. The AI Act provides that a party violating the disclosure requirement cannot avoid liability by claiming that the generative AI “made the violative statement; undertook the violative act; or was used in furtherance of the violation.” This provision undermines a generative AI tool deployer’s ability to defend against a claim of violation by asserting that the developer of the technology is responsible, and it may also work to undermine the argument that a generative AI tool has enough independent agency to be recognized as an independent actor. However, an AI deployer could seek to hold a developer liable through indemnities or other contractual risk allocation provisions.

RULEMAKING AUTHORITY AND MITIGATION

While other states have created sub-agencies or focus groups to study AI, the AI Act creates a regulatory agency, the Office of Artificial Intelligence Policy, that will essentially serve as a proving ground for effective AI policy and technologies. The Office of Artificial Intelligence Policy was created to administer the AI learning laboratory program and to engage with stakeholders about potential regulatory developments in the field. The AI learning laboratory will assess AI technologies to inform state regulatory frameworks, encourage AI development in the state and assist in the evaluation of proposed regulatory structures. Significantly, the AI learning laboratory will allow participants to test their products.

Participants in the AI laboratory program may apply for regulatory mitigation during pilot testing, which will allow them to test their AI products in the real world with reduced regulatory exposure, including waived civil fines. This feature of the AI laboratory program is particularly important given the fast-changing nature of AI and the uncertainty around each new use case. Further, the Office of Artificial Intelligence Policy has rulemaking authority to set rules and requirements governing AI laboratory program participants’ use of AI products. Through this rulemaking authority, the Office will likely give regulatory mitigation a more tangible definition in the AI context.

Our cross-practice team continues to closely monitor developments in AI. Reach out to one of the authors of this On the Subject or your regular McDermott lawyer to discuss the potential legal implications for your business.