Overview
During the AI Governance and Security Assessment Workshop, Shawn Helms and Jason Krieser of McDermott Will & Schulte and Patrick Murphy of Palo Alto Networks Unit 42 discussed ways to govern the use of generative artificial intelligence (AI) and to address the security risks associated with AI in a fast-paced environment.
Key takeaways from the program included:
In Depth
- AI is becoming ubiquitous. AI is rapidly permeating all facets of business operations. We are approaching an environment in which the “oddball” case will be the absence of AI rather than its presence. Organizations must proactively plan for and guide AI adoption instead of attempting to block it. This includes implementing safe and responsible deployment practices and developing a clear understanding of each tool’s risk profile. Thoughtful governance at the outset will facilitate adoption and the responsible, effective use of AI tools.
- Fragmented legal landscape in the US. There is no comprehensive federal framework that provides unified guidance or regulatory oversight for AI in the United States. Common themes emerging across state legislatures include transparency obligations, consumer and employee protection requirements, data privacy safeguards, bias mitigation mandates, and restrictions on deepfakes. States appear to be diverging in their approaches. For example, Texas is taking a business-friendly approach, whereas Colorado imposes heavier burdens (especially on high-risk systems) while offering an affirmative defense tied to risk mitigation frameworks. This divergence will complicate compliance for organizations, particularly those that operate across multiple states or globally.
- Zero trust foundation. The zero trust principle of “never trust, always verify” should extend to AI: human oversight should be integrated into every AI-driven action, and outputs generated by AI tools must undergo appropriate human review. Recent incidents have demonstrated that AI can create or amplify security vulnerabilities, enabling exploits that adversaries could not have executed without AI-powered capabilities. The introduction of AI into the threat landscape has also accelerated the pace of attacks and compressed incident-response timelines, increasing the pressure on organizations to evaluate, strengthen, and routinely stress-test their security infrastructure with AI-related risks in mind.
- Autonomous AI attacks. The threat landscape now includes autonomous and semi-autonomous AI-assisted cyber operations that are highly scalable, adaptable, and difficult to attribute. Adversaries are leveraging large language models (LLMs), AI agents, and the Model Context Protocol (MCP) to automate data exfiltration and perform post-exfiltration analysis. As a result, organizations must prepare for attackers executing faster, more persistent, and more sophisticated attack campaigns, and must modernize their defenses accordingly.
- AI governance programs. Organizations of all sizes should prioritize the development of durable, comprehensive AI governance programs. A key component of this effort is establishing a cross-functional governance committee to steer AI adoption in a safe direction. Related workstreams include overseeing day-to-day AI adoption, establishing and updating AI policies, conducting enterprise-wide AI inventories (including identifying shadow AI), and monitoring legal and regulatory developments. The National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework (AI RMF), voluntary guidance that helps organizations govern, map, measure, and manage AI risks on an ongoing basis.