AI: Understanding What's Ahead in Privacy & Cybersecurity - McDermott Will & Emery

Key Takeaways | AI and the Next Frontier: Understanding What’s Ahead in Privacy and Cybersecurity

Overview

During this webinar, Global Privacy & Cybersecurity Partners Kathryn Linsky, Romain Perray and Stephen Reynolds shared best practices for businesses to prepare for the rapidly evolving risks and challenges posed by artificial intelligence (AI).

Top takeaways included:

  1. Understanding business goals: Companies should identify the AI they are currently using or plan to use. As part of this assessment, companies should also consider the types of decisions the AI makes in order to determine the level of risk the AI presents. Once this assessment is complete, they should develop governance frameworks and policies to comply with applicable laws and regulations.
  2. Being transparent: Although comprehensive legislation is slow to develop in the United States, the patchwork of existing and newly introduced federal and state laws focuses on transparency. Companies using AI should prepare to disclose, in their privacy policies or other external-facing documents, how they are using AI, what personal information they are using to train their AI, where this personal information is coming from and the decisions being made by AI.
  3. Correcting biases and avoiding hallucinations and injection attacks: AI is susceptible to bias and to hallucinations, generating output based on unreliable learned patterns rather than accurate data and thereby producing misinformation. When training and using AI, be wary of treating its output as a source of truth or presenting it as such.
  4. Conducting risk/impact assessments: US states such as California, as well as the European Union, are imposing obligations on companies engaging with AI to conduct privacy risk assessments. The EU’s draft AI Act requires companies to perform an impact assessment that includes monitoring to reduce the risk of bias, while California recently released draft regulations on these assessments that define AI broadly. To comply with these regulations in both the US and the EU, companies should prepare to provide nonprivileged responses that can be shared with regulators regarding their training, use and processing of personal information for AI.
  5. Ensuring the highest level of protection in the EU: The EU’s approach to regulating AI is consistent with its general privacy-by-design approach. In addition to the safeguards in the General Data Protection Regulation for automated decision-making, the EU appears ready to adopt the AI Act by the end of the year or the beginning of next year with an effective date in 2026. If adopted, the AI Act will apply to companies both in and outside the EU and will impose various levels of transparency and accountability obligations depending on the risk the AI presents. Companies operating in the EU should begin to assess the level of risk of their AI to prepare for the AI Act.

Explore AI’s implications in more industries and service areas, and how you can seize opportunities and mitigate risk across the legal, business and operational areas of your organization, through our webinar series.

Dig Deeper

Cambridge, United Kingdom / Speaking Engagements / July 1-3, 2024

Privacy Laws & Business | 37th International Conference

Chicago, IL / Speaking Engagements / May 14, 2024

Modern Healthcare Digital Health Summit: Patients and Trust

Washington, DC / / May 8-10, 2024

2024 Privacy + Security Spring Academy

New York / McDermott Event / April 10-11, 2024

Digital Health Forum 2024

Washington, DC / Speaking Engagements / April 3-4, 2024

IAPP Global Privacy Summit 2024

Webinar / McDermott Webinar / March 19, 2024

Healthcare Privacy Risks and Enforcement
