
The Impact of the New EU AI Act on the Medtech and Life Sciences Sector

Overview


As technology continues to advance almost every aspect of healthcare, the use of AI has become an increasing focus for developers and for the regulators racing to keep pace with rapid technological advancements.

Software (including AI) with a medical purpose is already regulated in Europe and the United Kingdom as a medical device and requires comprehensive assessment before it can be placed on the market, in the European Union under the EU Medical Devices Regulation (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (IVDR).

Despite these comprehensive regulatory requirements, there has been concern that the current framework does not fully address the ethical and transparency risks associated with AI. The European Parliament is leading the way with the EU AI Act (the Act), which applies to all sectors but will have significant implications for the life sciences sector, particularly for manufacturers of AI medical devices. Click here for our general overview of the Act.

Like the General Data Protection Regulation, the Act has global reach: it will apply to providers wherever they are in the world if they place on the market, or put into service, an AI system in the European Union. The Act is also only one piece in the puzzle of new AI-related legislation and will need to be read in the context of proposed changes to product liability and AI liability rules.

In Depth


Defining an AI System

Over the last few years, it has become popular to describe technologies as artificial intelligence, even where the software may be a fixed or locked algorithm with no adaptiveness.

It will now be important for manufacturers to determine whether their software is truly an AI system within the scope of the Act, which defines an AI system as:

“A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The key term here, “infer”, is not precise, but the recitals to the Act give helpful context on how it should be interpreted, stating that AI does not include systems based on rules defined solely by natural persons to automatically execute operations. In other words, the Act does not appear to apply to software comprising rules-based, fixed algorithms. Systems that go beyond basic data processing and enable learning, reasoning, or modelling are, however, likely to be caught.

The line here will not always be clear-cut, and it appears that the Act will not apply to many current “AI” solutions that operate using fixed diagnostic algorithms rather than independent or self-learning capabilities, although stabilised systems with incremental learning may be caught.
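To make the distinction concrete, the sketch below is purely illustrative: the thresholds, model, data, and function names are hypothetical assumptions and are not drawn from the Act or any guidance. It contrasts a fixed, human-authored rule (which does not appear to meet the Act's definition of an AI system) with a model that infers its decision logic from training data (which likely does).

```python
# Purely hypothetical illustration; neither snippet reflects any real device or dataset.

# 1. Rules defined solely by natural persons and executed automatically.
#    No inference or adaptiveness, so likely outside the Act's definition of an AI system.
def rules_based_flag(systolic_bp: float, heart_rate: float) -> bool:
    """Flag a reading using fixed, human-authored thresholds."""
    return systolic_bp > 180 or heart_rate > 120


# 2. A system whose decision logic is inferred from training data, so it
#    "infers, from the input it receives, how to generate outputs" and is
#    likely to fall within scope.
from sklearn.linear_model import LogisticRegression

training_features = [[120, 70], [190, 130], [110, 65], [185, 125]]  # made-up readings
training_labels = [0, 1, 0, 1]  # made-up outcomes
model = LogisticRegression().fit(training_features, training_labels)


def learned_flag(systolic_bp: float, heart_rate: float) -> bool:
    """Flag a reading using a model whose decision rules were learned from data."""
    return bool(model.predict([[systolic_bp, heart_rate]])[0])
```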

High Risk AI Systems

Under the Act, an AI system that is itself a Class IIa (or higher) medical device, or that is used as a safety component of such a device, is designated as “high risk”.

The Act also designates certain types of healthcare AI systems as high risk, whether or not they are medical devices, such as AI systems used by public authorities to evaluate people’s eligibility for essential public services, and emergency healthcare patient triage systems.

What AI Medical Device Providers Need to Know

Under the AI Act, high-risk AI systems will need to comply with a raft of additional requirements, many of which overlap with the existing rigorous conformity assessment requirements under the MDR and IVDR.

These new requirements include a conformity assessment by a notified body confirming that the AI system meets the requirements of the AI Act, including those relating to the technical documentation and the risk management system. During the development of the legislation, there was concern that this “double certification” would lead to significant delays to market entry and duplicated running costs for device manufacturers. The legislators have accommodated these concerns in part: for medical devices, the Act states that the conformity assessment procedure under the MDR or IVDR must be followed, and that the requirements of the Act will form part of that assessment.

The Act also allows medical device notified bodies to carry out AI conformity assessments, provided that their AI competence has been assessed under the MDR and IVDR. In other words, a single declaration of conformity is proposed, although the precise mechanics for this remain unclear.

Given the well-publicised lack of notified body capacity in the run-up to the implementation of the MDR and IVDR, medical device manufacturers will naturally want to ensure that their existing notified body has been assessed as competent to carry out conformity assessments of AI systems. If two notified bodies are required, there is a risk of divergent views on how the same or similar requirements are to be met.

Many of the requirements in the Act also replicate existing requirements in the EU MDR and EU IVDR, for example the requirements to have a quality management system, technical documentation, and instructions for use.

The AI Act contemplates a single set of technical documentation covering the requirements of both the EU MDR and the EU AI Act. However, medical device manufacturers that have already certified their devices under the EU MDR may need to amend their technical documentation to reflect the additional requirements of the EU AI Act.

Additional requirements for AI systems that do not already appear in the EU MDR and the EU IVDR include:

  • Governance and data management requirements for training and testing data sets
  • New record-keeping requirements, including the automatic recording of events (logs) over the system’s lifetime (a minimal logging sketch follows this list)
  • Transparent design requirements so deployers can interpret the output and use it appropriately
  • Human oversight design requirements
  • Accuracy and cybersecurity requirements.
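To illustrate what the automatic recording of events might look like in practice, here is a minimal, hypothetical sketch: the field names, log format, and the `predict_with_logging` wrapper are our own assumptions for illustration and are not prescribed by the Act.

```python
# Minimal, hypothetical sketch of automatic event logging around model inference.
# The record fields and file name are illustrative assumptions, not requirements from the Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)


def predict_with_logging(model, input_features, model_version="0.1.0"):
    """Run inference and automatically record the event to an append-only log."""
    output = model.predict([input_features])[0]
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": input_features,
        "output": int(output),
    }
    logging.info(json.dumps(event))  # one structured entry per inference event
    return output
```

A deployed system would, of course, also need to retain and protect such logs in line with the Act’s record-keeping obligations.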

Where medical device manufacturers are providers or deployers of general-purpose AI models or systems, they will also need to comply with the Act’s requirements applicable to those models and systems.

Although legislators have attempted to streamline the overlap between regulatory frameworks, many questions remain. For example, it is not clear how the substantial modification framework under the AI Act will interact with the modification rules of the MDR and IVDR. Likewise, it is not clear whether devices undergoing a trial (a performance evaluation or clinical investigation) will need to be certified under the AI Act before use in the trial.

The Act provides for harmonised standards, but it is not yet clear whether these will overlap with, or differ from, existing harmonised standards such as ISO 13485.

The industry will be keen to see guidance on these points, and to understand whether the cost and time of a second certification will present a barrier to the market entry of the most innovative products.

The Impact on Deployers of High Risk AI Systems

Unlike the MDR and IVDR, which place responsibilities on economic operators in the supply chain, the AI Act also places responsibilities on deployers of AI systems, meaning any person using an AI system in the course of a business or professional activity, such as hospitals or clinicians. These deployers will have new obligations, including:

  • Taking appropriate technical and organisational measures to ensure that AI systems are used in accordance with their instructions for use
  • Assigning human oversight to competent, trained people
  • Monitoring and surveillance
  • Maintaining system logs when these are under their control
  • Undertaking, where applicable, data protection impact assessments.

Impact on the Use of AI Systems in the Wider Life Sciences Sector

There is increasing use and development of AI systems and models across the medicinal product lifecycle, from drug discovery through clinical trial recruitment to post-market vigilance activities.

Many of these AI systems are unlikely to be medical devices because they do not have an intended medical purpose, and they are not otherwise designated as high risk under the AI Act. The Act imposes only relatively lightweight obligations on the providers of such systems, such as requirements to ensure a sufficient level of AI literacy among their staff (including training) and certain transparency requirements.

Many purely scientific AI applications may also benefit from an exemption from the AI Act. The EU Parliament has indicated that the Act is intended to support innovation, and the Act exempts AI systems, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development.

This exemption may be helpful for life sciences companies at the early stages of research, but it is not yet clear how the term “scientific research” will be interpreted and whether the exemption will apply only to academic, as opposed to commercial, research.

Although life sciences companies may welcome the light-touch approach under the AI Act, this is unlikely to be the end of the story for AI systems used in the development of medicinal products.

In December 2023, the European Medicines Agency (EMA) published a five-year workplan setting out the key work streams for its strategy on AI. This built on a draft reflection paper on the use of AI in the lifecycle of medicinal products, which was published for consultation in 2023. The draft paper emphasised that marketing authorisation holders and applicants are responsible for ensuring that any AI used to support the development of a medicine complies with existing requirements, such as good practice (GxP) standards and EMA scientific guidelines. The reflection paper also outlined various use cases for AI across the product lifecycle and gave indications of the risk associated with AI used at various stages.

According to the workplan, further AI guidance and the final reflection paper are due to be published in 2024 and 2025.

Implementation Timeline

The AI Act is likely to enter into force later this year, and its obligations will become enforceable in phases over a transition period. Obligations for high-risk AI systems already covered by other EU legislation, such as medical devices, will only apply 36 months after the Act enters into force.

Whilst this is a relatively generous period, it is worth bearing in mind that the MDR and IVDR had longer implementation periods, and in both cases these have since been extended.