
Implementing AI in Regulated Industries to Guarantee GMP Compliance – Is it Possible? Part 2. What is an AI Management System?


June 24, 2024 · 6 min read

Part 1 of this three-part article series looked at where the regulators currently stand on AI in the pharmaceutical and medical device industries. If you missed Part 1, you can read it here. 

If you wanted to implement AI in production processes, or include it as part of the product itself (such as software as a medical device), how would you: 

  • Maintain GMP?  

  • Demonstrate / provide evidence of compliance and product safety? 

  • Manage increased amounts of collected data? 

  • Answer the many other questions that will inevitably arise? 

In this article, I’ll look at a new standard, ISO 42001:2023, that details the requirements for a management system to ensure AI remains ‘safe’. This ISO standard does not include specifics on the AI system itself, only the means of managing it – the management system. 

 

Overview of ISO42001 Information Technology – Artificial Intelligence – Management System

The first and current version of ISO42001 was published in 2023 to help organisations ‘responsibly perform their role with respect to AI systems – use, develop, monitor or provide products and services that utilize AI’. Although it is not a standard specifically for therapeutic goods, it does introduce the concept that AI needs a management system for appropriate control and use. 

If you are familiar with Good Automated Manufacturing Practice (GAMP), then ISO42001 will make sense as a framework around an AI (or IT) system for ensuring control, traceability and transparency. The standard is laid out like all ISO standards, taking into consideration top management responsibilities and integration with the rest of the business. Main sections include: 

  • Context of the organisation 

  • Leadership 

  • Planning 

  • Support 

  • Operation 

  • Performance evaluation 

  • Improvement 

  • Four annexes covering reference controls, implementation guidance for AI controls, potential AI-related objectives and risks, and use of the AI management system across domains or sectors. 

 

AI Concerns 

The Introduction to ISO42001 notes that AI raises concerns such as: 

  • “The use of AI for automatic decision-making, sometimes in a non-transparent and non-explainable way, can require specific management beyond that of the classical IT system” 

  • “The use of data analysis, insight and machine learning, rather than human-coded logic to design systems, both increases the application opportunities and changes how such systems are developed, justified and deployed” 

  • “AI systems that perform continuous learning change their behaviour during use. They require special consideration to ensure their responsible use continues with changing behaviour”. 

ISO42001 provides guidance for, “establishing, implementing, maintaining and continually improving an AI management system within the context of an organisation”. 

The AI management system should be integrated with the organisation’s processes and overall management structure – in pharmaceutical and medical device companies, this is the QMS. The specific use of AI must be considered in the design of processes, information systems and controls, which is a consistent requirement across all ISO standards – think quality policy and objectives, customers (or patients), risks, and processes for managing AI-related concerns both internally and with external parties such as suppliers and partners. 

 

Requirements for Leadership 

Top management are expected to: 

  • Ensure there is an AI policy with AI objectives, compatible with the strategic direction of the company 

  • Ensure the integration of the AI management system requirements into business processes 

  • Provide resources and communicate the importance of effective and conforming AI management 

  • Make sure that the AI system achieves the intended results and promote continual improvement 

 

Addressing Risks and Opportunities  

Planning for the AI management system must address risks and opportunities, within the scope defined by the company, so that: 

  • The AI management system can achieve its intended results 

  • Undesirable effects are prevented or reduced 

  • Continual improvement is achieved. 

To do this, the company must establish and maintain AI risk criteria so that it can: 

  • Distinguish acceptable from non-acceptable risks 

  • Perform risk assessments and risk treatments 

  • Assess AI risk impacts. 

As part of determining risk treatments, the company should determine all controls necessary to implement the AI system safely. 
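To make the idea of company-defined risk criteria concrete, here is a minimal sketch in Python of how a risk register might encode an acceptance criterion so that acceptable and non-acceptable risks can be distinguished and treatments tracked. All field names, scores and the threshold are illustrative assumptions – ISO42001 deliberately leaves these choices to the company.

```python
from dataclasses import dataclass

# Example acceptance criterion (assumption): severity x likelihood must not
# exceed this score for a risk to be considered acceptable without treatment.
ACCEPTABLE_RISK_THRESHOLD = 6

@dataclass
class AIRisk:
    description: str
    severity: int       # 1 (negligible) .. 5 (critical) - illustrative scale
    likelihood: int     # 1 (rare) .. 5 (frequent) - illustrative scale
    treatment: str = ""  # control or mitigation applied, if any

    @property
    def score(self) -> int:
        # Simple multiplicative scoring; real criteria may be more nuanced
        return self.severity * self.likelihood

    @property
    def acceptable(self) -> bool:
        return self.score <= ACCEPTABLE_RISK_THRESHOLD

# Hypothetical entries for a GMP context
risks = [
    AIRisk("Model drift alters batch-release recommendation", severity=4, likelihood=3),
    AIRisk("Minor mislabel in operator dashboard", severity=2, likelihood=2),
]

for r in risks:
    status = "acceptable" if r.acceptable else "requires treatment"
    print(f"{r.description}: score {r.score} -> {status}")
```

A register like this gives auditable evidence that each identified risk was scored against the stated criteria and either accepted or routed to a risk treatment.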

 

AI System Impact Assessment 

The company must define both: 

  • The process for assessing the potential consequences of AI, and 

  • The potential consequences of its deployment, intended use and foreseeable misuse 

for individuals, groups or societies that could result from the development, provision or use of AI systems. 
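As a rough illustration of what such an assessment captures, the record below sketches the elements the standard asks the company to define – intended use, foreseeable misuse, affected parties and potential consequences. The structure and all example content are assumptions for illustration only, not a template from ISO42001.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Minimal record of an AI system impact assessment (illustrative)."""
    system: str
    intended_use: str
    foreseeable_misuse: list          # ways the system could be misapplied
    affected_parties: list            # individuals, groups or societies
    potential_consequences: dict      # party -> consequence description

# Hypothetical example for a production-line vision model
assessment = ImpactAssessment(
    system="Tablet-defect vision model",
    intended_use="Flag cosmetic defects on the packaging line",
    foreseeable_misuse=["Using defect flags as the sole batch-release criterion"],
    affected_parties=["patients", "operators"],
    potential_consequences={
        "patients": "Defective product released if the model silently degrades",
        "operators": "Over-reliance reduces manual inspection vigilance",
    },
)
```

Keeping assessments in a structured form like this makes them reviewable at management review and auditable alongside the rest of the QMS documentation.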

 

Integration with the QMS 

Like other processes and management systems, the AI management system must be integrated with other QMS processes. This includes processes and documentation to show: 

  1. Appropriate management of changes – changes must be planned and controlled. Presumably a change control or IT change control process is applicable. 

  2. Training and competency – determine the competence of personnel whose work affects AI performance, and ensure they are trained and can demonstrate competence. 

  3. Inclusion in the internal audit schedule – the AI management system must conform to the company’s own requirements for it and its associated documentation, with established criteria and scope for each audit. 

  4. Inclusion within management review – inputs should include the status of previous actions, changes in external/internal issues relevant to the AI management system, changes in the needs or expectations of interested parties, and AI performance and trends. 

  5. Continuous improvement – the company is expected to continually improve the suitability, adequacy and effectiveness of the AI management system. 

  6. Non-conformances and CAPA – the company is expected to react to non-conformances (control and correct), evaluate root cause and implement any actions needed. 

 

What does all this tell us? 

ISO42001 is only a high-level framework for planning and implementing an AI management system – the framework around an AI system or process. It follows the established ISO path for a management system, very much like a QMS or Safety Management System (SMS). So, the company is responsible for determining the controls, risks, risk mitigations / treatments, resources and competency needed to safely operate an AI system. However, as AI is currently an exploding field, implementing a management system around an AI algorithm in one company could look vastly different from doing so for a different AI system in another company. So, ISO42001 is trying to provide a pathway to manage an AI system without the AI details. Consequently, it has to be high-level. 

I think that ISO42001 is a necessary first step in helping companies that want to implement AI in their production or QMS systems, or within a medical device, to demonstrate control and compliance. The annexes also provide another layer of helpful detail. However, much more detailed guidance is required from the regulators. 

 

Look out for Part 3 of this article series where I discuss if AI could be used to generate QMS documents. 
