
TGA Review: Clarifying and Strengthening the Regulation of AI in Medical Device Software
In July 2025, the Therapeutic Goods Administration (TGA), Australia’s regulator of therapeutic goods, published its outcomes report, Clarifying and strengthening the regulation of Medical Device Software including Artificial Intelligence (AI). At the same time, the Department of Health, Disability and Ageing released its final report on the Safe and Responsible Artificial Intelligence in Health Care: Legislation and Regulation Review. Together, these documents provide a comprehensive view of how Australia intends to regulate AI in healthcare settings, with a particular focus on medical device software. For manufacturers, biotechnology companies, and testing laboratories, the outcomes carry significant implications for compliance and innovation.
This article, prepared for Quality Systems Now, specialists in GMP and regulatory compliance, summarises the key findings and their relevance to industry stakeholders.
Scope and Context of the Review
The TGA undertook its review in recognition of the increasing role that software and AI systems play in medical devices and therapeutic goods. Advances in adaptive AI, natural language processing, and predictive analytics mean that existing regulatory definitions and frameworks require clarification to remain fit-for-purpose.
The review was supported by Federal Government funding under the 2024–25 Budget, which allocated almost $40 million to the development of policies for safe and responsible AI. This reflects the government’s acknowledgment that AI can both enhance and complicate healthcare delivery, creating opportunities while raising regulatory and ethical challenges.
Consultation and Methodology
The TGA’s review was informed by an extensive consultation process. A public consultation period ran from September to October 2024, attracting over 50 formal submissions. In addition, more than 600 stakeholders participated in workshops and webinars. These included healthcare professionals, industry representatives, software developers, and consumer groups.
Engagement extended to technical reference groups focusing on software as a medical device (SaMD) and AI, as well as consumer and regulatory technology experts. The process ensured the findings were grounded in real-world perspectives from both industry and healthcare delivery settings.
Key Findings
The TGA’s report produced 14 findings across several categories, including legislation, regulation, guidance, and international harmonisation. The following sections summarise the most significant points.
Legislative Framework
The review confirmed that the Therapeutic Goods Act 1989 remains the foundation of regulation. However, the terminology within the Act does not always map neatly to the realities of AI development and deployment. Terms such as “manufacturer” and “sponsor” can be ambiguous when applied to AI systems, particularly those built collaboratively or updated dynamically. Stakeholders suggested incorporating clearer definitions, including references to “software,” “adaptive algorithms,” and “AI drift,” to provide greater regulatory certainty.
Technology-Agnostic and Risk-Based Approach
A central finding of the review is that Australia’s framework is technology-agnostic and risk-based. This means regulation is based on the potential risk posed by a medical device rather than the underlying technology. This principle remains appropriate for AI-enabled devices, as it allows the framework to adapt as technology evolves. However, stakeholders emphasised that additional clarity is required on how risk should be assessed when dealing with AI that changes behaviour over time.
Excluded Software
Reforms introduced in 2021 defined several categories of software that were excluded from medical device regulation. While these exclusions remain relevant, the rapid uptake of AI has blurred the lines in certain cases. For example, digital scribes—AI tools that transcribe or summarise clinical consultations—may fall within the scope of regulation when their functions extend into diagnosis or treatment recommendations. The review indicated that some excluded categories may need to be reconsidered in light of technological developments.
Transparency and Accountability
A key concern raised during consultation was accountability for AI-driven outputs. In many cases, the entity deploying AI in clinical practice is not the same as the original developer. This creates challenges in ensuring responsibility for safety, performance, and updates. The TGA highlighted the need for stronger expectations around transparency, including clearer documentation of datasets, algorithms, and change management processes.
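As a purely hypothetical illustration of what such documentation might look like in machine-readable form, the sketch below records dataset provenance and algorithm changes as structured change-log entries. The field names and values are assumptions chosen for the example; the TGA has not mandated any particular schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ModelChangeRecord:
    """One entry in a model change log. All field names are illustrative,
    not a TGA-prescribed schema."""
    model_name: str
    version: str
    change_date: date
    change_summary: str
    training_datasets: list[str]      # provenance of data used in this version
    validation_reference: str         # pointer to the verification/validation report
    risk_assessment_reference: str    # pointer to the associated risk file
    approved_by: str

# Hypothetical entry documenting a retraining event.
record = ModelChangeRecord(
    model_name="SepsisRiskScore",
    version="2.1.0",
    change_date=date(2025, 7, 1),
    change_summary="Retrained on 2024 cohort; recalibrated decision threshold.",
    training_datasets=["hospital-cohort-2024 (internal, de-identified)"],
    validation_reference="VAL-2025-014",
    risk_assessment_reference="RA-2025-007",
    approved_by="Quality Manager",
)

# Serialise for audit trails or submission to a regulator on request.
print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping records in a structured form like this makes it straightforward to demonstrate, after the fact, which data and which algorithm version were in use at any point in a device’s life.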
Guidance and Resources
Stakeholders identified shortcomings in existing TGA guidance materials. Information on AI and software as a medical device is scattered across multiple webpages and documents, making it difficult for manufacturers and sponsors to navigate. There is strong demand for comprehensive, centralised guidance that covers:
Management of adaptive or continuously learning AI
Handling of open datasets and software of uncertain provenance
Post-market performance monitoring and reporting (see the sketch after this list)
Interpretation of essential principles for AI-enabled devices
The TGA committed to developing new and more accessible resources in these areas.
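To make the post-market monitoring point concrete, the sketch below shows one way a sponsor might track output drift in a deployed model using the population stability index (PSI), a common distribution-shift metric. This is an illustrative example only: the TGA has not prescribed any particular metric, the data here is synthetic, and the thresholds shown are conventional rules of thumb rather than regulatory requirements.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a model's current output distribution against its
    validation-time baseline. Larger values indicate more drift."""
    # Bin edges are derived from the baseline (expected) scores.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert counts to proportions; a small floor avoids division by zero.
    eps = 1e-6
    exp_pct = np.maximum(exp_counts / exp_counts.sum(), eps)
    act_pct = np.maximum(act_counts / act_counts.sum(), eps)

    # PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical data: baseline scores from validation, recent scores in the field.
rng = np.random.default_rng(0)
baseline = rng.normal(0.40, 0.10, 5000)   # distribution at time of approval
current = rng.normal(0.48, 0.12, 5000)    # distribution after deployment

psi = population_stability_index(baseline, current)
# Conventional rules of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
if psi > 0.25:
    print(f"PSI = {psi:.3f}: significant drift - trigger change-management review")
elif psi > 0.10:
    print(f"PSI = {psi:.3f}: moderate drift - monitor closely")
else:
    print(f"PSI = {psi:.3f}: stable")
```

In practice, a sponsor would run checks of this kind on a defined schedule and retain the results as post-market surveillance evidence, feeding any drift findings back into the change-management process.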
International Harmonisation
The TGA’s review stressed the importance of aligning with international frameworks, such as those emerging from the European Union’s AI Act and ongoing work within the International Medical Device Regulators Forum (IMDRF). Harmonisation reduces duplication for global manufacturers and ensures Australian regulation remains internationally credible.
While Australia’s approach is not identical to other jurisdictions, the underlying risk-based philosophy is broadly consistent. Continued engagement with global standards will be essential to maintain alignment.
Digital Scribes
A notable example raised in the report was digital scribes. These AI systems, which record and summarise doctor–patient consultations, are increasingly common. If they simply capture information, they may not qualify as medical devices. However, once they begin suggesting clinical actions, they cross into regulated territory. This illustrates how quickly AI-enabled tools can transition from supportive software to devices with direct health implications.
The TGA signalled it will provide more specific guidance on this category to avoid uncertainty for developers and users.
Compliance and Enforcement
Finally, the review acknowledged the need for stronger compliance and enforcement. As AI-enabled devices proliferate, unapproved products are more likely to enter the market. The TGA intends to enhance its enforcement mechanisms, combining education and targeted compliance actions with the ability to remove unapproved products more effectively.
Broader Healthcare Policy Context
The Department of Health, Disability and Ageing’s complementary report on Safe and Responsible AI in Health Care provides a wider lens. It identifies challenges in governance across state and federal levels, highlighting the fragmented regulatory landscape. It also calls for stronger leadership in AI policy, improved communication channels, and accessible guidance for clinicians and consumers.
The report acknowledges the potential of AI to reduce clinician burden, improve diagnostic accuracy, and enhance patient outcomes. However, it stresses that without robust regulatory structures, risks of misuse, bias, and patient harm could undermine trust and safety.
Implications for Industry and Regulatory Practice
For therapeutic goods manufacturers, testing laboratories, and biotechnology firms, the implications are clear. AI-enabled tools with a health purpose must be carefully assessed against regulatory requirements. Sponsors must ensure that their quality management systems capture the specific risks associated with adaptive software, including updates, dataset integrity, and performance monitoring.
Documentation will become increasingly important, with regulators expecting greater transparency about the provenance of training data, testing protocols, and post-market surveillance. Alignment with international standards such as ISO 14971 and IEC 62304 will also support compliance in this evolving landscape.
For companies supported by Quality Systems Now, these findings reinforce the need for proactive compliance strategies. Adopting best practices early and anticipating regulatory developments will help organisations avoid delays, manage risk, and build trust with regulators and end-users.
Conclusion
The TGA’s July 2025 outcomes report represents a comprehensive effort to strengthen and clarify the regulation of AI in medical devices. It affirms the resilience of Australia’s risk-based framework while recognising the need for refined terminology, improved guidance, and enhanced accountability. The complementary Department of Health report situates these findings within the broader healthcare system, emphasising both the opportunities and risks of AI.
For industry, the message is clear: AI is here to stay, but it must be developed, deployed, and maintained within robust regulatory guardrails. By adapting now, therapeutic goods manufacturers, testing laboratories, and biotechnology companies can harness AI’s potential while ensuring compliance and patient safety.