Health NZ staff told to stop using ChatGPT to write clinical notes
The News
Health New Zealand (Te Whatu Ora) has issued a directive prohibiting its staff from using ChatGPT to write clinical notes, effective March 26, 2026 [1]. The decision follows internal reviews and concerns about the accuracy, compliance, and ethical implications of relying on generative AI tools in healthcare. Health NZ says the ban is necessary to ensure patient safety, data integrity, and adherence to professional standards.
The directive applies nationwide, covering all healthcare providers under the umbrella of Te Whatu Ora [1]. The exact mechanics of enforcement have not been detailed, but staff will need to turn to alternative tools for administrative tasks.
The Context
Since its release in November 2022, ChatGPT has reshaped how businesses and individuals use generative AI, offering broad capabilities in natural language processing (NLP), text generation, and multimodal outputs such as images [4]. Integrating such tools into clinical workflows, however, presents distinct challenges.
Healthcare professionals face stringent regulations regarding patient data privacy, treatment accuracy, and documentation standards. The use of ChatGPT for clinical notes raises concerns about potential biases in AI-generated content, errors in medical advice, and the ethical implications of delegating critical tasks to non-human agents [1].
Why It Matters
The ban on ChatGPT for clinical notes has far-reaching implications for developers, enterprises, and the broader AI ecosystem.
Impact on Developers and Engineers
From a technical perspective, the directive introduces significant friction for healthcare professionals who may have relied on ChatGPT's capabilities to streamline administrative tasks. This shift underscores the need for more specialized AI tools tailored to healthcare workflows, which are currently underdeveloped in the market [1].
Developers must now navigate a complex landscape of compliance requirements, data governance, and ethical standards when building AI-driven solutions for healthcare. This could slow down innovation unless companies prioritize these factors from the outset.
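One concrete example of the kind of guardrail such compliance work demands is de-identification: ensuring patient identifiers never leave the organisation's systems. The sketch below is purely illustrative (it is not Health NZ policy, and the patterns are hypothetical stand-ins); real clinical de-identification requires far more than a few regex rules.

```python
import re

# Hypothetical sketch: strip obvious patient identifiers from free text
# before it could ever reach a third-party LLM. The patterns below are
# illustrative only; production systems need validated de-identification.
PATTERNS = {
    "NHI": re.compile(r"\b[A-Z]{3}\d{4}\b"),            # NZ National Health Index style
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),   # d/m/yyyy dates
    "PHONE": re.compile(r"\b0\d{1,3}[ -]?\d{3}[ -]?\d{3,4}\b"),  # NZ-style numbers
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient ZZZ1234 seen on 12/03/2026, contact 021 555 0192."
print(redact(note))  # Patient [NHI] seen on [DATE], contact [PHONE].
```

Even a guardrail like this only addresses data leakage; accuracy and accountability for the generated note remain separate, harder problems.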
Impact on Enterprises and Startups
For enterprises like Health NZ, the ban represents a strategic move to mitigate risks associated with AI adoption. While ChatGPT offers undeniable benefits in efficiency and productivity, its potential for errors in clinical contexts poses significant liabilities [1].
On the flip side, this decision could create opportunities for traditional enterprise software providers that offer established electronic health record (EHR) systems. Companies like Epic Systems and Oracle Health (formerly Cerner) may see renewed interest as healthcare organizations seek reliable alternatives to AI-driven tools [4].
Winners and Losers in the Ecosystem
The immediate losers in this scenario are OpenAI and its ecosystem of developers who specialize in ChatGPT integrations. The restriction on clinical use could dampen demand for generative AI tools in healthcare, at least in the short term.
However, traditional healthcare software vendors are poised to benefit from this shift. As organizations prioritize compliance and reliability over innovation, established players with proven track records in medical data management could gain market share [1].
The Bigger Picture
This decision by Health NZ is part of a broader trend in the AI industry, where companies are increasingly balancing innovation with risk management. OpenAI's recent struggles to scale its services while maintaining quality have led to criticism from users and stakeholders.
For instance, its attempt to integrate e-commerce features into ChatGPT has been met with mixed results, with some users reporting decreased performance and reliability [2]. In contrast, competitors like Microsoft and Google are doubling down on enterprise AI solutions.
Microsoft's partnership with OpenAI to embed GPT-5 capabilities into its Azure cloud platform pushes those models toward enterprise deployments, in contrast to OpenAI's consumer-focused strategy. Similarly, Google's DeepMind Health initiative is gaining traction in clinical settings, offering tools that integrate with existing EHR systems [4].
Looking ahead, the next 12-18 months will likely see a divergence in AI development strategies. While OpenAI continues to experiment with new use cases, companies like Microsoft and Google are focusing on building robust, scalable solutions for enterprise environments.
Daily Neural Digest Analysis
The directive by Health NZ to ban ChatGPT for clinical notes is a significant step in the ongoing evolution of AI adoption in healthcare. While it addresses immediate concerns about patient safety and compliance, it also raises important questions about the future of generative AI in sensitive industries.
Mainstream media has focused heavily on the technical aspects of the ban, but the broader implications for OpenAI's business model are often overlooked. The company's recent pivot toward e-commerce and away from core AI research may have long-term consequences for its reputation and market position [4].
A more critical analysis would also consider the potential for overregulation to stifle innovation. While patient safety is paramount, overly restrictive policies could hinder the development of AI tools that could ultimately improve healthcare outcomes.
As we look ahead, one pressing question remains: How can OpenAI and other AI companies regain trust in critical sectors like healthcare? The answer may lie in redefining their core mission and prioritizing ethical considerations over short-term gains.
References
[1] RNZ — Health NZ staff told to stop using ChatGPT to write clinical notes — https://www.rnz.co.nz/news/national/590645/health-nz-staff-told-to-stop-using-chatgpt-to-write-clinical-notes
[2] TechCrunch — OpenAI’s plans to make ChatGPT more like Amazon aren’t going so well — https://techcrunch.com/2026/03/24/openais-plans-to-make-chatgpt-more-like-amazon-arent-going-so-well/
[3] OpenAI Blog — Powering product discovery in ChatGPT — https://openai.com/index/powering-product-discovery-in-chatgpt
[4] VentureBeat — Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos) — https://venturebeat.com/orchestration/testing-autonomous-agents-or-how-i-learned-to-stop-worrying-and-embrace