Neural Notes: ChatGPT Health’s Australian Launch Raises Hard Regulatory Questions
SmartCompany
Details
- Date Published: 16 Jan 2026
Description
ChatGPT Health is launching in Australia, raising new questions about regulation, data storage, accountability and AI’s role in healthcare.
Summary
OpenAI's introduction of ChatGPT Health in Australia presents significant regulatory challenges around where user health data is stored and how the tool will intersect with national health systems. The product is designed to offer educational and informational support rather than clinical diagnoses, and its development drew on input from more than 200 physicians around the world. Even so, questions remain about its impact on traditional healthcare pathways and about the responsibilities of users and providers. While the initiative aims to supplement healthcare with pre-clinical interactions, concerns persist over its oversight and the potential for adverse outcomes resulting from its advice, especially within Australia's mixed public–private healthcare system.
Body
Welcome back to Neural Notes for 2026! In the first edition of the year, OpenAI is bringing ChatGPT Health to Australia, raising questions about regulation and data storage.
According to OpenAI, health is already one of the most common use cases for ChatGPT, with more than 230 million people worldwide reportedly asking the platform health and wellness questions each week.
Related: Neural Notes: When saying no to AI means losing your doctor, by Tegan Jones
The new Health experience formalises that behaviour, allowing users to ground conversations in personal health information, connect wellness apps, and track patterns over time.
OpenAI confirmed to SmartCompany that ChatGPT Health is designed strictly as an informational tool, stressing that “ChatGPT Health is for education and information, not medical diagnosis or treatment” and it is “never intended to replace doctors or professional medical care”.
The company said ChatGPT’s role is to “provide general information that exists, so individuals can make informed decisions with their healthcare providers — not to make decisions for them”.
The company also pointed to extensive clinical input during development, telling SmartCompany it had “worked in partnership with a cohort of over 200 physicians from 60 countries of practice, including Australia, to advise and improve the models powering ChatGPT Health”.
That work is said to have included expert review, red-teaming, and the development of health-specific evaluations designed to assess response quality, safety, and appropriate escalation in sensitive contexts.
ChatGPT Health’s safety, privacy and containment
Privacy and containment are central to how OpenAI is positioning the product. OpenAI said ChatGPT Health is “a dedicated space where health conversations stay separate from the rest of your chats, with strong privacy protections by default”.
Health data, the company said, is encrypted and protected by layered safeguards, with “conversations in Health not used to train our foundation models”.
Those protections build on existing controls across ChatGPT, including temporary chats and the ability for users to permanently delete conversations from OpenAI’s systems within 30 days.
OpenAI also highlighted its work in mental health safety, saying it has worked with “170+ mental health experts” to improve ChatGPT’s ability to “detect and respond to signs of potential distress, de-escalate conversations, and guide people to real-world support”.
Health-specific model performance is evaluated using HealthBench, which OpenAI describes as a framework focused on safety, response quality, and appropriate escalation rather than simple factual accuracy. The company also confirmed there are “no plans for ads in ChatGPT Health”.
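For readers curious what rubric-based evaluation looks like in practice, below is a minimal, hypothetical sketch in Python. OpenAI has described HealthBench as scoring model replies against weighted, physician-written criteria with model-based grading, rather than checking a single correct answer; the criteria, weights and judgments here are invented purely for illustration and are not taken from the benchmark itself.

# Hypothetical sketch of rubric-style grading, loosely modelled on how
# OpenAI describes HealthBench. All criteria and weights are invented.
rubric = [
    {"criterion": "advises seeing a doctor for persistent symptoms", "points": 5},
    {"criterion": "avoids giving a definitive diagnosis", "points": 5},
    {"criterion": "escalates emergency red-flag symptoms", "points": 10},
]

def score(reply_meets: dict[str, bool]) -> float:
    """Return the share of rubric points a model reply earned."""
    earned = sum(c["points"] for c in rubric if reply_meets.get(c["criterion"]))
    total = sum(c["points"] for c in rubric)
    return earned / total

# In the real benchmark a grader model judges whether each criterion is met;
# the judgments are hard-coded here for illustration.
print(score({
    "advises seeing a doctor for persistent symptoms": True,
    "avoids giving a definitive diagnosis": True,
    "escalates emergency red-flag symptoms": False,
}))  # 0.5

The sketch shows why this kind of framework can emphasise safety and escalation over simple factual accuracy: safety-critical behaviours can simply carry more points than anything else in the rubric.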
Related: Heidi Health goes fur-ther with AI for pet care, by Tegan Jones
Where the gaps remain
What remains less clear is how ChatGPT Health will ultimately intersect with Australia’s healthcare infrastructure and health technology ecosystem.
OpenAI did not respond to SmartCompany’s questions regarding whether the product would integrate with national systems such as My Health Record, nor did it outline where Australian users’ health data would be stored or processed as the product scales locally.
While OpenAI has been careful to frame ChatGPT Health as informational rather than clinical, that positioning could place it outside Australia’s existing medical device and diagnostic regulatory regimes, leaving a gap between real-world influence and formal oversight.
The company also did not address how accountability would be handled if a user acted on guidance from ChatGPT Health and experienced harm or a delay in care, beyond reiterating that the tool is not intended to replace clinicians.
That gap matters not just for federal regulators, but for Australia’s publicly funded health system, where state-run hospitals and services ultimately absorb the downstream effects of how and when people seek care.
OpenAI similarly did not outline how ChatGPT Health would handle guidance related to insurance or payer decisions in Australia’s mixed public–private health system, even as the product positions itself as a way to help users understand healthcare trade-offs.
OpenAI is rolling out ChatGPT Health into a system already under pressure, where cost, access and wait times are increasingly pushing people to seek answers outside traditional care pathways.
Even without diagnosing or treating patients, tools like ChatGPT Health will arguably shape a parallel, pre-clinical layer of healthcare, influencing how people interpret results, prepare for appointments, and decide when to seek care, often through confident, personalised explanations that can be difficult for users to contextualise.
As with many AI deployments, the technology is moving faster than the institutions expected to respond to it. The norms around acceptable use, transparency, and risk are therefore likely to be set by platform design decisions well before regulators intervene.
ChatGPT may not diagnose or treat, but according to OpenAI’s own data, the original platform is already affecting how people understand and act on their health.
That alone makes its local rollout, and regulator response to it, worth keeping a close eye on.