AI tools are genuinely useful for NDIS documentation. Support coordinators, behaviour support practitioners, and plan managers are all finding ways to save time with AI — drafting progress notes, summarising assessments, pulling together service agreements faster.
But there's a question most teams aren't asking: where does your data actually go?
The answer matters a lot when you're working with NDIS participant information.
The Problem with Public AI Tools
ChatGPT is the obvious starting point for most people experimenting with AI at work. It's free, accessible, and genuinely capable. The problem isn't what it can do — it's what happens to your data when you use it.
When you paste participant information into ChatGPT, that data is sent to OpenAI's servers in the United States. Depending on your account settings, it may be used to improve OpenAI's models. There's no isolation between your organisation and every other person using the same platform.
The free tier of ChatGPT uses conversations for model training by default. Even with a paid individual account, you need to actively opt out of having your chats used to improve the model, and most users don't realise this until after they've already shared sensitive information.
This isn't unique to ChatGPT. Google Gemini, Microsoft Copilot in consumer mode, and most free AI tools have similar policies. The product is free because the data has value.
For most use cases, this is a reasonable trade-off. For NDIS participant data, it isn't.
What "Private AI" Actually Means
"Private AI" gets thrown around loosely, so it's worth being specific about what it actually means in practice.
A properly private AI deployment means:
Dedicated infrastructure. Your organisation runs on its own isolated instance, not a shared pool. Your conversations and documents don't sit alongside other organisations' data.
No model training on your data. Your inputs are never used to improve the underlying model. Participant notes stay participant notes — they don't become training examples.
Australian data residency. Data is stored and processed in Australia, on Australian infrastructure. It doesn't leave the country.
Full audit trail. You can see who accessed what, when. Compliance teams can actually answer questions about data handling.
This is fundamentally different from using a consumer AI product, even a paid one.
The NDIS Data Difference
NDIS participant data is some of the most sensitive personal information that exists. We're talking about disability diagnoses, behaviour triggers, medication schedules, restrictive practices, trauma history, and family circumstances.
The Privacy Act 1988 (Cth) applies to NDIS providers. Health information, which covers much of what appears in NDIS documentation, is treated as sensitive information under the Australian Privacy Principles and requires a higher standard of protection.
The NDIS Practice Standards require providers to protect participant privacy and handle personal information appropriately. The NDIS Commission takes complaints about privacy breaches seriously.
Beyond the formal requirements, there's the practical reality: participants trust providers with their most personal information. That trust is foundational to the entire support relationship.
Using a public AI tool with participant information isn't just a compliance risk — it's a breach of that trust.
How LAIT Handles This Differently
LAIT AI was built specifically for this problem. The infrastructure decisions weren't made for convenience — they were made because NDIS workflows require something that public AI can't provide.
AWS Sydney region. All data is hosted in Australia, in AWS's Sydney data centres. No participant information crosses international borders.
Amazon Bedrock, not direct OpenAI. LAIT uses AWS Bedrock as the underlying AI service. AWS explicitly does not use your data to train its models. This is a contractual commitment, not a policy setting you might accidentally turn off. A short sketch of what a region-pinned Bedrock call looks like appears after this list.
Per-organisation isolation. Each LAIT customer runs in a separate environment. Your data is never co-mingled with another organisation's data.
Audit logging. Every interaction is logged. You can demonstrate to an auditor, a participant, or the NDIS Commission exactly how information was handled.
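To make the Sydney-region and Bedrock points concrete, here is a minimal sketch of a region-pinned Bedrock call in Python with boto3. It is illustrative only, not LAIT's actual implementation: the model ID and prompt are assumptions, and the point is simply that the client is created explicitly in ap-southeast-2 (Sydney), so inference requests are processed on Australian infrastructure rather than wherever a consumer app happens to route them.

```python
# Illustrative sketch only; not LAIT's implementation.
import json
import boto3

# Pin the Bedrock runtime client to the Sydney region (ap-southeast-2)
# so inference requests are processed on Australian infrastructure.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

# Hypothetical model ID and prompt, for illustration only.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 400,
        "messages": [
            {"role": "user", "content": "Draft a progress note summary from these shift notes: ..."}
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Per-organisation isolation and audit logging sit around calls like this at the infrastructure level. The takeaway is that region and provider are explicit configuration choices, not defaults you inherit from a consumer tool.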
See LAIT AI in action
Book a personalised demo and discover how LAIT AI keeps your organisation's data private.
Practical Advice for Providers Evaluating AI Tools
If you're evaluating AI tools for your organisation, here are the questions worth asking before you sign up:
Where is data stored? If the answer is "overseas" or "we use OpenAI," that's a red flag for NDIS participant data.
Is your data used for training? "No" should be a contractual commitment, not just a policy preference.
What isolation exists between customers? Shared infrastructure means shared risk.
Can you produce an audit trail? If a compliance question comes up, you need to be able to answer it.
Is there a BAA or equivalent data agreement? A BAA is the US term; for Australian health information, you need an equivalent formal, contractual agreement covering how it is handled, not just a privacy policy.
The AI market is moving fast and the marketing claims are getting bolder. Focus on the underlying infrastructure decisions — where does data go, who can access it, and what are the contractual protections.
NDIS providers have a real opportunity to use AI to reduce documentation burden and spend more time on actual support work. The goal is to do that without compromising the privacy of the people you're there to support.
Explore how LAIT approaches security and data handling, or learn more about our platform for NDIS providers.
NDIS AI Privacy Guide
Download our guide on using AI safely in NDIS environments — covering data privacy, compliance, and best practices.