Why Generic AI Tools Put PI Firms at Risk (And What to Do About It)
Legal Tips · 9 min read · January 14, 2026 · Casey

Key Takeaways

  • Generic AI tools aren't HIPAA-compliant and may use your queries and data for model training
  • Documented sanctions cases show real consequences: courts have already penalized attorneys for citing AI-fabricated case law
  • PI-specific AI requires: HIPAA compliance, case-level data isolation, source-linked citations, and legal domain understanding
  • The solution: Purpose-built platforms with AI + human verification, not generic chatbots

The Problem with "Good Enough" AI

Most AI platforms are optimized for speed, not security.

They generate answers fast but can't link to the documents where those answers came from. They store your prompts without telling you where that data lives or who can access it. They don't isolate data for each customer and case—creating risk of sensitive data from one matter leaking into an unrelated one.

When it comes to personal injury law, this creates liability. Your cases involve:

  • Medical records
  • Billing histories
  • Insurance claim files
  • Settlement negotiations
  • HIPAA-protected health information

You're already on the hook to protect this data. But if your AI tools aren't designed for legal use, you may be exposing yourself in ways that are hard to detect—and harder to explain later.


Why Most AI Tools Aren't Built for PI

Generic AI platforms—including ChatGPT, Claude, and other general-purpose tools—are built for speed and scale, not legal-grade confidentiality.

1. Weak Access Controls

Generic tools often lack robust permissioning. Even some "secure" platforms make it hard to answer basic questions:

  • Who accessed this case file last week?
  • Did that departed employee still have access?
  • Can we trace how this data was modified?

SOC 2 audits are a must-have baseline. But internal access controls—offboarding procedures, credential hygiene, audit trails—matter just as much.
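
For the technically minded, an audit trail is just an append-only log of access events that can answer those questions directly. Here is a minimal sketch in Python, assuming a hypothetical record format (the field names and function are illustrative assumptions, not any vendor's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical audit-log record; field names are illustrative assumptions.
@dataclass(frozen=True)
class AccessEvent:
    user_id: str         # who touched the data
    case_id: str         # which matter
    action: str          # "view", "download", "edit", ...
    timestamp: datetime  # when it happened

def accesses_last_week(log: list[AccessEvent], case_id: str) -> list[AccessEvent]:
    """Answer 'who accessed this case file last week?' from an append-only log."""
    cutoff = datetime.now() - timedelta(days=7)
    return [e for e in log if e.case_id == case_id and e.timestamp >= cutoff]
```

If a vendor can produce this kind of record on demand, the offboarding and credential-hygiene questions above become answerable rather than aspirational.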

2. No Client-Level Data Separation

Storing data securely is one thing. Keeping it logically isolated is another.

Proper legal AI platforms don't just separate data at the firm level—they isolate at the case level. This means:

  • No accidental spillover between unrelated cases
  • Tighter internal controls for staff
  • Reduced blast radius if something goes wrong
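
For readers who want to see what that isolation means in practice, here is a minimal Python sketch under assumed names (the class and methods are illustrative, not a real product's API). The point is that every read is checked against an explicit user-to-case permission, so a request made in the context of one matter can never return documents from another:

```python
# Minimal sketch of case-level isolation; names are illustrative assumptions.
class CaseScopedStore:
    def __init__(self) -> None:
        self._documents: dict[str, list[str]] = {}       # case_id -> documents
        self._permissions: set[tuple[str, str]] = set()  # (user_id, case_id) grants

    def grant(self, user_id: str, case_id: str) -> None:
        self._permissions.add((user_id, case_id))

    def add_document(self, case_id: str, doc: str) -> None:
        self._documents.setdefault(case_id, []).append(doc)

    def get_documents(self, user_id: str, case_id: str) -> list[str]:
        # Every read is scoped to a single case and checked against explicit grants.
        if (user_id, case_id) not in self._permissions:
            raise PermissionError(f"{user_id} has no access to case {case_id}")
        return list(self._documents.get(case_id, []))
```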

3. Legal Blind Spots

Even if a tool is secure, you still need to worry about what it does with your data.

General-purpose AI doesn't specialize in legal documents—and it shows. These tools often:

  • Misread ICD codes
  • Skip over critical billing gaps
  • Suggest treatments for deceased clients
  • Summarize notes without citing sources
  • Confidently state facts that don't exist

The last point is the most dangerous. When an AI generates a plausible-sounding citation that doesn't exist, the attorney using it may not discover the problem until opposing counsel or the court does.


Real Consequences: Documented Sanctions Cases

These aren't hypothetical risks. Courts have already sanctioned attorneys for AI-generated errors.

Avianca Airlines Case (June 2023)

A personal injury attorney used ChatGPT to prepare a filing against Avianca Airlines and cited at least six cases that didn't exist. The fabricated citations looked legitimate—complete with case names, reporters, and page numbers—but were entirely invented by the AI.

The incident triggered a sanctions hearing and led multiple federal courts to require lawyers to certify whether AI was used in filings.

Vancouver Custody Case (February 2024)

A lawyer came under review after submitting ChatGPT-generated case law in a child custody dispute. The cases she cited in support of her client's application to take the children overseas turned out to be entirely fictional.

The Pattern

In both cases, attorneys trusted AI output without verification. The tools generated confident, detailed citations that weren't real—and the attorneys submitted them to courts.

Generic AI doesn't understand the stakes of legal practice. It doesn't carry your professional liability insurance. And it doesn't face sanctions when things go wrong—you do.


What PI-Ready AI Should Include

If you're evaluating AI tools for personal injury practice, here's what secure, compliant platforms provide:

  • End-to-end encryption — Protects data in transit and at rest, not just one or the other
  • HIPAA-compliant infrastructure — Actual audit-traceable controls, not just a marketing checkbox
  • Business Associate Agreement — Legal commitment to HIPAA compliance
  • Role-based access controls — Permissions by role, not just by account
  • Case-level data isolation — Minimizes the blast radius of any mistake or breach
  • Audit logs — Know who accessed what, when, with complete trails
  • No training on client data — Your files stay yours, never used to improve the vendor's models
  • Legal domain understanding — AI trained on PI workflows: causation, treatment gaps, damages
  • Source-linked citations — Every claim traceable to a specific document and page

The Training Data Question

A critical concern with generic AI: your prompts and data may be used to train models.

OpenAI CEO Sam Altman has publicly acknowledged that ChatGPT conversations carry no legal privilege and could be discoverable in litigation. And depending on your plan and settings, prompts and uploads may be used for model training—meaning medical records your firm pastes into ChatGPT could inform training sets and, at least in principle, influence outputs served to other users.

Purpose-built legal AI platforms contractually guarantee that client data is never used for model training.


Why AI + Human Review Is the Solution

The failure mode in sanctions cases wasn't AI itself—it was trusting AI output without verification.

Generic AI gives you raw output and leaves you to figure out what's correct. Proper legal AI works differently:

Source-Linked Verification

Every fact in an AI-generated document links back to the source record. Click any claim to see the original page. If the AI states a treatment date or diagnosis, you can verify it against the actual medical record in seconds.

This transforms AI from "trust me" to "check this."
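
In data terms, a source-linked claim is simply a fact bundled with a pointer back to where it came from. A minimal Python sketch, assuming a hypothetical structure (the field names are illustrative, not any platform's schema):

```python
from dataclasses import dataclass

# Hypothetical source-linked claim; field names are illustrative assumptions.
@dataclass(frozen=True)
class SourcedClaim:
    text: str         # e.g. "MRI of lumbar spine performed 2024-03-12"
    document_id: str  # which uploaded record the fact came from
    page: int         # page within that record

    def citation(self) -> str:
        # Render the claim with its pin cite so a reviewer can jump to the source.
        return f"{self.text} [{self.document_id}, p. {self.page}]"
```

If every statement in a demand letter or chronology carries this kind of pointer, verification becomes a few clicks instead of a document hunt.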

Human Expert Layer

For high-stakes outputs—demand letters approaching policy limits, complex economic calculations, cases heading to trial—purpose-built platforms offer optional human expert review.

The AI does the heavy lifting of extraction and organization. Human experts verify accuracy before final output. You get speed without sacrificing reliability.

The Hybrid Model

  • AI processes thousands of pages consistently
  • Every fact links to source documentation
  • Human verification ensures accuracy for high-value matters
  • Attorneys maintain strategic control

This isn't AI replacing attorneys—it's AI handling extraction while humans focus on judgment and strategy.


Security Checklist for AI Vendors

Before uploading client data to any AI platform, verify:

  • SOC 2 Type II certification — Annual third-party security audit
  • HIPAA BAA available — Business Associate Agreement for PHI
  • GDPR compliance — Data privacy standards
  • Clear data residency policies — Know where data is stored
  • No model training on client data — Contractual guarantee
  • Audit trail capabilities — Complete access and modification logs
  • Case-level data isolation — Logical separation between matters
  • Role-based access controls — Granular permissions
  • Encryption at rest and in transit — End-to-end protection

If a vendor can't provide clear answers on these points—or if the answers require "trust us"—keep looking.


Generic AI (ChatGPT, etc.) vs. Purpose-Built Legal AI

  • No HIPAA compliance vs. HIPAA-compliant with a BAA
  • Data may be used to train models vs. client data never used for training
  • No source citations vs. every fact linked to its source page
  • Invents plausible-sounding facts vs. states only what it can verify
  • No case isolation vs. case-level data separation
  • No audit trails vs. complete access logging
  • Generic outputs vs. outputs trained on PI workflows
  • No human verification option vs. expert review available

The distinction matters because many firms assume "AI" is a single category. It's not. Tools built for consumer chatbots operate differently than tools built for legal practice with sensitive client data.


Practical Steps for Your Firm

If You're Already Using Generic AI

  1. Stop uploading medical records to ChatGPT, Claude, or similar tools immediately
  2. Review your BAAs — Does your current tooling have proper agreements in place?
  3. Audit recent AI use — Were any AI-generated materials submitted to courts or opposing counsel?
  4. Evaluate PI-specific alternatives — Purpose-built platforms exist for this workflow

If You're Evaluating New Tools

  1. Request compliance documentation upfront — SOC 2 reports, BAAs, data policies
  2. Test with real (anonymized) data — See how the tool handles your actual case types
  3. Verify source linking — Can you trace every AI statement to its origin?
  4. Ask about human verification options — What's the process for high-stakes outputs?
  5. Check the business model — If it's free, you're probably the product

Frequently Asked Questions

Can I use ChatGPT if I anonymize the data first?

Anonymization is harder than it appears. Medical records often contain identifying information throughout—not just in header fields. And even with anonymization, you're still getting outputs that aren't source-linked or verified. The risk-reward calculation doesn't favor generic tools.

What about enterprise versions of ChatGPT or Claude?

Enterprise tiers improve some security features, but they still lack legal domain training, source citation, case-level isolation, and human verification workflows. They're more secure versions of tools not designed for legal practice—not purpose-built legal AI.

Are there ethical rules about AI disclosure?

Several jurisdictions now require disclosure of AI use in court filings. Even where not required, courts increasingly expect transparency. The safest approach: use AI responsibly, verify outputs, and be prepared to explain your process if asked.

How do I explain AI use to clients?

Focus on the outcome: AI helps process records faster and more thoroughly, identifying details that manual review might miss. Emphasize that attorney judgment and human verification remain central to case strategy. Most clients care about results and security—explain how AI improves both.


Conclusion

AI is transforming personal injury practice. But not all AI carries equal risk.

Generic tools like ChatGPT aren't built for legal work. They lack HIPAA compliance, may use your data for training, don't cite sources, and occasionally invent facts that sound real but aren't. The sanctions cases show these aren't theoretical concerns.

Purpose-built legal AI platforms address these gaps with security infrastructure, source-linked citations, case-level isolation, and human verification options.

Opposing counsel won't give you a pass for using "industry standard" tools if the wrong file leaks—or if your filing cites cases that don't exist. Choose AI designed for the stakes of legal practice.


Ready to see what secure, PI-specific AI looks like? Request a demo to explore compliant workflows.
