The Chatbot That Made a Promise It Couldn't Keep
🔴 REAL INCIDENT: Air Canada held liable for chatbot misinformation (February 2024)
What Happened
On November 11, 2022, Jake Moffatt visited Air Canada's website with a heavy heart. His grandmother had just passed away, and he needed to book a last-minute flight from Vancouver to Toronto for the funeral.
Like many customers, Moffatt consulted the website's AI chatbot for guidance on bereavement fares—discounted tickets offered to those traveling for family emergencies.
The chatbot's response was clear and confident: Book at full price now, and apply for a bereavement discount retroactively within 90 days of travel.
Moffatt did exactly that. He paid $1,630.36 for tickets and submitted his bereavement application after returning home.
Air Canada refused the refund.
The Fine Print vs. The Chatbot
Here's where it gets interesting. The chatbot's advice directly contradicted Air Canada's actual policy.
The real bereavement policy, buried elsewhere on the website, clearly stated that discount requests must be made before travel, not after. The chatbot had invented a policy that didn't exist.
When Moffatt pointed out the discrepancy, Air Canada's position was remarkable. As the tribunal later summarized it, the airline argued, in effect, that:
"the chatbot is a separate legal entity that is responsible for its own actions."
In other words, Air Canada claimed that even though the chatbot lived on its website, answered questions on its behalf, and carried its branding, the airline was not responsible for what it said.
The Tribunal's Verdict
On February 14, 2024, the British Columbia Civil Resolution Tribunal rejected this argument completely.
Tribunal member Christopher Rivers wrote:
"Air Canada does not explain why customers should have to double-check information found in one part of its website on another part of its website."
The ruling was unambiguous: Companies are liable for all information on their websites, including statements made by AI chatbots.
Air Canada was ordered to pay Moffatt $812.02: $650.88 in damages (the difference between the fare he paid and the bereavement rate), plus pre-judgment interest and tribunal fees.
Why This Matters for Every Enterprise
The $812 award is almost irrelevant. The precedent is everything.
This case established several principles that should keep every enterprise AI deployer awake at night:
1. You cannot outsource liability to your AI.
The chatbot was "part of Air Canada's website" and therefore Air Canada was "responsible for all the information on its website."
2. Customers don't need to verify AI answers.
The tribunal rejected the argument that Moffatt should have checked the chatbot's answer against other sources. If your AI says something, you own it.
3. "Reasonable care" applies to AI outputs.
The tribunal found Air Canada "did not take reasonable care to ensure its chatbot was accurate." This standard will likely apply to future cases.
The Root Cause
This wasn't a sophisticated attack or an edge case. The chatbot either lacked access to the correct policy, was trained on outdated information, or hallucinated a policy that sounded plausible.
The real failure was operational:
- No verification layer between AI output and customer-facing responses (a minimal sketch of such a gate follows this list)
- No monitoring to catch when the chatbot gave incorrect policy information
- No feedback loop where customer complaints could identify AI accuracy issues
- No human escalation for high-stakes questions like bereavement policies
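None of these safeguards requires exotic technology. As a deliberately minimal sketch, assuming a keyword-based gate (the topic list, function names, and holding message below are illustrative, not any vendor's actual API), a pre-send verification layer can be as small as this:

```python
# Hypothetical pre-send gate: every AI draft passes through this check
# before it reaches a customer. The topic keywords and escalation path
# are illustrative assumptions, not a production design.

SENSITIVE_TOPICS = {"bereavement", "refund", "compensation", "legal", "cancellation"}

def requires_human_review(query: str, draft: str) -> bool:
    """Flag any exchange that touches high-stakes policy for human sign-off."""
    text = f"{query} {draft}".lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)

def respond(query: str, draft: str) -> str:
    """Send the AI draft only when it clears the sensitivity gate."""
    if requires_human_review(query, draft):
        # In production this would enqueue the conversation for a human
        # agent and log the event; here it just returns a holding message.
        return "This question needs a specialist. A human agent will follow up."
    return draft

# The query that started the Air Canada case would be caught:
print(respond("How do bereavement fares work?",
              "Book at full price now and apply for the discount within 90 days."))
```

A keyword gate is crude, but even this would have routed Moffatt's question to a human instead of letting the chatbot invent a policy.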
How It Could Have Been Prevented
This case is a textbook example of what happens when AI agents operate without proper supervision.
With an agent command center, Air Canada could have:
- Monitored chatbot responses for policy-related queries and flagged inconsistencies
- Required human verification for sensitive topics like bereavement, refunds, or legal commitments
- Audited agent accuracy by comparing outputs against source-of-truth policy documents (sketched after this list)
- Detected patterns where customers were receiving incorrect information before litigation occurred
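To make the auditing idea concrete, here is a hedged sketch of an offline accuracy check that replays logged chatbot answers against the official policy text and flags contradictions. The policy wording, the single regex rule, and the function names are illustrative assumptions, not a real accuracy checker:

```python
# Illustrative offline audit: replay logged chatbot answers against a
# source-of-truth policy document and flag contradictions for review.
import re

OFFICIAL_POLICY = (
    "Bereavement fare requests must be submitted before travel. "
    "Retroactive applications are not accepted."
)

# One deliberately simple rule: does the answer promise a retroactive discount?
RETROACTIVE_CLAIM = re.compile(
    r"retroactive|after (your )?travel|within \d+ days of travel", re.IGNORECASE
)

def audit_answer(answer: str, policy: str = OFFICIAL_POLICY) -> list[str]:
    """Return human-readable flags where the answer contradicts the policy."""
    flags = []
    if RETROACTIVE_CLAIM.search(answer) and "not accepted" in policy.lower():
        flags.append("answer promises a retroactive discount; policy forbids it")
    return flags

logged_answer = ("Book at full price now and apply for a bereavement "
                 "discount within 90 days of travel.")
print(audit_answer(logged_answer))  # flags the exact error from this case
```

Even a handful of rules like this one, run nightly over conversation logs, would surface the pattern of customers being told the wrong policy long before a tribunal does.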
The technology to prevent this existed. The operational discipline to deploy it didn't.
The Lesson
Air Canada's defense—"the chatbot is a separate entity"—was legally absurd. But it reveals a common mindset: the idea that AI tools are somehow outside the normal chain of responsibility.
They're not.
Every AI agent that speaks on your behalf is you. Every promise it makes, every policy it invents, every commitment it offers—you own it.
The question isn't whether your AI will make a mistake. It's whether you'll find out from your monitoring system or from a legal summons.
This is why we built Supervaize: to help enterprises monitor, audit, and govern AI agents before small errors become costly precedents.
Sources:
- Moffatt v. Air Canada, 2024 BCCRT 149 (British Columbia Civil Resolution Tribunal, February 14, 2024)