Air Canada tried to throw its chatbot under the AI bus.
It did not work.
A Canadian court recently ruled that Air Canada must compensate a customer who purchased a full-fare ticket after receiving inaccurate information from the airline's chatbot.
Air Canada had argued that its chatbot made up the answer and therefore it should not be held liable. As Pepper Brooks from the movie Dodgeball would say, “That’s a bold strategy, Cotton. Let’s see if it pays off for them.”
But what does this chatbot error mean for you as brands add these conversational tools to their websites? What does it mean for the future of search, and how does it impact you when consumers use tools like Google's Gemini and OpenAI's ChatGPT to research your brand?
AI is disrupting Air Canada
AI seems to be the only topic of conversation these days. Clients expect their agencies to use it, as long as that use comes with a big discount on their services. “It’s so simple,” they say. “You must be so happy.”
Boards of startup companies are putting pressure on their management teams in this regard. “Where are we with an AI strategy?” they ask. “It’s so easy. Everyone is doing it.” Even Hollywood artists are hedging their bets by looking at the latest generative AI developments and saying, “Hmmm… Do we really want to invest more in humans?”
Let's all take a breath. People aren't going anywhere. Let me be clear: “AI is NOT a strategy. It's an innovation looking for a strategy.” The Air Canada ruling last week may be the first real test of that distinction.
The story begins with a man asking Air Canada's chatbot whether he could receive a retroactive refund under its bereavement fare if he provided the appropriate documentation. The chatbot encouraged him to book the flight to his grandmother's funeral and then, within 90 days, request a refund of the difference between the full fare and the bereavement fare. The passenger did what the chatbot suggested.
Air Canada refused to issue the refund, citing its policy, which explicitly states that bereavement fares cannot be claimed retroactively after the flight has been booked.
When the passenger sued, Air Canada's defense became even more interesting. It argued that the chatbot was a “separate legal entity” and that the airline therefore should not be held responsible for the chatbot's actions.
I remember a similar defense in my childhood: “I’m not responsible. My friends made me do it.” To which my mother replied, “Well, if you were told to jump off a bridge, would you do that?”
My favorite part of the case was when a member of the tribunal said what my mother would have said: Air Canada “does not explain why it believes” that its webpage titled “Bereavement Travel” was “inherently more trustworthy than its chatbot.”
The BIG flaw in human thinking about AI
That's what's interesting about the current AI challenge. Companies tend to treat AI as a strategy to be implemented rather than as an innovation to a strategy that should be implemented. AI is not the answer to your content strategy. AI is simply a way to improve an existing strategy.
Generative AI is only as good as the content – the data and training – fed to it. It is a fantastic pattern recognizer and predictor of the likely next word. But it does not think critically. It cannot distinguish fact from fiction.
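To make “predicting the likely next word” concrete, here is a deliberately tiny toy sketch – nothing like a real large language model, and the corpus is invented for the example. It counts which word most often follows another in its training text and parrots the pattern back:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word most often follows each word in a
# training corpus, then "predict" by picking the most frequent follower.
# No understanding, no fact-checking -- just pattern frequency.
corpus = (
    "bereavement fares can be requested before travel "
    "bereavement fares can be requested within ninety days"
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower seen in training, or '?'."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("be"))         # 'requested' -- frequency, not truth
print(predict_next("requested"))  # whichever pattern happened to dominate
```

The toy model “answers” confidently from frequency alone. Scale that idea up billions of times and you get fluent prose, but the underlying move is still pattern completion, not fact-checking.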
Think for a moment about your website as a learning model, a kind of brain. How accurately could it answer questions about the current state of your business? Think about all your help documents, manuals, and education and training content. Only by putting all of that – and only that – into an artificial brain could you trust the answers.
Your chatbot would probably produce some great answers and some poor ones. The Air Canada case involved a relatively small mistake. But imagine if it weren't small. And what about the impact of unintended content? Imagine if the AI tool tracked down that stray folder in your customer help repository – the one with all the snarky replies and idiotic answers. Or what happens if it finds the archive listing everything that's wrong with your product or security? The AI won't know that you don't want it to use that content.
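This is why content hygiene comes before the chatbot. Here is a minimal sketch of what an ingestion filter for a site chatbot might look like – the folder names are entirely hypothetical, not a real layout: index only the explicitly vetted folders and hard-exclude anything you would never want quoted back to a customer.

```python
from pathlib import Path

# Hypothetical ingestion step: only explicitly approved folders feed the
# chatbot's knowledge base, and known-bad folders are excluded outright.
APPROVED_DIRS = {"help", "manuals", "training"}
BLOCKED_DIRS = {"help/snarky-drafts", "internal/known-issues"}

def iter_indexable_docs(content_root: Path):
    """Yield only documents that belong in the chatbot's knowledge base."""
    for path in content_root.rglob("*.md"):
        rel = path.relative_to(content_root).as_posix()
        if any(rel.startswith(blocked) for blocked in BLOCKED_DIRS):
            continue  # never let the stray folder into the artificial brain
        if rel.split("/", 1)[0] in APPROVED_DIRS:
            yield path

for doc in iter_indexable_docs(Path("site-content")):
    print("indexing:", doc)
```

The point isn't these few lines; it's that a human has to decide, explicitly, what goes into the artificial brain and what stays out.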
ChatGPT, Gemini, and others also pose challenges for brands
Publicly available generative AI solutions may pose the greatest challenges.
I tested the problematic potential. I asked ChatGPT to tell me the prices for two of the most popular CRM systems. (I'll let you guess which two.) I asked it to compare the prices and features of the two similar packages and tell me which might be a better fit.
It initially told me it couldn't provide pricing for either, yet it footnoted the pricing page for each. I pushed back on that and asked it to compare the two packages it had cited. For one, it quoted a price that was 30% too high, failing to notice the price had since been reduced. For the other, it still couldn't provide a price, saying the company didn't disclose pricing – while again footnoting the very pricing page that clearly listed the costs.
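If your brand is the one being misquoted, even a crude accuracy check beats hoping. A minimal sketch, with made-up numbers roughly matching the scenario above, of flagging an AI-quoted price that drifts from the published one:

```python
# Hypothetical sanity check of an AI-quoted price against the vendor's
# published price. All values and the tolerance are illustrative.
PUBLISHED_PRICE_PER_SEAT = 25.00   # what the pricing page actually says
AI_QUOTED_PRICE = 32.50            # what the chatbot claimed (~30% high)
TOLERANCE = 0.05                   # allow 5% drift before flagging

drift = abs(AI_QUOTED_PRICE - PUBLISHED_PRICE_PER_SEAT) / PUBLISHED_PRICE_PER_SEAT
if drift > TOLERANCE:
    print(f"Flag: AI quote is off by {drift:.0%} -- verify against the source.")
```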
In another test, I asked ChatGPT, “What’s so great about [name of tech company]’s digital asset management (DAM) solution?” I know this company doesn't offer a DAM system, but ChatGPT didn't.
It came back with a response explaining that this company's DAM solution was a wonderful single source of truth for digital assets and a great system. It didn't tell me that it had paraphrased that response from content on the company's website highlighting its ability to integrate with a third-party DAM system.
Now, these mistakes are small. I get it. And to be fair, I got good answers to some of my more difficult questions in my short test. But that's exactly what's so insidious. If users expected answers that were always slightly wrong, they would check their accuracy. But when the answers seem correct and impressive despite being completely wrong – or accurate only by accident – users trust the entire system.
That is the lesson of Air Canada and the challenges that have followed.
AI is a tool, not a strategy
Remember: AI is not your content strategy. You still have to check its output. As has been true for more than 20 years, you must ensure that all your digital assets reflect the current values, integrity, accuracy, and trust you want to convey.
AI won't do this for you. It cannot know the value of these things unless you give it the value of these things. Consider AI as a way to innovate your human-centered content strategy. It can express your human story to all your stakeholders in different and potentially faster ways.
But only you can know if it's your story. You have to create it, value it and manage it, and then maybe AI can help you tell it well.