The limits of an internal‑only AI strategy
Most conversations about AI in business start in the same place: how it can improve internal productivity.
How do we automate workflows, synthesise data, accelerate decisions? These are legitimate questions, but they frame AI as a tool that operates inside the organisation. That framing is already becoming incomplete.
The more consequential shift is what happens when AI starts operating between organisations and the people they serve. That is where the emerging strategic questions lie, ones that most leadership teams are not yet asking.
The internal cost of poor information
AI functions as a force multiplier for an organisation’s underlying information. Feed it reliable, well-structured data and it compounds that value, surfacing patterns and insights that were previously invisible. But the inverse is far more dangerous: feed it fragmented, inconsistent or siloed information and the technology doesn’t neutralise the mess; it scales it.
This is a 21st-century evolution of an old computing principle: Garbage In, Garbage Out (GIGO). While the logic remains the same, AI has fundamentally changed the cost of ignoring it.
In the pre-AI era, bad data usually looked like bad data. A messy spreadsheet, a disjointed database, or a series of conflicting PDFs carried their own warning signs; the lack of structure acted as a friction point that prompted human caution.
AI removes that friction. An LLM can take a chaotic, error-ridden dataset and synthesise it into a polished, persuasive executive summary. It applies a linguistic “varnish” to structural rot. The rough edges and inconsistencies that once signalled “this data isn’t quite right” are smoothed away by the model’s probabilistic nature.
The result is a dangerous optical illusion: the output carries a level of authority and confidence that the underlying information never earned.
This creates a "Confidence Gap" that is now manifesting in boardrooms globally. Most organisations have had the high-level AI strategy conversation focusing on which tools to buy and which workflows to automate. Far fewer have had the information governance conversation with the same level of seriousness.
That gap is where avoidable, systemic risk accumulates. An organisation that treats AI as a “plug-and-play” solution, without first fixing the “fuel” that powers it, is building a high-velocity decision-making engine on compromised data. The risk isn’t just that the AI will be wrong; it’s that it will be convincingly wrong, leading leadership to make “data-driven” decisions on flawed foundations.
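To see how little it takes, here is a deliberately contrived sketch (invented figures, Python purely for illustration): two regions report revenue in pounds, a third in thousands of pounds, and a naive aggregation produces a fluent, authoritative sentence that is badly wrong.

```python
# A contrived sketch of "convincingly wrong". The records mix units
# (GBP vs. GBP thousands), a classic silo inconsistency; all figures
# are invented for illustration.
records = [
    {"region": "North", "revenue": 1_250_000},  # reported in GBP
    {"region": "South", "revenue": 980_000},    # reported in GBP
    {"region": "West",  "revenue": 1_100},      # reported in GBP thousands
]

total = sum(r["revenue"] for r in records)
print(f"Executive summary: group revenue was £{total:,} this quarter.")
# Prints £2,231,100. The true figure is £3,330,000. The sentence reads
# with complete authority; the unit mismatch that once looked like
# bad data is now invisible.
```

A human analyst might have paused at the West region’s odd magnitude; a summarisation layer simply absorbs it.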
How AI will reshape the customer relationship
At Lyfeguard, we see the same pattern play out across every significant life event: from house buying and starting a family to incapacitation and bereavement. People have an extraordinarily difficult relationship with their own information. They don't know where it is, can't easily access it, and struggle to understand what it means.
The work of getting on top of it, knowing what you have, where it lives, and whether it's ready for the moment it's needed, gets pushed to the bottom of the priority list not out of negligence, but because engaging with it is genuinely, structurally difficult.
That fragmentation is a human problem first. But it is also, increasingly, a business one, and AI is the reason the stakes have changed.
AI is moving from being a tool that operates inside organisations to one that acts on behalf of individuals. When that happens, AI stops being just a productivity tool. It becomes a market intermediary.
This is already happening. People are using AI assistants to research financial products, compare service providers, interpret complex terms and conditions, and make recommendations they once relied on professional advice or personal research to reach.
A customer choosing a solicitor, assessing a pension provider or evaluating an accountancy firm will increasingly do so through an AI intermediary that synthesises available information and surfaces a view.
When a human customer researches a business, they navigate imperfect information reasonably well. They read between the lines, ask questions, and tolerate ambiguity.
When an AI agent researches a business on a customer’s behalf, it works with what it can find, interpret and trust. Gaps don’t get filled by intuition, inconsistencies don’t get resolved by goodwill, and opacity doesn’t get overcome by a persuasive sales conversation.
The businesses that are legible to AI, whose information is structured, accurate, consistent and accessible, will have a material advantage over those that are not.
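What “legible” means here is concrete and unglamorous. One mechanism that already exists is schema.org structured data, the markup search engines consume today and that AI agents can read the same way. The sketch below is a minimal illustration, with an invented firm and invented details, of what a machine-readable description of a professional-services business looks like.

```python
import json

# A minimal, illustrative machine-readable business description using
# the schema.org vocabulary. The firm, contact details and hours are
# all invented.
business = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",
    "name": "Example Solicitors LLP",
    "description": "Conveyancing, probate and family law services.",
    "url": "https://www.example.com",
    "telephone": "+44 20 7946 0000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Manchester",
        "addressCountry": "GB",
    },
    "openingHours": "Mo-Fr 09:00-17:30",
}

# Embedded in a web page as <script type="application/ld+json">, this is
# the difference between an agent inferring your opening hours from
# scattered prose and simply reading them.
print(json.dumps(business, indent=2))
```

None of this is exotic. It is the same markup many businesses already add for search visibility, now serving a new class of reader.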
Information quality as competitive strategy
It’s easy to frame information as a burden, and for most people and most organisations that’s exactly how it feels: something to manage, maintain, file, and worry about. But one of the shifts that studying on the Transforming Customer Experiences course at Alliance Manchester Business School crystallised for me is that this framing holds businesses back from seeing what good information actually makes possible.
Good information is not a hygiene factor. It is the foundation of genuinely personalised customer experiences. It is what makes the difference between a customer filling in the same form for the fifth time and a business that already knows what they need.
It is the thing that transforms a relationship from transactional to genuinely valuable; where the institution understands the person, anticipates their needs, and serves them in ways that feel effortless rather than effortful.
This reframes the argument entirely. Information governance stops being an IT concern or an internal risk management issue. It becomes a question of whether your organisation is interpretable to the systems that will increasingly mediate your customers’ decisions.
Businesses have spent decades optimising for human attention: clear messaging, strong branding, persuasive salespeople, well-designed customer journeys. That work retains its value. But a new layer of competition is emerging, one where the audience is not just human and the criteria are not just emotional.
Machine interpretability is not the same as human appeal. An organisation can be compelling to a person and opaque to an AI. The inverse is also possible and will increasingly be an advantage. A business that is hard to read won’t prompt an AI agent to try harder. It will simply be omitted.
The organisations that understand this earliest will start making different decisions: about how information is structured and maintained, about consistency across channels and systems, about transparency in how they describe what they do and how they do it. Not because regulators require it, but because the market will increasingly reward it.
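As a sketch of what “consistency across channels” looks like as a check rather than an aspiration, consider a toy audit: the same customer-facing facts as three hypothetical systems might hold them, diffed field by field. Every system, field and value here is invented.

```python
# A toy consistency audit across hypothetical channels. Any field on
# which the sources disagree is flagged; all data is invented.
sources = {
    "website":   {"phone": "+44 20 7946 0000", "fee_basis": "fixed"},
    "crm":       {"phone": "+44 20 7946 0000", "fee_basis": "hourly"},
    "directory": {"phone": "+44 20 7946 0001", "fee_basis": "fixed"},
}

fields = {f for record in sources.values() for f in record}
for field in sorted(fields):
    values = {name: record.get(field) for name, record in sources.items()}
    if len(set(values.values())) > 1:  # the channels disagree
        print(f"Mismatch on '{field}': {values}")
```

An AI agent that encounters these contradictions does not phone to check. It picks one version silently, or discounts the business altogether.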
The leadership implication
None of this is primarily a technical problem. The decisions that shape information quality (what gets collected, how it is maintained, who is accountable for its accuracy, how it is structured and made accessible) are leadership decisions. They reflect priorities and culture as much as systems and processes. Information quality is not a janitorial task for the IT department. It is the infrastructure of your market position.
The executives who will navigate this well are those who start asking a different set of questions. Not just “what can our AI do?” but “what does our information actually look like to something trying to interpret it from the outside?” Not just “are we using AI effectively internally?” but “are we legible to the AI our customers are already using?”
This is what AI literacy looks like at a strategic level. Not fluency with tools, but an understanding of what those tools depend on, and what it means when the tools belong to your customers, not just to you.
The principle is old. Garbage In, Garbage Out has always applied to machines. What has changed is who the machines work for. As customers increasingly rely on AI to interpret businesses, information quality stops being an internal hygiene factor. It becomes market visibility.
