The AI Mirage: Why Automation Strategy is Built on Sand

I’ve spent the last 20+ years working with data, and if there’s one thing I’ve learned, it’s that every few years a new mythical buzzword comes along.

Right now, that buzzword is “AI and Automation”. But as someone who has watched several tech cycles, I can tell you that we are currently sprinting towards a cliff edge.

The problem isn’t the technology itself but the fuel. We are seeing a massive rush to push AI into businesses, but hardly anyone is looking under the bonnet to see whether the engine can handle what we’re feeding it.

The View from the Boardroom vs. the Tech Department

In the boardroom, the discussions are all about competitive advantage and rapid scaling. Senior management see AI as a plug-and-play solution: you buy the licence, you hook it up to your database (how, God only knows) and, as if by magic, efficiency and performance increase by x amount.

However, back in the technology department, the reality is rather different. Data scientists and engineers are the ones firefighting. They are being asked to build Ferraris out of scrap. When they try to explain that the data is too messy, inconsistent or just plain wrong, they’re often met with a blank stare. The phrase “just make it work”, or worse, “don’t be a sceptic, it will work”, normally follows.

The Garbage In, Garbage Out (GIGO) rule hasn’t disappeared just because the software got smarter. In fact, it’s worse: AI scales your mistakes. If your underlying data is rubbish, AI just helps you produce rubbish at a speed and volume that was previously impossible.

Why Fast AI is Usually a Bad Idea

Cutting corners on data quality to get an AI tool live might look good in a board report, but the long-term debt is massive. AI doesn’t usually tell you when the numbers make no sense, when there is a gap or when it’s confused. It provides a confident, polished, incorrect answer based on the data it has. If that answer drives a company-wide pivot, you’re in trouble.

Your best employees will leave. Most data engineers don’t want to spend 90% of their time cleaning data that should have been governed properly at the source.

The Black Box Trap – Once you automate a process on top of bad data, it becomes incredibly hard to unpick later. You end up with a black box that no one understands and everyone is afraid to touch.

How to Fix the Foundation (Without Stopping the Train)

You probably can’t tell your CEO to stop the AI rollout for six months while you tidy up. You need to fix the car while it’s moving. Here are some realistic ways to mitigate the risk:

Data Quality Store – If you are going to store data, store it in a healthy manner. A Data Quality Store is essentially a dashboard or metadata layer that gives every dataset a grade. Before a team loads a dataset into an AI model, they check the “nutritional label”. If the accuracy is only 60%, the model doesn’t run. It makes the invisible problem of bad data visible to everyone. Only once the data has been sent back and cleaned is it loaded.
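The “nutritional label” gate can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the dimension names, threshold and function names are all my own assumptions about what such a store might record.

```python
# A toy "nutritional label" gate. Assumes each dataset carries a metadata
# record with per-dimension quality scores between 0.0 and 1.0 (the
# dimension names here are illustrative, not a standard).

QUALITY_THRESHOLD = 0.9  # minimum grade before a model may consume the data

def quality_grade(metadata: dict) -> float:
    """Combine per-dimension scores into a single grade: worst dimension wins."""
    dimensions = ("completeness", "accuracy", "freshness")
    return min(metadata.get(d, 0.0) for d in dimensions)

def can_load(dataset_name: str, metadata: dict) -> bool:
    """Gate check: refuse to feed the model if the grade is below threshold."""
    grade = quality_grade(metadata)
    if grade < QUALITY_THRESHOLD:
        print(f"BLOCKED: {dataset_name} graded {grade:.0%}, "
              f"below the {QUALITY_THRESHOLD:.0%} threshold")
        return False
    return True

# A dataset that is complete and fresh but only 60% accurate still fails.
sales_label = {"completeness": 0.98, "accuracy": 0.60, "freshness": 0.95}
can_load("sales_q3", sales_label)
```

Taking the *minimum* across dimensions, rather than an average, is a deliberate choice: a dataset that is 98% complete but 60% accurate should not scrape through on a blended score.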

Data Contracts – We need to stop treating data as a byproduct and give it the importance it deserves. Implement data contracts between the people generating or inputting the data (like sales or ops teams) and the people using it. If the data doesn’t meet the agreed standard, the pipeline rejects it automatically. It’s better to have a broken pipeline than a broken business decision.
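The automatic rejection is the whole point, and it can be shown in miniature. This is a toy contract, assuming records arrive as dicts from a sales system; the field names are mine, and real pipelines would typically express the contract in JSON Schema or a framework such as Great Expectations rather than hand-rolled code.

```python
# A toy data contract: the producing team agrees to these fields and types,
# and the pipeline refuses anything that doesn't conform. Field names are
# illustrative assumptions, not from any real system.

CONTRACT = {
    "order_id": str,
    "amount": float,
    "region": str,
}

class ContractViolation(Exception):
    """Raised when a record breaks the agreed contract."""

def validate(record: dict) -> dict:
    for field, expected_type in CONTRACT.items():
        if field not in record:
            raise ContractViolation(f"missing field: {field}")
        if not isinstance(record[field], expected_type):
            raise ContractViolation(
                f"{field} should be {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return record

def ingest(records: list) -> list:
    """Fail the whole batch loudly rather than let bad rows through silently."""
    return [validate(r) for r in records]
```

Note that `ingest` raises on the first bad record instead of quietly dropping it: a broken pipeline is visible; silently degraded data is not.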

Data Observability – In the old days, we did batch testing, data audits, data checks, suitability exercises and the lot. In the AI era, you need automated observability: tools that alert you the second a data trend looks weird. The batch test that arrives three weeks later, after the automated reports have already gone out, is dead weight and wasted time.

Pilots (Divide and Conquer) – Instead of a company-wide AI transformation, pick one specific, high-quality data stream. Build a small, successful automation there. Use that success to prove that the quality of the data was the reason it worked, not just the AI itself.

The next few years won’t be won by the companies with the fanciest AI models. They will be won by the companies that had the discipline to fix their data foundations. We need to stop talking about AI as a magic wand and start talking about it as a high-precision tool that requires high-precision fuel. If we don’t, we’re just automating our own obsolescence.