15 things I have learned launching AI projects - Part 1

Posted on Mar 30, 2026

The context in which we build AI

This is a little preamble because it’s mostly about how to frame AI in a way that makes you… forget about AI.

1: “AI” is a fuzzy, evolving label

Today, when we say “AI”, we mostly mean Large Language Models, chat interfaces, agents, and so on. But this wasn’t always the case. It makes me smile – and feel a little old – when we refer to image pattern recognition as “traditional AI”.

The reality of AI is that it is an ever-evolving label with fuzzy contours. Since I started being interested in the magic world of computers, “AI” has changed meaning multiple times. When I started university, AI mostly referred to probabilistic rule-based systems and logic programming; neural networks were an interesting but odd research area with little hope of application. By the time I started my (failed) PhD, there was strong interest in biologically inspired systems that had little to do with the AI models we have today, such as the famous Ant Colony Optimisation. Then deep learning opened up a whole new range of algorithms, which led to where we are today with language models.

Acknowledging AI as this shapeshifting “thing” matters if you are interested in adoption as opposed to frontier research, because your key question should be how to make things better by using this exciting new “thing”, rather than how to develop the “thing” itself.

I have a very simple definition of AI that has stuck with me through these generational changes: AI is where data meets product. If you are an AI adopter, how you use the algorithm to satisfy a user need, address a pain point, or reframe a business process matters more than the underlying technology.

The label “AI” has changed, and will likely change again. But as an adopter, I’ve learned not to be overly concerned about what AI is and isn’t. That’s not to say that you can’t get excited about the underlying science, and even spend your time (as I do) playing with the tech; this can, in fact, make you better at understanding and exploiting its potential. But it’s not an end in itself if you’re an AI adopter.

2: Before you build anything, ask what’s the user need

One big issue with AI systems today is that a lot of people are throwing them at problems that don’t need AI. We saw the same problem a decade ago, when the answer to every problem was data.

The key question to ask, which links to the previous point, is: is the use case I’m trying to address a good match for this solution? There’s always a strong temptation to reach for complexity where something simple – but boring – could give you the answer. Or, worse, sometimes we have solutions looking for a problem (blockchain anyone?), which rarely builds lasting positive change.

As much as I’m a techie, experience has taught me to never start from tech. I’ve become a big adopter of agile ways of working and user-centric design as a way to prioritise and speed up work. Yes: agile done well means faster delivery, not never-ending sprints. Starting from a problem ensures the best outcome.

3: If you can’t measure it, you can’t justify it

A feature of successful delivery is a culture of measurement: performance evaluation, metrics, and measurements all contribute to success. Ideally, you identify the metrics that matter, take your measurements, and compare against the existing process you’re replacing, the service you’re redesigning, or something similar and close to it. Good metrics may include cost per transaction, the accuracy of an AI model in comparison to a human baseline, or the time saved.
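As a minimal sketch of what such a comparison can look like in practice – all names and numbers below are invented for illustration, not taken from a real deployment – “accuracy against a human baseline” and “cost per transaction” can start out as simply as:

```python
# Hypothetical example: comparing an AI model against a human baseline
# on the same set of decisions. All figures are illustrative.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Toy ground truth and outputs from the two processes being compared.
labels       = ["approve", "reject", "approve", "approve", "reject"]
model_output = ["approve", "reject", "reject",  "approve", "reject"]
human_output = ["approve", "approve", "approve", "approve", "reject"]

model_acc = accuracy(model_output, labels)
human_acc = accuracy(human_output, labels)

# Cost per transaction (made-up figures: inference cost vs staff time).
model_cost_per_txn = 0.05
human_cost_per_txn = 1.20

print(f"model: {model_acc:.0%} accurate at ${model_cost_per_txn:.2f}/txn")
print(f"human: {human_acc:.0%} accurate at ${human_cost_per_txn:.2f}/txn")
```

The point isn’t the code, which is trivial; it’s that the comparison is only possible if you measured the existing human process before you replaced it.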

The best organisations have a no-blame culture around delivery: they measure, and they learn to change based on what they measure. So if you’re adopting an AI solution, your first question should be whether it outperforms the status quo.

Obviously, especially at the pioneering end of the adoption curve, it is perfectly justifiable to take a test-and-learn approach and deploy a new service that doesn’t yet do better than the existing tech. An AI deployment that gets your workforce engaging with AI solutions, and turns them into more informed adopters, is a good step – but it rarely works in isolation, unless it is part of a broader plan with measurable impacts.

In Part 2, I’ll share some practical lessons about AI adoption.

And if you’d like to get updates, please join my weekly newsletter about data.