15 things I have learned launching AI projects - Part 2

Posted on Mar 30, 2026

What AI adoption means in practice

Part 1 was a preamble based on framing AI in the context of building solutions. Here, we get a bit more practical detail.

4: AI speeds up iteration, but the slow part was always the human bit

A word of warning: this might apply more in the public sector than in the private sector, but I’ve seen different versions of this problem in both.
As I described earlier, AI to me is a combination of data and product. There is a lot of hope about how AI can speed up or redesign legacy processes. What very few discuss is the speed of adoption. The fact that AI can be super-humanly good at certain tasks does not mean that implementing an AI-driven process will be fast, whether it is software engineering, deploying agents, or creating an AI-staffed call centre. And that’s because the complex, slow, frustrating bits of development are the human bits that AI can’t replace.

As people who code or who understand code, we are rightly impressed by the speed of coding LLMs. These are now able to churn out working, tested code while you sleep, reducing by a huge factor the time required to bring software products to life. This is great, except the slow bit of developing a digital service was never the engineering in its own right. AI doesn’t create shortcuts, nor does it replace the need for user research (the end user is still a human), governance (my lawyers would sue me if I didn’t say this), audits of all kinds (accessibility? data protection? IT security?), or good design. Some of these are necessary to deliver dependable digital services.

Yes, AI has made the engineering bit of each iteration much faster. We can prototype, test and learn faster than ever before. But we should not be delusional about how those speed gains apply to an end-to-end process built to provide assurance: most of that process has little to do with the bits that AI is good at.

In fact, AI can actually increase the design burden: one of the key elements of LLMs is their probabilistic nature. That deserves an article in itself, but one of the major issues with AI deployments today is that we often try to apply a non-deterministic technology to a deterministic process; and, as a consequence, we spend a lot of time designing guardrails to make sure probabilistic outputs can be fed safely into a deterministic process. (Obviously, the positive counterargument is: AI will be an excellent tool once we learn to properly harness its non-deterministic capabilities, which work well in a variety of settings.)

5: You probably don’t need an AI specialist

Ah, I loved the good times I had in the AI Lab trying to hire AI specialists. We tried so hard to get cutting-edge scientists, although often we couldn’t compete with the private sector salaries of the time. We had some great folks, don’t get me wrong, many of whom I’d hire again. But there was a temptation to aim for the best model guru, a research scientist who built models from scratch, or for an engineer hyper-focused on the most popular model of the time.

The mundane reality is that as an AI adopter, you rarely need that degree of hyper-focused technical specialism. You do need people with a solid technical base, curious about applying models to solve problems. But in most cases, what creates business value is looking for technical generalists of the Von Neumann type: people curious to understand the true nature of problems and to learn how to apply any new tech to solving them.

To be entirely clear: most AI projects that fail do so because there was no service design involved, rather than because the AI wasn’t good enough. Multidisciplinary teams can work to bring the best tools – which, in this era, are often AI-powered – to solve problems in ways that are seamless to the end user. This is the sort of hiring that you need to make: problem-solvers who are good both at tech and people.

6: If you use AI for coding, it will over-engineer everything

If you code with LLMs, I have one word of advice: spend a good chunk of your time experimenting with prompts of different types, and see how your coding buddy does. You will likely encounter an enthusiastic, talented, overeager coder who overengineers everything.

I saw this for myself. I needed a simple editor to manually check a CSV before turning it into a JSON file to power my newsletter-writing pipeline. All I needed in that first step of the pipeline was the ability to inspect and, if needed, edit the CSV. I had to ask a few times for it to be redone, because my LLM coder would, every time, try to build a tidy point-and-click editor rather than just let me edit free text, ignoring my requirements.

AI-generated code reaches for complexity. Sometimes that is fine, when you need to get “to the next level” for your product; but most of the time you don’t need that complexity, and the disconnect creates a maintenance burden.
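For what it’s worth, the version of that CSV step I actually wanted fits in a dozen lines. This is a hedged sketch, not my pipeline’s real code — the function name, file paths, and the use of the shell’s `$EDITOR` are all my assumptions here — but it shows the scale of solution the task called for: open the raw CSV in a plain text editor, then convert whatever the human saved into JSON.

```python
import csv
import json
import os
import subprocess

def review_csv_to_json(csv_path: str, json_path: str) -> None:
    """Let a human inspect and free-text-edit the CSV, then emit JSON.
    No point-and-click grid, no widgets: just the system text editor."""
    editor = os.environ.get("EDITOR", "nano")
    subprocess.run([editor, csv_path], check=True)  # blocks until the editor exits
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))  # each CSV row becomes a dict
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
```

Compare that with a generated point-and-click editor: a GUI framework, event handlers, widget state — all of it code someone now has to maintain.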

7: AI will get things wrong. So do humans.

Humans get things wrong, measurably. And that measurement should be the starting point for a comparison that makes AI augment human processes, rather than compound human errors.
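That starting point can be as simple as putting both on the same yardstick. A toy sketch — the audit sample and labels below are entirely invented for illustration — measuring the human baseline error rate on a set of audited decisions, then measuring the model on exactly the same cases, before deciding where AI should augment the process:

```python
def error_rate(decisions, ground_truth):
    """Fraction of decisions that disagree with the audited ground truth."""
    assert len(decisions) == len(ground_truth)
    wrong = sum(d != g for d, g in zip(decisions, ground_truth))
    return wrong / len(decisions)

# Hypothetical audit sample: the same 8 cases, decided by humans and by a model.
truth  = ["approve", "reject", "approve", "approve", "reject", "approve", "reject", "approve"]
humans = ["approve", "reject", "reject",  "approve", "reject", "approve", "approve", "approve"]
model  = ["approve", "reject", "approve", "approve", "approve", "approve", "reject", "approve"]

print("human baseline:", error_rate(humans, truth))
print("model, same yardstick:", error_rate(model, truth))
```

The numbers themselves matter less than the discipline: without the human baseline, there is nothing to compare the AI against.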

I loved the Wimbledon line-calling controversy a few years ago. A perfectly reasonable AI-driven line-calling “eye” was deployed with, I think, not much service design involved. This created a situation in which the eye was wrong but the umpire had no power to rectify it, unlike in the human version of the same process. This matters when adopting AI: if we treat it as an error-free magic wand, we destroy trust and risk making processes more error-prone and frustrating rather than less.

We often say that AI hallucinates; don’t forget that humans can hallucinate, too. We build our processes in order to correct that when it happens. We should do the same with AI. Service design is key.

In Part 3 (coming soon), I’ll discuss some systemic issues in AI adoption.

And if you’d like to get updates, please join my weekly newsletter about data.