15 things I have learned launching AI projects - Part 3

Posted on Apr 7, 2026

Systemic issues in AI adoption

In the previous instalments of this multi-part blog post I shared some lessons in AI adoption that pertain to the definition and framing of the opportunity. We’re now looking at the systemic, structural issues that can impede proper adoption. Note that I don’t go into technical details in this blog series (maybe one for later?). But if you want to learn more about the technical issues and opportunities that a technical adopter may face, head to Simon Willison’s blog. I’ve learned a lot of my tech skills in AI from him.

8: Bias shows up in ways you won’t expect, so you design for it

Bias in AI manifests in multiple ways, from biased models to biased service design. An episode I keep retelling from my time in the NHS is when a dermatology team came to me with a really good question: “human diagnosis of pressure ulcers is biased, with medics proving less accurate on darker skin; can you build an AI that compensates for this?”. It was a worthy question for a good problem. And yet, the team soon realised that an AI model would reproduce that bias because the data itself was biased: very few pictures of dark-skinned people with the condition could be found. As a consequence, we agreed to pivot and help the dermatologists start a new image collection project.
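To make the data problem concrete, a representation audit is a useful first step: count how each group shows up in the data before training anything. Here’s a minimal sketch in pandas; the file and column names are hypothetical, not from the actual NHS project.

```python
# A minimal representation audit, assuming a hypothetical metadata table
# with "skin_tone" and "condition" columns. Names are illustrative only.
import pandas as pd

metadata = pd.read_csv("image_metadata.csv")  # hypothetical file

# Count positive cases per skin tone group
counts = (
    metadata[metadata["condition"] == "pressure_ulcer"]
    .groupby("skin_tone")
    .size()
)
print(counts)

# Flag groups with too few examples to train or evaluate on reliably
MIN_EXAMPLES = 100  # arbitrary threshold, for illustration
underrepresented = counts[counts < MIN_EXAMPLES]
if not underrepresented.empty:
    print("Under-represented groups:")
    print(underrepresented)
```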

In some cases, there are guardrails that can be put in place to detect and correct bias. Service design makes this possible, and a key ingredient of good service design is intersectional analysis. Far from being a political position, intersectionality is, as the Wikipedia definition says, an analytical framework for assessing how the multiple characteristics that make up a person interact with the context around them, in order to understand and address exclusion-by-design. Obviously there are also legal requirements in this space, but beyond them this analysis allows for a more inclusive and effective use of AI – if you believe in the power of data, then unbiased AI == effective AI.
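To make intersectional analysis concrete for an AI deployment: measure model performance per combination of characteristics, not just overall. A minimal sketch, with synthetic data and illustrative column names:

```python
# A minimal intersectional audit: accuracy per combination of
# characteristics. The columns and synthetic data are illustrative only.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
n = 1000
results = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "age_band": rng.choice(["<40", "40-65", ">65"], n),
    "y_true": rng.integers(0, 2, n),
})
results["y_pred"] = results["y_true"]  # pretend-perfect model for the demo

# Degrade predictions for one subgroup, to show what the audit surfaces
mask = (results["sex"] == "F") & (results["age_band"] == ">65")
flip = mask & (rng.random(n) < 0.4)
results.loc[flip, "y_pred"] = 1 - results.loc[flip, "y_pred"]

accuracy_by_group = (
    results.assign(correct=results["y_true"] == results["y_pred"])
    .groupby(["sex", "age_band"])["correct"]
    .mean()
)
print(accuracy_by_group)
```

An accuracy gap between subgroups in a table like this is a design problem to address, not a footnote.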

9: “AI is bad” usually means data is bad, which means systems are bad, which means procurement is bad

Failure of AI adoption today is often a classic Russian doll. Take a predictive model that fails to predict: you’ll frequently find that the model itself was pretty reasonable, but trained on bad data; in turn, when you look into the data, you’ll often find that its quality was poor; and, more likely than not, you’ll discover that getting hold of that data was made complicated by the IT system from which it had to be extracted, sometimes in multiple steps; and it was so complicated because the systems had been procured without any thought whatsoever for data quality, or for service design for the people accessing them.
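If you want to peel the doll yourself, a basic data-quality profile is a cheap first layer. A minimal sketch, assuming a hypothetical extract; the file and column names are illustrative:

```python
# A minimal data-quality profile: missingness, duplicates, staleness.
# The file name and columns are hypothetical.
import pandas as pd

df = pd.read_csv("extract.csv", parse_dates=["last_updated"])

print(df.isna().mean().sort_values(ascending=False))  # share missing per column
print("duplicate rows:", df.duplicated().sum())
print("records older than a year:",
      (df["last_updated"] < pd.Timestamp.now() - pd.Timedelta(days=365)).sum())
```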

A silo culture is frustrating, but it also produces troubling processes and results. All organisations have legacy contracts preventing data sharing and use, and good AI adoption is the latest victim. In other words, if you want effective AI you’ll also need to look at the commercial rules, and evolve them. You can’t blame AI for decades-old infrastructure failures; but, and this is the positive, the thinking required to adopt AI will shed light on these systemic failures and help address them, if there’s (corporate) will.

10: Going AI-first sounds great until you try to act on the output

Another story that I always share in my talks is about the immensely exciting “length of stay prediction model” we developed for a hospital. It was exciting because the predictive power of the model – a neural network trained on one million admissions, each carrying 300+ data points – was pretty good: it could forecast over 70% of the long stayers. The problem soon became apparent: what do you do with that prediction?
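As an aside, “forecast over 70% of the long stayers” is, in evaluation terms, recall on the long-stay class. A minimal sketch of that check, with made-up labels and predictions:

```python
# A minimal sketch: "caught over 70% of long stayers" is recall on the
# positive (long-stay) class. Labels and predictions here are illustrative.
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = long stayer
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]  # model output

print(recall_score(y_true, y_pred))  # 0.75: 3 of 4 long stayers caught
```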

There’s often a massive gap between the ability to use AI to identify an issue, predict an outcome, or forecast a success rate, and the ability of the organisation to actually do something differently as a result. We forget what problem we’re trying to solve. The solution is not to stop experimenting with AI – which helps us understand the possibilities – but to ground deployments in the good old-fashioned arts of problem-first prioritisation and user-centric design.

11: You regulate use cases, not technology

Those who are, rightly, concerned about AI often raise the point of “regulating AI”. This is an overly generic statement that won’t produce anything tangible or enforceable. In every industry, you rarely regulate a technology in its own right; regulation is about how to use it, how not to use it, who can use it, and for what purpose. This happens to varying degrees, according to the dangers involved. For example, you can synthesise medicines in your own kitchen if you want, but you can’t sell them in pharmacies without going through stringent processes. I can own a radio (I do), but can only transmit once I’ve proven I can handle the electricity and magnetism involved and have a licence (yes, I have one). I can’t own a gun without authorisation (this one, I don’t own). Regulation is a spectrum. So “are we allowed to use AI?” is not necessarily the right question; instead, we should assess which use cases are allowed, and with which guardrails. I don’t have an opinion about the EU AI Act in itself, but I do think that its framing of different rules for different risk levels is a good approach.

12: Transparency isn’t optional in public services

This goes without saying. Lack of transparency is what kills trust, so explainability of AI solutions is very important. One element of this is acknowledging the probabilistic nature of AI solutions. At this stage of AI development, a legitimate way of using AI is to make processes faster and more efficient. This is great, and it has a lot of potential, but we’re often trying to use a probabilistic solution in deterministic processes. Explainability and transparency mean, to me, being clear about this, engaging with users and discussing the potential and the risks; evaluation is key.
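One concrete pattern for respecting that probabilistic nature is to expose the model’s confidence and route the uncertain middle to a human, rather than forcing every prediction through a deterministic process. A minimal sketch; the thresholds are illustrative, not a recommendation:

```python
# A minimal sketch: act on high-confidence predictions, send the rest to
# human review. Thresholds are illustrative only.
def triage(probability: float, accept: float = 0.9, reject: float = 0.1) -> str:
    if probability >= accept:
        return "auto-accept"
    if probability <= reject:
        return "auto-reject"
    return "human-review"  # the probabilistic middle ground

for p in (0.97, 0.55, 0.03):
    print(p, "->", triage(p))
```

Logging every routing decision alongside the probability also gives you the audit trail that transparency rules tend to require.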

In the public sector, we have rules to follow about algorithmic transparency, citizens’ right to understand decisions that affect them, and the right of redress, for example. Any organisation that wishes to adopt AI in a good way must think about these elements of transparency and process design.

13: Your staff are already using AI — govern it or lose control

Running away from AI is a dangerous mistake. The technology is everywhere, it’s being used by increasing numbers of people, and in ways that can’t be stopped. So it’s better to have adoption plans and to create safe channels for using it, rather than pretending it can be ignored. This may involve having new types of conversations with staff, including talking about harnesses, system prompting, few-shot prompts, and the like. It’s about learning a new vocabulary.
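For readers new to that vocabulary: a “system prompt” sets the model’s standing instructions, and “few-shot prompts” show it worked examples before the real input. A minimal sketch of the role/content message format most chat-style APIs share; how you send this list depends on the provider, and the content here is made up:

```python
# A minimal sketch of a system prompt plus few-shot examples, in the
# role/content message format most chat-style APIs share.
messages = [
    {"role": "system",
     "content": "You classify incoming staff queries as HR, IT, or Finance. "
                "Answer with the category only."},
    # Few-shot examples: worked input/output pairs
    {"role": "user", "content": "My payslip looks wrong this month."},
    {"role": "assistant", "content": "Finance"},
    {"role": "user", "content": "I can't log in to my laptop."},
    {"role": "assistant", "content": "IT"},
    # The real query
    {"role": "user", "content": "How do I book parental leave?"},
]
```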

In Part 4 (coming soon), I’ll share some practical lessons about AI adoption.

And if you’d like to get updates, please join my weekly newsletter about data.