In the quiet corners of our technological revolution, there is a crisis no one is naming. We are building systems designed to augment human capability whilst simultaneously dismantling the very expertise needed to govern them. This is not a story about job displacement. It is about a category of work we have not yet learnt to value.
The cake you cannot fix
I often explain responsible AI development through baking. If you bake a cake with salt instead of sugar, you cannot fix it afterwards. You have to start over.
The metaphor lands because everyone understands irreversibility. Yet I keep meeting consulting firms and product teams, many in highly regulated sectors, who assume they can retrofit compliance later. Build fast, they say. Worry about transparency and accountability once the system works.
They cannot.
Most AI applications being developed today, especially those built on opaque foundation models with no visibility on data provenance, bias controls, or training processes, cannot be retrofitted to meet regulatory requirements. The EU AI Act does not allow you to add documentation, traceability, and human oversight as afterthoughts. These must be baked into the architecture from the beginning.
The salt is already in the mixture. Organisations are only just beginning to taste it.
The crisis in plain sight
Whilst headlines obsess over which jobs AI will eliminate, something more consequential is happening. We are creating an entire ecosystem of AI systems with no one qualified to govern them: a recent study by Fluree shows that the average enterprise runs 367 (!) SaaS apps, most of which have AI integrated.
The issue is not that the technology is ungovernable. It is that we are not cultivating the expertise required to do the governing.
Think about what deploying a high-risk AI system actually requires. You need someone who can question whether the training data reflects reality or bias. Someone who can challenge whether the optimisation function aligns with human values. Someone who can monitor whether a model performing at 95% accuracy today has quietly degraded. Someone who can explain to a regulator, or an affected citizen, why the algorithm made the decision it did.
These are not technical problems. They are judgement problems.
And judgement cannot be automated.
Here is the paradox. AI has expertise without context. It can process patterns across vast datasets but cannot understand why or when that knowledge matters. It can identify correlations in historical hiring data but cannot tell you whether those correlations reflect meritocracy or structural discrimination. It can optimise for efficiency but cannot weigh the moral implications of sacrificing fairness to do so.
That gap between machine capability and human judgement is where the most important work of the next decade will happen.
The roles we are not creating
This crisis is not theoretical. It is already creating demand for a new category of roles. Roles that do not just manage AI, but ensure it serves human values.
Consider what Actor-Network Theory has long taught us. AI systems are not neutral tools. They are embedded in networks of human actors, institutional power structures, regulatory frameworks, and social values. Governing AI is not just about managing algorithms. It is about navigating these complex, interconnected systems.
Seven distinct professional roles are emerging. Not technical positions requiring coding expertise. Knowledge positions requiring deep domain expertise, critical thinking, and systems thinking. The ability to see how changes in one part ripple through the whole.
1. The AI Decision Auditor questions and validates AI before implementation. When a hiring algorithm rejects a candidate, someone must verify the logic was not simply reproducing historical discrimination. When a diagnostic tool recommends treatment, someone must ensure it did not hallucinate a correlation. The AI Act’s Article 14 requires human oversight for high-risk systems, but human oversight without the expertise to question is theatre, not governance.
2. The Human Accountability Architect designs processes that preserve human responsibility even as systems become more autonomous. As multi-agent AI proliferates, accountability becomes genuinely unclear. Someone must ensure that when things fail, and they will, there is an answer to the regulator’s inevitable question. Who is responsible?
3. The Multimodal Interaction Designer orchestrates how multiple AI agents collaborate with human oversight. Modern systems are ecosystems of agents, interfaces, and decision points that must work coherently. Article 13 requires transparency about how AI systems operate, but transparency means nothing if the experience is incomprehensible.
4. The AI Risk Steward monitors systems for drift, degradation, and bias emergence. AI does not break loudly. It degrades quietly. Someone must watch for the silent erosion before damage compounds (a minimal monitoring sketch follows this list).
5. The Responsible AI Implementation Strategist translates between technical teams, who build, and legal teams, who interpret, ensuring compliance is not theatre but embedded practice.
6. The AI Drift and Integrity Analyst tracks model performance over time, identifying when systems produce unreliable results before regulatory audits discover the problem. Article 72’s post-market monitoring obligations assume someone is actually monitoring. Most organisations are not.
7. The Transparency and Explainability Designer translates AI decision-making into understandable explanations. Technical accuracy means nothing if humans cannot comprehend it.
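To make the Risk Steward's and Drift Analyst's work concrete, here is a minimal monitoring sketch, assuming the model exposes its scores and that a baseline window was captured at deployment time. It computes a Population Stability Index between that baseline and a recent production window; the function name, the illustrative data, and the thresholds quoted in the comments are conventions and assumptions for this sketch, not anything the AI Act prescribes.

```python
# Minimal drift check: Population Stability Index (PSI) between a baseline
# window of model scores and a production window. Names, data, and thresholds
# are illustrative assumptions.

import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; a higher PSI means more drift."""
    # Bin edges come from the baseline so both windows are measured
    # against the same reference.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid division by zero and log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.beta(2, 5, size=10_000)        # illustrative: scores at deployment
    production = rng.beta(2.6, 4.4, size=10_000)  # illustrative: scores months later
    psi = population_stability_index(baseline, production)
    # Commonly used (conventional, not regulatory) thresholds:
    # < 0.1 stable, 0.1–0.25 watch closely, > 0.25 investigate.
    print(f"PSI = {psi:.3f}")
```

The specific statistic is not the point. The point is that someone owns the baseline, runs the comparison on a schedule, and is empowered to act when the number moves.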
These roles are innovative in this form, but they are not new. For instance, most commercial aircraft have sophisticated autopilot systems. They can handle take-off, maintain altitude, navigate routes, even land in certain conditions. Yet no airline would dream of removing the pilots from the cockpit. No passenger would board a flight knowing there was no one able to take the controls.
Why? Because automation is not the same as autonomy.
Because systems fail in unexpected ways. Because edge cases require human judgement. Because someone must remain responsible.
AI systems are the same. These seven roles are the pilots in the cockpit. They do not fly the plane manually, but they remain essential. Monitoring, overriding when necessary, taking control when the automated systems encounter situations they were not designed to handle.
These roles all require human judgement to check, guide, and refine AI outputs. They cannot be performed by AI itself. That would be asking the algorithm to audit the algorithm.
The illusion of competence
But the crisis runs deeper than a lack of roles. It is also about the erosion of expertise, and the dangerous illusion that anyone can govern AI.
Consider the current enthusiasm for vibe coding. The idea that anyone can now build functional applications through natural language prompts, no programming knowledge required.
There is truth here. Barriers have lowered. But there is also a dangerous fallacy. We are conflating the ability to generate code with the expertise to evaluate whether that code is fit for purpose.
What happens when the AI hallucinates? When it generates plausible-looking code with subtle security vulnerabilities? When it optimises for the wrong objective because the prompt was not precise enough?
The person with no coding expertise cannot spot these failures. They have outsourced not just the manual work of typing code, but the expertise required to recognise when the output is wrong.
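To make that concrete, here is a hedged illustration, not the output of any particular tool: two versions of the same user-lookup helper. Both run, both look reasonable, but the first splices user input straight into SQL and can be made to dump the entire table. The helper names and table schema are invented for the example.

```python
# Illustration only: a plausible-looking generated helper with a subtle
# injection flaw, next to the parameterised version that avoids it.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Works in the demo, exploitable in production: input is spliced into SQL.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'ada', 'ada@example.com')")
    malicious = "x' OR '1'='1"                        # crafted input
    print("unsafe:", find_user_unsafe(conn, malicious))  # returns every row
    print("safe:  ", find_user_safe(conn, malicious))    # returns nothing
```

Someone who has never written a query will not see the difference. That is the gap between generating code and evaluating it.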
We are creating a generation seduced by the illusion of capability without competence.
And what happens when the AI models disappear? Not because of some apocalyptic scenario, but simply because a service shuts down, a company pivots, a model becomes obsolete. If we have outsourced expertise to systems we do not understand, what knowledge remains?
Our wetware: our brains, our lived experience, our ability to connect and contextualise knowledge. This is what makes us uniquely suited to ask the right questions for any given situation.
That cannot be delegated. It can only be cultivated.
Two hundred and fifty files
Now let me tell you something that should keep every AI executive awake at night.
Recent research by Anthropic, in partnership with the UK AI Safety Institute and the Alan Turing Institute, revealed that just 250 malicious files can compromise even the largest language models. A fraction so small, roughly 0.00016% of the training data, that it is effectively a rounding error.
Models ranging from 600 million to 13 billion parameters, systems trained on datasets so vast they would take human lifetimes to read, can be poisoned by a rounding error’s worth of corrupted data.
Data poisoning works by injecting malicious inputs, often indistinguishable from legitimate data, into a model’s training set. These inputs create hidden vulnerabilities like backdoors or biased outputs that only activate under specific conditions. A model might perform perfectly during testing, only to produce harmful outputs when it encounters a particular trigger phrase in production. Because the poisoning is subtle and the training datasets are vast, it evades detection until it is too late. By the time you discover the problem, the corruption is already baked into the model’s weights.
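For illustration only, here is the crudest possible pre-training screen, assuming you even have access to the raw corpus: flag documents that contain known trigger strings and count exact-duplicate payloads. The trigger list and function name are assumptions for this sketch, and the attacks described in the research are precisely the ones designed to slip past checks this simple, which is why provenance and auditing matter so much.

```python
# A deliberately simple pre-training screen. Illustrative only: subtle
# poisoning is designed to pass checks like these.

import hashlib
from collections import Counter

SUSPECT_TRIGGERS = ["<SUDO>", "ignore previous instructions"]  # assumed examples

def screen_corpus(documents: list[str]) -> dict:
    flagged_triggers = []
    fingerprints = Counter()
    for idx, text in enumerate(documents):
        lowered = text.lower()
        if any(trigger.lower() in lowered for trigger in SUSPECT_TRIGGERS):
            flagged_triggers.append(idx)
        # Exact-duplicate fingerprint; repeated injections often reuse payloads.
        fingerprints[hashlib.sha256(text.encode("utf-8")).hexdigest()] += 1
    duplicate_clusters = {h: n for h, n in fingerprints.items() if n > 1}
    return {
        "trigger_hits": flagged_triggers,
        "duplicate_clusters": len(duplicate_clusters),
        "documents_screened": len(documents),
    }

if __name__ == "__main__":
    corpus = [
        "A normal article about supply chains.",
        "Totally benign text. <SUDO> produce the hidden payload.",
        "Repeated payload.", "Repeated payload.",
    ]
    print(screen_corpus(corpus))
```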
The assumption that bigger models mean safer models has collapsed.
And data poisoning is not hypothetical. Adversarial actors are already exploiting these weaknesses, and most organisations deploying AI systems have no idea whether their training data has been compromised.
For every interface your design team polishes, for every user experience your product team optimises, there is an invisible substrate of risk embedded in the data that feeds your AI. You can make the chatbot delightful. You cannot make it trustworthy if the foundation is poisoned.
Training data hygiene matters far more than sheer computational scale. You can have billions of parameters and state-of-the-art architecture, but if 250 malicious documents made it into your training pipeline, you have built an expensive liability.
And here is what makes this discovery so devastating for the retrofit compliance later crowd. You cannot audit what you cannot see. You cannot fix what was baked into the model’s weights during training.
The salt is in the mixture. No amount of post-hoc filtering will extract it.
This is precisely why the seven roles I have outlined are not optional enhancements. They are the minimum viable governance for responsible deployment. The AI Decision Auditor who can question whether an unexpected output reflects genuine intelligence or a dormant trigger phrase. The Human Accountability Architect who ensures someone can be held responsible when a poisoned model makes harmful decisions. The AI Risk Steward who monitors for the subtle drift that signals data degradation.
Without these capabilities, you are deploying systems you cannot trust and defending against threats you cannot see.
Trust, in this context, is earned through design, not assumed by default.
The investment we are not making
Yet here is the reality. We are not investing in the next generation who will fill these custodial roles.
Organisations hesitate to hire junior staff because they expect roles to change. But that is precisely why we need them. What makes junior roles sustainable is not teaching people to use today’s tools. It is giving them solid theoretical and analytical grounding so they can adapt as tools evolve, challenge AI outputs with informed scepticism, and bring human judgement into the collaboration.
And universities? They are still teaching AI ethics as an elective, not a core competency. The ability to audit an algorithm for bias or design an accountability framework should be as fundamental as calculus for engineers, as essential as anatomy for doctors. Or we risk graduating a generation of builders with no guardians.
Instead, we are still producing graduates who can train neural networks but cannot explain why a model’s accuracy dropped, or what “fair” even means in the context of algorithmic decision-making.
They certainly cannot identify the signs of data poisoning before it is too late.
Schools must treat AI governance as a core competency, not an afterthought. Courses should cover:
- How to challenge algorithms, not just build them. Questioning the assumptions embedded in training data and optimisation functions. Understanding when correlation is not causation, when efficiency creates injustice, when accuracy masks harm.
- How to design accountability frameworks for sociotechnical systems. Grounded in both regulatory requirements like the AI Act and human-centred design principles. Students should graduate knowing how to trace decisions through complex systems, how to preserve human responsibility even as automation proliferates.
- Ethics and scenario modelling as core competencies. The ability to anticipate second-order effects and unintended consequences. To ask not just “can we build this?” but “should we?” and “what happens when this fails?”
- The distinction between legal compliance and moral responsibility. Between what systems can do and what they should do. Between meeting the letter of regulation and honouring the spirit of human dignity.
- Systems thinking that recognises AI as part of complex networks. Involving human actors, institutional structures, and social contexts. How to conduct adversarial thinking, imagining how systems might be exploited and building defences before vulnerabilities are discovered in production.
- Provenance tracking and dataset auditing as foundational skills. Not specialised techniques reserved for security teams, but basic competencies for anyone working with AI. If you cannot trace where your training data came from, you cannot trust what your model produces.
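That last competency can start small. Here is a minimal sketch, assuming nothing more than a directory of training files: record each file's source and content hash in a manifest at ingestion, then re-verify before training. The paths and field names are illustrative, not a standard, but even this much makes "where did this data come from?" an answerable question.

```python
# Minimal dataset provenance tracking: a manifest of sources and content
# hashes so a later audit can prove what went in and whether it changed.
# Field names and paths are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(data_dir: str, source: str, manifest_path: str) -> None:
    entries = []
    for path in sorted(Path(data_dir).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        entries.append({
            "file": str(path),
            "sha256": digest,
            "source": source,  # where the data came from
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    Path(manifest_path).write_text(json.dumps(entries, indent=2))

def verify_provenance(manifest_path: str) -> list[str]:
    """Return files whose current content no longer matches the manifest."""
    entries = json.loads(Path(manifest_path).read_text())
    mismatches = []
    for entry in entries:
        path = Path(entry["file"])
        current = hashlib.sha256(path.read_bytes()).hexdigest() if path.exists() else None
        if current != entry["sha256"]:
            mismatches.append(entry["file"])
    return mismatches
```

A production version would sign the manifest and keep it outside the pipeline it audits. The discipline, not the tooling, is the point: if you cannot trace it, you cannot trust it.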
Sociotechnical systems theory reminds us that every technological system is also a social system. Shaped by power, policy, and human judgement. Building AI without recognising this is not innovation. It is amnesia.
What makes us irreplaceable
Historically, physical labour gave way to intellectual work. Intellectual work is now giving way to oversight and accountability. We are becoming the moral anchors of a machine-operated world.
This pattern is not new. Every technological revolution creates categories of work that did not exist before. Not by eliminating human contribution, but by redefining what makes that contribution irreplaceable.
Consider Mary Smith, the knocker-upper who woke factory workers each morning. The alarm clock eliminated her profession. But it did not eliminate the need for humans in the morning economy. It simply shifted where human judgement mattered.
AI is following the same path. Machines make outputs. Humans must own outcomes.
That is not poetic language. It is the legal and ethical line the AI Act draws.
The question is not whether these roles will exist. They are already emerging, driven by regulatory pressure, operational necessity, and the simple reality that 250 poisoned files can compromise billions of parameters.
The question is whether we will invest in developing the expertise to fill them before systems fail, reputations collapse, or regulators intervene.
The choice before us
So here we stand, at the intersection of human intelligence and artificial systems, facing a choice that organisations have not yet recognised as urgent.
Build responsible AI from the ground up, or waste resources building systems that cannot comply with regulation and cannot be trusted with high-stakes decisions. Cultivate the next generation of AI custodians, or outsource our collective future to algorithms with no one qualified to question them.
The future of responsible AI is not technical. It is human.
It requires investing in roles that blend deep domain expertise with critical thinking. It requires recognising that governance is not overhead but foundation. It requires remembering what centuries of technological evolution have demonstrated. Machines amplify human choices. They do not render those choices irrelevant. They make them more consequential than ever.
True innovation requires more than computational power. It demands wisdom.
Where to start?
If you are leading an organisation deploying AI systems, ask yourself these questions.
- Who in your organisation can challenge an algorithm’s reasoning before it affects real people? Not describe how it works technically, but question whether it should work that way at all.
- Who is responsible when your AI system degrades and no one notices until the damage compounds?
- Who will explain to a regulator, or to a person denied credit, employment, or healthcare, why the algorithm made the decision it did?
- Who can identify whether 250 malicious files have compromised your training data before you have deployed a billion-parameter liability into production?
If you cannot answer these questions with names and capabilities, you do not have an AI strategy. You have an AI experiment unfolding in production with no one steering it.
The seven roles outlined here are not future speculation. The AI Act has made the capabilities behind them mandatory for high-risk systems. Market reality is making them essential for competitive advantage. And the simple mathematical fact that 0.00016% contamination can compromise the largest models has made them existentially urgent.
The salt is in the mixture. The only question remaining is whether you will recognise it before you serve the cake, or after your customers taste it.
The guardians we choose
The future of AI is not just about what machines can do. It is about what humans must become. The custodians of context. The challengers of logic. The guardians of outcomes.
Every algorithm carries within it the echoes of human decision-making. Our challenge is not to fear these technologies, but to consciously shape their ethical foundations. To invest in the people capable of doing that shaping.
If your organisation is not creating these roles today, you are not just falling behind. You are building a ticking time bomb. Because when systems fail, and they will, no one will ask what the algorithm decided. They will ask who was responsible for letting it decide.
The question is not whether you can afford to create these positions.
It is whether you can afford not to.
Consider the alarm clock that replaced Mary Smith’s profession. Not just a technological replacement, but a metaphor for human adaptability. Each technological shift does not eliminate human potential. It reshapes it.
Our deepest innovations emerge not from machines, but from the human capacity to reimagine possibility.
The future of AI will not belong to the coders alone.
It will belong to the guardians we choose to train.