You walk into a manufacturing environment for the first time with years of enterprise IT experience behind you. You've hardened perimeters, managed SOCs, run incident response exercises...
Then someone on the plant floor tells you that if you take their SCADA system offline to patch it, you'll shut down a production line that costs many thousands of pounds for every hour it's offline.
And suddenly you realise your entire mental model is upside down.
The Triad Flip
In enterprise IT, we learn security through the CIA triad. Confidentiality first: protect the data. Integrity second: make sure it hasn't been tampered with. Availability third: keep it running.
Operational technology flips this on its head.
In OT environments, availability is everything. A production line that stops costs money. A power station that goes dark affects communities. A water treatment system that fails has consequences that go well beyond a data breach.
The priority becomes AIC: Availability, Integrity, Confidentiality. Not because confidentiality doesn't matter, but because in a world where systems control physical processes, keeping things running safely takes precedence over keeping things secret.
This isn't an obscure academic distinction. It's recognised in frameworks like NIST SP 800-82 and ISA/IEC 62443. It's well established. And yet, I've watched experienced IT security professionals walk into OT environments and apply enterprise thinking without pausing to ask whether their assumptions still hold.
And there's something else that sits above both triads in operational technology: safety. Safety doesn't appear in the CIA model at all. In OT, it dominates everything. A safety incident (an uncontrolled chemical release, a failed pressure vessel, a runaway process) has consequences that dwarf any data breach. When safety is at stake, availability, integrity, and confidentiality all become secondary.
What Are You Actually Protecting?
This leads to a harder question: what's the asset?
In IT, the answer is straightforward. You're protecting data. Customer records, financial transactions, intellectual property. The entire security apparatus is built around the principle that information is the thing of value.
In OT, ask five different people and you'll get five different answers. Industry literature commonly defines the OT asset as "people, hardware and the environment." Standards bodies talk about "processes and systems." Both are right, but neither captures the full picture.
In regulated manufacturing (pharmaceuticals, food production, medical devices), the real asset hierarchy runs deeper. You're protecting a process. That process produces a product. And that product ends up with a person: a patient taking medication, a consumer eating food, a surgeon relying on a medical device.
Process. Product. Person.
That changes the risk calculus fundamentally. A data breach in enterprise IT is embarrassing and expensive. A compromised process in pharmaceutical manufacturing could mean contaminated medication reaching patients. The stakes aren't just financial. They're human.
And here's what makes this genuinely difficult. The same piece of equipment means something completely different to each team looking at it. The IT security team sees a Windows endpoint that needs patching. The process engineer sees a validated system that controls a £2 million batch. The quality team sees a GxP-regulated asset with a change control record stretching back seven years. They're all looking at the same server. They're seeing entirely different things.
We like to think we assess things objectively. We don't. We assess them through the lens of what they mean to us, and in converging IT/OT environments, the meaning varies dramatically depending on which side of the divide you sit.
You Can't Just Patch
This is where the instincts that serve you well in enterprise IT need careful recalibration.
In IT, the patching cycle is a well-oiled machine. Vulnerability disclosed, patch released, test it, deploy it, move on. There are SLAs, there are patch windows, there are automated deployment tools. The rhythm is well established.
In regulated OT environments, you often can't just patch.
Consider pharmaceutical manufacturing. Systems that control production processes are validated under FDA 21 CFR Part 11 in the United States and EU GMP Annex 11 in Europe. In the UK, the MHRA applies its own regulatory expectations post-Brexit. These aren't guidelines. They're legal requirements.
A validated system has been through formal testing to demonstrate it performs its intended function reliably and reproducibly. Any change to that system, including a security patch, requires formal change control. Impact assessment. Risk-based testing. And depending on the scope of the change, potentially revalidation. All formally documented.
Applying a critical patch to a validated system can take weeks or months of formal change control. And during that time, the system remains vulnerable.
So what do you do? You can't patch. You can't leave it exposed. The answer lies in compensating controls: network segmentation to isolate vulnerable systems, enhanced monitoring to detect anomalous behaviour, application whitelisting to prevent unauthorised code execution, and strict access controls to limit who and what can interact with the system.
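To make that concrete, here's a minimal sketch of the kind of conduit audit that underpins those compensating controls: checking observed network flows against a documented allowlist for a system you can't yet patch. Everything here (the host addresses, the CSV column names, the file name) is illustrative, not taken from any particular product.

```python
# Minimal sketch: flag network flows that fall outside the approved
# conduits for a vulnerable, unpatchable system. Addresses, file names,
# and the CSV format are assumptions; adapt them to whatever your
# firewall or flow collector actually exports.

import csv

# Hypothetical documented conduits: (source host, destination host, port)
APPROVED_CONDUITS = {
    ("10.20.1.15", "10.30.0.8", 502),   # engineering workstation -> PLC, Modbus/TCP
    ("10.20.1.16", "10.30.0.8", 443),   # historian collector -> HMI server
}

def audit_flows(flow_log_path: str) -> list[tuple[str, str, int]]:
    """Return observed flows that are not in the documented allowlist.

    Expects a CSV with columns: src, dst, dst_port.
    """
    violations = []
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            flow = (row["src"], row["dst"], int(row["dst_port"]))
            if flow not in APPROVED_CONDUITS:
                violations.append(flow)
    return violations

if __name__ == "__main__":
    for src, dst, port in audit_flows("flows.csv"):
        print(f"Undocumented flow: {src} -> {dst}:{port}")
```

In practice the allowlist would sit under the same change control as the validated system itself, so the audit is checking reality against the documented, approved state rather than against someone's memory.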
And here's the part that catches people out: even the defensive tools themselves can cause harm. Modern intrusion prevention systems designed for enterprise networks can disrupt older OT environments. Active scanning has been known to crash PLCs. Deep packet inspection introduces latency into control loops where timing is critical. Security tools that are entirely appropriate on a corporate network can cause a process upset on a factory floor. The instinct to "deploy the standard security stack" needs the same recalibration as the instinct to patch.
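If you do need visibility into what's on the control network, the safer pattern is passive observation from a mirror port rather than active scanning. A minimal sketch, assuming the scapy library and an illustrative capture interface name ("span0"), that records which hosts are talking Modbus/TCP without ever sending a packet onto the network:

```python
# Minimal sketch of passive asset observation: listen on a SPAN/mirror
# port and record which hosts hold Modbus/TCP conversations, instead of
# actively scanning a network where probes have been known to crash PLCs.
# Requires scapy and capture privileges; the interface name is an assumption.

from scapy.all import IP, TCP, sniff

seen_pairs: set[tuple[str, str]] = set()

def note_modbus_pair(pkt) -> None:
    """Record each new (client, server) pair seen on TCP/502."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport == 502:
        pair = (pkt[IP].src, pkt[IP].dst)
        if pair not in seen_pairs:
            seen_pairs.add(pair)
            print(f"New Modbus conversation: {pair[0]} -> {pair[1]}")

# The BPF filter keeps the capture cheap; store=False avoids buffering packets.
sniff(filter="tcp port 502", prn=note_modbus_pair, store=False, iface="span0")
```

The design choice that matters is that the script only listens: it transmits nothing, so there is no probe to upset a fragile controller.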
It's a fundamentally different approach to vulnerability management. And it requires IT security teams to think in terms of risk mitigation rather than remediation. That's an uncomfortable shift for people trained in a world where "patch it" is always the right answer.
Two Tribes
There's a human dimension to this that rarely gets discussed in the technical literature.
IT and OT have evolved as separate disciplines with separate cultures, separate vocabularies, and separate definitions of success. Bringing them together runs into deep tribal resistance that no amount of policy-writing can resolve on its own.
IT teams measure success by compliance scores, patch coverage, and incident response times. OT teams measure it in uptime, batch yields, and safety records. IT communicates in tickets and change requests. OT communicates in process flows and alarm states. The language gap alone creates misunderstanding before anyone has said anything controversial.
But it goes deeper than language. There's a real tension around risk and accountability.
In enterprise IT, there's a well-understood pattern: follow the standard playbook. Deploy the vendor's recommended patches, implement the framework controls, follow the industry-standard approach. If something still goes wrong, you're defensible. You did what everyone else does. In most professional contexts, it's safer to follow the conventional approach and fail than to try something unconventional and succeed. The penalty for being "wrong" in an obvious way is far greater than the penalty for being unimaginative.
But in OT, the standard IT playbook might shut down a production line or invalidate a regulatory submission. The OT engineer who pushed back on an IT-mandated patch wasn't being obstructive. They understood something the IT team didn't: that the "standard" approach, applied without adaptation, was the wrong thing to do in this environment.
Both sides are managing risk. They're just managing different risks, and neither side always recognises what the other is protecting.
And there's a quieter dynamic at play too. When things go wrong in these situations, both sides tend to minimise blame rather than minimise risk. The IT team points to their compliance with policy. The OT team points to their process uptime. Nobody is lying. But nobody is seeing the whole picture either. The instinct to protect your professional position, to demonstrate that you followed the accepted approach, can quietly override the more important question of whether the right outcome was achieved.
The Normalisation Problem
In environments where IT and OT have historically been separated (air-gapped networks, isolated control systems, dedicated hardware), there's a persistent belief that the separation itself provides security.
"We're air-gapped. We're safe."
Except increasingly, you're not. The air gap has been quietly eroded by remote access requirements, vendor support connections, USB drives, and the steady march of network convergence. Many organisations that believe they have isolated OT networks actually have connections they've forgotten about or never formally documented.
I've seen what happens when that complacency meets reality. When environments that haven't been properly maintained because "they're not connected to anything" suddenly turn out to be reachable. The scramble that follows is punishing, and it's avoidable.
The challenge is that normalisation of deviance works slowly. Each small concession (a temporary remote access point that becomes permanent, a vendor laptop connected "just this once", a firewall rule added for testing and never removed) seems reasonable in isolation. It's the accumulation that creates the exposure.
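That last concession, the forgotten firewall rule, is one of the few you can catch mechanically. A minimal sketch, assuming a change-controlled baseline file and a live export in a simple one-rule-per-line format (both the file names and the format are assumptions to adapt):

```python
# Minimal sketch: detect firewall rules present in the live export but
# absent from the documented baseline -- the "added for testing, never
# removed" problem. File names and the one-rule-per-line format are
# assumptions; adapt the parsing to your firewall's export format.

def load_rules(path: str) -> set[str]:
    """Read one normalised rule per line, ignoring blanks and comments."""
    with open(path) as f:
        return {
            line.strip()
            for line in f
            if line.strip() and not line.startswith("#")
        }

documented = load_rules("baseline_rules.txt")   # change-controlled baseline
deployed = load_rules("live_export.txt")        # pulled from the firewall

for rule in sorted(deployed - documented):
    print(f"Undocumented rule (remove it or back-document it): {rule}")
for rule in sorted(documented - deployed):
    print(f"Documented rule missing from the live device: {rule}")
```

Run on a schedule, a diff like this turns the slow accumulation of exceptions into something visible while each exception is still fresh enough to explain.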
Becoming Increasingly Integrated
IT and OT are no longer separate worlds. They are converging rapidly, driven by Industry 4.0 initiatives, IoT deployments, cloud-based analytics, digital twins, and the growing application of machine learning and AI to operational data.
This integration brings genuine benefits. Predictive maintenance that reduces downtime. Digital twins that allow process simulation without production risk. AI-driven quality analytics that catch deviations earlier. Supply chain visibility that improves planning. Energy management that reduces costs.
But it also means that every IT vulnerability now has a potential pathway into the operational environment. And every OT system that was designed for a closed, trusted network is now, whether anyone intended it or not, part of a broader connected ecosystem.
The organisations handling this well aren't treating it as an IT project or an OT project. They're treating it as a shared responsibility with a governance model that reflects the reality of integrated environments. Shared risk registers. Joint incident response plans. Cross-functional teams that include both IT security expertise and process engineering knowledge.
The ones struggling are the ones where IT and OT still report through separate structures with separate budgets, separate risk frameworks, and separate definitions of what "security" means.
What I'd Tell My Earlier Self
If I could go back to my first day walking into an operational technology environment, carrying all my enterprise IT assumptions, here's what I'd want to know.
Check your assumptions at the door. The security principles you've learned aren't wrong; they're incomplete. The CIA triad is valid; it's just not the only ordering. Before you propose anything, understand what matters most in this environment and why.
Understand the asset. Don't assume it's data. Ask what the process does, what it produces, and who depends on the output. That conversation will reshape your entire approach to risk.
Meet the people before you propose the technology. The tribal divide between IT and OT is real, and it doesn't resolve through policy mandates. Build relationships. Learn the vocabulary. Demonstrate that you understand what the operational team is protecting before you start telling them how to protect it.
Accept that your playbook needs adapting. "Patch everything within 30 days" is a valid policy in enterprise IT. In a GxP-validated pharmaceutical plant, a NERC CIP-regulated energy facility, or a safety-instrumented chemical process, it might be impossible. Your job is to find the right risk mitigation, not to force the standard answer.
Design for the integration that's already happening. The air gap is largely a myth. Build security architectures that assume connectivity, because even if you think the networks are separated today, they probably won't be tomorrow.
Dealing with a transformation that's gone sideways? I work with organisations as an interim leader to get programmes back on track. Let's have an honest conversation about where you are.
Start a Conversation →
About Paradigm-ICT
Paradigm-ICT is an interim IT transformation consultancy specialising in programme recovery, complex transition delivery, and pragmatic technology enablement across manufacturing, utilities, retail, and financial services.
Founded on 30 years of hands-on operational experience — starting in business operations, not IT — we bring a business-first perspective to technology leadership that most consultancies can’t.
Learn more →