Why I’m Not Losing Sleep Over AI
Artificial intelligence is cybersecurity’s latest boogeyman. And with the recent introduction and massive hype around China’s DeepSeek AI, I can feel the panic setting in even further.
People worry AI-powered attacks will overwhelm defenses and that cybercrime will become more dangerous than ever.
But while the headlines sound scary, I’m not losing sleep over AI.
Zero Trust makes AI-powered threats less worrisome. Whether attackers use AI, quantum computing, or an army of cybercriminals in a dark room, Zero Trust greatly reduces their ability to succeed.
Attackers need permission to succeed — Zero Trust denies it
Every breach has one thing in common: there was a policy that allowed it. All bad things happen inside an allow rule. That’s the hard truth of cybersecurity.
No matter how sophisticated the attack is, it only works if there’s an open door. The problem is that traditional security models assume everything inside the perimeter is safe. That’s a mistake.
Zero Trust flips this mindset. It denies all access by default. If someone or something isn’t explicitly allowed, it’s blocked.
This means an attacker using AI to craft the most advanced phishing email or the most convincing deepfake still runs into the same problem. They don’t have permission to access what matters: the Protect Surface. Their AI-powered attack is useless if it can’t reach the target.
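To make default deny concrete, here’s a minimal sketch of the idea in Python. The users, applications, and rules are purely illustrative, not a real product configuration: access is granted only when an explicit allow rule matches, and everything else, however cleverly it was generated, is denied.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str         # who is asking
    application: str  # what they are using to ask
    resource: str     # which Protect Surface asset they want

# Explicit allow rules; anything not listed here is denied by default.
ALLOW_RULES = {
    ("jdoe", "payroll-app", "payroll-db"),
}

def evaluate(request: AccessRequest) -> bool:
    """Default deny: grant access only when an explicit allow rule matches."""
    return (request.user, request.application, request.resource) in ALLOW_RULES

# A stolen credential or AI-crafted lure still fails without a matching rule.
print(evaluate(AccessRequest("attacker", "phishing-kit", "payroll-db")))  # False
print(evaluate(AccessRequest("jdoe", "payroll-app", "payroll-db")))       # True
```

The point isn’t the code; it’s the shape of the decision. There is no “how sophisticated was the request?” branch. There is only “does a rule allow it?”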
AI can’t break the laws of cyber physics
People treat AI like magic, as if it can bypass all security barriers effortlessly. It can’t.
AI still operates within the constraints of the network’s foundational rules, such as the TCP/IP protocol suite.
Think of it like bowling. If you roll a ball down the lane, it follows the defined path. You can’t magically teleport the ball five lanes over to knock down another set of pins. In the same way, attackers using AI can’t escape the reality of network protocols.
Zero Trust policies act like the bumpers in a bowling alley, keeping everything in strict lanes. Attackers can try all the tricks they want, but if the policy says “no,” the attack goes straight into the gutter.
The real AI risk? Not securing your own AI systems
While I don’t worry about AI-powered attacks breaking through Zero Trust, I do worry about organizations failing to protect their own AI models.
AI isn’t just a tool for attackers. It’s a critical asset for defenders.
AI helps security teams analyze data, detect anomalies, and refine Zero Trust policies. But if an organization doesn’t treat its AI models as Protect Surfaces, those models risk being manipulated, poisoned, or outright stolen.
That’s why AI itself must be secured within a Zero Trust framework. Organizations need to:
- Identify AI models as Protect Surfaces and apply least-privilege access controls.
- Monitor AI inputs and outputs to prevent poisoning attacks.
- Segment AI systems so that even if attackers breach one part of the network, they can’t move laterally to compromise AI-driven decision-making.
Zero Trust ensures AI strengthens security rather than becoming a liability.
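As a rough illustration, the same default-deny logic wraps the AI model itself. The service and model names below are hypothetical, not a reference architecture: only an explicitly allowed caller reaches the model, and inputs and outputs are recorded so tampering can be reviewed.

```python
# Hypothetical names throughout; a sketch of the idea, not a reference architecture.

# Least privilege: only the inference gateway may call the model's predict action.
MODEL_ALLOW_RULES = {
    ("inference-gateway", "fraud-model", "predict"),
}

def model_access_allowed(caller: str, model: str, action: str) -> bool:
    """Segmentation around the AI Protect Surface: deny unless explicitly allowed."""
    return (caller, model, action) in MODEL_ALLOW_RULES

def audit_model_traffic(prompt: str, response: str) -> None:
    """Record inputs and outputs so poisoning or manipulation attempts can be reviewed."""
    print(f"audit: prompt={prompt!r} response={response!r}")

# A workload compromised elsewhere on the network cannot pivot to the model.
print(model_access_allowed("compromised-web-server", "fraud-model", "predict"))  # False
print(model_access_allowed("inference-gateway", "fraud-model", "predict"))       # True
```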
AI doesn’t change the game because Zero Trust already did
Cybercriminals will always evolve. AI just makes their job easier — but only if organizations continue relying on outdated security models.
Zero Trust changes the game by eliminating the attacker’s ability to move freely. It doesn’t care how an attacker operates, whether they’re using AI, brute force, or social engineering. If they don’t have explicit access, they don’t get in. Period.
So no, I’m not losing sleep over AI. Because with Zero Trust, attackers won’t win.

John Kindervag
Chief Evangelist