Jurassic Park Dinosaur Enclosures and the Electric Fence

Everything feels uncertain right now. Security is no different.

Here’s what’s been nagging at me. For all the talk about AI transforming cybersecurity, we’re not seeing the attacks we should be seeing, not given what the technology can do. The phishing is better and the reconnaissance is faster, but the truly AI-native attacks (bespoke exploit chains, automated zero-day discovery across entire ecosystems) haven’t materialised at scale. Not yet.

There’s a scene in Jurassic Park where the electric fences go down. The dinosaurs don’t charge through. They don’t know the fence is off. They’ve been conditioned to stay put, so they do. For a while.

That’s where we are. The fence is down. Most threat actors just haven’t copped on yet.

This week, Anthropic published the results of a security collaboration with Mozilla. They pointed Claude Opus 4.6 at the Firefox codebase, and in two weeks it found 22 previously unknown vulnerabilities, 14 of them classified as high-severity. That’s nearly a fifth of all high-severity Firefox vulnerabilities remediated in 2025, found in a fortnight by a model. When they tested whether Claude could also write working exploits, it managed only two crude successes across several hundred attempts. Discovery is dramatically cheaper and more effective than exploitation right now, which is good news for defenders. Anthropic themselves noted that gap is unlikely to last, but for the moment the window favours defence.

That’s the good news.

The bad news is what’s coming. Alex Stamos laid this out at Reddit’s SnooSec conference in his talk “AI is Eating Security”. Open-source models are less than a year behind frontier models in bug discovery. Once those models are widely available, thousands of adversary groups will have tooling that was until recently the preserve of nation-states. As Stamos put it: we have no historical precedent for this many adversary groups having this kind of capability while we have a massive dearth of skilled defenders. Attacker benefits are going exponential. Defender benefits are geometric.

The answer isn’t more headcount. AI points to a different model: smaller, narrower teams, with fewer but more senior people sitting on top of AI agents. AI coding tools already make security engineering dramatically faster. The organisations that make this transition keep pace. The ones that don’t are left operating at human speed against machine-speed attackers.

If you’re a security leader, the harder part is getting the rest of leadership there. Especially when you’re asking for more money or restructured teams. The Anthropic blog post helps. It’s 22 zero-days in a production browser, found by a model in two weeks. That cuts through boardroom scepticism.

We might be in for a tough time, but in the immortal words of Toto: hold the line, love isn’t always on time.