Pre-emptive Detection Without Prediction

December 30, 2025 · 5 min read · Rad Kawar · Strategy

Security relies on prediction. Every detection rule implies a belief about what attackers will do. Every threat model assumes certain behaviours. The question is whether those predictions hold.

Most don't. Not because security teams lack skill, but because the learning process itself is compromised.

The Feedback Loop

Consider how detection improves:

  1. An attack triggers an alert
  2. The attack becomes threat intelligence
  3. Intelligence guides new detection engineering
  4. New detections catch similar attacks
  5. Return to step 1

This looks like progress. Each iteration adds coverage. The detection portfolio grows. Metrics improve.

But notice what's missing: attacks that never triggered alerts never enter the loop. They don't become intelligence. They don't guide detection. They remain invisible.

[Diagram: the visible detection loop and the silent evasion loop]

The visible loop is self-reinforcing. You observe attacks, you learn from them, you catch more attacks like them. Progress feels real because it is real - within the loop.

The silent loop is also self-reinforcing. Attacks that evade detection continue to evade detection. Nothing forces them into visibility. Nothing corrects for their absence.

The result: you're converging on better detection of detectable attacks. Not better security. These are different things, and the gap between them grows with each iteration.

This isn't a failure of execution. It's structural. The data you learn from is biased by what you already catch. Working harder within this loop doesn't fix the loop - it deepens the rut.
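The selection bias above can be made concrete with a toy simulation. Everything here is illustrative: attacks are reduced to a "technique" id, and detection to set membership. The point is structural, not the numbers.

```python
import random

random.seed(0)

# Toy model: each attack uses one of ten techniques, 0-9. Rules only
# detect techniques that have previously produced alerts, so the loop
# can only learn from what it already catches (selection bias).
known_techniques = {0, 1, 2}          # hypothetical initial rule set
intelligence = []                     # what the feedback loop learns from

for _ in range(1000):
    technique = random.randrange(10)  # attacker free to use any technique
    if technique in known_techniques:
        intelligence.append(technique)   # alert -> intel -> detection
        known_techniques.add(technique)  # a no-op: already known
    # Techniques 3-9 never alert, never enter the loop, never become
    # intelligence. Nothing in the loop corrects for their absence.

print(sorted(set(intelligence)))  # -> [0, 1, 2]
```

After a thousand attacks, the "intelligence" perfectly describes techniques 0-2 and says nothing about 3-9. Metrics inside the loop improve; coverage outside it never changes.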

Fragile and Robust Predictions

Not all detection bets are equal. Some break when conditions shift. Others persist.

A fragile prediction depends on specific attacker behaviour: this threat actor uses this callback pattern, this malware variant communicates on these ports, this campaign exploits this vulnerability. When behaviour shifts - and in adversarial systems, it always shifts - the prediction breaks. Signatures match nothing. Detections fire on ghosts. Intelligence describes last quarter's campaign.

A robust prediction depends on structural properties: adversaries interact with authentication systems, movement through networks requires some form of reconnaissance, access to resources requires some form of authorisation. These aren't predictions about any particular attacker. They're predictions about how computing works.

The distinction isn't binary. "Adversaries need credentials" is robust but not absolute - session hijacking, supply chain compromise, and trusted-environment exploitation exist. The claim is softer: most adversaries usually interact with authentication or authorisation systems at some point in most attacks. Still useful for betting. Just squishier than it first appears.

The question for any detection: what are you betting on? If you're betting on specific attacker behaviour remaining stable, you're making a fragile bet. If you're betting on structural properties of computing that have held for decades, you're making a more robust bet.

Fragile bets require constant maintenance. They fail silently when adversaries adapt. Robust bets persist because the structure persists.
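The two kinds of bet can be sketched as rules over the same event stream. The field names (`dst_port`, `principal`) and values are invented for illustration, not drawn from any real schema or campaign.

```python
CAMPAIGN_PORT = 4444  # last quarter's callback port (hypothetical)

def fragile_rule(event):
    # Bets on specific attacker behaviour. Goes silent the moment the
    # campaign rotates its infrastructure.
    return event.get("dst_port") == CAMPAIGN_PORT

KNOWN_PRINCIPALS = {"svc-backup", "svc-deploy"}  # illustrative allow-list

def robust_rule(event):
    # Bets on structure: movement requires authentication, so an auth
    # attempt from an unknown principal is signal regardless of tooling.
    return (event.get("type") == "auth"
            and event.get("principal") not in KNOWN_PRINCIPALS)

old_attack = {"type": "auth", "principal": "intruder", "dst_port": 4444}
new_attack = {"type": "auth", "principal": "intruder", "dst_port": 8443}

print(fragile_rule(old_attack), fragile_rule(new_attack))  # True False
print(robust_rule(old_attack), robust_rule(new_attack))    # True True
```

When the attacker changes ports, the fragile rule fails silently; the robust rule still fires, because it keyed on the interaction with authentication rather than on any one actor's tooling.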

Via Negativa

There's an alternative to predicting what attackers will do: define what should never happen.

Traditional detection is via positiva - identify what malicious activity looks like. This requires modelling attacker behaviour, which requires predictions, which fail for all the reasons above.

Via negativa inverts this. Instead of asking "what does malicious look like?", ask "what should never occur?" A credential placed where no workflow uses it. A resource that exists but serves no business function. An action that no legitimate process performs.

Any interaction with these is signal. Not "suspicious activity that might warrant investigation" - definitive signal. When it fires, someone's there. The prediction required isn't about attacker behaviour. It's about your own environment: does this thing have legitimate use? If the answer is no, any use is unauthorised.
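A minimal sketch of such a control, assuming decoy credentials planted where no workflow uses them. The key names and event fields are hypothetical; the logic is the whole mechanism: membership in the decoy set, nothing behavioural.

```python
# Decoy credentials deliberately planted with no legitimate use.
DECOY_KEYS = {"AKIA-DECOY-FINANCE", "AKIA-DECOY-BACKUP"}

def check_event(event):
    # Any use of a decoy key is definitive signal, not a heuristic
    # score: no legitimate process should ever present these.
    if event.get("access_key") in DECOY_KEYS:
        return (f"ALERT: decoy key {event['access_key']} "
                f"used from {event.get('source_ip')}")
    return None

print(check_event({"access_key": "AKIA-DECOY-FINANCE",
                   "source_ip": "10.0.0.9"}))  # fires: definitive signal
print(check_event({"access_key": "AKIA-REAL-PROD-KEY",
                   "source_ip": "10.0.0.9"}))  # None: not our concern here
```

Note what the rule does not contain: no attacker TTPs, no thresholds, no model of malicious behaviour. The only prediction embedded in it is about your own environment - that these keys have no legitimate use.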

This doesn't solve silence. A resource that's never touched has the same ambiguity as a SIEM with no alerts: nothing happened, or something happened and you didn't see it. Attackers who recognise and avoid your traps remain invisible. You've created a different cemetery, not eliminated the cemetery.

What changes is the cost of staying silent. Traditional detection requires attackers to evade behavioural models. Via negativa requires them to also avoid or recognise resources with no legitimate purpose. They can't trust what they find. Every credential might be monitored. The cost of maintaining silence increases - attackers must be more paranoid, more methodical, slower. This is the reflexive game working in your favour.

This isn't free either. Defining "what should never happen" requires understanding your environment deeply enough to make that claim. In complex environments with legacy systems, undocumented integrations, and shadow IT, that knowledge is expensive. Via negativa trades prediction about attackers for knowledge about yourself. The second is more stable, but it's not zero-cost.

The value is that this knowledge - understanding your own systems - doesn't decay the way threat intelligence does. Attacker TTPs shift quarterly. Your environment's structure changes more slowly. The bet is more durable. And when the bet pays off, the signal is unambiguous.

When Predictions Fail

The question isn't whether your predictions will fail. In adversarial systems with adaptive opponents, predictions fail. That's the nature of the domain.

The question is whether you've built controls that assume prediction or controls that assume surprise.

Via negativa offers detection that doesn't depend on foreknowledge of attack patterns. You're not predicting the attack. You're instrumenting the fragility that any attack must eventually touch.

The principle applies wherever you can define impossibility - credentials, resources, actions. The implementation changes. The epistemology doesn't.

An alert you can trust.

Deploy your first token in minutes, or book a demo to see it in action.