Cyber Deception
Cyber deception represents a shift in how we think about detection. Instead of identifying malicious patterns in legitimate activity, deception creates resources where any activity indicates unauthorised access.
Detection has a fundamental problem. Your controls look for malicious patterns - malware signatures, anomalous network traffic, suspicious log entries. Sophisticated attackers have learned to avoid creating these patterns. They use valid credentials, legitimate tools, and native APIs. Their activity looks identical to authorised users.
Deception inverts the model. Rather than searching for malicious patterns in legitimate activity, you create resources where any activity is malicious. A credential no one should use. A server no one should access. A document no one should read. Any interaction is a binary signal of unauthorised access.
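A minimal sketch of that binary signal (the key format and event fields are illustrative, not a real cloud credential or product API):

```python
# Honey-token sketch: any use of the decoy credential is, by construction,
# unauthorised -- a binary signal, not an anomaly score to be tuned.
DECOY_KEY = "AKIA-DECOY-000000EXAMPLE"  # planted credential no one should use

def is_decoy_use(event: dict) -> bool:
    """Return True if an auth event used the planted decoy credential."""
    return event.get("access_key") == DECOY_KEY

events = [
    {"user": "ci-runner", "access_key": "AKIA-REAL-ALICE01"},        # legitimate
    {"user": "unknown",   "access_key": "AKIA-DECOY-000000EXAMPLE"},  # attacker
]
alerts = [e["user"] for e in events if is_decoy_use(e)]
print(alerts)  # -> ['unknown']
```

Note there is no threshold, baseline, or model to maintain: the decoy's only property is that legitimate users never touch it.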
Why do we need deception?
If your red team keeps winning, this is why. You have EDR, NDR, and SIEM. Your team is skilled. But sophisticated attackers still get through - because they are not running malware or making anomalous network connections. They are reading your documentation, querying your APIs, and harvesting credentials from your vaults.
This is the post-compromise gap. Your prevention controls cover initial access. Your response controls cover impact. Attackers spend 90% of their time between these phases, operating in a window where their activity looks identical to legitimate users.
The Post-Compromise Gap: Why Mature Adversaries Keep Winning
Rad Kawar / 12m
What is cyber deception?
Deception Assets + Communication Channels = Desired Effect
The asset is what you deploy - a honey token (also known as a canary token), a tripwire, a decoy credential. The channel is how attackers find it - a configuration file, an internal wiki, a code repository. The effect is why you bothered: detection, deterrence, or intelligence.
Attackers cannot help but form hypotheses when they encounter information. They discover something through standard reconnaissance, compare it against their experience, and decide whether to interact. The asset and channel must both align with what they expect to find.
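The formula above can be made concrete with a small sketch (all asset and channel names are illustrative):

```python
from dataclasses import dataclass

# Every deception deployment pairs an asset with the channel attackers
# will discover it through and the effect you want from the interaction.
@dataclass
class Deployment:
    asset: str    # what you deploy
    channel: str  # how attackers find it
    effect: str   # detection, deterrence, or intelligence

plan = [
    Deployment("decoy cloud credential", ".env file in an internal repo", "detection"),
    Deployment("fake 'jump-host' wiki page", "internal wiki search", "intelligence"),
]
for d in plan:
    print(f"{d.asset} via {d.channel} -> {d.effect}")
```

If the channel does not match where attackers actually look, or the asset does not match what they expect to find there, the deployment produces no effect at all.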
Deception Fundamentals: The Missing Piece in Your Security Strategy
Rad Kawar / 8m
How do we talk about deception?
Honeypots, honey tokens, tripwires, decoys - these terms are often used interchangeably, but each describes something distinct. A honey token is a credential that alerts when used. A tripwire is an infrastructure resource that alerts when accessed. A breadcrumb is a planted clue that leads attackers toward other deception assets.
Deception Taxonomy: A Common Language
Rad Kawar / 6m
How mature is your deception programme?
Deception programmes evolve through predictable stages. Most organisations start with a few honey tokens scattered across critical systems, deployed manually, with alerts going to a shared inbox. This provides value but does not scale.
Mature programmes automate deployment and measure coverage by asking: which attack paths would trigger an alert? They rotate decoys so attackers cannot learn to recognise them. The outcome is fewer alerts that matter more - every one represents unauthorised access, not noise to be triaged.
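One way to make "which attack paths would trigger an alert?" measurable is to score modelled paths against decoy placements. A rough sketch, with entirely hypothetical path data:

```python
# Coverage sketch: the fraction of modelled attack paths that cross at
# least one host where a deception asset is planted.
attack_paths = {
    "phish -> workstation -> file-share -> domain-admin": {"workstation", "file-share"},
    "vpn -> jump-host -> database": {"jump-host", "database"},
    "stolen-ci-token -> build-server -> artifact-store": {"build-server"},
}
decoyed_hosts = {"file-share", "database"}  # current decoy placements

covered = {p for p, hosts in attack_paths.items() if hosts & decoyed_hosts}
coverage = len(covered) / len(attack_paths)
print(f"coverage: {coverage:.0%}")  # 2 of 3 paths would trip a decoy
```

The uncovered path (here, the CI-token route) tells you where the next decoy should go - coverage becomes a planning metric, not just a score.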
The Cyber Deception Maturity Model: Where Does Your Organization Stand?
Rad Kawar / 14m
How do you think about your adversary?
Security teams study malware, vulnerabilities, and attack techniques. The adversaries themselves often receive less attention. But behind every intrusion is a person with objectives, constraints, and patterns of behaviour shaped by their situation.
Ransomware operators function as criminal entrepreneurs. Every hour represents operational cost. Their calculations are economic - will this target yield faster returns than alternatives? Nation-state actors measure success in years. An innocuous foothold today might enable strategic operations years hence.
These differences shape deception design. Ransomware operators under time pressure take shortcuts - decoys that look like quick wins become irresistible. Nation-state actors with patience validate everything - but even careful validation takes time they would rather spend elsewhere.
Understanding Your Adversary: The Human Side of Threat Intelligence
Rad Kawar / 8m
What is the game theory behind deception?
Deception introduces a meta-game into the adversary relationship. Once attackers know (or suspect) that an organisation deploys deception, their behaviour changes. They move more slowly. They validate more carefully. They second-guess what they find.
This is the reflexive game - each side modelling the other's model of themselves. The defender thinks: "What will the attacker do?" The attacker thinks: "What does the defender think I will do?" The defender thinks: "What does the attacker think I think they will do?" And so on.
The asymmetry favours defenders. Attackers must be right every time - every credential they use, every resource they access, every document they read. Defenders only need them to be wrong once. Even if attackers avoid every decoy, the effort of checking each resource slows them down.
The Reflexive Game: Why Deception Operates on Minds, Not Systems
Rad Kawar / 6m
An alert you can trust.
Deploy your first token in minutes, or book a demo to see it in action.