Digest this: “Humans are incapable of securely storing high-quality cryptographic keys, and they have unacceptable speed and accuracy… They are also large, expensive to maintain, difficult to manage, and they pollute the environment. It is astonishing that these devices continue to be manufactured and deployed. But they are sufficiently pervasive that we must design our protocols around their limitations.”
This fun quote by Charlie Kaufman, Radia Perlman and Mike Speciner, from Network Security: Private Communication in a Public World, captures the essence of the insider threat behind cyberattacks. While IT professionals recognise that the human element is their weakest link, they often fail to address its root cause. The fundamental problem in achieving cyber resilience is not the technology itself; it's how the technology is being used.
Humans communicate with technology through signals – warnings and notices, for example – that prompt them to make decisions. There are 'hidden forces' that shape these decisions and offer a rational explanation for humans' 'predictable irrationality'.
Think of these hidden forces as psychological biases and heuristic patterns that shape how humans perceive risk, vulnerability and their own cognitive capabilities.
Let’s take a look at some examples:
Optimism bias
Most people tend to think that they are at less risk than others and that a cyberattack is unlikely to happen to them.
Illusion of control
Similarly, most people believe they can control the outcome when they are the main decision-maker: if a security warning pops up on the screen and we have the power to bypass it, that choice alone gives us an illusion of control.
Habituation
Over time, frequent exposure to security warnings teaches the brain to pay less attention to these notifications and to disregard any potential threats they may signal.
Normalisation of deviance
The load placed on working memory makes us favour quick decisions based on learned rules rather than elaborate analysis. If users click on a link and it does not cause a data breach, they are more likely to repeat that behaviour again and again. It simply becomes the norm.
Path of least resistance
Most people tend to choose the action that requires the minimum amount of effort. Combine that with multitasking, and scanning a file before downloading it becomes the least of the user's concerns.
Taking these hidden forces into account, it's clear that no amount of technology – however sophisticated – can stop humans from making a costly mistake. So what can be done to design a cyber defence around their limitations?
Here are five proven strategies to improve security decision-making:
1. Frame cyber risks as a tangible outcome of technology use
There is usually a gap between the feeling of security and the reality of it. Although security is an abstract concept, it can be made more tangible by making users aware of it. Here is how that might work:
- Embed regular system reports around cyber breaches and near-misses in your awareness communication and warning messages;
- Security warnings should clearly state possible negative outcomes that are real and personal rather than conceptual;
- Frame security compliance rules as a gain (protect personal data, keep privileged access, etc) rather than a loss and inconvenience.
2. Minimise decision-making steps when and where possible
Humans have a limited amount of working memory and a short attention span. Consider minimising decision-making while users are focused on their day-to-day tasks, for example by filing external emails as spam by default. This 'choice architecture' can help alert users and raise their awareness as they work through their inbox.
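The external-email default described above can be sketched as a simple routing rule; the domain, folder names and warning tag below are illustrative assumptions, not a specific product's configuration:

```python
# A minimal sketch of 'choice architecture' for inbound mail: messages from
# outside the organisation are filed away from the inbox by default and
# visibly tagged, so acting on them requires a deliberate choice.
# INTERNAL_DOMAIN, the folder names and the tag are illustrative assumptions.

INTERNAL_DOMAIN = "example.com"

def route_message(sender: str, subject: str) -> tuple[str, str]:
    """Return (folder, subject) after applying the external-mail default."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != INTERNAL_DOMAIN:
        # Default: quarantine externally-sourced mail and prepend a visible
        # warning tag to keep the user alert.
        return "Quarantine", f"[EXTERNAL] {subject}"
    return "Inbox", subject
```

In practice the same default is usually applied at the mail gateway rather than in code the user sees; the point is that the safe option requires no decision at all.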
But what do you do when you can’t apply choice architecture? Keep users alert by frequently refreshing the look and feel of your security warning messages (play on colour, message placement on the screen, etc) and avoid the use of ‘Yes’ as a command to bypass warnings. All too often we click on ‘yes’ without reading the details of the action we are trying to perform simply because that is the command we use the most when we interact with technology.
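One way to avoid the reflexive 'Yes' click is to make the bypass require a typed challenge word that varies between warnings, forcing the user to actually read the message. The following sketch assumes a small word list of my own choosing:

```python
# A minimal sketch of a warning prompt without a 'Yes' button: bypassing the
# warning requires typing a challenge word that changes each time, so users
# cannot build a muscle-memory habit. The word list is an illustrative
# assumption, not a recommendation.
import secrets

CHALLENGE_WORDS = ["bypass", "override", "proceed", "accept-risk"]

def new_challenge() -> str:
    """Pick a fresh challenge word so the required input keeps varying."""
    return secrets.choice(CHALLENGE_WORDS)

def confirm_bypass(typed_response: str, challenge: str) -> bool:
    """Allow the bypass only if the user typed the exact challenge word."""
    return typed_response.strip().lower() == challenge
```

The design choice here is friction: the cost of bypassing the warning rises just enough that reading it becomes the path of least resistance.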
3. Design your cyber protocols and compliance procedures to counteract heuristics
Now that we have a better understanding of human interaction with technology, it's time to rethink what used to be known as best practice. Is it necessary to mandate password changes every 60 or 90 days? Research has shown that frequent changes to a memorised item interfere with remembering its new version. This is why you end up with passwords written on sticky notes visible to anyone walking by, or hidden under the keyboard for good measure.
When required to repeatedly change their passwords, users tend to create passwords that follow predictable patterns, called 'transformations'. This translates into using essentially the same password while incrementing a number or swapping a letter for a similar-looking symbol. Attackers know these patterns well, which is why such passwords are easily guessed.
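A password-change policy can counteract these transformations by normalising away the common tricks before comparing the new password with the old one. The substitution table below is an illustrative assumption, not an exhaustive rule set:

```python
# A minimal sketch of rejecting predictable 'transformations' of an old
# password: strip a trailing number and undo common look-alike character
# substitutions before comparing. The LEET table is an illustrative
# assumption covering only a few well-known swaps.

LEET = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "l", "3": "e", "!": "i"})

def normalise(password: str) -> str:
    """Reduce a password to its base form for comparison purposes."""
    base = password.rstrip("0123456789")  # drop an incremented numeric suffix
    return base.lower().translate(LEET)

def is_predictable_change(old: str, new: str) -> bool:
    """True if the new password is just a transformation of the old one."""
    return normalise(old) == normalise(new)
```

For example, `Summer2023` changed to `$ummer24` would be rejected, while a genuinely new passphrase would pass. Real password checkers typically go further (edit distance, breach lists), but this captures the idea.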
With CEO fraud on the rise, do you have a protocol in place that requires users to verify the identity of the caller prior to initiating any system transactions? Most humans respond obediently to authority and will jump into action before taking precautionary measures.
4. Monitor users’ activities as diligently as your network and infrastructure
Social media platforms are prime targets for cyber criminals because they can easily map out users with higher-access privileges. These highly-sophisticated attacks – while they take months to execute – are very common because of their success rate and underlying rewards. An innocent post about a promotion or an upcoming holiday can have a snowball effect that leads to a cyber breach.
5. Partner with your vendors and infrastructure providers to keep humans at the centre of your cyber defence
With everything being pushed to the cloud, companies are losing visibility into how their data and assets are being secured. This loss of visibility remains one of the biggest concerns, because the cloud gives users a more complex, yet more accessible, landscape in which to interact with technology.
Working with vendors to understand the risks is crucial to minimising error. It can be as simple as correcting the common misconception that phones and tablets cannot be hacked – they can, and they should be treated like computers.
Since human error remains one of the most challenging threats to cyber defence, it is imperative to design defences around human limitations rather than against them. We can't stop humans from interacting with technology, but we can certainly help them make better choices and minimise irrational decisions.