Recently, we reviewed a report with a customer and received some interesting feedback regarding issues with mitigation.
Some of the issues they were having trouble mitigating came down to continuing to support HTTP and TLS versions below 1.3 instead of forcing HTTPS with TLS 1.3.
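To make that first point concrete, the sketch below shows roughly what refusing to negotiate anything weaker looks like in practice. It is a minimal Python example, not production code: the certificate and key paths are placeholders, and TLS 1.3 availability depends on the OpenSSL build Python is linked against.

```python
# Minimal sketch: serve an admin interface over HTTPS only, with TLS 1.3 as the floor.
import http.server
import ssl

CERT_FILE = "server.crt"   # placeholder path to the server certificate
KEY_FILE = "server.key"    # placeholder path to the private key

httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse TLS 1.2 and below outright
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

# Wrap the listening socket so plain HTTP is never spoken on this port.
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```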
Another issue was the use of Digest Authentication, which over HTTP is a very bad idea even with the opaque directive (an obfuscation mechanism) deployed, because it still gives an attacker the opportunity to capture credentials.
The issues they presented all involved clients with antiquated systems that needed backward compatibility, whether for automated deployment or to integrate with existing infrastructure.
This raised a question in my mind: is this type of thinking the reason security takes so long to catch up, to the point that once it does, the “new” technology is already old and broken? As an example, TLS 1.1 was released in 2006, and we regularly came across web servers ten-plus years later that still supported SSL 2 and 3, protocols that had been superseded back in 1999 with the advent of TLS 1.0. Even now, seventeen years later, every modern browser supports TLS 1.3 (released in 2018), and yet we still see web servers offering TLS 1.0 and 1.1. Some even still support SSL 3. Thankfully, the dominant browsers no longer support SSL without configuration changes. TLS 1.2 was released about 15 years ago; why do we still support it? If Google had not been willing to risk its market share by marking websites as insecure when they used these outdated protocols and ciphers, we would likely still see SSL 3 in the wild on a regular basis. Why does it take a global powerhouse in an industry to force these changes?
We have organizations like the Electronic Frontier Foundation that champion efforts to encrypt the internet and help users verify the identity of the servers they connect to, and projects like Let’s Encrypt that provide free certificates, and yet we still see tons of self-signed certificates facing the public on servers with fully qualified domain names (FQDNs). Even modern-day firewalls ship with self-signed certs and do little to push users to install their own.
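To illustrate why a self-signed certificate on a public FQDN undermines that identity check, here is a minimal Python sketch of what a well-behaved client sees when it connects to one (the hostname is a placeholder):

```python
# Minimal sketch: connect to a host and verify its certificate against the system trust store.
import socket
import ssl

HOSTNAME = "device.example.com"   # placeholder FQDN

context = ssl.create_default_context()   # verifies the chain and the hostname

try:
    with socket.create_connection((HOSTNAME, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            print("Verified:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # A self-signed certificate lands here: the traffic may be encrypted,
    # but the client has no way to confirm who is on the other end.
    print("Certificate could not be verified:", err.verify_message)
```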
I think the issue comes down to ease of use. At the forefront of this issue, in my humble opinion, is the pressure on software developers to make installation as quick and easy as possible for the widest possible range of users. That is to say, developers are forced to design a product to the lowest common security posture to ensure the widest range of happy customers. This is understandable considering the backlash Microsoft got over Vista: it was secure, but it was not easy to use. But we end up with a population that is ignorant of cybersecurity and trusts that the systems they purchase are secure by default. I mean, why would a company put out a product configured to be vulnerable to known exploits, right?
What is my point?
Well, let us look at the development of IoT devices with web interfaces and no screen or monitor to interact with. How do you make it so that your customer can pull the device out of the box and deploy it quickly and easily? How do you ensure the customer is made aware that an update is available or that a reboot is necessary to complete the update? Many of the simple but effective security measures out there hurt ease of use or require educating the users.
For example, not setting a default password means you must provide the user with a way to log in and force them to set a password, which they will promptly forget because they aren’t using a password manager. So now you get the joy of fielding a lot of support tickets along the lines of “I forgot my password, how do I reset it?” We underestimate users’ ability to learn and adapt. Why not take the opportunity to educate users on password management, and prompt them to download a password manager at the time they are creating a new password if they don’t already have one?
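By way of illustration, a “no default password” first-boot flow does not have to be elaborate. The Python sketch below is one possible shape for it; the length policy and storage format are placeholders, not a recommendation.

```python
# Minimal sketch of a first-boot flow: the device ships with no credential at all,
# and setup cannot finish until the user picks a real password.
import getpass
import hashlib
import os

MIN_LENGTH = 12   # placeholder policy

def set_initial_password() -> bytes:
    while True:
        password = getpass.getpass("Choose an admin password: ")
        if len(password) < MIN_LENGTH:
            print(f"Password must be at least {MIN_LENGTH} characters.")
            continue
        if password != getpass.getpass("Confirm password: "):
            print("Passwords do not match.")
            continue
        # Store only a salted hash, never the password itself.
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt + digest

if __name__ == "__main__":
    record = set_initial_password()
    print(f"Stored {len(record)} bytes of salted password hash; setup can continue.")
```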
Forcing a browser to only use HTTPS when devices sit on bare IP addresses with self-signed certificates, rather than on FQDNs with certificates issued by a trusted Certificate Authority, is only a partial solution: it does not help you confirm the server’s identity, which leaves you open to attacker-in-the-middle exploitation and therefore to eavesdropping. Additionally, it means admins must field questions from users who don’t just click accept on the security-exception pop-up, training users to ignore these error messages in the process. We should encourage users to assign an FQDN and provide a means to easily obtain a properly signed certificate.
The certificate system is designed for DNS-based identification, which means that if you are only using an IP address you have no viable means of confirming the identity of your server. So why, then, do we allow users to use IP addresses only? Could we not offer a DNS entry, or even a dynamic DNS entry, along with a built-in mechanism for acquiring a Let’s Encrypt certificate?
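Once a device has a name, fetching a real certificate can be close to a one-liner. The sketch below assumes certbot is installed on the device, that the placeholder FQDN resolves to it, and that port 80 is reachable for the HTTP-01 challenge; it is an illustration of the idea, not a turnkey provisioning mechanism.

```python
# Minimal sketch: acquire a Let's Encrypt certificate for a device's dynamic DNS name.
import subprocess

DOMAIN = "device123.dyndns.example.com"   # placeholder dynamic DNS name
CONTACT = "admin@example.com"             # placeholder contact address

subprocess.run(
    [
        "certbot", "certonly",
        "--standalone",        # certbot runs its own temporary web server for the challenge
        "--non-interactive",
        "--agree-tos",
        "-m", CONTACT,
        "-d", DOMAIN,
    ],
    check=True,
)
# On success, certbot writes the certificate and key under /etc/letsencrypt/live/<domain>/.
```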
Automatic updates mean you need infrastructure that can handle the bandwidth of constant update-check queries and of downloads when updates are released. Plus, you need to design your devices to handle live upgrades to prevent downtime. So instead we depend on the user to periodically check for updates, which we know they don’t do.
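Even the “check for updates” half of that problem is simple in outline, as the minimal Python sketch below shows; the endpoint URL, version format, and installed version are placeholders rather than any real vendor API.

```python
# Minimal sketch: poll an update endpoint and compare the advertised version
# with the installed one. A real device would also verify a signature before installing.
import json
import urllib.request

UPDATE_URL = "https://updates.example.com/firmware/latest.json"   # placeholder endpoint
INSTALLED_VERSION = (2, 4, 1)                                     # placeholder version

def check_for_update() -> None:
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as response:
        latest = json.load(response)   # e.g. {"version": "2.5.0", "url": "..."}
    latest_version = tuple(int(part) for part in latest["version"].split("."))
    if latest_version > INSTALLED_VERSION:
        installed = ".".join(map(str, INSTALLED_VERSION))
        print(f"Update available: {latest['version']} (installed: {installed})")
    else:
        print("Firmware is up to date.")

if __name__ == "__main__":
    check_for_update()
```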
It is time we start forcing users to be more secure. Make more secure configurations the default and make a user jump through hoops to downgrade them. Generate persistent alerts reminding users that their system is not configured securely. Maybe even offer links to short videos that explain the risks and offer solutions. The default security level should be the industry standard, not the bare minimum or less.
Jon Ford, CTO