Have you ever been confronted by a decision that seemed so ill-advised, or just plain wrong, that you couldn’t believe anyone would make it? Was it related to, or did it impact, security? If so, then there’s probably something on this list that will ring a bell and bring back ‘fond’ memories. We’re all human and we all make mistakes; sometimes we make them thinking that what we’re doing is in the best interest of all concerned, or that it’s mandated by policy, when what we’re really doing is setting ourselves and others up for very bad things. This list can be either a stroll down memory lane or a warning to help you avoid what have to be some of the dumbest security decisions ever made.
Good intentions, bad executions
Our first set of security decisions comes from admins or managers who, no matter how well-intentioned they may have been, just didn’t think things through. I expect you’ll recognize all of these.
1. Saying no
Some security guys just say no the first time, to get it out of the way. They’re like an insurance company who wants to see if you’re serious. If you go away after the first no, they don’t have to deal with you.
2. Making things too complex
The best security solutions are the simple ones. Make something too complex, and there’s either going to be a mistake, an oversight, or a conscious effort to circumvent it because it’s just too difficult. The KISS (Keep It Simple, Stupid) principle should be tattooed on every security admin somewhere conspicuous.
3. Thinking you can’t be hacked
Pride goeth before the fall, and too many security admins think they are all that and couldn’t possibly be hacked. These days it’s not if, it’s when, and the quicker you realize that, the better off you’ll be.
4. Not engineering for the weakest link
Our users are the weakest link in our security. They are also the reason we have jobs. When you fail to engineer solutions to take into account the human element, that weakest link, you’re doing your users a disservice, and you’re setting yourself up for failure. Too many security admins think regular users are IT pros – they’re not. Stop building solutions as if they are.
5. Bolting security on at the end
This is usually not a failing of the security admin, but rather the project manager or director who forgot to bring security in early, and thinks they can just bolt on security to a project that is otherwise ready to go. It always takes longer and costs more to add security on instead of including it from the beginning.
6. Security through obscurity
This is more a mindset than a specific action. It’s the sort of mindset that needs fixing, quickly. Security through obscurity is more about hiding the holes than doing anything to fix them.
7. Do as I say, not as I do
Security admins who require their users to do one thing, while they themselves do something else, don’t deserve their jobs. All security personnel should lead by example, not bypass the firewall or the proxy because they can.
8. Irresponsible disclosure
If you find a flaw, don’t go posting it on all the public websites and forums before you have notified the vendor so they can fix it. When you go public first, you are not trying to help others; you are trying to get attention. Of course, there are vendors who fall into the next category, and if you genuinely did try to notify them first, do what you have to.
9. Ignoring security reports
If someone notifies you that they found a flaw in your product, you darn well need to listen. Engage in a dialogue, consider what they have to tell you, and then verify it. If there is a problem, GO FIX IT. If you ignore them, or quietly fix what they reported without acknowledging them, then you deserve it when they go public.
10. Covering up breaches
Security breaches are bad. They are also learning opportunities. It is far better to own up to them, let everyone know about them, and use them to teach a lesson, than to cover them up and hope no one saw that.
11. Considering security policies secret
Procedures are proprietary; policies are not. You should not only publish your security policies, you should consider them required reading for all IT employees, with a reasonable subset for the rest of your company that makes clear what is, and is not, allowed, and how to protect the company and its customers.
12. Claiming anything is against policy, without having a policy to back it up
This is a cardinal sin. There will always be new things that come up and policies will have to be created or edited to address new concerns, but no one should ever claim something is against a non-existent policy. If it’s bad, but there’s no policy for it, say it’s bad. End the behaviour, but don’t invoke a policy that doesn’t exist.
Users and Authentication
Our next category is all about users and authentication. How many of the below have you encountered?
13. Leaving default credentials in place
Security audits are always fun, especially when you find access to systems using default accounts and passwords. Really? How hard is it to change a password from default? Apparently it is really hard for some to remember to do so.
14. Using dictionary words for passwords
Most password crackers can run through the entire dictionary faster than you can read this paragraph, which makes it mind-boggling to me how many users pick passwords straight out of dictionary lists.
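To see just how little effort this takes, here’s a minimal sketch; the four-word list is a stand-in for the multi-million-entry wordlists real crackers use:

```python
import hashlib

# Tiny stand-in for the multi-million-entry wordlists real crackers carry.
wordlist = ["password", "dragon", "sunshine", "letmein"]

# Suppose this is a stolen, unsalted hash of a user's password.
stolen_hash = hashlib.sha256(b"sunshine").hexdigest()

# One pass over the list recovers the password almost instantly.
cracked = next(
    (word for word in wordlist
     if hashlib.sha256(word.encode()).hexdigest() == stolen_hash),
    None,
)
print(cracked)  # sunshine
```

Scale the list up to every word in the dictionary and the lookup still finishes in seconds on commodity hardware.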
15. Requiring a particular password pattern
Complexity is good. Patterns are bad. It’s great to require that a password contains upper and lowercase letters, numbers and punctuation, and have a minimum number of characters. It’s just plain stupid to say a password must START with a capital letter, contain a number and be eight characters long.
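The difference fits in a few lines. A sketch in Python: require character variety anywhere in the password, without dictating where each class appears or fixing the length:

```python
import re

def is_complex(password, min_len=8):
    """Require character variety without dictating position or exact length."""
    return (
        len(password) >= min_len
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9\s]", password) is not None
    )

# A rigid rule like "must START with a capital and be exactly eight
# characters long" would reject this perfectly strong password:
print(is_complex("Tr0ub4dor&horse"))  # True
```

Because nothing is pinned to a position, attackers can’t prune their search space the way a “starts with a capital, ends with a digit” rule lets them.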
16. Password requirements that force users to write down passwords
Remember our weakest links from above? Help your users maintain strong passwords. Don’t set requirements like “change every two weeks”, “must be at least 25 characters long” or “system generated random string” unless you really want them to write the passwords down.
17. Limiting maximum password length
Why on earth would you want to restrict users to a shorter password than they might otherwise choose? Longer is typically better, so make sure your password fields have enough buffer to hold as long a password as users want to use.
18. Using the same password everywhere
Having one master password that gets you into every system you access is scary when you consider how many of those systems may have been compromised at some point in the past. If your email address is your username and you use the same password across multiple systems, a compromise of one may well lead to a compromise of many.
19. Emailing credentials
Sometimes there is just no other way to distribute credentials than by email; if you must, the norm is to require the user to reset their password at next logon. Emailing sensitive credentials for privileged accounts is just asking for trouble.
20. Using standalone credentials
Which is more secure: giving each user an account in a central repository that can be audited and disabled in one action, or giving each user an account on each and every system, all of which must be maintained and audited, and, if that user and the company go their separate ways, disabled one by one over several hours? Why any administrator thinks standalone accounts are a good idea is something I will never understand.
21. Sharing credentials
Even more of a bad idea? Having all admins share the same admin account and password. No individual accountability, no way to tell who did what. Basically, chaos. It’s no better doing this with regular users. EVERY user gets their own account. NOBODY shares, ever.
22. Using the second factor without the first
The whole point of two-factor authentication is to require two factors: something you know and something you have. Using access tokens and smart cards without a PIN misses the entire point.
23. Forcing security questions that can be easily solved
Password reset questions should be easy for the actual user to answer, but not for anyone else. Asking questions that can be answered using Google or Facebook kind of defeats the purpose. Want really good password reset questions? Let the user create their own. Just make sure you give them an easy-to-understand explanation of the purpose so they can create good questions.
Networking, firewalls and encryption algorithms
24. Replacing old hashing with bad hashing
Recently, Cisco chose to replace their type 5 password algorithm, which ran 1,000 iterations of salted MD5 hashing, with a new type 4 algorithm that ran a single iteration of SHA-256 with no salt at all. Yes, they replaced an old but still reasonably strong hashing scheme with one that is comparatively trivial to crack. I think that when they used a lower number to indicate a newer method, it should have given us all a clue it was a step in the wrong direction!
25. Proprietary encryption algorithms
Open, standard encryption algorithms have been heavily analyzed and scrutinized by cryptography experts, mathematicians and the security community at large. Good algorithms have stood the test of time. Proprietary algorithms are unknown factors, and while they might be good, they might also have serious flaws. And yet every so often you run into a product where the vendor decided to develop their own algorithm instead of using a standard one.
26. Not salting your hashes
Just as with the Cisco example above, storing password hashes without a salt makes them much easier for attackers to crack, because precomputed tables of common hashes work against every account at once. A unique salt per stored password makes each hash different, so even if you and I have the same password, our hashes won’t be the same.
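A sketch of the idea using Python’s standard library; PBKDF2 provides both the per-password salt and the many iterations that the Cisco type 4 scheme dropped:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted, iterated hash suitable for storage."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt for every stored password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# Two users pick the identical password...
salt_a, hash_a = hash_password("correct horse battery staple")
salt_b, hash_b = hash_password("correct horse battery staple")

# ...but their stored hashes still differ, so cracking one hash
# tells an attacker nothing about the other account.
print(hash_a != hash_b)  # True
```

Store the salt alongside the digest; at login you rerun the derivation with the stored salt and compare the results.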
27. Blocked SMTP/TLS
TLS makes SMTP connections more secure, by authenticating the remote server and encrypting the transport. So why would anyone allow servers to send email, but then block SMTP/TLS? It’s like allowing web access, but blocking HTTPS.
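For reference, a minimal sketch of what blocking SMTP/TLS breaks; the host name here is hypothetical, and `starttls()` upgrades the existing submission-port connection in place:

```python
import smtplib

def send_over_tls(host="mail.example.com", port=587):
    """Open an SMTP session and upgrade it to TLS before anything sensitive flows."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()
        smtp.starttls()  # fails if a middlebox strips the STARTTLS capability
        smtp.ehlo()      # re-identify over the now-encrypted channel
        # smtp.login(...) and smtp.send_message(...) would follow here
```

Block the TLS upgrade and the client either fails outright or, worse, falls back to sending credentials and mail in the clear.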
28. Putting firewalls between domain controllers
Microsoft has been very clear about this from the early days of NT. Active Directory is an internal service, and is not designed to operate with firewalls separating domain controllers. And yet, every week, we see customers putting firewalls between DCs, and then wondering why their replication is broken.
29. Putting a domain controller in the DMZ
Even worse, we also see customers putting a DC in the DMZ, where any attacker who compromised an Internet accessible system can go to town on AD.
30. Putting firewalls between Exchange servers
Just like domain controllers, Exchange servers are not built to work with firewalls separating them from other Exchange servers, or from domain controllers. It’s explicitly not supported, but people try to do it anyway. Why?
31. Insisting upon network ACLs for cloud services
According to NIST, the third essential characteristic of cloud computing is resource pooling, where “…resources are dynamically assigned and reassigned” and where there is a “sense of location independence”. For most services, that translates into availability over huge swaths of IP space, which can change quickly and without advance notice. And yet customers want to create network ACLs based on IP ranges that they treat as fixed. It doesn’t work that way.
32. Changing applications from their default ports
There’s a reason those ports are both “registered” and “well-known”. It’s because the apps, and more to the point, the clients, expect a particular service to be on a specific port. Using non-standard ports is guaranteed to cause client heartburn and support issues, and might break the app for some users. Don’t do that.
33. Permitting “IP any” firewall rules
This happens far too often. A datacenter move has to happen on short notice. There’s no time to work out the firewall rules, so management says to permit IP any and figure out the rules later. This never ends well, but it keeps happening!
Remote users
Remote users tend to get a raw deal when it comes to security. If you’re guilty of any of these, go fix them, NOW!
34. VPN timeouts for remote users
You have a population of users who work remotely. Maybe they are road warriors, maybe they are on site at a customer, or maybe they work from home. Whatever the situation, if you want them to connect via VPN to use corporate resources, let them stay connected! Cancelling their VPN session after X hours is a great way to cause data loss and unhappy users.
35. Split tunnelling VPN
Ten years ago, when your Internet connection was a fractional DS1, you might have had no choice. Ten years ago, you also didn’t have half your users connecting from unsecured wireless networks. Today, you have plenty of bandwidth and plenty of remote users on open Wi-Fi. Secure them by turning off split tunnelling, so all their traffic is protected by the VPN.
36. One hour max RDP sessions with forced logoff
If a user chronically disconnects an RDP session without logging off, go beat that user! Don’t penalize all the other admins by time-limiting their connections. There are plenty of administrative tasks that could take all day to complete, and logging admins off only guarantees they will never get the job done.
What were they thinking?
The last of these just fall into the category “What were they thinking?” Oh, silly me, they weren’t!
37. DDoSing production systems during the business day because “attackers don’t wait for maintenance windows”
Take this scenario: you’re trying to help a customer deploy a time-sensitive project, only to watch all the servers being built out suddenly become unavailable. As it turns out, security had decided to do some ethical hacking right in the middle of the business day. I’m all for security scans and pen tests, but not in the middle of the business day. Yes, attackers will try to break in 24/7, but they don’t have Gigabit connectivity and admin creds to your systems!
38. Granting users local admin rights, then locking down every single setting in Windows using GPOs
This is another one that baffles me. Companies give their users admin rights so they can run a program that wasn’t well written, but then they deploy a GPO that locks down every single function on the system, essentially breaking every user function, instead of figuring out how to make the app work without admin privileges.
39. Deploying internal HTTPS sites with self-signed certs
The absolute worst thing you can do is train your users to ignore warnings, especially certificate warnings thrown by their browser. Considering how easy it is to deploy an internal CA and issue certs to all your apps that your workstations will trust, there is no excuse for using self-signed certs and telling users to just “click past that”.
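One way to see what “click past that” actually costs, sketched with Python’s `ssl` module (the CA bundle path is hypothetical):

```python
import ssl

# The right way: a default context, optionally extended to trust the
# internal CA, with hostname and certificate checks left ON.
strict = ssl.create_default_context()
# strict.load_verify_locations(cafile="internal-ca.pem")  # hypothetical path
print(strict.check_hostname, strict.verify_mode == ssl.CERT_REQUIRED)  # True True

# What "just click past the warning" amounts to -- never ship this:
lax = ssl._create_unverified_context()
print(lax.check_hostname, lax.verify_mode == ssl.CERT_NONE)  # False True
```

With the strict context, an attacker presenting their own self-signed cert is rejected automatically; with the lax one, users can no longer tell your intranet app from a man in the middle.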
40. Deploying antivirus software without configuring required exceptions
The only thing worse than no antivirus is misconfigured antivirus. There are plenty of applications, both workstation- and server-based, that don’t play well with antivirus in its default configuration. Too many security admins think antivirus configuration is a one-size-fits-all situation, which leads to poor app performance at best, and other admins disabling antivirus entirely at worst.
41. Thinking the number of boxes ticked is directly related to how secure you are
And our final entry, which happens to be a pet peeve of mine, is the security admin or AD admin who finds out about GPOs, creates one to “lock down” desktops, and then goes through and clicks every single option available to them. After all, the more things you turn off, the more secure you are, right? Wrong! Admins need to find the balance between security and accessibility, and understand that just because they can click something isn’t enough of a reason for them to actually click it!
Now it’s your turn. What’s the dumbest security decision you have ever seen, encountered, or been forced to deal with? Sound off with a comment and let us share your pain!