You’ve probably heard the phrase “The Good, the Bad, and the Ugly”. I want to introduce another in the same vein as that – “The Dumb, the Stupid, and the Unbelievable”. I’ve been an IT consultant for many years, and I’ve seen my customers do a variety of strange things. Some were shocking, some were scary, and some were so out of whack with common sense that I just wanted to facepalm and walk away.

Here are the best of the worst – the 31 worst facepalm moments:

1. On a project where I was a consultant, security declared a system in violation of security policy several weeks into the project, even though they had been briefed on the overall plan before it began. When I asked to see the security policy, the security guru said it was confidential and couldn’t be shown to any non-employee. 95% of the IT staff were contractors.

2. The project lead on another client engagement read an article about Java saying it was the next big thing. He declared that the project should switch to Java. What he didn’t understand was that Java was being shown as the next big thing…for security exploits!

3. Another customer’s PMO scheduled an all-hands meeting to go over the project plan. It was mandatory attendance for all team members on the project. The room they scheduled was too small by half, and had no projector. Picture if you can 12 people looking over someone’s shoulder at MS Project on a 14″ laptop screen.

4. I once saw management at another company approve a mid-day DNS change upon being told it was zero risk. The DNS admin made the change to the zone, but forgot the trailing dot. As a result, it took down email for 20,000 people. It was the last mid-day change ever approved at that company.
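For anyone who hasn’t been bitten by this one: in a BIND-style zone file, a name without a trailing dot is treated as relative, and the zone’s origin gets silently appended to it. A minimal sketch of that expansion rule (the function and names here are illustrative, not from the actual incident):

```python
def qualify(name: str, origin: str = "example.com.") -> str:
    """Mimic how a BIND-style zone file expands names: a name without a
    trailing dot is relative, so the zone origin gets appended to it."""
    return name if name.endswith(".") else f"{name}.{origin}"

# With the trailing dot, the name is fully qualified and used as-is.
print(qualify("mail.example.com."))   # mail.example.com.
# Forget the dot, and the origin gets appended -- the MX record now points
# at a host that doesn't exist, and email for everyone stops flowing.
print(qualify("mail.example.com"))    # mail.example.com.example.com.
```

One missing character, 20,000 dead mailboxes.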

5. Another customer’s ISP had a major outage which took down their entire office. My contact asked me why their office was down, so I explained that because they had opted not to provision a backup circuit from another provider, they were down until the ISP got things fixed. His first request was that I call the second provider I had recommended and get them to install a new circuit that day. The second was to see if they could borrow some bandwidth from my company’s connection. My office was across town. The Internet – it’s like a cup of sugar.

6. Here’s one for the NetWare folks. I was once called in to a client to figure out why their server had crashed and no one could access any data. It turns out that a junior admin saw that Z:, Y:, and X: all had exactly the same content. EXACTLY. THE. SAME. To save space on the file server, he went into X: and deleted everything there. He then switched to Y:, only to see that it was now empty too. He probably would have then looked in Z:, except that suddenly everyone started complaining that their systems had crashed. All three letters, of course, were mapped to the same volume.

7. Here’s another one for the NetWare folks. I was called in to help figure out why Internet access suddenly stopped for one of this customer’s two offices. It turns out this is what happened: they built a NetWare 4.11 server in one office, configured IP and IPX, got everything the way they wanted it, then drove it to the other office and plugged it in. They configured IPX for the new location, assigned the server an IP address for that datacenter, and thought they were done. Suddenly, no one in the office could reach the original site, or the Internet; packets kept coming back destination unreachable. The server might have had a new IP address, but it was still running RIP, and still considered itself directly connected to the original network. Since that network was also the default route to the Internet, the server effectively took down an entire office until I finally traced the destination unreachable messages back to it.
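The underlying gotcha generalizes well beyond NetWare: a route a host believes is directly connected always beats the same prefix learned from a routing protocol (connected routes carry administrative distance 0, RIP-learned routes 120, in Cisco’s conventional scheme). A toy sketch of that selection rule, with made-up addresses:

```python
# Toy route selection: for the same prefix, lowest administrative
# distance wins. Connected = 0, RIP-learned = 120 (Cisco conventions).
routes = [
    {"prefix": "10.1.0.0/16", "via": "WAN router (correct path)", "ad": 120},
    {"prefix": "10.1.0.0/16", "via": "local NIC (stale config)",  "ad": 0},
]

best = min(routes, key=lambda r: r["ad"])
# The stale "connected" route wins, so traffic for the original site
# (and the default route behind it) gets blackholed locally.
print(best["via"])
```

Which is why the packets died on the server itself instead of crossing the WAN.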

8. At another customer, an admin who wanted to experiment with P2V virtualized a physical host, pulled the network cable out of the back of the original, and brought up the guest on a new VMware server. All went well for a couple of weeks, until he went on vacation. As it turns out, he had never licensed the VMware server, so it stopped functioning at the 30-day mark. Since none of his co-workers knew about his little experiment, all they knew was that a critical server had gone down. They rushed to the server room to find that the network cable had been pulled. It was quick to fix, but of course, the application was now a month out of date for all data and changes. A two-fold inquisition commenced: review the access logs and video to find who had entered the server room to pull the cable that morning, and find who had restored a month-old backup to the server. Three days later the original admin got back and sheepishly confessed all.

9. I was once working with a client that deployed a full rack of servers in datacenter A for eventual deployment in datacenter B. When it came time to move them, I urged them to unrack the servers before transport. The shipping company assured my client it would be okay, so they loaded up the rack and sent it on its way. The rack made it to datacenter B, and they had started to lower it on the truck’s tail lift when the whole thing overbalanced and dropped six feet to the pavement. Only two 1U servers survived to boot up.

10. This is one many of you can probably relate to. A client had a new datacenter going in, and the cabling vendor was giving the orientation tour of what they had done. As they all leaned in to see a particular area, one of the admins started to lose his balance and reached out for the wall to support himself. Of course, as you can imagine, his hand came down right on top of the emergency actuator for the fire suppression system. I was in my client’s office at the time, heard the “BANG” of the actuation charge, and spun around to see them come running out of the datacenter with a bank of fog rolling out behind them. The cover that should have been over the emergency fire actuator? It was on backorder, due to arrive the following week.

11. A similar story was relayed to me by another customer. A vendor was running some new fiber in the datacenter. As the two techs unspooled and laid out the fiber run, one of them slowly backed up, right into the emergency power off (EPO) switch for the datacenter. Again, there was no cover protecting the EPO from accidents.

12. Several years ago I was consulting with another customer on a video conferencing pilot. The team needed new hardware so we submitted a requisition to purchasing to obtain 20 new laptops for the video conferencing project. We provided the specific model number and the breakdown of all components, and explained it was for the video conferencing project in the justification paragraph. They placed the order, but when the laptops arrived, none of them had the built-in webcams that were specified. My client assumed the vendor screwed up and went to tell purchasing they needed to get it fixed, only to have them proudly tell him they saved the project $50 per laptop by not ordering webcams, since the business didn’t have any video conferencing deployed and there wasn’t a need for webcams.

13. I once consulted for a company that wanted to put Cisco TelePresence into a new office. The vendor came in to do all their physical and sound measurements against the certification criteria, and finally signed off on the room. The install team arrived a week later, only to declare the room unsatisfactory. Apparently, the first tech had measured sound while the office was essentially empty. Before the second team arrived, the cooling units for the new datacenter were installed and turned on, and the hum of that system was too loud for TelePresence.

14. Same company, same TelePresence setup. For the very first conference between this office and HQ, one of the senior leaders couldn’t make it and wanted to dial in. Back then, TelePresence was a completely closed system, so they had to steal a speakerphone from another room, quickly make a 60-foot Ethernet cable, and run it down the hall from the closest Ethernet drop so the remote VP could listen in.

15. I was once a consultant at a company that had dozens and dozens of conference rooms. Every one of them had a projector. None of them had a screen. They were glass boxes in the middle of the floor, with windows on all four walls; they were called fish bowls for a reason. People had to tape large sheets of easel paper to the windows to create a projection surface.

16. At another company I consulted with, their IT team wanted to manage all their DMZ servers using Active Directory, but didn’t trust traffic coming into the internal network from the DMZ. So they moved a domain controller out to the DMZ. Because that’s more secure.

17. I once consulted for another company that was in the process of deploying Lotus Notes. They couldn’t figure out how to get the client to work correctly, so they made everyone a domain admin. Every. Single. User. Domain Admin.

18. Another company I worked with needed to move their entire datacenter, essentially in a weekend, in what these days we call a forklift operation. Rather than planning out the required connectivity and security, management made the decision to just permit IP ANY between the DMZ, the database servers network, and the internal network, planning to “clean it all up later.” Later took over two years to finally arrive.

19. I was once a consultant for a company that was trying to adopt Oracle across all seven of its business units. They literally had over 100 consultants in a giant war room for over two years trying to get it all working. One CIO change later, the entire lot were sent home and the effort was abandoned. It’s not the choice to give up that was the facepalm – that should have happened ages earlier. It was the two years × 100 consultants × $$$ wasted because the previous CIO just couldn’t admit he had made a mistake.

20. An admin with too much power deleted a user account from Active Directory by mistake. In an attempt to “fix it,” this admin chose to do an authoritative restore of AD from a month-old system state backup. For those of you who aren’t AD savvy, this had the effect of rolling the entire forest back to the state it was in a month earlier. In a company of 30,000 users, you can imagine how much fun that was.

21. That same admin once configured Restricted Groups in AD, because he read that it helped with security. He added Domain Admins and Enterprise Admins to the list, but didn’t put any users into the membership settings – so at the next Group Policy refresh, both groups were emptied of all their members.

22. I once saw a company implement a GPO to provide one user with rights to use RDP to connect to a server, with the unintended consequence of locking every other admin in the company out of every single server in the domain. It was a simple edit, to the Default Domain Policy.

23. I once provided some security consulting to a company that refused to implement any kind of proxy server, web content filtering, or anything else that would help control outbound Internet access. However, every year the CIO would demand that the security team find a way to block access to the Final Four basketball tournament.

24. Fireproof safes are fireproof, not heatproof – they keep paper from burning, but the interior can still get hot enough to destroy magnetic media. A business down the street from where I once worked found that out the hard way when, following a fire in the office, they found their backup tapes reduced to so much melted slag.

25. I understand why end users want to turn off antivirus software when their machines seem slow, but why oh why would an admin do that? The worst virus outbreak I ever saw came about at a customer I regularly worked with, because the SQL team had disabled AV on all their servers because it “slowed them down.” Then SQL Slammer hit. Slow got redefined that day.

26. I once had to help a client clean up the devastation caused when a user created a print-ready PDF and sent it to a distribution list of around 10,000 users. That print-ready PDF was about 48 MB, and their mail system had neither attachment size limits nor recipient count limits. We’re still not sure how many copies got out before the entire mail system came to a halt, but we spent the next 8 hours watching the inbound servers dying from all the NDRs, purging the queues, restarting, and waiting for the next wave to come crashing in. The worst part of it? Once the mess was finally cleaned up, we took the print-ready PDF and re-saved it optimized for web publishing. It came in at just under 800 KB!
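A bit of back-of-envelope arithmetic, using the numbers from the story (and ignoring MIME encoding overhead, plus assuming every copy actually went out), shows why the mail system keeled over:

```python
recipients = 10_000
print_ready_mb = 48    # size of the print-ready PDF attachment
web_ready_kb = 800     # same document re-saved for web publishing

# Total attachment volume the mail system would have had to move.
print_total_gb = print_ready_mb * recipients / 1024
web_total_gb = web_ready_kb * recipients / 1024 / 1024

print(f"Print-ready blast: ~{print_total_gb:,.0f} GB of mail traffic")
print(f"Web-ready blast:   ~{web_total_gb:,.1f} GB of mail traffic")
```

Roughly 469 GB of attachments versus under 8 GB – a sixtyfold difference from one "Save As" choice.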

27. At another customer, I once saw their shiny new fiber-optic switches stacked, one atop the next, on the floor in the corner. There was no rack, no cooling, and no support – just 48U of switches standing free, powered up and running, without even air handling to help keep them cool.

28. In this same datacenter, the “doorbell” you had to ring to gain access was literally two low voltage wires sticking out of the wall in the corner. You pressed them together to ring the bell, and separated them to silence the bell. There’s nothing like a 50 volt jolt to make you ask yourself if you really want to go in.

29. And again in the same datacenter, to that same stack of fiber switches, the trunk fiber to connect the stack to the main network was zip-tied up the wall and across the ceiling. Some zip ties were so loose the fiber was swinging in the breeze, while other zip ties were so tight it must have been fracturing the cores.

30. Another organization I used to work with ran central IT for seven different companies, all tied together by the parent’s ownership. All change requests had to be approved by unanimous vote. It took almost a year to replace a failed switch – even though the work would have happened in the middle of the night, during a weekend change window – because one org kept vetoing the change in case it caused them problems during their testing.

31. A company I consulted for had outsourced their IT to a major provider about a year before. I discovered that not a single patch had been applied to any of the 100+ servers in that entire time, because the outsourcing provider didn’t consider patching to be part of server maintenance and support.

While these facepalm moments were all personally experienced by me, I bet you have some you’d like to share. Leave a comment and let us know the best (or the worst) facepalm moment you’ve ever experienced. We’d love to hear about them!

