“Knowledge is power” – scientia potentia est in Latin – is an aphorism commonly attributed to Sir Francis Bacon. It’s true, but what we sometimes forget is that information (much less raw data) does not equal knowledge. Data consists of disorganized facts; when you sort and organize those facts, you have information. It’s only when those facts are interpreted in a way that makes them of practical use that you have knowledge.

“TMI” is a popular acronym that means “too much information”. It’s usually used in reference to people who are less than discreet about their personal lives, but it’s a concept that’s also applicable to the business world. Too much information is a bad thing because it obscures what’s important: the knowledge on which effective decision-making must be based.

What does that have to do with IT? Many datacenters today have the problem of TMI/NEK – too much information, not enough knowledge – in the form of huge amounts of data collected in log files. Automatic logging is a highly useful feature that’s incorporated into operating systems and applications and can be performed and/or enhanced by third-party utilities. Logs can be invaluable in troubleshooting problems, tracking down anomalies and monitoring security events. However, sometimes the amount of logged data can be overwhelming. When that happens, it becomes a hindrance rather than a help.

Particularly when it comes to security issues, it does little good to have the information that tells you what’s happening sitting around in a format that won’t be seen until after the fact, or to have the relevant facts and events buried in a sea of TMI. You need knowledge – and you need it in real time. All that information must be monitored, sorted and managed in a way that’s thoroughly integrated into your IT infrastructure and operations.

Making sense of log data requires careful analysis, and that’s where the relatively new concept of SIEM – Security Information and Event Management – comes in. Instead of having to keep up with a myriad of disparate security alerts generated by dozens of different software and hardware components on your network, a SIEM solution can aggregate data from multiple sources and correlate related events into groups, in real time. You get both immediate notification of significant events and long-term storage of data for historical comparisons and to meet compliance requirements.

Another big drawback of TMI is the load that an excess of information puts on your system resources, which can result in serious performance degradation. One of your criteria in selecting the best SIEM solution for your organization’s needs should be a balance between comprehensive data collection/analysis and conservation of system resources.

We all know that automating tedious processes can reduce costs and save the company money. Log data management is an area that’s especially appropriate for automation because the huge volume and the sometimes obscure nature of the data make wading through it manually a tedious, error-prone process in which important information is easily overlooked. The most effective use of log data is in a proactive approach that allows you to identify potential problems before they can negatively impact your business. That requires a solution that goes beyond the basic SIEM functionality and hooks into your infrastructure seamlessly.
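As a rough illustration of automating the “wading through logs” step, the sketch below scans raw log lines for early-warning patterns instead of relying on a human to spot them. The patterns and the syslog-style line format are assumptions made purely for this example:

```python
# A minimal sketch of proactive log scanning: flag lines matching
# known early-warning patterns so a human only reviews what matters.
import re

# Hypothetical warning patterns; a real deployment would maintain
# many more, tuned to its own systems.
WARNING_PATTERNS = {
    "disk": re.compile(r"disk .*(error|retry|reallocated)", re.IGNORECASE),
    "auth": re.compile(r"authentication failure", re.IGNORECASE),
}

def scan(lines):
    """Return a mapping of category -> lines worth a human's attention."""
    findings = {name: [] for name in WARNING_PATTERNS}
    for line in lines:
        for name, pattern in WARNING_PATTERNS.items():
            if pattern.search(line):
                findings[name].append(line)
    return findings

log = [
    "Jan 10 03:12:01 db01 kernel: disk sda error: sector reallocated",
    "Jan 10 03:12:05 web01 sshd: authentication failure for user admin",
    "Jan 10 03:12:09 web01 nginx: GET /index.html 200",
]
findings = scan(log)
# Only the disk error and the failed authentication are surfaced;
# routine traffic is filtered out automatically.
```

The point is the shape of the automation, not the specific patterns: the machine does the exhaustive reading, and people see only the lines that might matter – catching that reallocated disk sector, for instance, before the drive fails outright.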

Such a solution will be able to monitor all the devices, workstations and servers on your network and will not be limited to interacting with just one or two types of log files. So another thing to look for is compatibility with as many log file types as possible.
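One way to picture that breadth of compatibility is a dispatcher that normalizes different log formats into a single event shape, so downstream analysis doesn’t care where a line came from. Both formats here are simplified assumptions, not full specifications:

```python
# Hedged sketch of multi-format log support: parse each format into
# the same normalized event dictionary.

def parse_syslog(line):
    # Assumed shape: "Jan 10 03:12:01 host process: message"
    timestamp = line[:15]
    host, message = line[16:].split(" ", 1)
    return {"timestamp": timestamp, "host": host, "message": message}

def parse_csv(line):
    # Assumed shape: "timestamp,host,message"
    timestamp, host, message = line.split(",", 2)
    return {"timestamp": timestamp, "host": host, "message": message}

# Adding support for a new log type means adding one parser here.
PARSERS = {"syslog": parse_syslog, "csv": parse_csv}

def normalize(line, fmt):
    return PARSERS[fmt](line)

event1 = normalize("Jan 10 03:12:01 web01 sshd: session opened", "syslog")
event2 = normalize("2024-01-10T03:12:01,web01,sshd: session opened", "csv")
# Two different on-disk formats, one common event shape.
```

The design choice worth noting is the registry: the correlation and alerting logic never changes when a new device or application shows up – only a new parser gets plugged in.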

While we tend to think of security as confined to certain types of events that signal an imminent or occurring intrusion or attack, the security of your network is also intimately tied to availability and performance. Users who can’t get their work done due to downtime of critical systems, applications and services will often resort to “workarounds” – connecting through personal devices, accessing data via removable drives and so forth – that present entirely new security issues of their own. Thus a truly comprehensive solution will also include monitoring for hardware failures and other events that may not seem directly related to security.

We talk a lot about business intelligence (BI) these days, and yet when it comes to monitoring our networks we often neglect to use the same types of data mining and prescriptive and predictive analytics that we apply to making other, more broadly scoped business decisions. By treating our log files as the rich data sources that they are, we can turn too much information into the knowledge we need to keep our networks secure at a lower cost and with less work.

