In the previous blog post we saw how identity theft can pose a huge risk in a corporate environment, and how malicious code can easily be injected and go unnoticed on a client machine that is part of a corporate network.

There are many ways in which malicious code can be inserted successfully without anyone noticing.

  1. Software is never bug-free, and patches are not always immediately available from the manufacturer. That is why severe software vulnerabilities are often exploited by attackers before a fix ships.
  2. Widely used software has enormous reach. Injecting malicious code into a popular application can hit millions of victims in a very short time.
  3. Trusted websites attract billions of visitors. A carefully crafted injection of malicious code can go unnoticed by the website administrator.
  4. Standard protocols such as HTTP and FTP are open by default: a firewall does not block them, because everyday internet browsing requires HTTP for data exchange. That is why malicious code usually exchanges sensitive data over a standard protocol such as HTTP.
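The last point is the one a monitoring solution can act on most directly: because malicious code tends to blend in with ordinary HTTP traffic, one simple heuristic is to flag outbound requests whose size and destination are unusual. A minimal sketch, assuming an illustrative log format, allow-list, and upload threshold (none of these come from any specific product):

```python
# Sketch: flag outbound HTTP requests that may indicate data exfiltration.
# The log fields, allow-list, and threshold below are illustrative assumptions.

KNOWN_HOSTS = {"intranet.example.com", "updates.vendor.example"}
MAX_UPLOAD_BYTES = 512 * 1024  # large uploads to unknown hosts are suspicious

def flag_suspicious(requests):
    """Return requests that POST a large body to a host outside the allow-list."""
    suspicious = []
    for req in requests:
        unknown_host = req["host"] not in KNOWN_HOSTS
        large_upload = req["method"] == "POST" and req["bytes_out"] > MAX_UPLOAD_BYTES
        if unknown_host and large_upload:
            suspicious.append(req)
    return suspicious

log = [
    {"method": "GET",  "host": "intranet.example.com", "bytes_out": 200},
    {"method": "POST", "host": "203.0.113.7",          "bytes_out": 4_000_000},
]
print(flag_suspicious(log))  # only the large POST to the unknown host is flagged
```

A real deployment would of course correlate many more signals (destination reputation, time of day, process identity), but the principle is the same: standard protocols cannot simply be blocked, so they have to be watched.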

Running malicious code on remote machines is a serious risk in itself, but data theft is generally considered the greater risk for a corporate organization.

What can a systems administrator do to monitor, track and block such intrusion attempts around the clock?

Internet monitoring is one solution that extends the strengths of a conventional firewall. It covers not only the manual monitoring of inbound and outbound data transfers; it should also include features such as multiple antivirus engines that scan and automatically control downloads requested by client machines.

A download can be anything, such as a malformed image requested by a software application running on a remote machine. If downloads are controlled by professional web monitoring software, this approach helps reduce the risk of malicious code entering the corporate network. Because download control is fully automated on a 24/7 basis, it saves time and reduces the burden on the systems administrator.
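The multi-engine download control described above can be pictured as a simple pipeline: each requested file is handed to every configured antivirus engine, and the download is released to the client only if all engines pass it. A minimal sketch, where the two scanner functions are hypothetical stand-ins for real antivirus engine integrations:

```python
# Sketch of multi-engine download control: a file is released only if every
# configured scanner passes it. Both scanners are hypothetical stand-ins.

def scan_with_signatures(data: bytes) -> bool:
    """Toy signature check: flag files containing a known marker."""
    return b"EVIL" not in data

def scan_with_heuristics(data: bytes) -> bool:
    """Toy heuristic check: flag suspiciously empty files."""
    return len(data) > 0

ENGINES = [scan_with_signatures, scan_with_heuristics]

def release_download(data: bytes) -> bool:
    """Quarantine the file if any engine flags it; otherwise release it."""
    return all(engine(data) for engine in ENGINES)

print(release_download(b"harmless image bytes"))  # True
print(release_download(b"EVIL payload"))          # False
```

The design choice here is deliberately conservative: one verdict of "infected" from any engine is enough to quarantine the file, which is why running multiple engines raises the detection rate.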

A web filtering module (which complements the web monitoring module) blocks access to known "bad" websites before any page is ever downloaded from the offending URL. But what happens when malicious content starts to appear on good websites? Web filtering is not sufficient if it only makes a binary "allow" or "block" decision per site.

A good web filtering module should update its database automatically and dynamically detect malicious code on both good and bad sites. Such features would be genuinely innovative and would strengthen web control. Even so, not all web risks can be fully controlled and blocked by web monitoring software, so manual intervention by a web administrator is still required to fully achieve the reduction of web threats.

Reporting is a basic foundation of web monitoring software: it allows the administrator to evaluate the performance of the defence software, and it also helps to detect new anomalies in the corporate environment.
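Anomaly detection in such reports can start very simply: compare each day's count of blocked events against a baseline and flag the outliers. A minimal sketch, where the per-day counts and the 3x-median threshold are illustrative assumptions:

```python
# Sketch: flag days whose blocked-event count far exceeds the baseline.
# The counts and the 3x-median threshold are illustrative assumptions;
# the median is used so that a single spike does not inflate the baseline.
from statistics import median

def anomalies(blocked_per_day: dict) -> list:
    """Return dates whose blocked-event count exceeds 3x the median."""
    baseline = median(blocked_per_day.values())
    return [date for date, count in blocked_per_day.items()
            if count > 3 * baseline]

report = {"2024-05-01": 12, "2024-05-02": 14, "2024-05-03": 95}
print(anomalies(report))  # the 95-event day stands out against a median of 14
```

Even this crude rule illustrates why reporting matters: a sudden spike in blocked downloads is often the first visible symptom of a new infection attempt inside the network.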