Every business IT infrastructure requires tools. No company develops every tool it needs in-house, so at one time or another every company ends up buying software. Sometimes we take the software we buy for granted, and that can be a security risk. When we buy software, do we stop to see how it works, what ports it opens and listens on, and what interfaces it exposes to users? Do we study the security implications this new software creates for our environment? If we don't, we're at risk, a risk that Google only recently came face to face with.

Recently one of Google's source code repositories was hacked. The attackers stole some of Google's code, including the source code for the company's global password system. We don't know exactly what happened, but speculation is that the attackers targeted flaws in the SCM (source code management) system Google was using.

Google's experience is not difficult to imagine happening elsewhere. When software is deployed, the focus is generally on getting it up and running, not on analysing what potential security issues it might introduce. Obviously no one expects administrators to run full penetration tests against each and every application they deploy, but even a small analysis can make a huge difference.

The one essential thing to keep in mind is that there is no such thing as secure software. Even if the company developing the software took care to ensure it is secure and not just bug free, there could still be undiscovered vulnerabilities. My advice is to always assume everything is vulnerable and act accordingly.

So what should one do when deploying a new application?

The first step is to secure the new environment. We achieve this by installing the new application and setting it up. Once it is running, we analyse it a bit: run port scanners and check out its interfaces. While the documentation might list the ports the application listens on, I would still take the time to check it myself in case there is a mistake or the manual is not completely up to date. If the application will have a direct connection to the internet, it is important to ensure that a firewall restricts access to those ports to only the IP addresses that need them. It is a good idea to do this even when the application resides on the internal network alone, as this limits the attack surface should any internal machine be compromised (as happened in the Google attack). If the application is critical, such as a source control system, limit access to only those clients that really require it.
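As a quick sanity check of what the application actually listens on, something along the lines of the following sketch can be run from another machine on the network. The hostname and port range here are assumptions for illustration, and a dedicated port scanner such as nmap will do a far more thorough job:

import socket

HOST = "app-server.local"   # hypothetical hostname of the newly deployed application
PORTS = range(1, 1025)      # well-known port range; widen as needed

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds, i.e. the port is open
        if s.connect_ex((HOST, port)) == 0:
            print(f"Port {port} is open")

Comparing the result against the documentation, and against the firewall rules you put in place, quickly shows whether anything unexpected is exposed.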

If the application has a web interface, we will need to run additional tests. Check each input for proper sanitization and make sure user input is not vulnerable to cross-site scripting attacks. We need to do this on each and every input. To check for such issues, we start by examining the page source code, seeking out every input tag, or any other HTML control that accepts input, on the web pages generated by the application.
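Enumerating those controls does not have to be done by eye. As a rough sketch (the URL is a hypothetical page served by the new application), the Python standard library is enough to list every input element on a page:

from html.parser import HTMLParser
import urllib.request

PAGE = "http://app-server.local/search"   # hypothetical page served by the new application

class InputCollector(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # Flag anything that accepts user input
        if tag in ("input", "textarea", "select"):
            print(tag, dict(attrs).get("name", "(unnamed)"))

with urllib.request.urlopen(PAGE) as page:
    InputCollector().feed(page.read().decode("utf-8", errors="replace"))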

Let's take the following tag as an example: <input name=query value="">. If the script generating this page is vulnerable, whatever we enter will be inserted into the value attribute of the input named query. This means that if we tried to post something along the lines of:
"><script>alert('we have a problem')</script>

to the script under the query variable, a vulnerable script could generate the following code instead:
<input name=query value=""><script>alert('we have a problem')</script>">

This will, of course, make the web browser display a dialog box saying "we have a problem", which indeed we would have. Cross-site scripting is a nasty issue and you should demand that the vendor fixes it.
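By contrast, a properly sanitized page HTML-encodes the submitted value before echoing it back, so the payload is rendered as harmless text. As a rough illustration of the difference (using Python's standard html module rather than the application's own code):

import html

payload = "\"><script>alert('we have a problem')</script>"
safe_value = html.escape(payload, quote=True)   # encodes &, <, >, " and '
print(f'<input name=query value="{safe_value}">')
# Prints: <input name=query value="&quot;&gt;&lt;script&gt;alert(&#x27;we have a problem&#x27;)&lt;/script&gt;">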

If one does not wish to do this process manually, there are tools available that test web interfaces for cross-site scripting vulnerabilities automatically.
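Even without a dedicated scanner, a basic reflection check can be scripted. The sketch below assumes a hypothetical search page with a query parameter and simply checks whether the payload comes back unencoded; a real scanner covers far more cases than this:

import requests

TARGET = "http://app-server.local/search"   # hypothetical page with a 'query' parameter
PAYLOAD = "\"><script>alert('we have a problem')</script>"

response = requests.get(TARGET, params={"query": PAYLOAD}, timeout=10)
if PAYLOAD in response.text:
    print("Payload reflected unencoded - possible XSS, raise it with the vendor")
else:
    print("Payload not reflected verbatim on this page")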

While it is true that most of this testing should be done by the vendor, we have no way of knowing for sure that it was, and in any case it is important to keep track of every change to our environment. After all, if you keep a baseline of every system, most of these steps will be required to update that baseline anyway, so the impact should not be that large, and it can save a lot of work later trying to recover from an attack should the unfortunate happen.