Security vulnerabilities: all software has them, many of them. Thanks to a plethora of security researchers who spend their days actively hunting them down, new ones are being discovered all the time.
For software vendors, this is both a blessing and a curse. Outside researchers do some of the work – at no cost – that the vendors’ own internal personnel do. There is no possible way that a vendor can test every possible software configuration, and some vulnerabilities may be revealed only in very specific setups.
It’s far better for a vendor if the flaws are found first by responsible researchers rather than by hackers and attackers. A major zero-day attack can create a ton of negative publicity for a vendor, and enough of them can even drive customers to switch to competing products.
The vast majority of vulnerabilities that are found by outside researchers are reported privately to the software vendor and fixed before their details are ever made public. Sometimes, however, researchers make public disclosures of the vulnerabilities before the vendor comes up with a patch, citing the public’s “right to know.”
Public disclosure is a controversial practice because once the details become known, attackers who might not have been aware of the vulnerability can rush to exploit it, creating a “zero day” situation. On the other hand, researchers say when vendors are reluctant or slow to provide a fix, or disagree that the vulnerability needs to be fixed at all, public disclosure is the only way to prod vendors into getting patches out in a timely manner.
Google created big headlines last week when, as part of their Project Zero, they published a public disclosure of an elevation of privilege vulnerability that they discovered in ahcache.sys/NtApphelpCacheControl in Windows 8.1. Not only did they post details about the vulnerability; they included the instructions for exploiting it.
Google had reported the vulnerability to Microsoft three months previously. The Google security team’s policy is that if a patch is not issued within 90 days, the report is automatically “derestricted” and goes public.
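As an illustration only – this is not Google’s actual tooling, just a minimal sketch of the hard-deadline rule described above – the automatic “derestriction” logic can be expressed in a few lines:

```python
from datetime import date, timedelta

# Project Zero's hard deadline: 90 days from the private report to the vendor.
DISCLOSURE_WINDOW = timedelta(days=90)

def derestrict_date(reported: date) -> date:
    """Date on which a report automatically goes public under a hard 90-day policy."""
    return reported + DISCLOSURE_WINDOW

def is_public(reported: date, today: date, patched: bool) -> bool:
    """A report becomes public once a patch ships, or once the deadline passes --
    whichever comes first, regardless of circumstances."""
    return patched or today >= derestrict_date(reported)

# Hypothetical example: a bug reported on 2014-09-30 derestricts 90 days later,
# whether or not a patch has shipped by then.
print(derestrict_date(date(2014, 9, 30)))  # -> 2014-12-29
```

The point of the sketch is that the rule has no case-by-case input: nothing about vendor staffing, holidays, or patch complexity affects the date, which is exactly what critics of the policy object to.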
Not everyone thought this was a good idea. One of the first comments on the bug report release called it “incredibly irresponsible” and pointed out that the holiday season and resultant short staffing could have slowed Microsoft’s response, and many subsequent commenters agreed.
Those on the other side said Google’s push was necessary and that Microsoft had been downplaying the seriousness of the issue. Shortly after Google posted the disclosure and exploit, Microsoft issued a statement saying “We are working to release a security update to address an Elevation of Privilege issue. It is important to note that for a would-be attacker to potentially exploit a system, they would first need to have valid logon credentials and be able to log on locally to a targeted machine.”
Those conditions make it more difficult for an attacker to exploit the vulnerability. First, the attacker would need to devise some way to get a legitimate user’s user name and password, and then would need physical access to the computer. The most likely scenario would be an “inside job” – an employee with a grudge against the company, or one engaged in corporate espionage.
That’s not to say the vulnerability shouldn’t be fixed; what many question is the “zero tolerance” policy that Google follows – a hard 90-day deadline, with the information released to the public when that time passes, regardless of circumstances.
There are other entities that have shorter deadlines – for example, the CERT division at Carnegie Mellon has a 45-day disclosure policy, but disclosure isn’t automatic. The CERT web site notes that “there may often be circumstances that will cause us to adjust our publication schedule” and “threats that require ‘hard’ changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule.”
In addition, the CERT site states that it will not distribute exploits for the vulnerabilities it publishes, saying “In our experience, the number of people who can benefit from the availability of exploits is small compared to the number of people who are harmed by people who use exploits maliciously.” This middle-of-the-road, case-by-case approach seems more reasonable to many in the security industry.
Just last month, Yahoo announced that it would adopt the same 90-day disclosure timetable as Google for vulnerabilities discovered by its security team, but also said it would reserve the right to extend or shorten the timeline based on extenuating circumstances.
The debate over full disclosure has been raging for a long time. Back in 2007, advocates such as Bruce Schneier were calling it a good idea and arguing that keeping vulnerabilities secret doesn’t keep them out of the hands of hackers. Others contend that partial disclosure – providing enough information for users to mitigate the threat but not handing the details over to cyber criminals on a silver platter – is the more responsible choice. This paper discusses the ethical dilemmas associated with disclosure: Ethics of Full Disclosure concerning Security Vulnerabilities.
It’s one of those issues on which many intelligent people disagree, and isn’t likely to be resolved one way or the other. Unless governments pass legislation either prohibiting the disclosure of vulnerability details or requiring that vulnerabilities be disclosed, security researchers will continue to hold the ultimate power to decide when, to whom and in how much detail they publicize the existence of security holes in software, and will have to live with any consequences that result from publishing or withholding that information.