Today’s public demands far more transparency than in the past – from government agencies, publicly traded corporations, and even privately held companies and individuals. The clamor for “full disclosure” comes from both sides of the political aisle and extends across a wide range of industries. We want to know everything about everything: top secret war plans, business financials, what celebrities wear (or don’t wear) to bed – and yes, what security vulnerabilities have been discovered in computer software.

Perhaps this penchant for more information came with the advent of 24-hour cable news networks that feed us facts and opinions (relevant and not) around the clock. Whatever the reason, it has become an accepted belief that, as citizens or customers or just interested onlookers, we have an inherent right to any information that could possibly affect our lives in any way – and even to some that doesn’t.

In this vein, some security researchers believe that, in the interest of users’ “right to know,” they should release information about software vulnerabilities as soon as they find them. In other cases, the software vendor itself puts out an advisory warning about a newly discovered vulnerability before the company has a patch ready to address it, as Microsoft did recently with the Internet Explorer remote code execution bug.

Not everybody agrees with this full disclosure policy, though. Even though this flaw was already being exploited (in limited cases) in the wild, I heard one commentator express the opinion that it was irresponsible to make it front-page news before a fix was available, since the announcement was like an invitation to other attackers to “get it while you can.” That raises an interesting and important question for researchers and software makers: does premature disclosure put more users at risk, or is failure to disclose immediately a sin of omission that creates a greater threat?

Those in favor of disclosure argue that vendors won’t be motivated to issue fixes, or at least won’t do so as quickly as they could, unless they are threatened with disclosure. Many researchers have policies that designate a time period after privately reporting vulnerabilities, giving the vendor a chance to address the problem before they proceed to public disclosure. For example, the CERT (Computer Emergency Response Team) division of Carnegie Mellon’s Software Engineering Institute states that:

Vulnerabilities reported to the CERT/CC will be disclosed to the public 45 days after the initial report, regardless of the existence or availability of patches or workarounds from affected vendors. Extenuating circumstances, such as active exploitation, threats of an especially serious (or trivial) nature, or situations that require changes to an established standard may result in earlier or later disclosure. 

Security expert Bruce Schneier came out many years ago on the side of full disclosure, saying “secrecy only makes us less secure” and opining that full disclosure is the only reason vendors routinely patch their systems.

Some hackers want to go much further than just letting the public know that vulnerabilities exist; when they discover security flaws, they publish not only the full details but also information about how to create exploits that take advantage of those flaws. Some even sell the exploits to others. On the other hand, some black hat hackers hunt for vulnerabilities and then don’t disclose their findings to either the vendor or the public, instead surreptitiously exploiting the flaws for their own benefit (financial, political, personal or otherwise).

Whether or not a third party should disclose vulnerabilities at all is only the first point of controversy. Among those who advocate “responsible disclosure,” timing is the second issue. Is 45 days too long to give vendors to respond? Or do vendors need more time than that to create a patch and do sufficient testing to ensure that the fix doesn’t just create more problems? As might be expected, independent researchers and software vendors tend to be on opposite sides of that question.

Logic would dictate that there is no “one size fits all” answer, because a reasonable time frame depends on the complexity of the code, the potential ramifications of an exploit, whether exploits already exist in the wild, and the scope of the risk. Obviously a vulnerability in the Windows operating system, or in an application such as Adobe Flash that is installed on millions of computers, poses a greater threat than one in an obscure application with a small installed base.

Another argument in favor of at least partial disclosure prior to patch issuance is that there may be mitigations and workarounds that individual users and/or administrators can implement to reduce or eliminate the risk attached to a vulnerability. That could be as drastic as ceasing to use the software until a patch comes out, or as simple as changing a setting in the software (which might impact its functionality in either a limited or far-reaching way).

There can be legal issues involved in the decision to publicly disclose vulnerabilities, especially when the disclosure includes proof-of-concept code that demonstrates how to exploit the vulnerability. Disclosing an exploitable vulnerability to persons who are likely to use it to break the law could be seen by some as aiding and abetting a crime. In some cases, disclosures could be viewed as proof of violation of the anti-circumvention provisions of the DMCA (Digital Millennium Copyright Act). The software vendor could conceivably sue a third-party discloser for theft of trade secrets, violation of patent law, or breach of contract under the provisions of the EULA (End User License Agreement) or a non-disclosure agreement (NDA). Researchers could even be charged with extortion if they request money or something of value from the vendor in exchange for not making a public disclosure.

On the other hand, the right to publish information about vulnerabilities could also be regarded as a First Amendment freedom of speech issue. That protection is less likely to be extended to publication of actual exploit code. It’s also important to remember that we live and operate on a global basis, and many countries don’t have the same freedom of speech protections as the U.S.

As you can see, whether full disclosure of vulnerabilities is a good thing or a bad thing isn’t an easy question to answer. As with so many things in the IT world, the most honest answer is “it depends.”

 
