What Constitutes Responsible Disclosure?


Like many highly charged issues, the debate over responsible disclosure comes with high stakes. Publicly disclose a vulnerability, and risk the outbreak of a widespread zero-day attack. Reveal it only in private, and risk the vendor sweeping it under the carpet for months or years.

And in recent years, the issue has come with added dimensions as more vendors develop processes for handling vulnerabilities while providing cash and other incentives for keeping mum on public disclosure.

“Companies have become very proactive with bug bounties that are more generic in nature,” said Chris Hoff, chief security architect for Juniper Networks. “What has soured those scenarios is when a researcher’s expectations are out of whack -- they don’t get recognition or sometimes they don’t get paid. It depends on the motivation and what the researcher expects. That’s where we’ve had a lot of departure points.”

In an effort to protect the public, the concept of responsible disclosure was informally standardized in 2000 by a hacker known by the handle Rain Forest Puppy (RFP), who attempted to establish guidelines and procedures for disclosing vulnerabilities, as well as to present alternative actions if the vendor failed to respond.


Among other things, the guidelines, known as the RFPolicy, stipulate that researchers should attempt to contact the vendor in private once a vulnerability has been detected, allowing the vendor sufficient time to produce a fix or workaround.

After a given window of time, researchers could then freely engage in full disclosure with impunity.

But just how long that window of time should remain open is a matter of debate at the heart of the definition of “responsible.”

Hoff said that he usually gives vendors around 30 days to respond once he has detected vulnerabilities in their products. “That’s fair to suggest that for getting a response and understanding what the vendor’s response is going to be,” Hoff said. “Some of these things are very complex. These vulnerabilities end up being very far reaching and impactful.”

The allotted 30 days are usually enough time for vendors to understand the vulnerability, lay out a remediation plan, and schedule how and when the patch will be released, he said.

“This also comes down to the researcher understanding a potential impact,” he said. “The worst thing for an end customer is to learn about a vulnerability in the face of an actual attack in the wild, and being unable to understand what the response might be.”

Other researchers contend that the window of time given to vendors should remain open as long as possible. Jose Nazario, senior manager of security research at Arbor Networks, said that disclosing a vulnerability too soon could harm the public by giving cyber criminals both the tools and ample time to launch malicious attacks before a vendor could issue a fix repairing the flaw.

“There is the expectation that you recognize the information could be used for harm and want to make the best effort to give the vendor time to do something about this,” Nazario said. “You’ve got to remember that they’ve got other things to do. They’re not sitting around twiddling their thumbs. You have very little visibility into what they’re doing. You’ll find when you’re on the other end, it’s quite a bit different.”

But giving vendors the benefit of the doubt sometimes has its drawbacks. In the past, many vendors regularly addressed security flaws by simply failing to patch them or notify users -- an approach known pejoratively in the industry as “security by obscurity.” This practice puts users increasingly at risk by sitting on a bug that may already be exploited in the wild or may soon be detected by someone with the intent to launch attacks, Hoff said.

While for some vendors that remains the modus operandi, in general the industry has been maturing in how it deals with security vulnerabilities, he added.

Many vendors have developed “bug bounty” programs and “zero-day initiatives” that reward researchers with cash or credit for disclosing vulnerabilities privately to the vendor.

The most recent iteration was unveiled in August, when Facebook rolled out a Bug Bounty program, incenting security experts to break into its system with sizeable monetary rewards.

The program has already forked over more than $40,000 to inquisitive researchers during its first three weeks, while one person received more than $7,000 for alerting the company to six different security issues.

“We know and have relationships with a large number of security experts, but this program has kicked off dialogue with a whole new and ever expanding set of people across the globe in over 16 countries, from Turkey to Poland, who are passionate about Internet security,” said Joe Sullivan, Facebook chief security officer, in a blog post. “The program has also been great because it has made our site more secure -- by surfacing issues large and small, introducing us to novel attack vectors, and helping us improve lots of corners in our code.”

While offering cash as a way to motivate private disclosure has stirred up debate within the security community, many researchers contend that it aims to acknowledge the time, effort and skill it takes to find vulnerabilities, while providing a strong incentive for researchers to keep the vulnerabilities private.

“We’re adding value to your process. We want to be compensated for that,” Hoff said.

At the very least, more vendors are giving researchers credit for submitting vulnerabilities as a way to acknowledge their hard work, said Oliver Lavery, director of security research for security firm nCircle.

“That credit is fantastic for helping people build their careers. The work that goes into finding a vulnerability can be quite enormous,” he said.

But like many highly charged issues, the responsible disclosure debate often becomes contentious once researchers step outside the parameters of what many deem as “responsible.”

Last year, Google security engineer Tavis Ormandy brought the responsible disclosure issue center stage when he published details of a gaping Windows Help and Support Center flaw and proof-of-concept code to the Full Disclosure mailing list just four days after first revealing the vulnerability to Microsoft.

The Windows flaw in question was a critical XP vulnerability enabling hackers to execute remote code attacks on users’ computers via popular browsers such as Internet Explorer, Firefox and Safari, as well as Windows Media Player.

The four-day window left Redmond little time to assess the situation, let alone remediate the flaw, eliciting a firestorm of criticism from Microsoft and much of the security community who contended that the disclosure put the public at severe risk.

“One of the main reasons we and many others across the industry advocate responsible disclosure is that the software vendor who wrote the code is in the best position to fully understand the root cause,” said Mike Reavey, director of Microsoft Security Response Center, in a Microsoft security advisory following the disclosure. “We recognize that researchers across the entire industry are a vital part of identifying issues and continually improving security, and we continue to ask researchers to work with us through responsible disclosure to help minimize the risk to customers while improving security.”

For some, the issue served as a powerful reminder that responsible disclosure was a highly subjective and evolving paradigm that carried highly charged opinions on both sides of the fence. While some maintained that Ormandy was within his right to bring the vulnerability to the public’s attention, others questioned his motivation, contending that he put the public at risk by giving potential hackers the keys to the kingdom without an adequate defense mechanism in place to shield them from attacks.

Lavery said that Ormandy’s decision was a “questionable move” that didn’t allow Microsoft time to deal with the issue.

“You don’t want to disclose the issue until there’s something the public can do to help themselves,” he said.

Nazario said that in the past he had “severed professional ties” with researchers who had announced bugs on a Friday afternoon, or given the affected vendor just a few days before going public with the vulnerability.

“I don’t think that’s responsible,” he said. “With things like targeted attacks, where we face issues that really cut across the line of fairness, if you discuss it publicly that can affect stock prices and jobs, or worse. You are also potentially interfering with the investigations of various authorities.”

However, responsible disclosure is also incumbent upon the vendor, researchers say.

“There are two halves to it -- responsible disclosure and responsible response,” Hoff said. “Generally we turn the spotlight and perspective to that of a researcher discovering vulnerabilities. The other side of that coin is once reported, how the vendors tend to respond.”

And what happens when vendors fail to address a vulnerability or postpone their response indefinitely?

That’s where the issue of “full vs. responsible disclosure” starts to heat up, researchers say.

“At the end of the day, if a researcher has done everything he or she agreed to do and the vendor has not, the researcher has options,” Hoff said. “It’s unfortunate when it comes down to that. Ultimately the arms race starts the moment the disclosure process starts.”

Lavery contended that most vendors will be prompted to issue a fix once a flaw has been made public, in an effort to salvage brand reputation and protect customers. Subsequently, researchers can often compel action by leveraging the threat of going public if the flaw isn’t addressed in a timely manner.

And once a vulnerability is revealed in public, vendors will usually scramble to release a fix in order to thwart an attack in the wild, Lavery added.

“Full disclosure is absolutely necessary. The public needs to know and vendors have to be held accountable to fix those vulnerabilities,” he said. “Every company will do what it perceives is best for itself. The threat of public disclosure is necessary to give companies an incentive to fix it.”

Microsoft, for example, issued a patch for the critical Help and Support Center flaw, reported in June of 2010, just weeks later in its July Patch Tuesday update.

Another option for researchers is what’s known as partial disclosure, revealing a vulnerability without releasing exploit code or other details that could be used by cyber criminals in an attack.

Nazario said that partial disclosure, if done strategically, could be effective in compelling action without harming the public.

“There’s a balance struck by what you disclose for it to be useful,” he said. “Partial disclosure goes a long way. It tells the right people there is a problem afoot and here’s how you can mitigate it -- when done right."

But it’s a balancing act, Nazario added, that can “sometimes lead to people discovering the attack and discussing it publicly. It’s a risky situation.”

Lavery, however, said that when confronted with the option of exposing a bug, it was by far better to go the “full disclosure” route. Once a bug was known, he said, it was only a matter of time before hackers figured out how to exploit it. And partial disclosure often reveals vulnerabilities but leaves out crucial details that would equip a user with the tools to mitigate the problem, he said.

“Partially disclosing the details doesn’t help as long as you know there’s a vulnerability. It’s not hard to reproduce the research and find the vulnerability,” he said. “I can break into your house. That’s far more terrifying than telling you that you left your window unlocked.”