
Table of Contents

Preface
Acknowledgments
Executive Summary
1 Introduction
1.1 Coordinated Vulnerability Disclosure is a Process, Not an Event
1.2 CVD Context and Terminology Notes
1.2.1 Vulnerability
1.2.2 Exploits, Malware, and Incidents
1.2.3 Vulnerability Response (VR)
1.2.4 Vulnerability Discovery
1.2.5 Coordinated Vulnerability Disclosure
1.2.6 Vulnerability Management (VM)
1.2.7 Products and Instances
1.2.8 Incident vs. Vulnerability Response
1.3 Why Coordinate Vulnerability Disclosures?
1.4 Previewing the Remainder of this Document
2 Principles of Coordinated Vulnerability Disclosure
2.1 Reduce Harm
2.2 Presume Benevolence
2.3 Avoid Surprise
2.4 Incentivize Desired Behavior
2.5 Ethical Considerations
2.5.1 Ethics in Related Professional Societies
2.5.2 Journalism Ethics
2.6 Process Improvement
2.6.1 CVD and the Security Feedback Loop
2.6.2 Improving the CVD Process Itself
2.7 CVD as a Wicked Problem
3 Roles in CVD
3.1 Finder
3.2 Reporter
3.3 Vendor
3.3.1 Vendor as the Introducer of Vulnerabilities
3.3.2 Vendor Vulnerability Response Process
3.3.3 Vendor Sub-Roles
3.4 Deployer
3.4.1 Deployer Vulnerability Response Process
3.5 Coordinator
3.5.1 Computer Security Incident Response Team (CSIRT)
3.5.2 CSIRT with National Responsibility
3.5.3 Product Security Incident Response Team (PSIRT)
3.5.4 Security Research Organizations
3.5.5 Bug Bounties and Commercial Brokers
3.5.6 Information Sharing and Analysis Organizations (ISAOs) and Centers (ISACs)
3.5.7 Reasons to Engage a Coordinator
3.6 Other Roles and Variations
3.6.1 Users
3.6.2 Integrator
3.6.3 Cloud and Application Service Providers
3.6.4 Internet of Things
3.6.5 Mobile Platforms and Applications
3.6.6 Governments
4 Phases of CVD
4.1 Discovery
4.1.1 Why Look for Vulnerabilities?
4.1.2 Avoid Unnecessary Risk in Finding Vulnerabilities
4.2 Reporting
4.2.1 Create Secure Channels for Reporting
4.2.2 Encourage Reporting
4.2.3 Reduce Friction in the Reporting Process
4.2.4 Providing Useful Information
4.3 Validation and Triage
4.3.1 Validating Reports
4.3.2 Triage Heuristics
4.4 Remediation
4.4.1 Isolating the Problem
4.4.2 Fix the Problem
4.4.3 Mitigate What You Cannot Fix
4.5 Gaining Public Awareness
4.5.1 Prepare and Circulate a Draft
4.5.2 Publishing
4.5.3 Vulnerability Identifiers Improve Response
4.5.4 Where to Publish
4.6 Promote Deployment
4.6.1 Amplify the Message
4.6.2 Post-Publication Monitoring
5 Process Variation Points
5.1 Choosing a Disclosure Policy
5.2 Disclosure Choices
5.3 Two-Party CVD
5.4 Multiparty CVD
5.4.1 Multiple Finders / Reporters
5.4.2 Complicated Supply Chains
5.4.3 Mass Notifications for Multiparty CVD
5.5 Response Pacing and Synchronization
5.5.1 When One Party Wants to Release Early
5.5.2 Communication Topology
5.5.3 Motivating Synchronized Release
5.6 Maintaining Pre-Disclosure Secrecy
5.6.1 Coordinating Further Downstream
5.6.2 Do You Include Deployers?
5.6.3 Complex Communications Reduce Trust
5.7 Disclosure Timing
5.7.1 Conference Schedules and Disclosure Timing
5.7.2 Vendor Reputation and Willingness to Cooperate
5.7.3 Declarative Disclosure Policies Reduce Uncertainty
5.7.4 Diverting from the Plan
5.7.5 Releasing Partial Information Can Help Adversaries
6 Troubleshooting CVD
6.1 Unable to Find Vendor Contact
6.2 Unresponsive Vendor
6.3 Somebody Stops Replying
6.4 Intentional or Accidental Leaks
6.5 Independent Discovery
6.6 Active Exploitation
6.7 Relationships that Go Sideways
6.8 Hype, Marketing, and Unwanted Attention
6.8.1 The Streisand Effect
6.9 What to Do When Things Go Wrong
6.9.1 Keep Calm and Carry On
6.9.2 Avoid Legal Entanglements
6.9.3 Recognize the Helpers
6.9.4 Consider Publishing Early
6.9.5 Engage a Third-Party Coordinator
6.9.6 Learn from the Experience
7 Operational Considerations
7.1 Tools of the Trade
7.1.1 Secure Communication Channels
7.1.2 Contact Management
7.1.3 Bug Bounty Platforms
7.1.4 Case and Bug Tracking
7.1.5 Code and System Inventories
7.1.6 Test Bench and Virtualization
7.2 Operational Security
7.2.1 PGP/GPG Key Management
7.2.2 Handling Sensitive Data
7.2.3 Don't Automatically Trust Reports
7.3 CVD Staffing Considerations
7.3.1 Beware Analyst Burnout
8 Open Problems in CVD
8.1 Vulnerability IDs and DBs
8.1.1 On the Complexities of Vulnerability Identity
8.1.2 What CVE Isn't
8.1.3 Every Vulnerability Database Makes Choices
8.1.4 Where We Are vs. Where We Need to Be
8.1.5 Vulnerability IDs, Fast and Slow
8.1.6 A Path Toward VDB Interoperability
8.1.7 Looking Ahead
8.2 IoT and CVD
8.2.1 Black Boxes
8.2.2 Unrecognized Subcomponents
8.2.3 Long-Lived and Hard-to-Patch
8.2.4 New Interfaces Bring New Threats
8.2.5 Summarizing the IoT's Impact on CVD
9 Conclusion
Appendix A – On the Internet of Things and Vulnerability Analysis
Appendix B – Traffic Light Protocol
Appendix C – Sample Vulnerability Report Form
Appendix D – Sample Vulnerability Disclosure Document
Appendix E – Disclosure Policy Templates
Bibliography
List of Figures
Figure 1: CVD Role Relationships
Figure 2: Coordination Communication Topologies

List of Tables
Table 1: I Am the Cavalry's Finder / Reporter Motivations
Table 2: Mapping CVD Roles to Phases
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="1674b8e7-1a1d-4652-a183-d3cf0d752030"><ac:parameter ac:name="">_Toc533302240</ac:parameter></ac:structured-macro>
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="2d14e74c-0da2-41b0-9df9-6a758064345d"><ac:parameter ac:name="">_Toc489873130</ac:parameter></ac:structured-macro>Preface
Software and software-based products have vulnerabilities. Left unaddressed, those vulnerabilities expose the systems on which they are deployed, and the people who depend on them, to risk. Before vulnerable systems can be fixed, those vulnerabilities must first be found. Once found, the vulnerable code must be patched or configurations must be modified. Patches must be distributed and deployed. Coordinated Vulnerability Disclosure (CVD) is a process intended to ensure that these steps occur in a way that minimizes the harm to society posed by vulnerable products. This guide provides an introduction to the key concepts, principles, and roles necessary to establish a successful CVD process. It also provides insights into how CVD can go awry and how to respond when it does.
In a nutshell, CVD can be thought of as an iterative process that begins with someone finding a vulnerability, then repeatedly asking "what should I do with this information?" and "who else should I tell?" until the answers are "nothing," and "no one." But different parties have different perspectives and opinions on how those questions should be answered. These differences are what led us to write this guide.
The CERT Coordination Center has been coordinating the disclosure of vulnerability reports since its inception in 1988. Although both our organization and the Internet have grown and changed in the intervening decades, many of the charges of our initial charter remain central to our mission: to facilitate communication among experts working to solve security problems; to serve as a central point for identifying and correcting vulnerabilities in computer systems; to maintain close ties with research activities and conduct research to improve the security of existing systems; and to serve as a model for other incident response organizations.
If we have learned anything in nearly three decades of coordinating vulnerability disclosures at the CERT/CC, it is that there is no single right answer to many of the questions and controversies surrounding the disclosure of information about software and system vulnerabilities. In the traditional computing arena, most vendors and researchers have settled into a reasonable rhythm of allowing the vendor some time to fix vulnerabilities prior to publishing a vulnerability report more widely. Software as a service (SaaS) and software distributed through app stores can often fix and deploy patches to most customers quickly. On the opposite end of the spectrum, we find many Internet of Things (IoT) and embedded device vendors for whom fixing a vulnerability might require a firmware upgrade or even physical replacement of affected devices, neither of which can be expected to happen quickly (if at all). This diversity of requirements forces vendors and researchers alike to reconsider their expectations with respect to the timing and level of detail provided in vulnerability reports. Coupled with the proliferation of vendors who are relative novices at internet-enabled devices and are just becoming exposed to the world of vulnerability research and disclosure, the shift toward IoT can be expected to reinvigorate numerous disclosure debates as the various stakeholders work out their newfound positions.
Here's just one example: in 2004, it was considered controversial [1]<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="fabb44b9-7056-4af8-a8c7-89c145c0b37f"><ac:parameter ac:name="">Preface-1</ac:parameter></ac:structured-macro> when the CERT/CC advised users to "use a different browser" in response to a vulnerability in the most popular browser of the day (VU#713878) [2]. However, consider the implications today if we were to issue similar advice: "use a different phone," "drive a different car," or "use a different bank." If those phrases give you pause (as they do us), you have recognized how the importance of this issue has grown.
We often find that vendors of software-centric products are not prepared to receive and handle vulnerability reports from outside parties, such as the security research community. Many also lack the ability to perform their own vulnerability discovery within their development lifecycles. These difficulties tend to arise from one of two causes: (a) the vendor is comparatively small or new and has yet to form a product security incident response capability or (b) the vendor has deep engineering experience in its traditional product domain but has not fully incorporated the effect of network-enabling its products into its engineering quality assurance practice. Vendors in the latter group typically have very strong skills in safety engineering or regulatory compliance, yet their internet security capability is lacking.
Our experience is that many novice vendors are surprised by the vulnerability disclosure process. We frequently find ourselves having conversations that rehash decades of vulnerability coordination and disclosure conversations with vendors who appear to experience something similar to the Kübler-Ross stages of grief (denial, anger, bargaining, depression, and acceptance) during the process.
Furthermore, we have observed that overly optimistic threat models are de rigueur among IoT products. Many IoT products are developed with what can only be described as naïve threat models that drastically underestimate the hostility of the environments into which the product will be deployed.
Even in cases where developers are security-knowledgeable, often they are composing systems out of components or libraries that may not have been developed with the same degree of security consideration. This weakness is especially pernicious in power- or bandwidth-constrained products and services where the goal of providing lightweight implementations can supersede the need to provide a minimum level of security. We believe this is a false economy that only defers a much larger cost when the product or service has been deployed, vulnerabilities are discovered, and remediation is difficult.
We anticipate that many of the current gaps in security analysis knowledge and tools surrounding the emergence of IoT devices will begin to close over the next few years. However, it may be some time before we can fully understand how the products already available today, let alone tomorrow, will impact the security of the networks onto which they are placed. The scope of the problem shows no sign of contracting any time soon.
We already live in a world where mobile devices outnumber traditional computers, and IoT stands to dwarf mobile computing in terms of the sheer number of devices within the next few years. As vulnerability discovery tools and techniques evolve into this space, so must our tools and processes for coordination and disclosure. Assumptions built into many vulnerability handling processes about disclosure timing, coordination channels, development cycles, scanning, patching, and so forth will need to be reevaluated in the light of hardware-based systems that are likely to dominate the future internet.
About This Report
This is not a technical document. You will not learn anything new about fuzzing, debugging, ROP gadgets, exploit mitigations, heap spraying, exception handling, or anything about how computers work by reading this report. What you will learn is what happens to that knowledge and how its dissemination is affected by the human processes of communications and social behavior in the context of remediating security vulnerabilities.
This is not a history. We won't spend much time at all on the history of disclosure debates, or the fine details of whether collecting or dropping zero-days is always good or always bad. We will touch on these ideas only insofar as they intersect with the current topic of coordinated vulnerability disclosure.
This is not an indictment. We are not seeking to place blame on one party or another for the success or failure of any given vulnerability disclosure process. We've seen enough disclosure cases to know that people make choices based on their own values coupled with their assessment of a situation, and that even in cases where everyone agrees on what should happen, mistakes and unforeseeable events sometimes alter the trajectory from the plan.
This is not a standard. We assert no authority to bless the information here as "the way things ought to be done." In cases where standards exist, we refer to them, and this report is informed by them. In fact, we've been involved in creating some of them. But the recommendations made in this report should not be construed as "proper," "correct," or "ideal" in any way. As we'll show, disclosing vulnerabilities presents a number of difficult challenges, with long-reaching effects. The recommendations found here do, however, reflect our observation over the past few decades of what works (and what doesn't) in the pursuit of reducing the vulnerability of software and related products.
This is a summary of what we know about a complex social process that surrounds humans trying to make the software and systems they use more secure. It's about what to do (and what not to) when you find a vulnerability, or when you find out about a vulnerability. It's written for vulnerability analysts, security researchers, developers, and deployers; it's for both technical staff and their management alike. While we discuss a variety of roles that play a part in the process, we intentionally chose not to focus on any one role; instead we wrote for any party that might find itself engaged in coordinating a vulnerability disclosure.
We wrote it in an informal tone to make the content more approachable, since many readers' interest in this document may have been prompted by their first encounter with a vulnerability in a product they created or care about. The informality of our writing should not be construed as a lack of seriousness about the topic, however.
In a sense, this report is a travel guide for what might seem a foreign territory. Maybe you've passed through once or twice. Maybe you've only heard about the bad parts. You may be uncertain of what to do next, nervous about making a mistake, or even fearful of what might befall you. If you count yourself as one of those individuals, we want to reassure you that you are not alone; you are not the first to experience events like these or even your reaction to them. We're locals. We've been doing this for a while. Here's what we know.
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="5df8ff02-99f9-4d97-8f3e-9a8de3ee8c8b"><ac:parameter ac:name="">_Toc489873131</ac:parameter></ac:structured-macro>Acknowledgments
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="22fc8059-a76f-403e-8b71-27d5b7bc3159"><ac:parameter ac:name="">Acknowledgements</ac:parameter></ac:structured-macro>The material in the CERT® Guide to Coordinated Vulnerability Disclosure inherits from 29 years of analyzing vulnerabilities and navigating vulnerability disclosure issues at the CERT Coordination Center (CERT/CC). While a few of us may be the proximate authors of the words you are reading, many of the ideas these words represent have been bouncing around at CERT for years in one brain or another. We'd like to acknowledge those who contributed their part to this endeavor, whether knowingly or not:
Jared Allar, Jeff Carpenter, Cory Cohen, Roman Danyliw, Will Dormann, Chad Dougherty, James T. Ellis, Ian Finlay, Bill Fithen, Jonathan Foote, Jeff Gennari, Ryan Giobbi, Jeff Havrilla, Shawn Hernan, Allen Householder, Chris King, Dan Klinedinst, Joel Land, Jeff Lanza, Todd Lewellen, Navika Mahal, Art Manion, Joji Montelibano, Trent Novelly, Michael Orlando, Rich Pethia, Jeff Pruzynski, Robert Seacord, Stacey Stewart, David Warren, and Garret Wassermann.
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="4ed04b33-ecad-4dd1-a699-770fef3510e3"><ac:parameter ac:name="">_Toc489873132</ac:parameter></ac:structured-macro>Executive Summary
Software-based products and services have vulnerabilities—conditions or behaviors that allow the violation of an explicit or implicit security policy. This should come as no surprise to those familiar with software. What many find surprising nowadays is just how many products and services should be considered software based. The devices we depend on to communicate and coordinate our lives, transport us from place to place, and keep us healthy have in recent years become more and more connected both to each other and to the world at large. As a result, society has developed an increasing dependence on software-based products and services along with a commensurate need to address the vulnerabilities that inevitably accompany them.
Adversaries take advantage of vulnerabilities to achieve goals at odds with those of the developers, deployers, users, and other stakeholders of the systems we depend on. Notifying the public that a problem exists without offering a specific course of action to remediate it can result in giving an adversary the advantage while the remediation gap persists. Yet there is no optimal formula for minimizing the potential for harm to be done short of avoiding the introduction of vulnerabilities in the first place. In short, vulnerability disclosure appears to be a wicked problem. The definition of a wicked problem based on an article by Rittel and Webber [41] is given in Section 2.7.
Coordinated Vulnerability Disclosure (CVD) is a process for reducing adversary advantage while an information security vulnerability is being mitigated. CVD is a process, not an event. Releasing a patch or publishing a document are important events within the process, but do not define it.
CVD participants can be thought of as repeatedly asking these questions: What actions should I take in response to knowledge of this vulnerability in this product? Who else needs to know what, and when do they need to know it? The CVD process for a vulnerability ends when the answers to these questions are nothing, and no one.
CVD should not be confused with Vulnerability Management (VM). VM encompasses the process downstream of CVD, once the vulnerability has been disclosed and deployers must take action to respond. Section 1 introduces the CVD process and provides notes on relevant terminology.
Principles of CVD
Section 2 covers principles of CVD, including the following:

  • Reduce Harm – Decrease the potential for damage by publishing vulnerability information; using exploit mitigation technologies; reducing days of risk; releasing high-quality patches; and automating vulnerable host identification and patch deployment.
  • Presume Benevolence – Assume that any individual who has taken the time and effort to reach out to a vendor or a coordinator to report an issue is likely benevolent and sincerely wishes to reduce the harm of the vulnerability.
  • Avoid Surprise – Surprise tends to increase the risk of a negative outcome from the disclosure of a vulnerability and should be avoided.
  • Incentivize Desired Behavior – It's usually better to reward good behavior than to try to punish bad behavior. Incentives matter because they increase the likelihood of future cooperation between security researchers and organizations.
  • Ethical Considerations – A number of ethical guidelines from both technical and journalistic professional societies can be applied to the CVD process.
  • Process Improvement – Participants in the CVD process should learn from their experience and improve their process accordingly. CVD can also provide important feedback to an organization's Software Development Lifecycle (SDL).
  • CVD as a Wicked Problem – As we've already mentioned, vulnerability disclosure is a multifaceted problem for which there appear to be no "right" answers, only "better" or "worse" solutions in a given context.

Roles in CVD
CVD begins with finding vulnerabilities and ends with the deployment of patches or mitigations. As a result, several distinct roles and stakeholders are involved in the CVD process. These include the following:

  • Finder (Discoverer) – the individual or organization that identifies the vulnerability
  • Reporter – the individual or organization that notifies the vendor of the vulnerability
  • Vendor – the individual or organization that created or maintains the product that is vulnerable
  • Deployer – the individual or organization that must deploy a patch or take other remediation action
  • Coordinator – an individual or organization that facilitates the coordinated response process

It is possible and often the case that individuals and organizations play multiple roles. For example, a cloud service provider might act as both vendor and deployer, while a researcher might act as both finder and reporter. A vendor may also be both a deployer and a coordinator.
Reasons to engage a coordinator include reporter inexperience, reporter capacity, multiparty coordination cases, disputes among CVD participants, and vulnerabilities having significant infrastructure impacts.
Users, integrators, cloud and application service providers, Internet of Things (IoT) and mobile vendors, and governments are also stakeholders in the CVD process. We cover these roles and stakeholders in more detail in Section 3.
Phases of CVD
The CVD process can be broadly defined as a set of phases, as described in Section 4. Although these phases may sometimes occur out of order, or even recur within the handling of a single vulnerability case (for example, each recipient of a case may need to independently validate a report), they often happen in the following order:

  • Discovery – Someone discovers a vulnerability in a product.
  • Reporting – The product's vendor or a third-party coordinator receives a vulnerability report.
  • Validation and Triage – The receiver of a report validates it to ensure accuracy before prioritizing it for further action.
  • Remediation – A remediation plan (ideally a software patch, although other mechanisms are possible) is developed and tested.
  • Public Awareness – The vulnerability and its remediation plan are disclosed to the public.
  • Deployment – The remediation is applied to deployed systems.

CVD Process Variation
As an endeavor of human coordination at both the individual and organization levels, the CVD process can vary from participant to participant, over time, and in varying contexts. Some points of variation include those below:

  • Choosing a disclosure policy – Disclosure policies may need to be adapted for different organizations, industries, and even products due to variations in business needs such as patch distribution or safety risks.
  • Coordinating among multiple parties – Coordination between a single finder and a single vendor is relatively straightforward, but cases involving multiple finders or complex supply chains often require extra care.
  • Pacing and synchronization – Different organizations work at different operational tempos, which can increase the difficulty of synchronizing release of vulnerability information along with fixes.
  • Coordination Scope – CVD participants must decide how far to go with the coordination process. For example, it may be preferable to coordinate the disclosure of critical infrastructure vulnerabilities all the way out to the system deployers, while for a mobile application it may be sufficient to notify the developer and simply allow the automatic update process to take it from there.

Variation points in the CVD process are covered in Section 5.
Troubleshooting CVD
CVD does not always go the way it's supposed to. We have encountered a number of obstacles along the way, which we describe in Section 6. These are among the things that can go wrong:

  • No vendor contact available – This can occur because a contact could not be found, or the contact is unresponsive.
  • Participants stop responding – Participants in CVD might have other priorities that draw their attention away from completing a CVD process in progress.
  • Information leaks – Whether intentional or not, information that was intended for a private audience can find its way to others not involved in the CVD process.
  • Independent discovery – Any vulnerability that can be found by one individual can be found by another, and not all of them will tell you about it.
  • Active exploitation – Evidence that a vulnerability is being actively exploited by adversaries often implies a need to accelerate the CVD process to reduce users' exposure to risk.
  • Relationships go awry – CVD is a process of coordinating human activities. As such, its success depends on building relationships among the participants.
  • Hype, marketing, and unwanted attention – The reasons for reporting and disclosing vulnerabilities are many, but in some cases disclosure can be used as a marketing tool. This is not always conducive to the smooth flow of the CVD process.

When things do go askew in the course of the CVD process, it's often best to remain calm, avoid legal entanglements, and recognize that the parties involved are usually trying to do the right thing. In some cases, it may help to consider publishing earlier than originally planned or to engage a third-party coordinator to assist with mediating disputes. Regardless of the resulting action, CVD participants should learn from the experience.
Operational Considerations
Participation in the CVD process can be improved with the support of tools and operational processes such as secure communications (e.g., encrypted email or https-enabled web portals), contact management, case tracking systems, code and system inventories, and test environments such as virtualized labs.
Operational security should also be considered. CVD participants will need to address key management for whatever communications encryption they decide to use. Policy guidelines for handling sensitive data should be clearly articulated within organizations. Furthermore, recipients of vulnerability reports (e.g., vendors and coordinators) should be wary of credulous action in response to reports. Things are often not what they originally seem. Reporters may have misinterpreted the impact of a vulnerability to be more or less severe than it actually is. Adversaries may be probing an organization's vulnerability response process to gain information or to distract from other events.
As happens in many security operations roles, staff burnout is a concern for managers of the CVD process. Job rotations and a sustained focus on CVD process improvement can help.
Further discussion of operational considerations can be found in Section 7.
Open Problems in CVD
Organizations like the CERT Coordination Center have been coordinating vulnerability disclosures for decades, but some issues remain to be addressed. The emergence of a wider diversity of software-based systems in recent years has led to a need to revisit topics once thought nearly resolved. Vulnerability identity has become a resurgent issue in the past few years as the need to identify vulnerabilities for purposes of CVD and vulnerability management has spread far beyond the arena of traditional computing. A number of efforts are currently underway to improve the situation.
More broadly, the rising prevalence of IoT products and their corresponding reliance on embedded systems with constrained hardware, power, bandwidth, and processing capabilities has led to a need to rethink CVD in light of assumptions that are no longer valid. Patching may be comparatively easy on a Windows system deployed on an enterprise network. Patching the firmware of a home router deployed to all the customers of a regional ISP is decidedly not so simple. The desktop system the doctor uses to write her notes might be patched long before the MRI machine that collected the data she's analyzing. Fixing a vulnerable networked device atop a pipeline in a remote forest might mean sending a human out to touch it. Each of these scenarios comes with an associated cost not usually factored into the CVD process for more traditional systems.
The way industries, governments, and society at large will address these issues remains to be seen. We offer Section 8 in the hope that it sheds some light on what is already known about these problems.
Conclusion and Appendices
Vulnerability disclosure practices no longer affect only the computer users among us. Smart phones, ATMs, MRI machines, security cameras, cars, airplanes, and the like have become network-enabled software-dependent systems, making it nearly impossible to avoid participating in the world without the potential to be affected by security vulnerabilities. CVD is not a perfect solution, but it stands as the best we've found so far. We've compiled this guide to help spread the practice as widely as possible.
Five appendices are provided containing background on IoT vulnerability analysis, Traffic Light Protocol, examples of vulnerability report forms and disclosure templates, and pointers to five publicly available disclosure policy templates. An extensive bibliography is also included.
Abstract
Security vulnerabilities remain a problem for vendors and deployers of software-based systems alike. Vendors play a key role by providing fixes for vulnerabilities, but they have no monopoly on the ability to discover vulnerabilities in their products and services. Knowledge of those vulnerabilities can increase adversarial advantage if deployers are left without recourse to remediate the risks they pose. Coordinated Vulnerability Disclosure (CVD) is the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders including the public. The CERT Coordination Center has been coordinating the disclosure of software vulnerabilities since its inception in 1988. This document is intended to serve as a guide to those who want to initiate, develop, or improve their own CVD capability. In it, the reader will find an overview of key principles underlying the CVD process, a survey of CVD stakeholders and their roles, and a description of CVD process phases, as well as advice concerning operational considerations and problems that may arise in the provision of CVD and related services.

Authors:

Allen D. Householder
Garret Wassermann
Art Manion
Chris King

Originally Published as CMU/SEI-2017-SR-022
