The units of work in CVD are vulnerability reports or cases. However, a single case may actually address multiple vulnerabilities. Teasing out how many problems are involved in a report can be tricky at times. The implications of this for the CVD process and the compilation of vulnerability databases are significant.
This section is adapted from a CERT/CC blog post by Householder [1].
Vulnerability identifiers can serve multiple purposes.
They may be used to identify the vulnerability itself, the report or case in which it is handled, or the document that describes it.
Now this isn't really a problem as long as one case describes one vulnerability and that case results in the creation of one document. But that's not always the case, for a number of reasons, including those below:
Different processes use different abstractions to define what a "unit vulnerability" is. For example, CVE has specific guidance on counting rules [4].
It's rare for vendors to release single-issue patches. More often they prefer to roll up multiple fixes into a single release, and then publish a document about the release [5].
In the case of independent discovery, or at least duplicate reporting, multiple cases may be opened describing the same vulnerability. In some instances, this fact may not become obvious until considerable effort has been put into isolating the bugs in each report. For example, a single vulnerability can manifest in different ways depending on how it's triggered. The connection might only be discovered during root cause analysis.
Automated testing such as fuzzing can lead to rapid discovery of very large numbers of unique failure cases that are difficult to resolve into specific bugs.
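One common way to cope with such a flood of failure cases (an illustrative technique, not one prescribed by this section) is to bucket crashes by a hash of the top few frames of their stack traces, so that many raw crashes collapse into a handful of candidate bugs. A minimal sketch, with hypothetical frame names:

```python
from collections import defaultdict

def bucket_crashes(crashes, top_n=3):
    """Group raw crash reports by the top N frames of their stack trace.

    Each crash is a dict with a "stack" key: a list of frame names.
    Crashes sharing the same top frames are treated as likely
    duplicates of one underlying bug.
    """
    buckets = defaultdict(list)
    for crash in crashes:
        signature = tuple(crash["stack"][:top_n])
        buckets[signature].append(crash)
    return buckets

# Hypothetical fuzzer output: three crashes, two sharing a signature.
crashes = [
    {"id": 1, "stack": ["parse_header", "read_field", "memcpy"]},
    {"id": 2, "stack": ["parse_header", "read_field", "memcpy"]},
    {"id": 3, "stack": ["decode_image", "alloc_buf", "malloc"]},
]

buckets = bucket_crashes(crashes)
print(len(buckets))  # 2 distinct signatures from 3 crashes
```

Real triage tools use more robust signatures (normalized frames, offsets, sanitizer reports), but the principle of many-to-few reduction is the same.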
Automated testing can also identify so many individual vulnerabilities that human-oriented case handling processes cannot scale to treat each one individually. Here's an extreme example of this phenomenon: although the CERT/CC published only a single Vulnerability Note for Android apps that failed to validate SSL certificates, in the end it covered 23,667 vulnerable apps [6,7]. Should each get its own identifier? Yes, and we did assign individual VU# identifiers to each vulnerable app. But this highlights the distinction between the vulnerability and the document that describes it.
As of this writing, work is underway within the Vulnerability Report Data Exchange special interest group (VRDX-SIG) within FIRST [8] on a vulnerability report cross-reference data model that will allow for the expression of relationships between vulnerability reports. The current work in progress can be found at https://github.com/FIRSTdotorg/vrdx-sig-vxref-wip.
In order to make it easier to relate vulnerability reports and records to each other, the VRDX work represents the following concepts: "possibly related," "related," "not equal," "equal," "superset," "subset," and "overlap."
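The seven relationship concepts above can be thought of as a small vocabulary for cross-referencing records. A minimal sketch of that vocabulary follows; the enum and the `invert` helper are illustrative, not part of any published VRDX schema, and the record identifiers are hypothetical:

```python
from enum import Enum

class RecordRelation(Enum):
    """The seven cross-reference relationships named in the VRDX-SIG work."""
    POSSIBLY_RELATED = "possibly related"
    RELATED = "related"
    NOT_EQUAL = "not equal"
    EQUAL = "equal"
    SUPERSET = "superset"
    SUBSET = "subset"
    OVERLAP = "overlap"

def invert(relation):
    """Flip a relationship for the reverse direction: if record A is a
    superset of record B, then B is a subset of A. The remaining
    relations are symmetric and are their own inverse."""
    flips = {RecordRelation.SUPERSET: RecordRelation.SUBSET,
             RecordRelation.SUBSET: RecordRelation.SUPERSET}
    return flips.get(relation, relation)

# A cross-reference asserting that a hypothetical vendor record covers
# more issues than the corresponding CVE record does.
xref = ("VENDOR-2014-001", RecordRelation.SUPERSET, "CVE-2014-9999")
print(invert(xref[1]).value)  # subset
```

Making the inverse relationship mechanical matters because two VDBs will typically each describe the link from their own record's point of view.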
Because of the prevalence and popular use of CVE IDs in the vulnerability response space, many people assume that vulnerability identity is synonymous with Common Vulnerabilities and Exposures (CVE) [9]. However, that assumption is inaccurate in a number of ways.
As the CERT/CC's vulnerability analysis efforts have expanded into vulnerability coordination for non-traditional computing products (mobile, vehicles, medical devices, IoT, etc.) [10], we've also begun to hit up against another set of issues affecting vulnerability identities and compatibility across vulnerability databases (VDBs): namely, bias.
Steve Christey Coley and Brian Martin mention a number of biases that affect all VDBs in their BlackHat 2013 talk [11].
In an ideal scientific world, bias can be factored into analytical results based on the data collected. But VDBs don't exist solely in the service of scientific purity. Every vulnerability database or catalog makes choices driven by the business requirements and organizational environment in which it operates.
These choices include which sources to watch, inclusion criteria, content detail, abstraction, uncertainty tolerance, latency tolerance, capacity constraints, and user needs.
Abstraction. What is a "unit" vulnerability? Does this report represent one vulnerability or many? That choice depends on what purpose the VDB serves. Christey and Martin cover this issue in their list of biases, describing it as "the most prevalent source of problems for analysis." CVE has made its abstraction content decision guidance available [12].
It's important to note that even if two vulnerability databases agree on the first four items in the list above (sources to watch, inclusion criteria, content detail, and abstraction), over time it's easy to wind up with completely distinct data sets due to the latter items (uncertainty tolerance, latency tolerance, capacity constraints, and user needs).
The vulnerability databases you are probably most familiar with, such as the National Vulnerability Database (NVD) [13], Common Vulnerabilities and Exposures (CVE) [14], and the CERT Vulnerability Notes Database [15], have historically focused on vulnerabilities affecting traditional computing platforms (Windows, Linux, OS X, and other Unix-derived operating systems), with only a smattering of coverage for vulnerabilities in other platforms like mobile or embedded systems, websites, and cloud services. In the case of websites and cloud services, this gap may be acceptable since most such services are effectively single instances of a large-scale distributed system, and therefore only the service provider needs to apply a fix. In those cases, there might not be a need for a common identifier since nobody is trying to coordinate efforts across responsible parties. But in the mobile and embedded spaces, we definitely see the need for identifiers to serve the needs of both disclosure coordination and patch deployment.
Furthermore, there is a strong English-language and English-speaking-country bias in the major U.S.-based VDBs (hopefully this isn't terribly surprising). China has not one but two major VDBs: the China National Vulnerability Database of Information Security (CNNVD) [16] and the China National Vulnerability Database (CNVD) [17]. We have been working with CSIRTs around the world (e.g., JPCERT/CC [18] and NCSC-FI [19]) to coordinate vulnerability response for years and realize the importance of international cooperation and interoperability in vulnerability response. Given all the above, and in the context of the surging prevalence of bug bounty programs, it seems likely that in the coming years there will be more, not fewer, VDBs around the world than there are today. We anticipate those VDBs will cover more products, sectors, languages, countries, and platforms than VDBs have in the past.
Coordinating vulnerability response at local, national, and global scales requires that we have the means to relate vulnerability reports to each other, regardless of the process that originated them. Furthermore, whether they are driven by national, commercial, or sector-specific interests, there will be a need for interoperability across all those coordination processes and the VDBs into which they feed.
Over time, it has become clear that the days of the "One Vulnerability ID to Rule Them All" are coming to a close, and we need to start planning for that change. As we've covered above, one of the key observations we've made has been the growing need for multiple vulnerability identifiers and databases that serve different audiences, support diverse business practices, and operate at different characteristic rates.
In his book Thinking, Fast and Slow, Daniel Kahneman describes human thought processes in terms of two distinct systems [20]: System 1, which is fast, intuitive, and automatic, and System 2, which is slower, deliberate, and effortful.
Making the analogy to CVD processes, notice that historically there has been a need for slower, consistently high-quality, authoritative vulnerability records, trading off higher latency for lower noise. Deconfliction of duplicate records happens before an ID record (e.g., a CVE record) is issued, and reconciliation of errors can be difficult. To date, this practice is the ideal for which many VDBs have strived. Those VDBs remain a valuable resource in the defense of systems and networks around the globe.
Yet there is a different ideal, just as valid: one in which vulnerability IDs are assigned quickly, possibly non-authoritatively, and based on reports of variable quality. This process looks more like "issue, then deconflict." For this new process to work well, post-hoc reconciliation needs to become easier. If you're familiar with the gitflow process [21] in software development, you might recognize this distinction as analogous to the one between the _develop_ and _master_ branches of a software project. The bulk of the work happens in and around the _develop_ branch, and only when things have settled out does the _master_ branch get updated (and merge conflicts are as inevitable as death and taxes).
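To make the "issue, then deconflict" idea concrete, here is one way post-hoc reconciliation could work: collect pairwise "equal" assertions between independently issued IDs, then cluster them with a union-find structure so each cluster represents one underlying vulnerability. This is a sketch under our own assumptions, not a VRDX-SIG algorithm, and the record identifiers are hypothetical:

```python
def deconflict(equal_pairs):
    """Cluster vulnerability IDs asserted to be "equal" using union-find,
    so duplicate records issued quickly by different VDBs can be
    merged after the fact."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in equal_pairs:
        parent[find(a)] = find(b)  # union the two clusters

    clusters = {}
    for record_id in list(parent):
        clusters.setdefault(find(record_id), set()).add(record_id)
    return list(clusters.values())

# Hypothetical assertions gathered after IDs were issued independently.
pairs = [("VDB-A-100", "CVE-2014-9999"),
         ("VDB-B-7", "CVE-2014-9999"),
         ("VDB-A-101", "VDB-C-55")]
clusters = deconflict(pairs)
for cluster in clusters:
    print(sorted(cluster))
```

The transitivity is the point: VDB-A-100 and VDB-B-7 end up in one cluster even though no one ever directly asserted they were equal, which is exactly the kind of reconciliation that becomes necessary once IDs are issued before deconfliction.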
As mentioned above, the FIRST VRDX-SIG is working on a vulnerability cross-reference scheme that would allow widely distributed vulnerability ID assignments and VDBs to run at whatever rate is necessary, while enabling the ability to reconcile them later once the dust clears. The main idea is that VDB records can be related to each other using the relationships described above: possibly related, related, not equal, equal, superset, subset, or overlap.
This work builds on both prior work at the CERT/CC and Harold Booth and Karen Scarfone's October 2013 IETF Draft Vulnerability Data Model [22]. However, while it would be great if we could eventually get to a unified data model like the IETF draft for vulnerability information exchange, for now the simplest thing that could possibly work seemed to be coming up with a way to relate records within or between vulnerability databases that explicitly addresses the choices and biases described above. The unified data model might be a longer way off, and we were anticipating the need to reconcile VDBs much sooner.
Everything we have discussed in this section is work in progress, and some things are changing rapidly on a number of related fronts. Nevertheless, while it's hard to say how we'll get there, it seems inevitable that we'll eventually reach a point where vulnerability IDs can be issued (and deconflicted) at the speed necessary to improve coordinated global vulnerability response while maintaining our ability to have high-quality, trusted sources of vulnerability information.
Here in the CERT/CC Vulnerability Analysis team, we recognize the need for slower, "correct-then-issue" vulnerability IDs as well as faster-moving "issue-then-correct" IDs. We believe that there is room for both (and in fact many points in between). Our goal in participating in the VRDX-SIG is to enable interoperability between any willing VDBs. We intend to continue our efforts to build a better way forward that suits everyone who shares our interest in seeing that vulnerabilities get coordinated and disclosed, and that patches are created and deployed, all with an eye toward minimizing societal harm in the process.