[Dailydave] CVSS is the worst compression algorithm ever
adrian.sanabria at gmail.com
Wed Jan 9 03:17:18 UTC 2019
CVSS is useful, but not in isolation.
Let me back up a bit. Apologies, but I'm going to rant a bit and mention my
employer. Not because I want to shill product, but because this issue is
the entire reason I joined this vendor in the first place. I had offers for
a lot more money elsewhere, but this problem pisses me off and I want to
take a stab at solving it before I get old(er) and crank(ier) and give
this industry the finger for good. No topic gets me more riled up than
vulnerability management, sorry.
CVSS and the scanning vendors are largely the raison d'être for NopSec (the
vendor I'm currently working for) to exist, along with Kenna and the
handful of others that fill out our small niche.
As Bruce mentioned, CVSS doesn't measure risk (which is a conversation in
itself), which is a problem for every scanning vendor and every enterprise
using scanning products that prioritize based on CVSS. Our primary
innovation is a set of machine learning models that use CVSS along with
other inputs (existence of exploits, use in malware, discussion on social
media, etc.) to output an *actual* risk score. There are two models - one
to prioritize current risk and one to predict future risk based on
historical malicious vuln use and abuse.
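For illustration only - this is a toy sketch, not NopSec's actual model - that kind of scoring might combine the base score with exploit-evidence signals. The feature names and weights below are pure assumptions on my part; a real model would learn them from historical exploitation data:

```python
# Toy risk scorer: CVSS base score plus exploit-evidence signals.
# Weights are made up for illustration, not learned from real data.

def risk_score(cvss_base, exploit_public, seen_in_malware, social_buzz):
    """Return a 0-10 risk score from CVSS plus threat-intel signals."""
    score = 0.3 * cvss_base                    # base severity carries some signal
    score += 4.0 if seen_in_malware else 0.0   # strongest evidence: active abuse
    score += 2.0 if exploit_public else 0.0    # public exploit code exists
    score += 1.0 * min(social_buzz, 1.0)       # normalized chatter level
    return min(round(score, 1), 10.0)

# A "critical" CVE with no exploitation evidence ends up ranked below a
# "medium" CVE that malware is actively using:
quiet_critical = risk_score(9.8, False, False, 0.0)
abused_medium = risk_score(5.5, True, True, 0.8)
```

The point of the sketch is just the inversion at the end: once exploitation evidence outweighs the base score, CVSS "criticals" stop automatically floating to the top of the queue.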
It's a tough problem and we still have a ways to go. A lot of the input
(threat intel, especially) contains garbage and that needs to be managed.
Funny enough, the most valuable part of CVSS I've seen is the actual
description. Certain words and phrases that indicate the vulnerability is
RCE, privesc or DoS are particularly useful. It still drives me nuts that
the attack vector is not a standardized part of vulnerability reporting.
Even where it is (ExploitDB has a field for it, for example), I've found it
comically incorrect in my research.
Another thing that has frustrated me is the fact that any experienced
pentester can simply look at a list of CVEs and tell you which are
dangerous and which are fluff. Why isn't this knowledge being captured and
codified into knowledge bases and products? Rapid7 has pentesters, as do
others - a real opportunity to crowdsource *actual vulnerability outcomes*.
Our data shows less than 2% of all CVEs are used in attacks. We need a way
to separate out the other 98% and leveraging pentesters seems like a
no-brainer to get this effort started.
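Mechanically, that separation is trivial once you have the evidence; the hard part is building the evidence set. A sketch (the CVE IDs and the exploited-in-the-wild feed here are hypothetical placeholders, not real data):

```python
# Sketch: partition a scan's findings by exploitation evidence rather
# than by CVSS score. IDs and the "exploited" feed are placeholders.

scan_findings = {"CVE-2019-0001", "CVE-2019-0002", "CVE-2019-0003",
                 "CVE-2019-0004", "CVE-2019-0005"}

# In practice this set would be built from threat intel, malware
# analysis, and crowdsourced pentest outcomes.
exploited_in_wild = {"CVE-2019-0002"}

fix_now = scan_findings & exploited_in_wild         # the ~2% that matter most
fix_eventually = scan_findings - exploited_in_wild  # the other ~98%
```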
Okay, moving on.
Addressing the paper, yes - CVSS can be improved, but if this paper proves
anything, it's that CVSS should be *abandoned*, not improved. Context
is important, sure. I'm working with companies that have hundreds of
thousands or millions of endpoints. Try telling them that they need to set
asset importance and calculate environmental/temporal scoring on all their
assets. Ain't gonna happen. Do they all have asset mgmt/CMDB? Sure they do.
Have we ever seen one that's even close to accurate? Never. Maintaining it
manually is an impossible job for any enterprise with over 1,000 assets.
Pulling another stat from our research, we found over 40% of vulns
used by malware were ranked low or medium by CVSS. I'm sorry to say, the
base CVSS score is pointless, even with 3.0.
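For reference, here's roughly what the base score compresses a vulnerability down to. This is a sketch of the v3.0 base equation for the scope-unchanged case only, with metric weights from the public FIRST spec - double-check against the spec before relying on it:

```python
import math

# CVSS v3.0 base score, scope-unchanged case only (sketch of the
# equation published by FIRST; weights from the v3.0 specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    # v3.0 rounds the final score UP to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8, the classic "critical" RCE
score = base_score("N", "L", "N", "N", "H", "H", "H")
```

Eight qualitative judgment calls multiplied through a handful of magic constants and rounded to one decimal - that's the entire channel. Everything about actual exploitation gets thrown away in the compression.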
This is an issue I repeatedly see in this industry. How do we capture what
the enterprise cares about and get that into a product, process or system?
They can't just dedicate an FTE to systematically ranking the importance
of 100,000 assets. Somehow, the solution has to automate at
least part of this process.
*CVSS needs to change, or we need a new system*
The paper's conclusion nails it.
My own conclusion? We absolutely need vulnerabilities to be catalogued and
detailed somewhere. Vulnerability databases are important. Details are
important. Scoring systems, however, are so faulty that they're literally
worthless. Pointing again to our research, we found that CVSS scoring is
equivalent to picking at random when trying to choose an important
vulnerability to remediate. Our key competitor, Kenna, independently came
to the exact same conclusion.
There's value in gathering details about a vulnerability, but we don't
understand enough about what makes a vulnerability *dangerous* to build a
useful scoring system yet. If we did, we wouldn't have to leverage machine
learning to get as far as we have. Maybe the answer going forward is
involving human analysts. To quote Peter Thiel (co-founder of
PayPal/Palantir), "actionable insights can only come from a human analyst."
That was a hard-won lesson for Thiel, who spent years trying to solve
fraud at PayPal. One of many lessons I think we can learn from outside our
bubble.
On Tue, Jan 8, 2019 at 2:22 PM Konrads Smelkovs <konrads.smelkovs at gmail.com> wrote:
> The question is not whether it is a bad metric, but whether it is a useful one.
> As a lurker on the first.org mailing list for CVSSv3 SIG, I can assure
> you that there are a lot of discussions about edge cases etc. v3 is a
> meaningful improvement over v2. So far, CVSS has allowed the industry,
> broadly, to triage security issues and decide whether something can be
> addressed in the next image refresh or needs to be handled now as
> emergency, out-of-band maintenance. CMU is actually an active contributor
> to the CVSS SIG.
> Konrads Smelkovs
> Applied IT sorcery.
> On Tue, 8 Jan 2019 at 16:23, Dave Aitel <dave.aitel at cyxtera.com> wrote:
>> I wanted to take a few minutes and do a quick highlight of a paper from
>> CMU-CERT which I think most people have missed out on:
>> Towards Improving CVSS - resources.sei.cmu.edu
>> SOFTWARE ENGINEERING INSTITUTE | CARNEGIE MELLON UNIVERSITY
>> It's almost as funny a read as their previous best work on how "clientless
>> HTTPS VPNs are insanely dumb <https://www.kb.cert.org/vuls/id/261869/> what were
>> you thinking omg?"
>> They use a ton of big words in the paper to call CVSS out and give it a
>> shellacking. Like most of you, we make extensive use of CVSS in our
>> consulting practice, and I've seen this stuff first hand. CVSS is of course
>> just a buggy compression algorithm for taking complex qualitative data and
>> then putting it on a number line. The paper has three angles here:
>> 1. Qualitative mappings into quantitative numbers are a silly thing
>> to do, like people trying to do "social science" by using SurveyMonkey.
>> 2. We're pretty sure that the compression algorithm is not, in fact,
>> putting higher risk items as bigger numbers, which is the whole point of
>> the thing.
>> 3. Nobody is applying this in any sort of consistent way (which is
>> probably impossible) which is ALSO the whole point of the thing.
>> It's fine to have a lossy compression algorithm that emphasizes certain
>> aspects of the input signal over others, of course, but an additional
>> CERT/CC critique is that we have no reason to think CVSS does this in any useful way.
>> There are definitely people in the CVSS process (who I will avoid calling
>> out by name) who think ANY quantization is good. But read the paper and
>> decide for yourself - because these are probably serious issues that are
>> turning your entire risk org into a Garbage-In-Garbage-Out org...
>> Dailydave mailing list
>> Dailydave at lists.immunityinc.com