[Dailydave] CVSS is the worst compression algorithm ever

Adrian Sanabria adrian.sanabria at gmail.com
Thu Jan 10 21:20:23 UTC 2019


I understand the limitations and challenges of CVSS. We already do a lot of
what you mentioned to come up with a risk score. Some of it, I'm still
trying to figure out how to do. The bottom line, though, is that we find the
factors that go into the score (CIA, exploitability, exploit availability,
attack vector, etc.) to be useful. The score *itself* is what I was saying
isn't terribly useful, though it does go into our model as well. After we
run our prioritization engine, 10s turn into zeros, 5s become 10s, and 3s
become 7s or 8s on the risk scale. So, in our experience, some scores are
FAR from worst case and much too conservative, while others are not
conservative enough. Ultimately, we've found CVSS scoring to be no better
than choosing at random, and one of our competitors came to the same
conclusion independently.
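
To make that rescoring concrete, here's a minimal sketch (Python) of the
kind of adjustment a prioritization engine might apply. The factors,
weights, and function are hypothetical and purely illustrative - this is
not our actual model:

# Hypothetical illustration: adjust a CVSS base score with local context.
# Factors and weights are invented for this sketch, NOT a production model.
def risk_score(cvss_base, exploit_public, reachable, asset_value):
    """Rescale a 0-10 CVSS base score using environmental context.
    asset_value: 0.0 (throwaway box) .. 1.0 (crown jewels)."""
    if not reachable:
        # Vulnerable component isn't exposed: a CVSS 10 can become a 0.
        return 0.0
    score = cvss_base
    if exploit_public:
        # A public exploit moves a mid-range score sharply upward.
        score = min(10.0, score * 2.0)
    # Weight by what the asset is worth to the business.
    score *= 0.5 + 0.5 * asset_value
    return round(score, 1)

# A "10" on an unreachable system drops to zero...
print(risk_score(10.0, exploit_public=True, reachable=False, asset_value=1.0))  # 0.0
# ...while a "5" with a public exploit on a critical asset becomes a 10.
print(risk_score(5.0, exploit_public=True, reachable=True, asset_value=1.0))   # 10.0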

To summarize, the scoring process and components are useful to us. The
resulting score, not so much - every piece of data we have from our
research shows that it is consistently unreliable.

This is a great list of risk factors and we use many of them. Almost all of
them are things we've discussed in depth or tried to automate in some way,
so I'm going to get pedantic and comment on each, in the hopes that it
clarifies how far down this road the industry has gone. Also, when I say
"we", I'm generally referring to industry work in this area, what I do at
my employer and what our competitors are doing.

On Thu, Jan 10, 2019 at 2:02 PM Monroe, Bruce <bruce.monroe at intel.com>
wrote:

> Uh no. CVSS scores a vulnerability, and if it’s a vendor we’re scoring that
> without knowing how you have the vulnerable software/firmware/hardware/etc.
> deployed in your environment. It’s why the CVSS Base Score is worst case.
> The resulting CVSS V3 vulnerability score is *one* element you can then
> calculate into your overall risk factoring. It’s the job of the org
> consuming the CVSS V3.x vulnerability score to determine its risk and set
> its patching priorities.
>
>
>
> Other Factors to consider for Risk *(not a comprehensive list but it’s a
> good start)*
>
>
>
>    - Ease of exploit
>
Should the exploitability score not embody this?


>    - Delivery Mechanism (is the vuln available remotely over the network?
>    Do you have to have code running locally on the box? If so, I’ve already
>    got code running on the platform.) The more access I need to the platform,
>    the less likely the issue is to be exploited in the wild.
>
This should also be part of the scoring (Attack Vector) and is one of the
most frustrating and frequently misreported aspects of vulnerabilities. RCE
and DoS are very different outcomes, and it's generally one of the first
things I have to go digging for when a big new vuln goes public.


>    - Availability and sophistication level of exploit: Paper, PoC,
>    weaponized exploit, etc…
>
This is the same as ease of exploit/exploitability, no?


>    - *Detailed Asset knowledge* for the infrastructure and how the
>    vulnerable component is deployed. *Hard to determine risk if you don’t
>    know what you have and how you have systems deployed.*
>
I've worked with enterprises of all sizes over almost 20 years and can say
this is rarely going to happen. We're working on automated ways of
addressing/solving this. RedSeal, Core Security and others tackled attack
path mapping, but this should really be a feature of an enterprise vuln
mgmt solution, not an entire product to be purchased and managed
separately.


>    - Controls that would mitigate the attack vectors (firewalls,
>    vulnerable component not exposed, etc…)
>
This is a Big Deal. I don't think we'll ever see a day where we can patch
vulnerable components quickly enough to satisfy risk goals, so there has to
be some plan for mitigation in place - preferably one that works without
requiring prior knowledge of the vulnerability details (mitigations that
address entire attack classes, in other words).


>    - Is your deployment actually exposing the vulnerability – (Example –
>    you may have a library in your application but are not exposing or using
>    the vulnerable function)
>
This can be addressed in a number of ways. The BAS (breach and attack
simulation) and EDR markets can help with this, for example. We've also
experimented with doing validation using real exploits. What you can do is
limited, though, because many exploits aren't safe to run against production
systems and "we run exploits in production" is generally a non-starter with
a lot of orgs :)
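
On the "library present but vulnerable function never called" point, a
first pass can be as crude as searching for call sites of the vulnerable
symbol. This naive sketch illustrates the idea (the function name is made
up, and a real analysis needs a call graph, since reflection and
indirection hide calls):

# Naive sketch: does any source file appear to call the vulnerable
# function? A first-pass filter only, not real reachability analysis.
import re
from pathlib import Path

def references_symbol(root, symbol):
    """Return source files that appear to call `symbol`."""
    pattern = re.compile(r"\b" + re.escape(symbol) + r"\s*\(")
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            if pattern.search(path.read_text(errors="ignore")):
                hits.append(path)
        except OSError:
            continue
    return hits

# "parse_untrusted_header" is a hypothetical vulnerable function name.
for hit in references_symbol(".", "parse_untrusted_header"):
    print(hit)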


>    - % of environment that is exposed, # of systems impacted, $ value of
>    those systems in terms of keeping the business running.
>
Again, accurate, up-to-date asset management/inventory just doesn't exist
in most places, and I'm not expecting that to change as long as maintaining
it is a manual process.


>    - Where are the impacted systems located? If a system has more than
>    one interface, choose the worst.
>
Should be the same as the "Detailed Asset knowledge" example, no?


>    - Are the impacted systems multi-homed (laptops, tablets, et al.)? Do
>    they live on your network and then go home to an ISP that’s the wild west
>    for malware, for example?
>
Multi-homed means something different to me, but your point is a good one
- systems that regularly alternate between untrusted and trusted networks
should be managed and scored differently. Here begins a BeyondCorp/SDP
conversation :)


>    - What are the bottom-line consequences to your organization if this
>    issue is successfully exploited? (Think $$$$, potential impact to Brand,
>    stock prices…)
>
Integration with DC/BCP/BIA/GRC and risk assessment output is probably
necessary when we start getting into this stuff. Something you'd only see
more mature orgs attempting (hi, Alex Hutton!).


>    - What is the highest classification of information that could
>    potentially be exposed? PII, Core IP, keys to the kingdom,…
>
DLP alone is hard and probably has the worst false positive challenge in
the industry, but we've got to get there eventually. Agreed, though - this
is important. I see it as a subset of setting that asset
importance/criticality value, which could be dynamic rather than a static
"this is a server, so it gets a 7" score.


>    - Business impact if issue is exploited: <5% of business impacted,
>    5-10% of business impacted, 10-25% of business impacted,…,
>
Another bullet to go under the BIA/GRC/Asset value heading. I think this
is largely a different process altogether. It's a different set of people
that worry about breach impact to the business as a whole, versus the
analyst trying to figure out which vulnerability is most critical and
should get some attention today.


>    - Likelihood of exploit (put on your SWAMI hat here…😉)
>
We actually have a machine learning model for this. It's not too hard,
when analyzing past vulns that have caused serious damage, to put together
a recipe for what a future EternalBlue/MS08-067/Heartbleed will look like.
No SWAMI hats needed, just some equations and a lot of data :) It would be
funny if someone gave their data scientist the title of CHIEF SWAMI though.
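
For flavor, a toy version of that kind of model fits in a few lines. The
features, training data, and labels below are all fabricated for
illustration and bear no resemblance to any real model:

# Toy exploit-likelihood model. A real one would be trained on historical
# exploitation data (which vulns actually got used in the wild).
from sklearn.linear_model import LogisticRegression

# Made-up features per vuln:
# [network_vector, no_auth_required, public_poc, popular_target]
X = [
    [1, 1, 1, 1],  # EternalBlue-ish profile
    [1, 1, 1, 1],  # MS08-067-ish profile
    [0, 0, 0, 0],  # local-only, obscure component
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = exploited in the wild (invented labels)

model = LogisticRegression().fit(X, y)

candidate = [[1, 1, 1, 1]]  # a new vuln with the "worm food" profile
print("P(exploited) = %.2f" % model.predict_proba(candidate)[0][1])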


>    - If you ever see a network RCE that is exposed by default, that’s a
>    red flag. *We don’t see too many of these today, but there is still an
>    occasional one.*
>
You wouldn't think this would be so hard, but reading through the details
of the Equifax breach, it's easy to see how it happens. In their case, they
got the red flag; what failed was their ability to use technical tools to
tie that red flag to something running in their environment.


>
> There needs to be significant thought put in to determine your org’s
> overall risk and what bubbles up to the top in terms of patching priority,
> given the typically limited resources available to get mitigations deployed.
>
>
>
> As mentioned, the CVSS vulnerability score *is a single data point* in
> making that assessment. I personally think we need a new tool or mechanism
> that is an industry standard, but that is beyond the scope of CVSS. While
> some improvements are planned for CVSS V4, this is a separate problem that
> needs to be solved outside of CVSS imo.
>

To summarize, we've built that tool, and again, we don't just use the
score as a single data point - the score components (e.g.
AV:N/AC:L/Au:N/C:C/I:C/A:C/E:F/RL:OF/RC:C) are equally important.
Personally, I'd rather see this problem solved at the CVE/CVSS level - it
would be a lot easier and less expensive for everyone involved. Another
idea I'm trying to get off the ground that wouldn't require as much effort
on either end is crowdsourcing. Every time a new vulnerability comes out,
thousands of analysts and pentesters analyze it and attempt to exploit/use
it. If that knowledge could be collected anonymously and publicly shared,
about half of the challenges you mention might go away.
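
For reference, that vector is CVSS v2 base plus temporal metrics, and
turning it back into numbers is mechanical. Here's a minimal sketch using
the equations and metric weights published in the CVSS v2 guide:

# Parse a CVSS v2 base+temporal vector and recompute the scores with the
# equations from the CVSS v2 specification.
WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},
    "C":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "I":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "A":  {"N": 0.0, "P": 0.275, "C": 0.660},
    "E":  {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0},
    "RL": {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0},
    "RC": {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0},
}

def cvss2(vector):
    """Return (base, temporal) scores for a CVSS v2 vector string."""
    metrics = dict(part.split(":") for part in vector.split("/"))
    w = {k: WEIGHTS[k][v.upper()] for k, v in metrics.items()}
    impact = 10.41 * (1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"]))
    exploitability = 20 * w["AV"] * w["AC"] * w["Au"]
    f_impact = 0.0 if impact == 0 else 1.176
    base = round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)
    temporal = round(base * w.get("E", 1.0) * w.get("RL", 1.0)
                     * w.get("RC", 1.0), 1)
    return base, temporal

print(cvss2("AV:N/AC:L/Au:N/C:C/I:C/A:C/E:F/RL:OF/RC:C"))  # (10.0, 8.3)

It also makes the "compression" framing obvious: nine qualitative
judgments collapse onto a single number line.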


>
> CVSS isn’t perfect, but it’s pretty good at what it’s targeted to do. Use
> the right tool: if you need a hammer, don’t try to use a screwdriver. If
> you’re using CVSS as the end-all be-all for Risk, you’re using it wrong;
> it’s a single element to input into that overall calculation.
>

After going through all this, I'm still unclear on what CVSS is targeted to
do, or how you can declare that it's "pretty good" at it. What process are
you using to determine the accuracy and/or quality of CVSS scores or the
scoring system itself?

Stats like *95% of CVEs rated as high have never been used maliciously*
make it hard to agree with this statement. If 95% of vulnerabilities with
a high score never see malicious use, I still have to question the
efficacy of the scoring system.
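
Put another way: treating "CVSS high" as a binary predictor of real-world
exploitation, that stat (taken at face value) implies a precision of 5%.
A back-of-envelope sketch, with a made-up CVE count:

# "CVSS high" as a binary predictor of exploitation. The 1,000 is a
# made-up count; the 5% just restates the quoted figure.
high_rated = 1000
exploited = int(0.05 * high_rated)  # 95% never used maliciously
print("Precision of 'high' as an exploitation predictor: %d/%d = %.0f%%"
      % (exploited, high_rated, 100.0 * exploited / high_rated))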


>
> Regards,
>
> Bruce
>
>
>
> Opinions expressed are my own and may not reflect those of my employer.
>
> *From:* Dailydave <dailydave-bounces at lists.immunityinc.com> * On Behalf
> Of *Adrian Sanabria
> *Sent:* Thursday, January 10, 2019 8:02 AM
> *To:* Wim Remes <wremes at gmail.com>
> *Cc:* dailydave at lists.immunityinc.com
> *Subject:* Re: [Dailydave] CVSS is the worst compression algorithm ever
>
>
>
> Okay, we keep touching on this point, that CVSS isn't intended to score
> risk, just vulnerability severity. I'm having a hard time seeing what value
> there is in having a vulnerability score that doesn't reflect risk. What
> use does it have?
>
>
>
> Or is that exactly what we're saying? That since it doesn't reflect risk,
> it's essentially useless. If that's the conclusion, I'm on the same page.
>
>
>
> --Adrian
>
>
>
> On Thu, Jan 10, 2019, 9:56 AM Wim Remes <wremes at gmail.com> wrote:
>
> Hi,
>
>
>
> Bruce really hits the nail on the head here. CVSS != Risk. To broaden that
> discussion and not waste too many words, I’ll reference FAIR (Factor
> Analysis of Information Risk, https://www.fairinstitute.org/what-is-fair)
> to indicate where “Vulnerability” contributes to an eventual quantitative
> risk valuation.
>
>
>
> I also always considered CVSS scoring to be qualitative instead of
> quantitative and the numbers to be ordinal. That makes them fine for
> ranking vulnerabilities, but horrible to perform math on (Jet Fuel x Peanut
> Butter = Shiny — hi Alex Hutton!).
>
>
>
> That said, it all boils down to a point I’ve been rapping on about for a
> long, long time now. Organizations should not expect third-party
> penetration testers to make an accurate assessment of risk. The data
> provided by a third-party penetration tester should feed into your risk
> management framework, which is also fed with internally acquired business
> data, to produce (or adjust) a risk valuation. It would be helpful if we,
> as consultants, wouldn’t pretend that we (a) can come up with any form of
> credible risk score during such assessments and (b) are delivering scoring
> that can help with prioritization in a business context without additional
> effort on the client side. On the other hand, clients that have a risk
> management framework that can actually take vulnerability scores and use
> them to generate risk scores should be clear in what they expect from us.
> If you are asked, whether in an RFP or an SoW, to produce a risk score for
> your findings, at the very least you should be returning questions about
> asset valuation and threat community descriptions.
>
>
>
> Cheers,
>
> Wim
>
>
>
>
>
>
>
> On 8 Jan 2019, at 18:33, Monroe, Bruce <bruce.monroe at intel.com> wrote:
>
>
>
> Hi Dave,
>
>
>
> I participate on the CVSS SIG, run out of FIRST, that is working on
> improvements to CVSS, as do a number of people from CERT/CC, NIST, and
> MITRE, along with a good representation of industry. A number of us
> provided feedback on this paper. CVSS is for scoring the severity of a
> vulnerability. CVSS does not = Risk.
>
>
>
> My understanding is there are a number of government entities that believe
> CVSS does = Risk and are using it in a vacuum for that purpose. While the
> CVSS score is a single component, you also must look at how the vulnerable
> component is deployed, controls in place, value of the asset, patching
> windows, likelihood of exploit, etc. There is a lot that goes into
> determining risk.
>
>
>
> The fact that various USG entities are using CVSS wrong is an education
> issue imo. Yes, CVSS has its issues, with some of its elements being
> subjective, eye-of-the-beholder-type items, but that isn’t the reason for
> this paper…they’ve got USG people using it in a vacuum when it’s only a
> single element of determining your org’s risk due to a vulnerability. That
> isn’t a CVSS problem; that’s a vulnerability management 101 problem.
>
>
>
> Regards,
>
> Bruce
>
> Intel PSIRT
>
>
>
> Opinions expressed are my own and may not reflect those of my employer.
>
> *From:* Dailydave <dailydave-bounces at lists.immunityinc.com> *On
> Behalf Of *Dave Aitel
> *Sent:* Tuesday, January 08, 2019 8:14 AM
> *To:* dailydave at lists.immunityinc.com
> *Subject:* [Dailydave] CVSS is the worst compression algorithm ever
>
>
>
> I wanted to take a few minutes and do a quick highlight of a paper from
> CMU-CERT which I think most people have missed out on:
> https://resources.sei.cmu.edu/asset_files/WhitePaper/2018_019_001_538372.pdf
>
>
> It's almost as funny a read as their previous best work on how "clientless
> HTTPS VPNs are insanely dumb <https://www.kb.cert.org/vuls/id/261869/> what were
> you thinking omg?"
>
>
>
> They use a ton of big words in the paper to call CVSS out and give it a
> shellacking. Like most of you, we make extensive use of CVSS in our
> consulting practice and I've seen this stuff first-hand. CVSS is, of course,
> just a buggy compression algorithm for taking complex qualitative data and
> putting it on a number line. The paper has three angles here:
>
>    1. Qualitative mappings into quantitative numbers are a silly thing to
>    do, like people trying to do "social science" by using SurveyMonkey.
>    2. We're pretty sure that the compression algorithm is not, in fact,
>    putting higher risk items as bigger numbers, which is the whole point of
>    the thing.
>    3. Nobody is applying this in any sort of consistent way (which is
>    probably impossible) which is ALSO the whole point of the thing.
>
>
>
> It's fine to have a lossy compression algorithm that emphasizes certain
> aspects of the input signal over others, of course, but an additional
> CERT/CC critique is we have no reason to think CVSS does this in any useful
> way.
>
>
>
> There are definitely people in the CVSS process (who I will avoid calling
> out by name) who think ANY quantization is good. But read the paper and
> decide for yourself - because these are probably serious issues that are
> turning your entire risk org into a Garbage-In-Garbage-Out org...
>
>
>
> -dave