[Dailydave] software security, disclosure, and bug bounties

Dan Guido dguido at gmail.com
Sun Jan 11 10:57:50 EST 2015


> That said, I think the argument for having a “national bug bounty”
> is to make the choice for independent researchers competitive - do you
> sell to [name a country], or do you sell to US-CERT? What if they’re
> both paying the same price? What if by selling to US-CERT, you get a
> byline on some website and retain the freedom to talk to whoever you
> want about your bug? Then maybe if some random Geohot-alike stumbles
> over a tcpip.sys 0day, they sell it to CERT instead.

Great podcast from NPR Planet Money on exactly such a case:
http://www.npr.org/blogs/money/2015/01/09/376164768/episode-596-johnny-mnemonic-s-secret-door

-Dan

On Sun, Nov 23, 2014 at 5:05 PM, Dan Guido <dguido at gmail.com> wrote:
> In reply to an older post:
> https://lists.immunityinc.com/pipermail/dailydave/2014-October/000784.html
>
>> -- once coin-operated vuln finders won't share, the percentage of
>> all attacks that are zero-day attacks must rise, and it has.
>
> Before I begin, I have issues with the premise of this argument. Is
> there evidence that supports the claim that “zero day attacks have
> been on the rise”? By what metric? Are there now more campaigns that
> use 0day? Have more computers been hacked with 0day as compared to
> click_me.scr in the last 5 years? In my experience, our industry finds
> ~30 0day exploits in the wild per year (not counting web applications
> where nearly everything is custom). Even if this number is 2x what it
> was in 2009, it’s a drop in the bucket compared to the sum total of
> all attacks. And, of course, it may be that we are modestly better at
> finding 0days now, since most people are finally aware that they
> should look.
>
>> In a May article in The Atlantic,[BS] Bruce Schneier asked a cogent
>> first-principles question: Are vulnerabilities in software dense
>> or sparse?  If they are sparse, then every one you find and fix
>> meaningfully lowers the number of avenues of attack that are extant.
>> If they are dense, then finding and fixing one more is essentially
>> irrelevant to security and a waste of the resources spent finding
>> it.  Six-take-away-one is a 15% improvement.  Six-thousand-take-
>> away-one has no detectable value.
>
> Bugs are not inherently sparse or dense. The architecture of an
> application determines how many bugs are present and, as a developer,
> you can control this property.
>
> I think this may be obvious, but there is software on both sides of
> sparse vs dense. On one hand, you have software sparsely populated by
> bugs like vsftpd, openssh, and qmail. It's straightforward to see why:
> the churn of new code is low and they are based on well-understood,
> mostly static specifications. On the other hand, much of the code we
> use is not developed that way and is densely populated with bugs as a
> result. JavaScript interpreters, document readers, Rails, even some
> kernels are so densely populated with bugs that additional individual
> vulnerability disclosures have low incremental value. Software
> companies strongly compete on the speed of their JavaScript
> interpreters and sometimes rewrite them multiple times per year. The
> number of bugs introduced likely matches the number of bugs fixed and
> the overall density remains constant and high.
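>
> As a back-of-the-envelope sketch (every number below is invented for
> illustration, not measured), you can picture this as an equilibrium:
> churned code introduces bugs at some rate, fixes remove a fraction of
> what is present, and density settles where the two balance.
>
>     # Toy model of steady-state bug density under code churn.
>     # All parameters are illustrative assumptions, not measurements.
>     def simulate(releases=20, kloc=1000, churn=0.25,
>                  bugs_per_kloc=0.5, fix_fraction=0.3):
>         bugs = kloc * bugs_per_kloc               # initial latent bugs
>         for _ in range(releases):
>             bugs -= bugs * fix_fraction           # some bugs found and fixed
>             bugs += kloc * churn * bugs_per_kloc  # churned code adds new ones
>         return bugs
>
>     # Density settles where bugs introduced per release match bugs fixed:
>     print(round(simulate(churn=0.50)))  # heavy-churn JS engine: ~833
>     print(round(simulate(churn=0.02)))  # low-churn, qmail-like: ~34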
>
> You don't have to take my word for it, though. We have empirical
> systems that help us measure the difficulty of finding vulnerabilities
> in software: bug bounties! Rather than think of bug bounties as a
> means to an end, I think of them as a measurement of a current and
> trending state. If you can purchase vulnerabilities in a given piece
> of software for $500 "off the street" and patch more than ten of them
> each monthly patch cycle, then your software is probably dense with
> bugs. If you
> find yourself in this state, then any additional vulnerability
> purchases are mostly meaningless. You should invest in exploit
> mitigations to stem the bleeding while you re-architect the software
> properly. If you are successful, then you should see the minimum price
> per vulnerability of a given type rise and the quantity acquired per
> time period decrease.
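>
> Here is a minimal sketch of that measurement against a hypothetical
> ledger of accepted submissions (the payouts, cycles, and thresholds
> are all made up for illustration):
>
>     from collections import defaultdict
>
>     # Hypothetical ledger: (patch cycle, payout in USD) per accepted bug.
>     ledger = [(1, 500), (1, 500), (1, 750), (1, 600),
>               (2, 800), (2, 900), (3, 2500)]
>
>     by_cycle = defaultdict(list)
>     for cycle, usd in ledger:
>         by_cycle[cycle].append(usd)
>
>     for cycle in sorted(by_cycle):
>         prices = by_cycle[cycle]
>         print(f"cycle {cycle}: {len(prices)} bugs, floor ${min(prices)}")
>
>     # A rising price floor and falling volume per cycle suggest density
>     # is dropping; dozens of cheap bugs per cycle suggest it stays high.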
>
>> sharing.  If you are finding vulns for fun and fame, then the minute
>> you find a good one you'll let everybody know just to prevent someone
>> else finding it and beating you to the punch.  If you are doing it
>> for profit, then you don't share.  That's where the side effect is
>
> I think there is another dimension to this trend: for "sparse"
> software, the challenge of identifying new vulnerabilities and
> writing exploits for them is beyond the capability of most security
> researchers speaking at industry conferences, so this happens only
> rarely. Many of the speakers work at consultancies that offer 2-4
> weeks of research time per year, and the time-to-bug-discovery for
> sparse targets is more likely measured in years, without even
> considering time-to-exploit.
>
> On the other hand, what is the media value of a conference talk on yet
> another bug in Firefox? It's possible for Mozilla to purchase such
> information nearly at will for under $1,000; ditto for many other popular
> targets. If, as an organization, you are investing in tools and
> research to find bugs in these densely populated targets at increasing
> speed, and you widely disseminate the tools and knowledge to do so,
> then you are, in effect, arming your adversary, since you will likely
> never reach the bottom of the bin. At the same time, hackers from the
> 90s
> have grown up and there are now professional groups ready to harvest
> this reusable information. At some level, I think many researchers are
> aware their work will get used this way and disclosure, consciously or
> unconsciously, has shifted as a result.
>
> Either way, this seems like nothing to be upset about. For sparse
> targets, the work has been done to secure them to a reasonable degree.
> For dense targets, additional disclosures will do little and I would
> focus on rearchitecting, exploit mitigations, or deployments that
> acknowledge their inherently brittle state (for example "apoptosis,"
> as Dave refers to it). I think this helps explain the movement towards
> "stunt hacking", since consultants are incentivized to focus on
> targets that are dense, that maximize media value, and that minimize
> disclosure risk. On the plus side, the utility of industry conference
> talks to professional attackers appears to have dramatically declined.
>
> I think Microsoft understands this concept based on the design of
> their bug bounties. For example, their IE bounty only ran for 30 days,
> which seems like an acknowledgment that it would be impossible to find
> all the bugs in an application so densely populated by them. Rather,
> as a developer I want to know whether some components are worse than
> others, how many people can find and exploit my software, and what
> skills and tools these people have. I think it’s only a bonus that
> Microsoft patched a few bugs in the process of collecting this
> intelligence about IE.
>
>> If a couple of Texas brothers could corner the world silver market,[HB]
>> there is no doubt that the U.S. Government could openly corner the
>> world vulnerability market, that is we buy them all and we make
>> them all public.  Simply announce "Show us a competing bid, and
>> we'll give you 10x."  Sure, there are some who will say "I hate
>> Americans; I sell only to Ukrainians,"
>
>> This strategy's usefulness comes from two side effects: (1) that
>> by overpaying we enlarge the talent pool of vulnerability finders
>> and (2) that by making public every single vuln the USG buys we
>> devalue them.  Put differently, by overpaying we increase the rate
>> of vuln finding, while by showing everyone what it is that we bought
>> we zero out whatever stockpile of cyber weapons our adversaries
>> have.  We don't need intelligence on what weapons our adversaries
>
> To return to your original question, "should the US create a 10x bug
> bounty?" Since there is software where bugs are dense, I think the
> answer is clearly no. Such a system might even implicitly reward the
> developers of the most horribly architected software. However, I don't
> think it makes sense for the specific targets that are sparse either.
> Is it worth the man-years of effort to find those remaining few bugs
> or would that investment be better spent continuing to refine their
> architecture and pursuing formal verification that proves the absence
> of bugs?
>
> That said, I think the argument for having a “national bug bounty”
> is to make the choice for independent researchers competitive - do you
> sell to [name a country], or do you sell to US-CERT? What if they’re
> both paying the same price? What if by selling to US-CERT, you get a
> byline on some website and retain the freedom to talk to whoever you
> want about your bug? Then maybe if some random Geohot-alike stumbles
> over a tcpip.sys 0day, they sell it to CERT instead.
>
> I think that one advantage of bug bounties is that they are inherently
> passive. You have to pre-commit the money and vet bugs that come in,
> but you’re not expending effort on a team of people to look for stuff.
> It’s passive and opportunistic.
>
> So what if the US-CERT bug bounty worked like this: we identify
> software that contributes to the “national information infrastructure
> base”: browsers, routers, phones, and Windows. We award
> big bounties for stuff in that category. The rest of the world
> continues finding bugs in whatever it deems necessary. Maybe it takes
> a few years for someone to shake out a new IOS bug, but that’s okay,
> the bounty isn’t paying out over the time of the search.
>
> Obviously, though, it’s better if we build it right the first time,
> so what I’d offer is: why don’t we do both? Having a bug bounty in
> place of real security is foolhardy. What if the government
> identified software that was in the “critical path for civilization”
> and gave the software vendors that created it a choice; provide
> either:
>
> a. A formal proof of correctness for this software, along with
> rigorous testing methodologies and an open design / open source, so
> that others, or at minimum the government, could verify the system’s
> correctness; or
>
> b. A ton of money that goes into a US-CERT slush fund to pay out
> bug bounties
>
>> Machine-powered vuln creation might change my analysis
>> though I can't yet say in what direction.
>
> I think that mechanized vulnerability hunting a la DARPA’s Cyber Grand
> Challenge fundamentally changes much of this balancing act. In theory,
> it offers those who own software densely populated with bugs the
> potential to find nearly all of them at once.

