[Dailydave] The old speak: Wassenaar, Google, and why Spender is right

Bas Alberts bas at collarchoke.org
Sat Aug 1 19:52:48 EDT 2015


This will be a long and ranty one as well as the first DD post I've made
in a non-Immunity capacity (I think).

So anyone who knows me on any personal level knows that I'm a
non-disclosure kind of guy. Now I could get into the why and how, but
what it really boils down to is that I subscribe to a fairly peculiar
belief system in which freedom and security are, generally speaking,
mutually exclusive.

I think that in an effort to "secure" the internet, most so-called
privacy advocates and full disclosure zealots are actually contributing
to a power structure that promotes totalitarian levels of control.

A secure internet is, by definition, a controlled internet. If you're
talking about software security, anyways. 

I've made that point in various forms before on this mailing list, but
it has become ever so relevant in recent times due to the proposed US
Wassenaar implementation.

I thought it was interesting, if not telling, that the USG aligned
themselves with what is essentially a full disclosure policy. The
proverbial get out of fail free card for the 1st round of proposed
Wassenaar export control legislation was essentially "as long as you
tell all you are okay".

As long as you tell all you are in the green. How on earth does that
sentiment align with any privacy advocacy? It is absurd. Yet we see many
a self-confessed full disclosure zealot and privacy advocate froth with
an almost sadistic glee at the idea of government-enforced full
disclosure. Finally all those scumbag xdevs are forced to show their
cards. Finally it will all be out in the open.

Because that is what privacy is about right? Forcing things into the
open through Government control? When I see someone like Chris Evans
essentially cheerleading Full Disclosure as law on his tweeter it
fundamentally rubs me, as an American (HA!), the wrong way.

Then you have people like Chris Soghoian, whose entire pro-Wassenaar
argument was based on non-US companies. Lest we forget that HackingTeam
was actually fully Wassenaar compliant under even the strictest
interpretations. Which demonstrates exactly why and how it is a moot
endeavor.

I also think it's interesting how the HackingTeam thing was performed in
the blackhat tradition of dumping mailspools. What is WikiLeaks if not a
crowdsourced big-data analytic version of ~el8 at this point? 

I think you took a wrong turn somewhere team privacy. But that's just
me, I suppose.

Anyways, both sides of the disclosure fence suffer from one fatal
flaw. A flaw that Brad Spengler, AKA Spender, has been incessantly
pointing out for years: bugs don't matter. Bugs are
irrelevant. Yet our industry is fatally focused on what is essentially
vulnerability masturbation.

I keep up with the Google Project Zero blog because I think it's
hilarious to see them fawn over bugs like they're actually hacking with
them.

"This is the perfect bug", "This exploit is beautiful", and many other
such paraphrases are rife in a lot of the Project Zero publications.

I suppose that's what happens when you spend a couple of million dollars
on tricking out a team of vulndev mercenaries, most of whom were
playing on the other side of the fence for many years before stock
options and bonus plans took precedence over actually hacking (or
facilitating such).

I'm sure there's some true believers at Google. Ben Hawkes, Chris Evans,
Tavis Ormandy. They are ride or die full disclosure zealots (AFAIK) and
I may not agree with them in principle, but I do appreciate and even
respect the strength of their conviction.

Having said that, if you gave me a billion dollars today, what
percentage of the Google security team could I employ tomorrow?

It's an interesting question I think. From an adversarial perspective
that is. Say e.g. the NSA or whoever actually cared about someone fixing
"hundreds!" of bugs in desktop software and the real Internet wasn't a
facsimile of an early-'90s LAN party. Say that was the case.

If "they" got _real_ budget to buy out all the "top researchers" in the
industry, do you honestly think it wouldn't cripple Google's effort
overnight? 

And that's essentially the crux of the problem. You can't fight
religious wars with mercenaries. You need martyrs. When your team is
for sale, it's very hard to align yourself with any sort of ethical,
moral or even altruistic high ground.

And hey, again, not judging. It's a job for most. Myself included. I'm
35 and I couldn't give less of a fuck about whether or not my homies from
whatever Scandinavian country are keeping down their roots this
week. Which, btw, I'm sure they are.

Anyhoo, back to the actual ranting. Ben Hawkes stated that "attack
research in the public domain" is the way forward for security.

The problem with that is that the majority of his team got skilled in
the non-public domain. Attack research doesn't get good in the public
domain, it gets good because it is used to, you know, attack. It has to
jump through hoops and quirks and work over sat hops and against
thousands of targets and do all sorts of weird things that would never
come up in a lab environment.

This whole modern game of public exploit vs mitigation is a circle jerk
based on a seed that came from the dark, and people forget that. A lot
of people currently making their bones killing bugs for Google (or
whoever) got good because they spent time on teams doing
actual attack research for actual attacks. Hell, some of them are near
and dear friends of mine. I suppose it's the elephant in the room that
no one wants to talk about.

You got your ex-vupen, ex-teso, ex-adm, ex ... well you get the idea. 

Anyhoo, back to why we're all wrong and Spender is right.

At the end of the day my team, Google's team, and lots of people's teams
are rooted in a culture of vulnerability masturbation. We fawn over
"beautiful" bugs and OMGWOW primitives and can wax endlessly about how
we understand such and such allocator to the point where you could play
a game of goddamn minecraft with nothing but a heap visualizer and your
allocation/deallocation primitives. 30-page dissertations on
over-indexing an array and hell we'll even hold court about it at
whatevercon for 60 minutes ... autographs at the bar. 

And it's all bullshit. If you care about security that is.

"But to stop exploitation you have to understand it!". Sure. But here's
an inconvenient truth. You are not going to stop exploitation. Ever.

You might stop my exploitation. You might stop my entire generation's
exploitation. But somewhere the dark is seeding away methodologies you
don't know about, and will never know about. Because somewhere hackers
are hacking, and they've got shit to do. None of which includes telling
you about it at blackhat or anywhere else.

That is empirically the truth.

So if you truly, deeply, honestly care about security. Step away from
exploit development. All you're doing is ducking punches that you knew
were coming. It is moot. It is not going to stop anyone from getting
into anything, it's just closing off a singular route. One of many that
ultimately falls through to the proverbial 5 dollar wrench; pending
motivation, time, and available resources.

If someone _REALLY_ wants your shit, they can take a bat to your head
and take it. End of exploit.

I say this as someone who's made a career out of exploit
development. It's been my life for 20 years. But I make no mistake about
it being a labor of love. A function of an OCD-like addiction to solving
puzzles and even though I spend most of my days filling out spreadsheets
these days, I still love me a good 30-page dissertation on
world-shattering font bugs ... even though, and trust me when I tell you
most of team Google damn well knows this, many people have sat on the
exact same dissertation for many years.

But if you care about systemic security. The kind where you don't give
two flying fucks if Bob's desktop gets clientsided or Jane's Android
doesn't know how big an mpeg4 frame oughta be, then you will stop circle
jerking over individual vulnerabilities and listen to what Spender has
been saying for years.

Which is: you don't chase and fix vulnerabilities, you design a system
around fundamentally stopping routes of impact. For Spender it is
eradicating entire bug classes in his grsecurity project. For network
engineers it is understanding each and every exfiltration path on your
network and segmenting accordingly.

Containment is the name of the game. Not prevention. The compromise is
inevitable and the routes are legion. It is going to happen.

Now as far as a way forward for YOUR security ... well I play on team
offense. I'm allowed to fawn over vulnerabilities and I think xdev and art
often intersect. Pretty polly payload, bro.

But if you're supposedly my adversary (i.e. on team defense) and yet you're
sitting right alongside me going "oooh" and "aaah" at whichever software
vulnerability then you're probably in the wrong place, or ... I suppose
... maybe just in the wrong time.

Love,
Bas

-- 
PGP Pub Key: https://www.collarchoke.org/0xBED727DF.asc
Fingerprint: 5C1A 3641 8542 7DFA F871  441A 03B9 A274 BED7 27DF


