[Dailydave] Machine Learning and Dimensions and stuff
Oleg Kolesnikov
oleg at prelert.com
Wed Nov 26 18:32:25 EST 2014
> An important detail that many people in "machine learning for
> security" neglect is that the vast majority of ML algorithms were not
> designed for (and will not function well in) an adversarial model.
[TL;DR alert :]
There is definitely a need to design ML algorithms (MLAs) for security with
the “adaptable adversary” assumption in mind, to better align with the
challenges of the attack-detection domain, particularly its game-theoretic
aspects.
[obvious mode=on] In my experience, as part of developing a good long-term ML
defense, it can be critical to understand how attackers act, the TTPs they
use, and their motivation, as well as the semantic/behavioral attack
invariants and constraints involved (elements/features that are harder or
impossible for attackers to modify while still blending in with normal
activity). [obvious mode=off]
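To ground the "invariants" point, here is a minimal defender-side sketch
(the feature names and weights are made up for illustration, not from any
real product) of an anomaly scorer that weights features by how costly they
are for an attacker to change without breaking the attack:

    # weight ~ attacker's cost to modify the feature and still succeed
    FEATURE_WEIGHTS = {
        "beacon_regularity": 3.0,  # invariant: C2 needs some callback schedule
        "dst_never_seen":    2.0,  # hard to avoid: the C2 endpoint is new to us
        "dst_port":          0.5,  # trivial to blend: just use 443
        "bytes_out":         0.5,  # easy to pad/shape
    }

    def anomaly_score(feature_zscores):
        # feature_zscores: {feature: z-score vs. the local baseline}
        return sum(FEATURE_WEIGHTS[f] * abs(z)
                   for f, z in feature_zscores.items())

    print(anomaly_score({"beacon_regularity": 2.5, "dst_never_seen": 1.8,
                         "dst_port": 0.1, "bytes_out": 0.3}))

An attacker can cheaply drive the low-weight features toward baseline, but
moving the high-weight ones means giving up the behavior that makes the
attack work.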
[axed]
> attackers haven't felt the need to adapt to anything but AV signatures
> and DNS blacklisting yet
Hmm. That may hold for most basic attacks, but for more advanced
attacks/post-ex activity, fairly basic ad hoc blending based on local
network profiling/baselining to evade ML (Get-NetTCPConnection -State *,
Msf::Payload->badchars/compatible_encoders, Veil-Evasion/artifact-specific
evasion, etc.; see the background work below) may no longer be sufficient,
so more complex techniques may need to be used to evade anomaly IDS/BDS:
meta-behavior baselining based on MLA insights, probing of MLA constraints,
sampling-resolution exploits, exploits of user-behavior / model-drift
changes, and so on.
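To make the "ad hoc blending" idea concrete, here is a minimal
attacker-side sketch (all numbers are hypothetical, not from any real
implant): profile the host's normal outbound timing, then draw beacon
delays from that empirical distribution so a simple timing-anomaly model
sees "typical" traffic:

    import random

    def profile_baseline(observed_intervals):
        # empirical distribution of benign outbound inter-request
        # times on this host, in seconds
        return sorted(observed_intervals)

    def next_beacon_delay(baseline):
        # sample from the observed distribution (with a little jitter)
        # instead of using a fixed timer the defender can model
        return random.choice(baseline) * random.uniform(0.9, 1.1)

    baseline = profile_baseline([0.4, 2.1, 33.0, 61.5, 300.2, 47.0, 5.8])
    print([round(next_beacon_delay(baseline), 1) for _ in range(5)])

The same trick generalizes to packet sizes, destinations, and protocols; in
spirit it is the polymorphic-blending idea from the papers listed below,
applied to C2 timing rather than payload bytes.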
It is also important to take into account the target level of “attack
stealth” and the cost-benefit analysis of the attack activity. For example,
does the AML case involve an implant designed for a high-stealth deployment,
with an average beaconing interval of weeks/months and an expected MTTD of
months/years; MITB with basic asynchronous ICMP/DNS C2; or a quick drive-by
reflective-DLL in-memory stager that involves little LM/persistence/beaconing
behavior and happens within seconds/minutes? The first two examples may
involve more advanced adversarial blending than the last, given the
timeframe/amount of learning data available (and such attackers are
potentially more interested in adapting to ML to blend in).
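A back-of-the-envelope sketch of the timeframe point (the window size and
intervals are assumptions for illustration): how many events does a
detector's training window even contain at each stealth level?

    WINDOW_DAYS = 30  # assumed model (re)training window

    cases = {
        "high-stealth implant (beacon ~21d)": 21 * 86400,
        "basic async ICMP/DNS C2 (beacon ~5m)": 5 * 60,
        "drive-by in-memory stager (one-shot)": None,
    }
    for name, interval_s in cases.items():
        if interval_s is None:
            print("%s: ~1 event total, nothing periodic to model" % name)
        else:
            events = WINDOW_DAYS * 86400.0 / interval_s
            print("%s: ~%.0f events in a %d-day window"
                  % (name, events, WINDOW_DAYS))

The one or two events the slow implant leaves in the window are both hard
for a per-host model to flag and, from the attacker's side, worth investing
in blending; the one-shot stager is neither.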
Some relevant background work in this area that might be worth reading:
- MIT Lincoln Laboratory editorial (relatively good summary of background
work on AML) -
https://www.ll.mit.edu/mission/cybersec/publications/publication-files/full_papers/2010_10_25_Lippmann_MLJ_FP.pdf
- Mimicry attacks (classic paper from 2002) -
https://www.cs.berkeley.edu/~daw/papers/mimicry.ps
- Polymorphic blending - https://smartech.gatech.edu/handle/1853/6485,
https://www.usenix.org/legacy/events/sec06/tech/full_papers/fogla/fogla.pdf
- Security evaluation of pattern classifiers under attack -
http://www.mediafire.com/download/86iw2nadavzggwj/JPJ1425+-+Security+Evaluation+of+Pattern.pdf
- Evasion/counter-evasion - http://www.cs.utsa.edu/~shxu/socs/cns.pdf
Cheers,
Oleg