[Dailydave] DARPA CGC Recap

Dave Aitel dave.aitel at gmail.com
Thu Apr 20 15:51:53 EDT 2017


Ok, so the questions I have are still unanswered, I think, possibly because
answering them is a lot of work. But I think they're important.

1. Was there any REAL difference between the competitors? Everyone is all
"oooh, ahh" about Mayhem, but are there bugs or bug classes it can find that
the open-source Shellphish or Trail of Bits work cannot? I.e., is the final
score essentially noise with respect to the thing we actually care about?
2. Does adding the SMT solver to the fuzzer make it 10% better or ... 1%?
Would we be better off just special-casing certain things into the fuzzer?
(See the sketch after this list.)
3. What bugs could nobody find? Why?
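
For context on question 2, here is a minimal toy sketch (mine, not from any
competitor's corpus; the function and magic value are hypothetical) of the
textbook case where bolting an SMT solver onto a fuzzer pays off: a branch
guarded by an equality check that random mutation almost never satisfies,
but that a solver inverts on the first query.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical CGC-style input check. A blind fuzzer has to guess a
 * 32-bit magic value (roughly a 1-in-2^32 shot per input), while a
 * hybrid engine that hands the branch condition to an SMT solver
 * gets a satisfying input immediately. */
void parse(const uint8_t *buf, size_t len)
{
    uint32_t magic;
    char tmp[4];

    if (len < 8)
        return;
    memcpy(&magic, buf, 4);
    if (magic == 0x4347C0DE) {          /* hypothetical magic value */
        memcpy(tmp, buf + 4, len - 4);  /* overflow, gated by the magic */
    }
}

How often the CGC challenge binaries actually looked like this, versus cases
plain coverage-guided mutation reaches anyway, is exactly what a percentage
answer to question 2 would tell us.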

-dave


On Tue, Apr 18, 2017 at 9:35 AM Chris Eagle <cse.lists at gmail.com> wrote:

> If you want to be able to do all of the performance measurements, then yes,
> that code is missing. If you want to study the successful PoVs, that code
> is not required. Most of them can be replayed on the publicly available
> VMs. However, some of them depend on the specific CPUID values returned by
> the CFE hardware, which you might need to emulate somehow. Even if all the
> code used to run the final event were released, the CPUID issue would
> remain unless you are able to return the same CPUID values that the
> competitors saw during CFE.
>
> On 4/11/2017 10:38 AM, Ryan Stortz wrote:
> > Notably missing are:
> > * The kernel they ran the final event on
> > * The code they used to measure scores
> >
> > This prevents a lot of analysis.
> >
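
A footnote on the CPUID caveat above: here is a minimal sketch (assuming
GCC/Clang on x86 and the <cpuid.h> helper; this is not code from CFE itself)
of the kind of dependency Chris describes. Anything that mixes raw CPUID
output into its logic will only replay on hardware, or an emulation layer
(e.g. a QEMU/KVM CPUID filter), that reproduces the exact values the CFE
machines returned.

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

/* Sketch of why CPUID-dependent PoVs do not replay elsewhere: the
 * register values below differ across CPU models, so logic derived
 * from them diverges on any other machine. */
int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))  /* leaf 1: family/model/stepping */
        return 1;
    printf("CPUID leaf 1: eax=%08x ebx=%08x\n", eax, ebx);
    return 0;
}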

