John Bambenek
[Bambenek Consulting, Ltd.]
John Bambenek is President of Bambenek Consulting, Ltd., a PhD student at the University of Illinois at Urbana-Champaign, and a handler with the SANS Internet Storm Center.

He has over 20 years' experience in information security and leads several international investigative efforts tracking cybercriminals, some of which have led to high-profile arrests and legal action. He currently tracks neo-Nazi fundraising via cryptocurrency, publishes that tracking online to Twitter, and operates other monitoring solutions for cryptocurrency activity.

He specializes in disruptive activities designed to greatly diminish the effectiveness of online criminal operations. He has produced some of the largest bodies of open-source intelligence, used by thousands of entities across the world.

Tutorial: Practical Cybersecurity Machine Learning

Machine learning, particularly in cybersecurity, promises to help relieve overworked security teams. The problem is that most models are developed by those who don't understand the threat landscape, which leads to models that are naïve and not tailored to the adversary. Worse yet, our models are essentially public and suffer from the same fundamental security problem every technology has: how to safely process untrusted inputs.

The adversary can and does lie to try to break our systems.

This tutorial will focus on how to create cybersecurity machine learning models that are resilient to adversarial influence and can be used relatively safely in production environments. Specifically, it will cover the importance of, and techniques for, creating robust whitelists for supervised learning; the importance of domain knowledge in feature selection; and the importance of third-party enrichment of indicators to better inform model training.

Students will create several models based on test data, which they can put to use in their workplaces when they return from the event.
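As a flavor of the domain-knowledge feature selection the tutorial describes, the sketch below computes a few hand-crafted features commonly used when classifying algorithmically generated (DGA-style) domains. The specific feature set and the example domains are illustrative assumptions, not material taken from the tutorial itself.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string; machine-generated labels tend to score higher."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def domain_features(domain: str) -> dict:
    """Hypothetical domain-knowledge features for a supervised classifier."""
    label = domain.split(".")[0]  # look at the second-level label only
    vowels = sum(ch in "aeiou" for ch in label)
    digits = sum(ch.isdigit() for ch in label)
    return {
        "length": len(label),
        "entropy": shannon_entropy(label),
        "vowel_ratio": vowels / max(len(label), 1),
        "digit_ratio": digits / max(len(label), 1),
    }

# A human-registered domain vs. a random-looking one: the latter shows
# higher entropy and a lower vowel ratio, which a model can learn from.
print(domain_features("google.com"))
print(domain_features("xjw3k9qzt7.com"))
```

Features like these encode analyst intuition directly, which is exactly what generic models built without threat-landscape knowledge tend to miss.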

Adversarial Machine Learning and its Impacts on Cybersecurity

Technical Level (3 being the highest score): 2

Machine learning, particularly in cybersecurity, promises to help relieve overworked security teams. The problem is that most models are developed by those who don't understand the threat landscape, which leads to models that are naïve and not tailored to the adversary.

Worse yet, our models are essentially public and suffer from the same fundamental security problem every technology has: how to safely process untrusted inputs. The adversary can and does lie to try to break our systems.

A few examples of cybersecurity models that can accurately predict maliciousness will be demonstrated to show the potential benefits of such technologies and how they can be safely deployed. In particular, these models reveal a few key adversary behaviors that have yielded insights into attributes usable for automatically blocking attackers.

This talk will also cover the concept of adversarial machine learning and how it applies to cybersecurity models. Adversarial machine learning is typically how malicious actors fool image classification systems, but the discipline applies to cybersecurity machine learning as well. Some recent attacks used by adversaries will be demonstrated, along with how these attacks can be defended against and mitigated.
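To make the evasion idea concrete, the toy sketch below shows one classic adversarial technique against text-style classifiers: diluting malicious tokens with benign padding until a naive averaging model falls under its threshold. The token scores, tokens, and 0.5 threshold are invented for illustration and do not come from the talk.

```python
# Hypothetical per-token maliciousness scores a naive model might have learned.
TOKEN_SCORES = {"powershell": 0.9, "encodedcommand": 0.95,
                "invoice": 0.1, "meeting": 0.05, "agenda": 0.05}

def malice_score(tokens):
    """Mean per-token score; unknown tokens are treated as neutral (0.5)."""
    return sum(TOKEN_SCORES.get(t, 0.5) for t in tokens) / len(tokens)

attack = ["powershell", "encodedcommand"]
padded = attack + ["invoice", "meeting", "agenda"] * 3  # adversarial padding

print(malice_score(attack))   # well above a 0.5 alert threshold
print(malice_score(padded))   # diluted below the threshold, evading detection
```

The mitigation follows the same logic in reverse: score on the worst-case tokens (or use features an attacker cannot cheaply pad away) rather than averaging over everything the adversary controls.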

Secure your place now!