My technical roots are in enterprise communication and storage: design, build and sustain. To this day the pallor of my skin is largely due to years spent under the fluros of data centres belonging variously to federal and state government, mining and defence industries.
It was in the latter that I became a warden of national security systems. I've met people who think 'NATSEC' sounds sexily clandestine. I'm yet to socially leverage that misconception - but to be fair, the people who harbour it are exactly the type you'd not want to be stuck in an elevator with. In my role, NATSEC manifested as an ongoing series of excruciatingly detailed breach investigations.
There, amongst the SNMP traps and Zulu timestamps, an obsession with security monitoring was sparked; I was, and remain, convinced that the writing is on the wall long before we see the symptoms. I began to read, get certified and immerse myself in the business of security. In 2010 I was one of three who conceived and launched the security capability of our organisation.
In 2011 I co-founded a security interest group called Plus61 that pulls together a community of security professionals to network and share our experiences and varied talents in a place that coincidentally happens to be licensed.
Today my career wave function has collapsed: I now work for Stratsec surrounded by people with the same fervour for security. I'm personally embedded in the capability development arm, where we build systems designed to inoculate, bolster resilience and foster security across the common elements of every organisation: people, process and technology.
IDS: We're doing it wrong
Multi-national corporations with considerable information security budgets employ a bristling array of cutting-edge appliances - but they lose their trophy data all the same. Defence, despite military-grade detection systems and staggering budgets, is toppling as well.
Security detection mechanisms have never been as capable as they are today, so why do they continue to fail us?
The vast majority of detection appliances are event-driven. They examine occurrences (events such as log entries) and consider them in isolation: is this packet's destination on my blacklist? Does this string match any virus definitions? These are extremely valuable questions to ask, but they don't go far enough. They omit to ask: what is the context? Have we seen this behaviour before? What happened immediately prior? What is the relationship to other occurrences?
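To make the event-in-isolation point concrete, here is a minimal sketch of that model - the blacklist entries and signature bytes are invented for illustration, and no real appliance is this simple. The key property is that each event is judged statelessly, with no memory of anything that came before.

```python
# Hypothetical event-driven inspection: every rule is a question asked of
# one event in isolation. No history, no context, no relationships.

BLACKLIST = {"203.0.113.7", "198.51.100.23"}   # example addresses (RFC 5737 range)
SIGNATURES = [b"malicious-pattern"]            # stand-in for virus definitions

def inspect_event(dst_ip: str, payload: bytes) -> bool:
    """Return True if this single event trips any rule."""
    if dst_ip in BLACKLIST:            # "is this destination on my blacklist?"
        return True
    # "does this string match any virus definitions?"
    return any(sig in payload for sig in SIGNATURES)

# Each call is independent of every other call - exactly the limitation
# the article describes.
print(inspect_event("203.0.113.7", b"GET / HTTP/1.1"))      # True: blacklisted
print(inspect_event("192.0.2.10", b"ordinary traffic"))     # False: nothing kept for next time
```

Note that nothing in `inspect_event` could ever answer "have we seen this behaviour before?" - there is simply nowhere to record it.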
Additionally, they are all prone to a poor signal-to-noise ratio. It is entirely appropriate to have these devices in your perimeter, provided they are appropriately tuned. The ongoing tuning of these devices is labour-intensive, and rarely are they adequately maintained, as your security staff are busy fighting other battles. You might be getting value from the device, but not the coverage you need to stay out of the papers.
Intrinsically linked is the misconception that we can detect all incursions at line speed. While that holds in some cases, two flaws are apparent.
The first is that some nasties are only detectable through aggregation and/or consulting of history.
The second is that the really tricky stuff is homebrewed, customised or polymorphic, and you can't write a rule to catch it. The first time that bit of nasty code ever sees the wild is when it is slipping past your blithely unaware IDS.
The modern response to the zero-day issue is sandboxing, and it's on the money, but it comes with some fine print. A computer might be able to recognise that it is observing something suspicious even without ever having seen it before. What it cannot do is make a determination on its nature. Once again we turn to context: what is the malware trying to do? Where was it heading? Where did it come from? Is this an isolated incident?
There needs to be a change in the way we think about detection.
Firstly, truly sophisticated detection is about entities and their relationships over time, not just events in isolation. The temporal aspect gives context and allows us to ask some interesting questions, like 'what is the first internet activity of the day for this workstation?' and 'have these two email addresses communicated previously?'
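The two questions above can be sketched as stateful, entity-centric analysis. This is an assumed data model for illustration only (the class, its fields and the sample hostnames are mine, not any product's): the point is that answering temporal questions requires keeping state across events, which a stateless rule engine cannot do.

```python
# Hedged sketch of entity-centric, temporal context: remember what each
# entity has done so later events can be interpreted against that history.

from datetime import datetime

class TemporalContext:
    def __init__(self):
        self.first_activity = {}   # (workstation, date) -> first timestamp seen
        self.email_pairs = set()   # unordered address pairs that have communicated

    def record_web_event(self, workstation: str, ts: datetime) -> bool:
        """Return True if this is the workstation's first internet activity today."""
        key = (workstation, ts.date())
        if key not in self.first_activity:
            self.first_activity[key] = ts
            return True
        return False

    def record_email(self, sender: str, recipient: str) -> bool:
        """Return True if these two addresses have communicated previously."""
        pair = frozenset((sender, recipient))
        seen = pair in self.email_pairs
        self.email_pairs.add(pair)
        return seen

ctx = TemporalContext()
# A workstation whose first internet activity is at 3 a.m. is interesting
# precisely because we remember that nothing came before it that day.
print(ctx.record_web_event("wks-042", datetime(2012, 5, 1, 3, 0)))   # True: first activity of the day
print(ctx.record_web_event("wks-042", datetime(2012, 5, 1, 9, 0)))   # False: already active today
print(ctx.record_email("a@example.org", "b@example.org"))            # False: no prior contact
print(ctx.record_email("b@example.org", "a@example.org"))            # True: they have communicated before
```

The design choice worth noting is the `frozenset` pair: the question is about the relationship between two entities, so direction is deliberately discarded.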
What temporality really breaks is real-time analysis. The codification of all of these analytical questions into an expert system is impractical, and the computing power required to correlate an entity against every previously observed entity at line speed does not currently exist.
Cultural changes are required. We can improve our chances of detection if we let go of some of our traditional preconceptions.
One: We cannot rely on extemporal, event-driven appliances. Without context, without consideration of the past, we cannot interpret the present.
Two: We cannot do everything at line speed. Appliances should not be fire-and-forget; we need clever people sitting behind them, constantly tuning and maintaining them, to really harness their capabilities for our organisations.
Don't mistake my message for slander against appliances. They are intrinsically not the problem; it's our expectation that is flawed. It's my belief that by adopting the right principles, we can start to make it right again.
Copyright © 2012 The University of Queensland, authorised by AusCERT Program Committee, maintained by: firstname.lastname@example.org