It is risky business to use risk assessment systems that produce alarms at a pace faster than the organization can evaluate and handle them.
The problem often takes one of two forms: giant batch reports with so many pages that analysts cannot possibly work every scored risk item, or real-time alerts that pile up in queues where analysts pick items off the top while the backlog keeps growing.
All of these potential alarms lying around in reports or queues, each possibly a smoking gun, must be viewed as potential evidence that could be used against you if things go wrong. Each unhandled alarm is evidence that the organization did not take a risk seriously, and evidence suggesting someone was incompetent or negligent for not amassing the resources needed to evaluate each and every alarm in a reasonable timeframe. Judgment day arrives after something bad happens, when this incriminating (or even just suspect) evidence comes to light during forensic analysis: “Look, the bad actor is right here on page 32,912 … had you staffed appropriately, this disaster would never have happened!”
I have a few suggestions for organizations already dealing with this problem, and for those that would like to avoid the dilemma in the first place.
1. Favor the false negative. Wherever possible, don’t err on the side of alarm. Obviously you can’t ignore what appears to be a real, serious threat, but otherwise err on the side of innocent behavior and let new observations catch the false negatives. More about this here: How to Use a Glue Gun to Catch a Liar.
2. How one calculates relevance, and when one registers an alarm, is crucial. On the technology side the objective is this: the very next item in the alarm queue is the most important item for review. And in this model, no matter what … there is no reason to produce more alarms than there are available resources (e.g., analysts, systems) to deal with them. Therefore, especially on day one, configure your risk assessment engine to produce alarms appropriate to your individualized risk, your staffing, and your ability to respond. Then, as your resources gain bandwidth, consider increasing the alarm sensitivity. (A rough sketch of such a capacity-bounded, relevance-ordered queue appears at the end of this post.)

3. When it comes to addressing the unresolved ambiguity that percolates under the alarm sensitivity, use tertiary data for automated disposition. Instead of having an army of analysts wrestling each “maybe” to the ground, it makes more sense to locate what data is needed and let that data disambiguate these low-grade risks en masse. Most of this risk ambiguity will be released as harmless, while a far smaller percentage will be promoted to an actionable alarm. For example, if the phone book holds the evidence necessary to resolve some risk ambiguity (i.e., to learn the middle names are different and thus conclude two identities are unrelated), and a person would normally look there, then maybe the system can look there on its own. (A sketch of this kind of automated disposition also appears at the end of this post.) Lots of ambiguity can be solved with lots of data, but the question is what data, and with what legal and policy justification?

4. The most “bulletproof” approach involves ...
lawyers. In this case, one would contract with a law firm with expertise in information security law to bring expert engineers aboard for the analysis, the technical testing and findings, and the ensuing debate that results in new corporate policy. All ‘discovery’, including negative findings that could be used against the organization in the future, is then protected by attorney-client privilege and, therefore, is most likely to be protected from disclosure. And by the way, this also creates an “advice of counsel” defense to any future litigation.
If you want to go this route I’ve got a few names; just ask.

My advice: Rarely is it a good idea to ask for a detailed report of every possible risk. So be careful what you ask for.
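
Here is the sketch referenced in point 2 above: a minimal, hypothetical Python illustration (not any particular product) of a relevance-ordered alarm queue whose admission limit is tied to analyst capacity, so the engine never registers more alarms than the team can work and the next item popped is always the most important one. The names (Alarm, ANALYST_CAPACITY_PER_DAY, register_alarm) and the daily capacity number are assumptions for illustration only.

    import heapq
    from dataclasses import dataclass, field

    # Assumption: the number of alarms the team can actually work in a day.
    ANALYST_CAPACITY_PER_DAY = 40

    @dataclass(order=True)
    class Alarm:
        neg_relevance: float              # negated so the smallest value = most relevant
        item_id: str = field(compare=False)
        detail: str = field(compare=False)

    alarm_queue = []

    def register_alarm(item_id, relevance, detail):
        """Admit an alarm only while there is analyst bandwidth left; otherwise
        suppress it (or, better, route it to the automated disposition in point 3)."""
        if len(alarm_queue) >= ANALYST_CAPACITY_PER_DAY:
            return False
        heapq.heappush(alarm_queue, Alarm(-relevance, item_id, detail))
        return True

    def next_alarm():
        """The very next item handed to an analyst is the most important one in the queue."""
        return heapq.heappop(alarm_queue) if alarm_queue else None

The particular data structure matters less than the point it illustrates: the admission limit is an explicit, day-one configuration tied to staffing, and it can be raised later as bandwidth grows.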
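
And here is the sketch referenced in point 3: a hypothetical automated-disposition routine built around the phone-book example above. The function name, the lookup source, and the record fields are placeholders, not any real system’s API. The idea is simply that the system takes the same evidence-gathering step a person would take, and only the matches it cannot resolve fall through to a human.

    def automated_disposition(identity_a, identity_b, phone_book_lookup):
        """Try to resolve a low-grade 'maybe' match with tertiary data (here, a
        phone-book lookup of middle names). Returns 'release', 'promote', or 'hold'."""
        record_a = phone_book_lookup(identity_a["name"])
        record_b = phone_book_lookup(identity_b["name"])
        if record_a is None or record_b is None:
            return "hold"            # additional data unavailable; leave for a human
        middle_a = record_a.get("middle_name")
        middle_b = record_b.get("middle_name")
        if middle_a and middle_b:
            if middle_a != middle_b:
                return "release"     # different middle names: the identities are unrelated
            return "promote"         # same middle name: stronger link, worth an analyst's time
        return "hold"                # data inconclusive; leave for human intuition

Under these assumptions, most “maybes” are released or promoted automatically, and only the cases in the “hold” bucket (data unavailable or inconclusive) consume analyst time.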
OTHER RELATED PAPERS:
Correcting False Positives: Redress and the Watch List Conundrum

RELATED POSTS:
Consequences of False Positives in Government Surveillance Systems
Sometimes a Big Picture is Worth a 1,000 False Positives
On point 2 ... wouldn't this be an excellent method to get additional budget to staff appropriately, instead of suppressing the alarms? In most organisations, if no alarms are identified, no additional money will be spent, on the assumption that nothing needs to be done!
On point 3 ... if a human resolves a false negative/positive by asking for additional data (for example, you gave the middle-name search as conclusive proof), why don't systems include all the logical steps one would take to resolve an alarm? The alarms that come out of this filter would then be only those where a) additional data was unavailable to the application, b) additional data was inconclusive and no new rules have been coded to handle it, or c) no logical rules have been coded for automatic examination of an alarm, and hence it has to be done using human intuition.
Posted by: Sreenath Chary | January 05, 2009 at 11:32 PM