For organisations with many (several hundred) systems, quantitative IT risk management can be likened to a window cleaner being asked to clean the United Nations headquarters in New York: they will never finish before the first few windows need cleaning again. A quantitative risk assessment involving many systems and respondents can be close to impossible to execute. Faced with this challenge, one either abandons the exercise or falls back on qualitative methods, which are admittedly fast but problematic for the reasons we have discussed in several earlier posts.
Imagine the following situation:
- Your organisation has 100 IT systems.
- Each system is estimated to be exposed to 10 threats.
- Each asset-threat combination has 3 possible consequences with individual loss distributions.
This situation gives 3,000 scenarios (100 × 10 × 3) that must be estimated and calculated in a quantitative assessment. A qualitative approach would instead mean asking the respondents to place each scenario in a 5×5 heatmap. That still consumes considerable resources, and the result is a useless analysis placebo. What can we do?
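To illustrate the scale, here is a minimal sketch that simply enumerates the scenario space; the system, threat, and consequence labels are hypothetical placeholders, not a real inventory:

```python
from itertools import product

# Hypothetical inventory: 100 systems, 10 threats per system,
# 3 consequence types per asset-threat combination.
systems = [f"system-{i:03d}" for i in range(1, 101)]
threats = [f"threat-{j}" for j in range(1, 11)]
consequences = ["confidentiality", "integrity", "availability"]

# Every combination is a scenario a quantitative assessment
# would have to estimate individually.
scenarios = list(product(systems, threats, consequences))
print(len(scenarios))  # 3000
```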
Imagine that we could develop an estimation robot that we feed with a series of basic assumptions and data points about each IT system, after which it calculates the risk picture. Imagine further that it does so with such high quality and speed that the result can be used as decision support in real time.
Using a robot to determine probability and loss
Statistical regression analysis (it sounds difficult, but it is a standard function in Microsoft Excel) is effective for analyses where the uncertainty and the number of estimates are large. In an analysis with many systems, each with individual security settings, all of which can be affected by several types of events, we can advantageously build an estimation robot (the LENS model, developed by Egon Brunswik and described by Douglas Hubbard and Richard Seiersen in “How to Measure Anything in Cybersecurity Risk”).
The purpose is to let the respondents, who have previously been dragged through painful estimation processes, do nothing more than report the settings of their own systems. They can typically do this with much less uncertainty than if they were asked about the probability or consequence of several scenarios. For example, we ask whether the system has implemented multi-factor authentication (MFA). The system owner will know this, making the data point high in quality.
Once we know a system’s properties, the model can estimate the probability and the loss in different scenarios. A positive side effect is that the modelled estimates are better than the estimates the individual system owners would provide, because the model removes the inconsistency that respondents typically introduce into a risk assessment. Inconsistency in experts’ answers is well described by, among others, Daniel Kahneman in his book “Thinking, Fast and Slow”.
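As a rough illustration, the owners’ yes/no answers can be captured per system and encoded as a feature vector for the model. This is only a sketch; the attribute names below are hypothetical examples, not a fixed questionnaire:

```python
import numpy as np

# Hypothetical security-setting questionnaire: each answer is a fact the
# system owner knows, not an estimate they have to produce.
systems = {
    "crm":      {"mfa": True,  "internet_facing": True,  "patched_monthly": False},
    "payroll":  {"mfa": True,  "internet_facing": False, "patched_monthly": True},
    "intranet": {"mfa": False, "internet_facing": True,  "patched_monthly": False},
}

feature_names = ["mfa", "internet_facing", "patched_monthly"]

def to_feature_vector(attrs):
    """Encode yes/no answers as 0/1 so they can feed a regression model."""
    return np.array([1.0 if attrs[name] else 0.0 for name in feature_names])

# One row per system: this matrix is the input the estimation robot works from.
X = np.vstack([to_feature_vector(attrs) for attrs in systems.values()])
print(X)
```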
The construction of an estimation robot proceeds through the following overall phases:
- Identification and calibration of the relatively few experts who must participate in providing reliable estimates.
- Identification of the characteristics relevant to assessing the probability and loss of cyber incidents targeting the systems.
- Selection of risk scenarios for the individual systems based on an assessment of their properties.
- The estimation phase where the experts estimate the probability and the loss for the selected scenarios.
- The analysis phase where logistic regression analysis is carried out using the average of the expert estimates as the dependent variable and the system characteristics presented to the experts as the independent variables. This is where the noise of human inconsistency is removed (see the sketch after this list).
- The result phase where the best-fitting regression formula is extracted for both probability and loss.
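A minimal sketch of the analysis phase, assuming the expert estimates have already been collected. The averaged probabilities are transformed to log-odds and fitted with ordinary least squares, which is the logit-linear form of the regression described above; the feature values and probabilities are invented purely for illustration:

```python
import numpy as np

# Hypothetical estimation data: rows are the scenarios shown to the calibrated
# experts, columns are the system characteristics (e.g. MFA, internet-facing,
# monthly patching), encoded as 0/1.
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
], dtype=float)

# Average of the calibrated experts' probability estimates per scenario.
p_avg = np.array([0.10, 0.03, 0.30, 0.20, 0.02])

# Fit the lens model in logit space: log(p / (1 - p)) = b0 + b1*x1 + ...
y = np.log(p_avg / (1.0 - p_avg))
A = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_probability(features):
    """Apply the fitted formula to any system's characteristics."""
    z = coef[0] + np.dot(coef[1:], features)
    return 1.0 / (1.0 + np.exp(-z))              # inverse logit

# Identical inputs now always give identical outputs: the expert noise is gone.
print(round(predict_probability([1, 1, 0]), 3))
```

The same approach can be repeated with loss estimates as the dependent variable to obtain the loss formula.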
Can you picture it? We get a small group of selected and calibrated experts to provide estimates that depend on the characteristics of the systems. These estimates are of high quality once we have removed the noise that typically accompanies human judgment, and we can now scale to a large number of individual systems with differing characteristics.
Blown away
ACI had the pleasure of working with this model with a very large Danish organisation in the financial sector. We were, to say the least, rather excited to see the model in action. The system owners submitted their answers to the systems’ security settings, and we pushed the button and got a data set that could be included in a simulation. With the simulation, we were able to present key figures that the steering group could relate to immediately. Due to the use of the LENS model, we were able to proceed with adjustment and validation many weeks earlier than would otherwise have been possible.
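As a rough sketch of what “pushing the button” can look like, assume each scenario ends up with a modelled annual event probability and a lognormal loss distribution; all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical modelled output: one row per scenario with an annual event
# probability and lognormal loss parameters (median loss and spread).
scenarios = [
    {"p": 0.05, "loss_median": 2e6, "loss_sigma": 0.8},
    {"p": 0.20, "loss_median": 3e5, "loss_sigma": 1.0},
    {"p": 0.02, "loss_median": 1e7, "loss_sigma": 0.6},
]

trials = 100_000
total_loss = np.zeros(trials)

for s in scenarios:
    occurs = rng.random(trials) < s["p"]          # does the event happen in a given simulated year?
    losses = rng.lognormal(np.log(s["loss_median"]), s["loss_sigma"], trials)
    total_loss += occurs * losses

# Key figures a steering group can relate to immediately.
print(f"Expected annual loss: {total_loss.mean():,.0f}")
print(f"95th percentile loss: {np.percentile(total_loss, 95):,.0f}")
```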
Another notable result was that the response rate among system owners increased significantly, thanks to a more accessible questionnaire that was faster to complete.
We naturally remember George Box’s point that “all models are wrong, but some are useful”. If we continuously calibrate the model against reality, it will become quite good. We have no doubt that these models are the future of risk assessment in multi-system environments.
Our Quantitative Assessment Platform (QAP), which we are developing in collaboration with our customers, will include this facility out of the box.
Stay tuned.