A new report from Gallagher Re has found that cyber insurers could reduce loss ratios by up to 16 percent by removing the most at-risk entities from their portfolios.
Combining independent analysis of cybersecurity performance data provided by Bitsight with claims data, the study uses a broad range of statistics to identify the organizations most at risk of a cyber attack, and draws some surprising conclusions.
“This study provides clear, actionable insights for both insurance companies and enterprises on the efficacy of security controls,” said Ed Pocock, global head of cybersecurity at Gallagher Re. “Leveraging Bitsight’s data, we’ve not only established a direct link between weak cybersecurity controls and higher insurance claims, but also highlighted additional strategies for insurers to more effectively assess an organization’s cyber risk and potentially improve loss ratios.”
Cybersecurity firms have been able to remotely scan and assess companies’ resilience to cyber attacks since at least the early 2010s. In recent years, cyber insurers have begun to use these assessments to inform underwriting.
Intuitively, a larger firm has more computer systems, more people vulnerable to phishing, and more revenue for attackers to target. But a closer examination of the data reveals a complex combination of factors at play in determining who is targeted and how.
By incorporating factors such as single-point-of-failure (SPoF) data, which highlights the dependencies a company has on third-party systems and services, and an organization’s count of IP addresses (IPv4 data), insurers can get a clearer picture of potential risks, and organizations can explore ways to protect themselves from attack.
The Gallagher Re model used firmographic data, including industry, revenue, limit and geography, to complement the vendor’s technographic data. Based on technographic data alone, the worst 20 percent of companies identified by the model were 3.17 times more likely to suffer a claim than the best 20 percent. That figure rose to 6.93 times when firmographic data was also included.
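As a rough illustration of how such a quintile comparison works, the sketch below (Python, with synthetic data and an invented scoring rule, not Gallagher Re’s actual methodology) bins companies by risk score and compares claim frequency in the worst and best 20 percent:

```python
# Illustrative only: synthetic portfolio and a made-up scoring rule.
# Shows how a worst-vs-best quintile claim-likelihood ratio (e.g., the
# 3.17x figure) can be computed from scores and observed claims.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: higher score = worse cybersecurity posture.
scores = rng.uniform(0, 100, size=10_000)
# Assume claim probability rises with the score (purely for illustration).
claims = rng.random(10_000) < (0.005 + 0.0015 * scores)

# Compare claim frequency in the best and worst quintiles by score.
best = claims[scores <= np.percentile(scores, 20)]
worst = claims[scores >= np.percentile(scores, 80)]

print(f"Worst quintile {worst.mean() / best.mean():.2f}x more likely to claim")
```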
Assessing the model’s real-world performance in predicting insurers’ loss ratios showed that external scanning data helps to differentiate profitable from unprofitable business on previously unseen data. This in-depth analysis found that by removing the worst 20 percent of risks in the study portfolio, (re)insurers could see a 16.35 percent reduction in loss ratio.
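The mechanics behind a figure like that are simple: the worst quintile contributes a disproportionate share of losses relative to its premium, so excluding it lowers the ratio. The numbers in the sketch below are invented, chosen only so the output matches the headline 16.35 percent:

```python
# Hypothetical portfolio figures, chosen only to reproduce the headline
# 16.35% number; the study's actual premiums and losses are not shown here.
premium_all = 100.0     # total portfolio premium (arbitrary units)
losses_all = 60.0       # total incurred losses -> 60% loss ratio

premium_worst20 = 20.0  # premium written on the worst-scoring quintile
losses_worst20 = 19.85  # assumed: the worst quintile is loss-heavy

lr_before = losses_all / premium_all
lr_after = (losses_all - losses_worst20) / (premium_all - premium_worst20)
reduction = (lr_before - lr_after) / lr_before

print(f"Loss ratio {lr_before:.1%} -> {lr_after:.1%}: {reduction:.2%} lower")
```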
IPv4 Data and Cybersecurity Risk
In a 2022 study, Gallagher Re found that company revenue was the strongest predictor of claims when considering all factors together.
While this still holds true, the new report, which for the first time includes the number of IPv4 addresses associated with each risk, found that IPv4 count, an organization’s ‘cyber footprint’, ranks as the second-strongest predictor of claims.
While it may be intuitive to correlate larger companies with higher revenue to a greater number of IPv4 addresses, this is not always the case. The analysis can identify companies with large revenues but comparatively smaller numbers of IP addresses. While IP count is not a widely used metric among cyber insurers, it may deserve a closer look when determining risk.
Under traditional analyses, larger companies might generally be assigned a higher level of risk and higher premiums. Including IPv4 data, however, can work in favor of larger companies with comparatively few IPv4 addresses, giving them an opportunity to secure lower premiums because they present less cyber risk than their size alone might suggest.
Predicting Claims With SPoF Data
SPoF data enables insurers to assess their exposure to particular services and vendors (e.g., AWS) across their portfolios, and can also help identify aggregation points for modeling.
CyberCube has incorporated SPoF data directly into its modeled results, looking at actual dependencies of companies in a portfolio where the required input data is provided. Meanwhile, RMS and Guidewire use SPoF data to influence model parameterization.
However, because the SPoF dataset is at an earlier stage of development than the scanning data, there is little standardization or consistency among cybersecurity vendors in how they capture and process it, Gallagher Re said. For example, some vendors weight a company’s dependency on an external service differently depending on whether that service is delivered on-premises or hosted in the cloud.
DKIM, SPF, Patching Cadence and Other Cyber Risk Factors
This 2024 study looked at a dataset comprising over 62,000 companies in 67 countries and over 589 million separate IP addresses. It involved over one thousand material claims.
While broadly consistent across revenue bands, the Bitsight risk factors (excluding SPoF and footprint data) that predict cyber events varied significantly by event type. This was especially true for business email compromise claims, where email authentication controls such as DKIM (DomainKeys Identified Mail) and SPF (Sender Policy Framework) were particularly useful in anticipating loss.
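Both controls are easy for an outside scanner to verify because they are published in public DNS. The sketch below shows one way such a check could look, using the third-party dnspython package; the domain and DKIM selector are placeholders:

```python
# Minimal sketch of the kind of external check a scanner can run: SPF and
# DKIM policies are published as DNS TXT records. Requires the third-party
# dnspython package; "example.com" and "selector1" are placeholders.
import dns.exception
import dns.resolver

def get_spf(domain: str) -> str | None:
    """Return the domain's SPF policy (the TXT record starting 'v=spf1')."""
    try:
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
    except dns.exception.DNSException:
        pass
    return None

def get_dkim(domain: str, selector: str) -> str | None:
    """Return the DKIM key record at <selector>._domainkey.<domain>."""
    try:
        answer = dns.resolver.resolve(f"{selector}._domainkey.{domain}", "TXT")
        return b"".join(answer[0].strings).decode()
    except dns.exception.DNSException:
        return None

print(get_spf("example.com"))                # None signals a missing control
print(get_dkim("example.com", "selector1"))  # selector varies by mail provider
```

A missing or misconfigured record of this kind is exactly the sort of externally observable signal the study correlates with business email compromise losses.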
Patching cadence remained the strongest predictor of ransomware loss.
Risk factors linked to hybrid or home working, such as weak mobile application security, have grown in predictive importance since 2022, while factors associated with traditional on-premises networks (e.g., port security) have declined. These changes track workforce trends over the intervening two years and show that the model can respond to shifts in the threat landscape.