Cutting through the data noise in small commercial

Small commercial insurers are racing to innovate within an ecosystem that supports simplified, accelerated, and accurate underwriting. Data can point the way, but it needs to be the right data: clean, relevant, and insightful. That's not always easy in the age of Big Data, which can deliver vast volumes of information but can't guarantee its usefulness. Much of that data is just noise: meaningless at best and sometimes counterproductive to underwriting.

The market is waiting for insurers to solve the data noise conundrum and get to work delivering a simpler application process that leads to faster, more reliable quotes. Automated underwriting can help meet that demand as insurers compete in the race to zero underwriting questions on small commercial applications. Building that process on data that’s not just more, but better, can also help insurers collect appropriate premiums and avoid discovering hidden risks when it’s too late—at the point of claim.

Three noise-reduction steps can help insurers filter the static out of risk data:

Step 1: Get the data right

Data quality can be wildly inconsistent: incomplete, unstructured, second-hand, and subject to errors. Businesses may be misidentified due to similar names. Basic details could be months or years out of date. Company owners often forget, or simply fail, to update insurers on changing business activities that add or remove risk from their operations. All of that makes it hard to ensure that coverage remains appropriate and premiums correlate with the level of risk.

Some small commercial insurers may bypass the problem of overwhelming quantity and questionable quality by accepting “good enough” data from easily accessible sources. But that data may not be as good as it seems, and its shortcomings can be amplified in automated underwriting systems that propagate those flaws across a book of business. Preventive measures and data enhancements can be critical:

  • Auditing and testing a new feed against validated data—before the data goes live in any workflows—can help guard against the damage bad data can do to a portfolio.
  • Reliable property data, site-verified or modeled with advanced analytics, can give underwriters more confidence by helping to validate sources and filter out the noise of less trustworthy information.
  • High-quality aerial imagery can underpin reliable modeled data to help fill knowledge gaps for roofs and exposures adjacent to a property.
  • Entity resolution can help reduce false positives and reconcile errors stemming from naming confusion within and between businesses. Such tools can also help connect business owners with the entities they operate for more accurate data retrieval; a minimal sketch of the idea follows this list.
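To make the entity resolution point concrete, here is a minimal, hypothetical sketch in Python. The normalization rules, suffix list, and similarity threshold are illustrative assumptions, not a production matching system:

    # Hypothetical sketch of name-based entity resolution using only the
    # Python standard library; real systems also match addresses, phone
    # numbers, and registry identifiers.
    import re
    from difflib import SequenceMatcher

    LEGAL_SUFFIXES = {"llc", "inc", "corp", "co", "ltd", "company"}

    def normalize(name: str) -> str:
        """Lowercase, strip punctuation, and drop legal suffixes."""
        tokens = re.sub(r"[^\w\s]", "", name.lower()).split()
        return " ".join(t for t in tokens if t not in LEGAL_SUFFIXES)

    def same_entity(a: str, b: str, threshold: float = 0.9) -> bool:
        """Treat two records as one business if normalized names are similar."""
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

    print(same_entity("Acme Plumbing, LLC", "ACME Plumbing Inc."))  # True
    print(same_entity("Acme Plumbing, LLC", "Apex Plumbing LLC"))   # False

Even this simple normalization step can keep two records for the same plumbing shop from being scored as different businesses, one common source of false positives.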

Step 2: Get the data that matters

Clean, complete data can still be noisy, full of information that’s irrelevant to the insured risk. Legal characteristics of a business may be accurate but devoid of risk insight. For example, knowing that a company is incorporated as an LLC may not be enough. On the other hand, information that the company is a plumbing business with ten employees that does occasional excavation work and is housed in a suburban industrial park gets much closer to the heart of underwriting and pricing that risk. Gleaning that kind of information quickly from raw data can be critical for a small commercial insurer that rapidly issues a high volume of policies.

Such data may be unstructured and scattered as text and images across websites and social media. Machine learning algorithms and keyword-based intelligence can help make sense of it, delivering usable data in seconds. The need persists throughout the policy life cycle: insurers must stay current with individual risk changes at renewal, for better or worse. Uncovering new developments can help an insurer provide better protection and capture more appropriate premiums by reclassifying risks with the necessary policy limits and coverage adjustments.
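As a simplified illustration of the keyword-based approach, the sketch below scans unstructured text, such as website copy, for risk-relevant terms. The keyword list and exposure notes are invented for the example:

    # Illustrative keyword-based extraction of underwriting signals from
    # unstructured text; the keywords and annotations are hypothetical.
    RISK_KEYWORDS = {
        "excavation": "trenching/underground exposure",
        "scaffolding": "work-at-height exposure",
        "car wash": "onsite equipment and premises exposure",
    }

    def extract_signals(text: str) -> dict:
        """Return the risk flags whose keywords appear in the text."""
        lower = text.lower()
        return {kw: note for kw, note in RISK_KEYWORDS.items() if kw in lower}

    site_copy = ("Family-owned plumbing company offering repairs, "
                 "remodels, and occasional excavation work.")
    print(extract_signals(site_copy))
    # {'excavation': 'trenching/underground exposure'}

In practice, machine learning models replace or augment such keyword lists, but the output is the same: structured risk attributes an automated workflow can act on and refresh at renewal.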

Step 3: Get the data to point the way

Filtered data may still need interpretation. Technology trained on expert knowledge can help transform data into underwriting insight in near real time.

Image analytics, for example, may be able to determine whether an insured is doing work on scaffolding or whether a small gas station operates a brush-based car wash onsite. Meanwhile, aerial imagery can reveal roof type and condition, solar panels and other structures, and even a history of damage.

Machine learning can help automate otherwise manual information-gathering and assessment to support more robust straight-through processing. The key is providing answers rather than raising more questions. Verisk staff research has even found that certain tools, such as image analytics, have outperformed human interpretation in some cases. The result: an application may go from submission to quote in seconds, with precision and reliability that exceed customer expectations.
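A hedged sketch of that answers-not-questions principle: once enrichment data arrives with enough confidence, an application can be quoted automatically; otherwise it routes to an underwriter. All field names and thresholds below are assumptions for illustration:

    # Hypothetical straight-through-processing gate; the fields, thresholds,
    # and referral reasons are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Enrichment:
        entity_match_confidence: float  # e.g., from entity resolution
        roof_condition_score: float     # e.g., from aerial imagery, 0 to 1
        hazardous_ops_detected: bool    # e.g., from image/text analytics

    def route(e: Enrichment) -> str:
        """Auto-quote only when the data answers the underwriting questions."""
        if e.entity_match_confidence < 0.9:
            return "refer: business identity unresolved"
        if e.hazardous_ops_detected:
            return "refer: reclassify operations and limits"
        if e.roof_condition_score < 0.5:
            return "refer: property condition review"
        return "auto-quote"

    print(route(Enrichment(0.97, 0.8, False)))  # auto-quote

The design choice here is that every referral names the open question, so a human reviews only what the data could not resolve.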

Harmony out of the noise

Feedback from small commercial insurers has helped guide Verisk in building solutions that use advanced data science to help channel the flood of digital information. LightSpeed™ Small Commercial is one result, providing an API solution that applies advanced machine learning, artificial intelligence, and other technologies to help cleanse, filter, and interpret data that’s often unstructured. For insurers, that can mean faster, more accurate underwriting and a step forward in digital transformation.

Learn how to get clearer signals from your data to accelerate quoting, help contain premium leakage, and improve retention. Read our new white paper, Three Steps to Less Data Noise for Small Commercial Insurers.


By: Tracey Waller

Tracey Waller is product director for small commercial underwriting at Verisk. You can contact Tracey at Tracey.Waller@verisk.com.