In addition to our property inspection and premium audit products, Property and Casualty of the Southwest, Inc offers a number of analytic services that can help identify trends and patterns that may go undetected at the individual inspection level.
Businesses today have an incredible amount of data available at their fingertips – some might call it data overload. Yet, if this data is not being analyzed and reviewed on a regular basis, you might be missing out on key information and trends that could benefit your business.
For instance, if you look at a sample of homes in the same zip code that all have cracks in their foundation, this might indicate that there is a bigger issue – like sinkhole activity. Analyzing the performance of insurance agents can also help you identify top performers and weed out those who might not bring value to your organization.
Using the data collected during property inspections and premium audits, the Analytics Group at Property and Casualty of the Southwest, Inc can help you review data and identify trends that could impact your business.
Contact our sales department today to find out more about our analytics services.
For more information on our analytics services, including case studies, follow the links below:
Here you will find the answers to frequently asked questions about data analytics in the insurance industry. If you have a question that isn’t answered below, please use the submission form to connect with a representative from our team.
Modeling, which combines data into mathematical formulas, can help you make better business decisions, identify important trends in your data, and automatically determine which customers are your best risks or the best candidates for a marketing campaign.
Modeling is the process of mathematically combining data on past performance to make predictions about future events. A simple example of such a model is a baseball player's batting average.
Data about past performance is combined into a mathematical formula (hits divided by times at bat) to estimate the probability that the next time at bat will be a hit. This batting average tells you what you can expect from a player "on average" for their next time at bat and allows you to determine which of two players has a better chance of hitting the ball the next time they are at bat.
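The batting-average calculation can be sketched in a few lines of Python. The player statistics here are made up for illustration:

```python
def batting_average(hits, at_bats):
    """Combine past performance into a single number: the estimated
    probability that the next time at bat will be a hit."""
    return hits / at_bats

# Hypothetical players: past performance predicts who is more likely
# to get a hit in their next at-bat.
player_a = batting_average(hits=150, at_bats=500)  # 0.300
player_b = batting_average(hits=120, at_bats=480)  # 0.250
better = "A" if player_a > player_b else "B"
```

An insurance score is built the same way, just with more inputs: several pieces of past-performance data feed one formula that yields a single expected value.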
An insurance score works the same way as a batting average. Data about past performance – such as the number of previous MVR violations, the length of time since the most recent claim, and the deductible amount – is combined in a formula to determine the expected loss ratio of a policy. This information can then be used to determine the underwriter’s risk in writing a policy.
Models can be built to predict:
· Loss Ratio
· Claim Frequency
· Professional Liability
· Propensity to Renew
· Propensity to Churn
· Probability of Responding to a Marketing Campaign
· Probability of Fraud
· Probability of MVR violation
Consider the costs and benefits of ordering motor vehicle reports (MVRs) on drivers as part of the underwriting process. Ordering an MVR on every driver ensures that every surchargeable offense will be found, but the cost of ordering these reports can be high.
If an MVR score is used to predict who is likely to have a surchargeable offense, all policies or applications could be scored, and MVRs would not be ordered for those least likely to have such offenses. A typical MVR model could result in not ordering reports on about 7.5% of all policies. The savings from not ordering on this group outweigh the surcharges missed on undetected offenses, thus improving the bottom line.
This same score could be used by a carrier that only orders on some of their policies. In this situation, every policy would be scored in the MVR model and only those most likely to have violations would be ordered. In this way, the "hit rate" for those with violations would be sharply increased while ordering costs could be kept constant.
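The cost-benefit trade-off described above can be illustrated with a rough sketch in Python. The report cost, missed-offense rate, and surcharge figures below are hypothetical assumptions for illustration, not actual numbers from our models:

```python
# Hypothetical figures for illustration only.
policies = 10_000
mvr_cost = 8.00               # assumed cost per MVR report
skip_rate = 0.075             # ~7.5% of policies scored as lowest-risk
missed_offense_rate = 0.002   # assumed surchargeable-offense rate among those skipped
avg_surcharge = 150.00        # assumed surcharge revenue lost per missed offense

skipped = policies * skip_rate
savings = skipped * mvr_cost                          # reports not ordered
missed_cost = skipped * missed_offense_rate * avg_surcharge
net = savings - missed_cost
print(f"Skipped {skipped:.0f} reports, net savings ${net:,.2f}")
```

With these assumed figures, the savings on unordered reports comfortably exceed the surcharges missed, which is the trade-off the model is designed to exploit.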
Data mining, which searches through large collections of data to identify meaningful patterns and trends, can help you detect hard-to-find relationships that may benefit your business.
Data mining is the process of exploring and analyzing large quantities of data in order to discover meaningful patterns and rules. It combines techniques from machine learning, pattern recognition, statistics, database theory, and visualization to extract concepts, concept interrelations, and interesting patterns automatically from large databases of information.
A classic example of data mining is the grocery industry’s discovery of the relationship between beer and diapers. Over several months, data was captured for each person who checked out of a grocery store. This was then combined with demographic data, such as the age and gender of the shopper. What they found was a significant correlation between the purchase of diapers and beer by men in their twenties and thirties.
This data revealed a propensity among these men to purchase beer when buying a pack or two of diapers in the evening. For the grocery industry, this allowed them to identify a relationship and better target their shoppers by placing a display case of beer close to the diapers.
In the insurance industry, data mining can be used to find unexpected correlations, which can then become the foundation of a very profitable cross-sell campaign. For instance…
Data mining can also be used to find combinations of factors that, when present on a policy, result in extremely high loss ratios or claim frequencies.
Although Data Mining incorporates many statistical concepts and techniques, it is in some sense at the opposite end of the spectrum from modeling. Modeling looks at the big picture and asks what happens "on average." Data Mining, on the other hand, looks at the little "nuggets" of information.
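The beer-and-diapers story is an example of market-basket analysis. A minimal sketch of the underlying co-occurrence counting, using made-up transactions, looks like this:

```python
from collections import Counter
from itertools import combinations

# Made-up transactions for illustration.
baskets = [
    {"beer", "diapers", "chips"},
    {"diapers", "beer"},
    {"milk", "bread"},
    {"beer", "diapers", "milk"},
    {"bread", "eggs"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of all baskets containing the pair.
support = {pair: n / len(baskets) for pair, n in pair_counts.items()}
# ("beer", "diapers") appears in 3 of 5 baskets: support 0.6
```

In practice the same counting is run over millions of transactions, and the pairs with unexpectedly high support are the "nuggets" worth investigating.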
Geographic Information Systems (GIS), which combine data and geography to better predict risks related to geographic location, can help underwriters map and analyze information for better rate quotes, faster service, and reduced risk exposure.
A Geographic Information System (GIS) is a system designed to capture, store, manipulate, analyze, manage, and present spatial or geographic data.
These systems can be used in the insurance industry to identify areas of risk – such as flood zones – and accurately determine an individual’s risk. This is important, as coverage is often dependent on the policy holder’s risk of natural disasters and other circumstances.
Some typical applications are:
· Displaying a map showing all policy locations with premium above a specified level
· Displaying a map showing all locations of claims over a given amount
· Producing a report on all policies within five miles of a particular river
· Determining the best location for a new agency office
· Finding rating areas with excessive loss ratios
· Finding all auto policy holders without flood insurance who live in a high-risk flood zone
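As a rough sketch of a "within five miles" query, the following Python uses a great-circle (haversine) distance check. The coordinates, policy IDs, and river point are illustrative and not drawn from any actual application:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical policy locations and a point on the river of interest.
river_point = (30.2672, -97.7431)
policies = [("POL-1", 30.30, -97.75), ("POL-2", 31.00, -98.50)]
nearby = [pid for pid, lat, lon in policies
          if haversine_miles(lat, lon, *river_point) <= 5.0]
```

A production GIS would use actual river geometry and spatial indexes rather than a single point, but the core operation – filtering locations by distance to a feature – is the same.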
For instance, suppose a carrier with 50,000 policies wants to determine how many policy holders are located in an assigned risk area. Even assuming you had all the maps your application needed, someone familiar with the problem who could process, on average, one location every 30 seconds, and who could work accurately for eight hours a day, could only process 960 addresses per day. A portfolio of 50,000 policies would require more than 52 days to process.
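The manual-processing arithmetic in the example above works out as follows:

```python
seconds_per_location = 30
hours_per_day = 8
policies = 50_000

# An eight-hour day at one location per 30 seconds.
per_day = hours_per_day * 3600 // seconds_per_location  # 960 addresses per day
days = policies / per_day                               # roughly 52 working days
print(f"{per_day} addresses/day -> {days:.1f} working days for {policies:,} policies")
```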
Using our custom GIS applications, which contain all the necessary maps and risk descriptions, we can process those same 50,000 policies in less time than it takes to make your morning coffee – saving you time and money and helping you identify risks quickly.