From: Charles Elkan elkan@cs.ucsd.edu
Date: Tue, 15 May 2001 10:53:27 -0700 (PDT)
Subject: data mining and insurance pricing
This McKinsey Quarterly article (registration required) may interest data miners working on insurance applications.
Selected quotes:
- "It takes a portfolio of at least 200,000 to a million policies per
product line (assuming access to reliable data) to produce a fine and
reliable risk segmentation -- the prerequisite for more differentiated
pricing."
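A back-of-the-envelope calculation makes the portfolio-size claim plausible. This is a sketch, not the article's method: the claim frequency, relative-error target, and segment count below are my own illustrative assumptions, using a normal approximation to the binomial.

```python
import math

def policies_needed(claim_freq, rel_error, z=1.96):
    """Policies per segment so the observed claim frequency falls within
    rel_error of the true value at ~95% confidence (normal approximation
    to the binomial)."""
    return math.ceil(z**2 * (1 - claim_freq) / (claim_freq * rel_error**2))

# Illustrative numbers (not from the article): a 5% claim frequency
# estimated to within 10% relative error.
per_segment = policies_needed(0.05, 0.10)
print(per_segment)        # roughly 7,300 policies for a single segment
print(per_segment * 50)   # across 50 segments, well into the hundreds of thousands
```

With dozens of rating cells per product line, per-segment credibility requirements of this size quickly add up to the 200,000-to-a-million range the article cites.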
- "the segment-by-segment correlation between premiums paid and claims
incurred is often low for individual insurers, and this discrepancy leads
us to believe that the pricing of many carriers is more or less random and
that they manage profitability on a total-portfolio rather than
segment-by-segment basis"
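The "segment-by-segment correlation" in the quote is just the Pearson correlation between average premium and average claims across segments. A minimal sketch with made-up segment-level numbers (all data below is hypothetical, chosen only to show what a low correlation looks like):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-segment averages: premium charged vs. claims incurred.
premiums = [620, 580, 710, 450, 530, 660]
claims   = [540, 610, 520, 470, 560, 500]

# Prints a low value, the near-random pricing the article describes.
print(round(pearson(premiums, claims), 2))
```

A carrier pricing its segments well would show premiums tracking claims closely (correlation near 1); the article's observation is that for many carriers this number is far lower.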
- "Sophisticated insurers use up to 40 variables to price a simple auto or
home owners' policy, against the average performer's 15 or so."
- "...the cost of losing the most profitable customers to cherry-picking
competitors is too high (Exhibit 3), and, to make matters worse, insurers
risk attracting the unprofitable customers whom competitors have
intentionally priced away. Once this customer-base swap has taken place,
it is hard to reverse, since people are unwilling to switch carriers
unless the price difference is on the order of 10 to 30 percent, depending
on the segment's price sensitivity. And then the only way to get customers
back is to offer below-cost prices."
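The customer-base swap the article warns about can be sketched in a few lines. Everything here is a toy assumption of mine (flat-rate incumbent, cost-plus competitor, a 10% switching threshold), not the article's model:

```python
# Each customer is represented by their expected annual claims cost.
# Carrier A charges one flat premium; carrier B prices each risk at
# cost plus a 10% margin (cherry-picking the good risks).
costs = [200, 300, 400, 500, 600, 700]
flat_premium = 495          # A's average-cost price: mean cost 450 plus 10%
margin = 1.10

def a_keeps(cost, switch_threshold=0.10):
    """A customer stays with A unless B undercuts A's flat premium by
    more than the switching threshold (lower end of the article's
    10-30 percent range)."""
    b_price = cost * margin
    return b_price > flat_premium * (1 - switch_threshold)

kept = [c for c in costs if a_keeps(c)]
profit = sum(flat_premium - c for c in kept)
print(kept)     # only the high-cost risks remain with A
print(profit)   # A's book is now loss-making
```

The low-cost customers all leave for the risk-based competitor, the flat-rate carrier is left holding only risks priced below cost, and, as the quote notes, winning the good customers back then requires below-cost pricing.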