Archives for December 2008

Good Parenting Begins at Home – Financial Services Regulation in 2009

As 2008 draws to a close, so does the world as we know it – at least in terms of insurance regulation. In 2009, the “price” of the public investment in the financial industry will come due and there will be fundamental change to past approaches in oversight.

After a career in the property casualty insurance industry, my mental model draws a sharp distinction between finance and insurance. Certainly, I have never considered investment banking and commercial lines underwriting as siblings – very distant cousins, perhaps, but not in the same immediate family. Like the parents who assume bad debts accumulated by a wayward child, the federal government has stepped in and is about to administer some discipline.

Looking forward, do not expect the public, or members of Congress, to recognize a distinction between Bear Stearns and AIG. Do not expect the patience to understand that the credit default swaps weren’t “real insurance” and that the departments selling these products were not part of the “core” insurance operations. I expect that federal regulatory reform will be at the financial services level, not separate schemes for banking and insurance. The term optional federal charter will lose the “optional” part.

Yes, individual states will object. They have a large vested interest. Yes, there will be much debate and many speeches about federal versus local rights. But, at the end of the day, state governments will have more immediate issues as tax revenue disappears and their attention is drawn to other areas. The path of least resistance will be a dual regulation scheme. Expect a federal scheme to be placed on top of the state process.

Good parenting begins at home. To plan for this now, insurers should anticipate an investment of senior leadership time and energy in crafting a reasonable and effective response to the financial crisis for insurance. Influencing lawmakers from the highest levels will be essential. Industry lobbying and trade groups must guide media discussion by communicating the differences between insurance and other financial institutions. Comparisons of solvency and investment portfolio structures should be made. Information system vendors should enhance and/or build tools that allow their clients to respond quickly to new regulations. Finally, insurers should review and improve, where necessary, their capabilities in data mastery (see the Celent report Insurance Data Mastery Strategies).

With one notable exception, insurance companies did not require a bailout because the industry is so tightly regulated. In 2009, there will be the opportunity to improve this oversight. As in effective parenting, positive reinforcement and good guidance are better tools than the rod.

As expected, Solvency II is under threat

In April 2008, Celent published a report about Solvency II, the new regulatory approach for insurers and reinsurers operating in the European Union.

Surprisingly or not, the draft text submitted to and approved in early December by the European Council of Economic and Finance Ministers (ECOFIN) no longer contains the group supervision provision. Under Solvency II, capital requirements are risk-based, with risk measured on consistent principles. Given that, the removal of the group supervision requirement is an important change to the overall Solvency II regulation. Indeed, the idea behind Solvency II is to encourage large and diversified groups, because they can pool their capital resources, which should in turn benefit policyholders. This approach is directly derived from the Basel II regulation implemented for the banking industry.

In other words, it seems that several factors came into play over the last six months and led policy makers to reconsider the pros and cons of the group supervision provision. First, a few internationally diversified banks have nearly collapsed in the recent past, demonstrating that the Basel II regulation could not prevent even well-diversified institutions from experiencing solvency problems. In the insurance sector, American International Group (AIG) was seriously hit by vast financial exposures written at the group level. In addition, after the massive interventions of governments to save some of the biggest European financial institutions, political pressures have emerged. France, for instance, seems to be in favor of deleting the group support element of the directive. This position also reflects the fact that mutuals, which are predominant in France, tend to have lower solvency ratios and capital requirements. Smaller countries in Eastern Europe are also concerned, since they fear losing control over some of the entities operating in their markets. According to a report published by the FSA in April 2008 (Enhancing group supervision under Solvency II), foreign insurance subsidiaries hold 98.6% of market share in the Slovakian life sector and 100% in non-life. These figures help us better understand the concern of small Eastern European countries.

Overall, the immediate consequence of the ECOFIN decision could be new rounds of political discussion that delay the effective implementation of the Solvency II directive. In this context, Celent thinks that 2012 may be too optimistic a target. However, we still encourage insurers to prepare for Solvency II implementation, because the new set of capital requirement regulations will mean change and will trigger new investments in any case.

Integrity In Times of Turmoil

It is easy to do the right thing when everything is going well. It gets harder as conditions deteriorate.

That basic truth applies to just about everything. I think it is relevant to the times we’re in, whether you’re an insurer competing for scarce premium dollars, a vendor trying to differentiate your products to customers, or an analyst firm that provides a neutral voice on business and technology trends.

How about an example that touches our industry? A homeowner I know was horrified to discover that the custom-made cabinets in his brand-new home were off-gassing immense amounts of formaldehyde, nearly a year after he moved in. His builder, the cabinet subcontractor, the supplier of the materials used in the cabinets, and their respective insurers all cashed their checks after the house was built. But despite their clear shared responsibility for the problem, they all ran for the hills when the formaldehyde was discovered. The homeowner moved out and took a third mortgage to remedy the problem while the lawyers argued over who should ultimately pay for the fix.

There’s plenty of shameful behavior in that story. But focus on the insurers. Were they justified in denying liability and pushing the case into suit? I think they all need a gentle reminder that the true measure of who we are–as companies and individuals–emerges in times of trouble, not in times of plenty.

For those of us who appreciate a good challenge and want to demonstrate our commitment to doing the right thing–always–these are the best of times. And times of revelation, in a sense. As the prospects for a quick economic turnaround dim, we’ll all have new insights into who thrives on doing the right thing, and who does not. This is a distinction that matters.

1.30.09: Managing IT in Times of Crisis: A Celent CIO Roundtable

Celent Senior Analysts, Insurance Group. Westin Times Square, 270 West 43rd Street, Broadway Ballroom, 3rd Floor, New York, NY. Please find more information at

Software Application Testing in Insurance, Part II: Getting Test Data

Previous posts about testing: Topic 1: Automated Testing; Topic 2: Getting Test Data.

Bad test data can mean that the best tests fail to predict real-world problems. While many test topics apply to all industries, insurance carriers face some unique issues when it comes to getting good test data. Due to HIPAA and other industry regulations, using real data for testing is a gray area, since the test team does not necessarily need to work with real data to do their jobs. It is difficult to take real data and “clean” it for testing. It is also difficult to generate good test data from scratch, though this is really the best solution.

An insurer should take the time to have a developer create a small application/utility that generates test data specifically for the application being tested. This utility should generate random data but follow a set of rules to keep the data within the bounds of reality. It should intentionally create “edge cases” that might stress the system and reveal errors. It should be easily adjusted to create small data sets for simple tests and very large data sets for performance/scalability tests. While it may take a few days to implement this utility, it will save a lot of time later. Instead of struggling for half a day every time tests need to be run (a common complaint), the work to manage this will be completed up front. Unfortunately, since every software application has different data needs, this kind of utility will likely have to be written separately (or at least significantly rewritten) for each new application that needs to be tested.

Ninety percent or more of tests should be run against a very small set of data. Running tests against a huge database is unnecessary, slows down the tests themselves, and complicates things. Most tests are meant to verify very specific issues, and there is no reason the database needs to contain any more than the bare minimum of data. Only a very small number of tests need to be run against a large database. Many insurers simply copy over their entire real-world database and then run tests against it. This not only creates security issues but makes the job harder for development and quality assurance teams.
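The post describes such a utility only in words; a minimal sketch of what one might look like follows. All of the names here (TestDataGenerator, PolicyRecord, the specific fields and bounds) are hypothetical illustrations chosen for this example, not a prescription for any particular carrier system.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of a per-application test data utility: random values
// stay within realistic bounds, and a few deliberate edge cases are always
// included to stress the system under test.
public class TestDataGenerator {

    // A deliberately simplified policy record; a real application would need far more fields.
    public static class PolicyRecord {
        final String policyNumber;
        final int insuredAge;
        final double annualPremium;

        PolicyRecord(String policyNumber, int insuredAge, double annualPremium) {
            this.policyNumber = policyNumber;
            this.insuredAge = insuredAge;
            this.annualPremium = annualPremium;
        }
    }

    private final Random random = new Random();

    // Generate 'count' random but rule-bounded records.
    public List<PolicyRecord> generate(int count) {
        List<PolicyRecord> records = new ArrayList<PolicyRecord>();
        for (int i = 0; i < count; i++) {
            records.add(new PolicyRecord(
                String.format("POL-%06d", i),
                18 + random.nextInt(82),               // ages 18 to 99
                100.0 + random.nextDouble() * 9900));  // premiums $100 to $10,000
        }
        return records;
    }

    // Edge cases that should always be present, regardless of data set size.
    public List<PolicyRecord> edgeCases() {
        List<PolicyRecord> edges = new ArrayList<PolicyRecord>();
        edges.add(new PolicyRecord("POL-EDGE-1", 18, 0.01));        // minimum age, near-zero premium
        edges.add(new PolicyRecord("POL-EDGE-2", 99, 9999999.99));  // maximum age, extreme premium
        edges.add(new PolicyRecord("", 45, 1000.00));               // blank policy number
        return edges;
    }

    public static void main(String[] args) {
        TestDataGenerator gen = new TestDataGenerator();
        List<PolicyRecord> smallSet = gen.generate(50);      // small set for most functional tests
        smallSet.addAll(gen.edgeCases());
        List<PolicyRecord> largeSet = gen.generate(500000);  // large set reserved for performance tests
        System.out.println(smallSet.size() + " functional records, " + largeSet.size() + " performance records");
    }
}
```

The design point the sketch tries to capture is the separation between rule-bounded random records and a fixed set of edge cases, so the same utility can produce the small data sets used for most tests and the large ones reserved for performance runs.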

Micro-insurance — Thinking innovatively in targeting the uninsured

On a recent trip to South Africa, I was interested to see some innovative ideas targeted at the uninsured, currently around 90% of the population. South Africa, along with other emerging economies, faces many structural challenges impeding the broad adoption of insurance. Remote areas of the country have poor road and communication infrastructure. Almost two-thirds of the population have no bank account. Accessibility to financial services is low, as is the understanding of the value of such offerings. In a bid to rectify this, the government put targets in place back in 2003 for the financial services industry. The banking sector had to introduce a low-cost bank account, now in place, and the insurers had to commit to massively increasing accessibility to insurance. The low-cost bank account, called Mzanzi, is a standard debit-card-based account attracting no service fees. The insurers have accepted the challenge and plan to increase current penetration rates by 180% over the next five or so years.

In my conversations with South African insurers, it’s clear that whilst they are committed to these targets, tapping into the previously uninsured market remains immensely challenging. The innovation is mostly occurring in the areas of products and premium collection. Tying product requirements to the needs of the uninsured is a sure-fire way of garnering interest. House and motor policies have little relevance for those who own neither of these assets. However, household contents policies, funeral policies, and term life are of interest.

One of the constraints for insurers has been, and still is, the collection of premiums. With many people not having bank accounts, credit/debit payments or direct debits are simply not possible. Mobile phones are being used in one or two cases to purchase insurance. Voucher cards offering term life insurance, paid for upfront in cash, can be bought at supermarkets and other consumer outlets.

Technology can play a role in supporting this small but growing market. Low-cost policy administration and claims systems can form the heartbeat of such an operation, supported by a flexible product configuration tool. There is little need for broker/agent or consumer portal technology. Products in these markets, such as shack insurance, life insurance for a month, or crop insurance, may seem unusual but can be supported through the use of modern packaged tools. One of the more unusual product characteristics is the method of premium collection: for example, premium paid upfront, or monthly, or via a mobile phone account (or other third-party collector). This flexibility needs to be supported by a modern product configuration tool.
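As a rough illustration of that last point, here is a small, hypothetical sketch of premium collection modeled as product configuration data rather than hard-coded logic. The names (ProductConfig, CollectionMethod) are invented for this example and are not drawn from any particular vendor tool.

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical sketch: the allowed premium collection channels are part of
// the product configuration, so a new channel (e.g. mobile phone billing)
// can be enabled without changing core policy administration code.
public class ProductConfig {

    public enum CollectionMethod { CASH_UPFRONT, MONTHLY_DEBIT, MOBILE_ACCOUNT, VOUCHER_CARD }

    private final String productName;
    private final Set<CollectionMethod> allowedMethods;

    public ProductConfig(String productName, Set<CollectionMethod> allowedMethods) {
        this.productName = productName;
        this.allowedMethods = allowedMethods;
    }

    // Policy admin and billing components ask the configuration, not hard-coded rules.
    public boolean supports(CollectionMethod method) {
        return allowedMethods.contains(method);
    }

    public static void main(String[] args) {
        // A one-month term life product sold via supermarket voucher cards and mobile top-ups.
        ProductConfig termLife = new ProductConfig("One-Month Term Life",
            EnumSet.of(CollectionMethod.VOUCHER_CARD, CollectionMethod.MOBILE_ACCOUNT));
        System.out.println(termLife.productName + " supports monthly debit: "
            + termLife.supports(CollectionMethod.MONTHLY_DEBIT)); // prints false
    }
}
```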

In South Africa, as in some other emerging countries, tapping the previously uninsured is a government-enforced social objective, and this has focused insurers’ attention on this challenging area. Micro-insurance is uncharted territory, and as such it’s not possible for insurers to look to established markets for best practices. Successful insurers in these markets will continue to innovate in areas of product design and premium collection, and this analyst will be keeping an eye on how this market evolves.

Software Application Testing in Insurance, Part I: Automated Testing

In my last post I talked about the vendor proof of concept as a way for insurers to avoid buying “a lemon”. The same risk management needs to apply when an insurer develops software internally, and that’s achieved with strong testing practices. Software testing is a huge topic, and I’m going to split the discussion up over a series of blog posts. There are a lot of great books and articles out there about software testing, but in these posts I’ll try to give it an insurance industry spin. You might run into different issues depending on whether you are building web portals, data-heavy applications, server utilities, or mainframe applications, though in general the same methodologies still apply.

Topic 1: Automated Testing

From the lowest to the highest level, test cases can be broken into the following three categories:

1. Unit Tests: Code-based test cases that developers write to test their own code. Typically these are written using developer tools such as the open source JUnit.
2. Automated System Tests: Test scripts that can be run against the entire system, and can be created by developers or test teams. These are typically written with applications that are specifically geared to help write tests for web-based, Windows-based, or Java-based applications.
3. Manual Tests: Test scripts that are manually executed by a test or QA team.

The biggest testing issue I see at insurance companies is that there are too many manual test processes and not enough automated testing. Manual tests are important, but they are slow, difficult to reproduce reliably, costly, and not scalable. Plus, it’s much more likely that previously fixed bugs will pop up again without being noticed. With a full suite of Automated System Tests and Unit Tests, large changes can be made to a software application with confidence.

Many developers complain about writing unit tests or skip this step, claiming the responsibility lies with the QA team, or saying they will take care of it later (and then never do). The more unit tests written, the easier it gets to write them, and the less time will be spent fixing bugs later. And remember, the later a bug is discovered and fixed, the costlier it becomes, especially when the bug isn’t found until the system is in production.

To increase the automated test footprint on a system built using only manual testing, start requiring new unit tests for all bug fixes going forward. Each time the test group discovers a bug, an Automated System Test or a Unit Test should be written to reproduce that bug before the developer fixes the code. Once the code is changed, the team can verify that the bug has been fixed by running the new test case again and seeing that it now passes. This also creates a repeatable automated test that prevents the bug from being reintroduced.
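To illustrate the “write a test that reproduces the bug before fixing it” practice, here is a minimal JUnit sketch. The PremiumCalculator class and its rounding defect are hypothetical, invented only to show the shape of such a regression test.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: pro-rates an annual premium and rounds
// the result to two decimals (the behaviour the reported bug broke).
class PremiumCalculator {
    double proRate(double annualPremium, int daysRemaining, int daysInTerm) {
        double raw = annualPremium * daysRemaining / daysInTerm;
        return Math.round(raw * 100.0) / 100.0;
    }
}

// Regression test written when QA reports the rounding defect: it fails
// against the buggy code, passes once the fix is in place, and then stays
// in the suite so the bug cannot be silently reintroduced later.
public class PremiumCalculatorTest {

    @Test
    public void proRatedPremiumRoundsToTwoDecimals() {
        PremiumCalculator calc = new PremiumCalculator();
        // 200 of 365 days remaining on a $1,000.00 annual premium
        assertEquals(547.95, calc.proRate(1000.00, 200, 365), 0.001);
    }
}
```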

The Potential Growth Segment of Rural China

Being a Celent insurance analyst based in China, I sometimes meet with Celent customers who visit Beijing and share my insights on China insurance market growth and technology development. Many customers are interested in the potential market size, the competitiveness of the market, and the progress of technology. But recently, when I meet Celent customers in Beijing, they ask me a new question: will the insurance industry in China continue its high growth?

Consumer confidence is down because of the global financial crisis and the many countries in recession. Thus China’s exports are decreasing quickly, and because this influences other industries, we’re seeing China’s economic development slow down. Facing this, the Chinese government decided to stimulate internal investment and consumption. Since China has a population of 1.3 billion, it is a big market for its own products. At present, the rural areas in China are underdeveloped, and rural incomes are much lower. Some government policies are aimed at increasing rural people’s income. So in the not too distant future, rural areas could be a large potential market for many industries, including insurance.

The insurance premium from rural areas is very low. The market is dominated by a few large insurance companies and is still not highly competitive. The products being sold in rural areas are not specialized for rural people and thus have not aroused their interest. The distribution channel is still dominated by sales agents.

In June 2008, China’s insurance regulators started a micro-insurance experiment. To participate in the experiment, companies must develop simple products targeting low income people in rural areas, with payouts of RMB 10,000 to RMB 50,000, low premiums, simple underwriting rules, and a simple claims process. The regulation also promotes the use of multichannel distribution and new technologies such as wireless solutions.

By entering the rural market, insurance companies may at first earn only a very low premium from each client. However, as a mid-term to long-term investment, the rural area is a huge potential market for insurers, and rapid future growth can still be expected.

References . . . Fonts . . . The Meaning of Cool

References

Whenever Celent publishes an ABCD vendor report, we always ask the vendors for references. We subsequently speak to these references and/or ask them to take a short online survey. When we help an insurer with a vendor selection process, we also contact references. In general, we expect references to be quite positive. There is a hope–since we do not live in the best of all possible worlds with the best of all possible software solutions–that references will also mention some things that they would like to see improved–either about the application or the vendor. But still, overall, the expectation is a bunch of A’s, A-‘s, and a couple of B+’s.

Once in a while a reference will come out with a string of negatives–too slow, too expensive, too hard to implement or change, too much hassle working with this vendor’s staff. Enough negatives that the reference is basically saying caveat emptor–or even, you might want to take your business elsewhere. What gives? How could the vendor not know, in general, what the reference is thinking–and knowing that, just give another reference? I don’t have a good answer. But it looks like some vendor account managers need to do a better job of asking, listening, and getting broken things fixed.

Fonts

There is a school of thought in web design that small fonts are cool–and really teeny-tiny fonts are very cool. Personally I’m a “form and function” guy. Aesthetically pleasing forms are good, but only if the intended function is achieved. I’m not quite sure what the function is of website fonts too small to be read by anyone over the age of 40. Or maybe that is the function.

The Meaning of Cool

And while we are on the topic of cool. What does “cool” mean? It can mean “yes”–as in “It’s noon, let’s go get a sandwich.” “Cool.” It can also mean “good”–as in “Look, I can put a lot more text in this label with a 1.5 point font.” “Cool.” But at a deeper level, it seems to mean something more. It seems to mean “This statement / thing / activity is just my style, just my taste, just my thing. And . . . I don’t have to explain how or why that is.”

The Rise of the Capable Insurer

I was talking about the financial crisis to an executive at a large life insurer recently. Given the steady stream of dismal financial news over the past two months, you might have expected it to turn into a “woe is me!” conversation. But in fact, the opposite occurred.

“We’re doing fine,” the exec told me, “and quite frankly we’re patting ourselves on the back for sticking with our investment strategies.” Their CFO had apparently resisted the pressure to chase high returns and is now being hailed internally as a quiet hero. As a result of his tenacity, the company is well positioned to invest in technology, service innovation, and perhaps even acquisitions of some less fortunate (less lucky? less well-run?) companies.

This is one example of an insurer doing things right, and doing things well. Somehow the bad news about our industry blares from the rooftops, while success stories like this get little notice. We’ve tried to elevate the success stories through our annual Model Carrier research, which offers a series of short case studies on effective use of technology. But it is difficult to battle the perception that insurers are laggards in terms of their business and technology strategies.

This theme of the capable insurer came up again last week at a Celent breakfast event in London. My colleague from Oliver Wyman, Andy Rear, used the term to frame a discussion around the strategies of insurers poised for success. His thesis—that capable insurers are emerging, particularly those that effectively manage customers, create rational service models, maintain operational agility, and recycle capital rapidly—really struck a chord with me.

There will always be winners and losers in the battle for premium dollars and customer loyalty. And we agree with Andy that the winners will have clearly defined strategies that make them more nimble and responsive to constant market changes. Getting there, as always, will be a challenge. But examples are emerging, so we know it can be done.