Data Governance in Insurance Carriers

Data initiatives abound in the insurance industry. Most carriers have some type of data initiative in place, typically focused on implementing reporting tools, analytic tools, and repositories, along with all the tooling that goes with them.

Data governance, on the other hand, is an emerging discipline. It encompasses data quality, data management, data policies, and a variety of other processes surrounding the handling of data in an organization. The purpose is to assure carriers have reliable and consistent data sets for assessing performance and making decisions.

As the insurance industry moves into a more data-centric world, data governance becomes more critical for assuring the data is consistent, reliable, and usable for analysis. Analysis and reporting issues are more often related to data governance than to technology.

Data governance initiatives are generally designed to assure the data is accurate, consistent, and complete in order to maximize its use for making decisions, finding unique insights, and improving business planning. Governance assures that data capture mechanisms are set up to capture what you need, and that analytics tactics are aligned with strategic goals.

But carriers face governance challenges. Data is spread across a wide variety of applications, and data ownership is most often shared between the business and IT. Carriers report cultural resistance to understanding data issues, which makes it harder to find sponsors for data governance initiatives. Consequently, a large number of carriers, especially larger ones, deploy only informal data governance initiatives.

I’ve just published a new report that surveys carriers about their attitudes, challenges, and initiatives related to data governance. It contains some very interesting findings. Check it out: http://celent.com/reports/importance-data-governance-current-practices
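To make the data-quality piece of governance concrete, here is a minimal sketch of the kind of automated completeness, validity, and consistency checks a governance program might codify. The column names, rules, and sample data are hypothetical and purely illustrative; a real program would agree these rules with the business data owners.

```python
# Minimal sketch of automated data-quality checks a governance program might
# codify. Column names, rules, and sample data are hypothetical.
import pandas as pd

def run_quality_checks(policies: pd.DataFrame) -> dict:
    """Return simple completeness, validity, and consistency metrics (0.0 to 1.0)."""
    checks = {}

    # Completeness: share of records missing a required field.
    for col in ["policy_id", "effective_date", "annual_premium"]:
        checks[f"missing_{col}"] = policies[col].isna().mean()

    # Validity: premiums should be positive.
    checks["non_positive_premium"] = (policies["annual_premium"] <= 0).mean()

    # Consistency: expiration should not precede the effective date.
    checks["expires_before_effective"] = (
        policies["expiration_date"] < policies["effective_date"]
    ).mean()

    return checks

if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "policy_id": ["P-001", "P-002", None],
            "effective_date": pd.to_datetime(["2014-01-01", "2014-02-01", "2014-03-01"]),
            "expiration_date": pd.to_datetime(["2015-01-01", "2014-01-15", "2015-03-01"]),
            "annual_premium": [1200.0, -50.0, 900.0],
        }
    )
    for name, rate in run_quality_checks(sample).items():
        print(f"{name}: {rate:.0%}")
```

The point is not the specific rules but that they are explicit, repeatable, and owned, which is what separates a governance program from ad hoc data cleanup.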

The Blame Game

Kathleen Sebelius resigned on Monday, and I’m betting she hopes her next role does not include a major IT project. As Secretary of the Department of Health and Human Services (HHS), Sebelius was responsible for overseeing the rollout of the troubled healthcare.gov website, whose launch was tainted by serious technical problems.

Initially the technical problems were thought to be confined to scalability issues given the large amount of traffic, but a variety of other issues surfaced. The site rejected valid passwords, served up blank drop-down menus, and crashed repeatedly. There were challenges with the database, issues with integration, and, after millions of visits on the first day, only six people got all the way through. New contractors were brought in to fix the problems, which added to the cost overruns. The cost ceiling began at $93M, was raised to $292M, and today the site is estimated to have cost around $500M. To be fair, this was an extremely complex project: six complex systems, 55 contractors working in parallel, five government agencies involved, use in 36 states, and coverage of 4,500 insurance plans from 300 insurers.

There were a number of contributing factors to the technical problems. No single contractor managed the entire project, and there was a lack of coordination across the multiple vendors. There were a number of last-minute changes, and the project was managed using a waterfall methodology, which can make it difficult to respond to changes quickly. Testing was inadequate. Not only did the system not perform according to design, it didn’t scale to the level anticipated. They knew what the load would be, but the load testing didn’t exercise the system to the levels in the capacity plan.
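To illustrate what testing against a capacity plan means in practice, the sketch below drives concurrent requests at an endpoint and reports how many responses meet a latency target. The URL, concurrency level, and target are hypothetical placeholders; real capacity testing would use a dedicated load-testing tool against a production-like environment.

```python
# Minimal sketch of a load check against a stated capacity target.
# URL, concurrency, and latency target are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://staging.example.com/quote"  # hypothetical endpoint
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20
LATENCY_TARGET_SECONDS = 2.0

def one_user(_):
    """Simulate one user issuing a series of requests; record each latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            urlopen(TARGET_URL, timeout=10).read()
            latencies.append(time.time() - start)
        except Exception:
            latencies.append(float("inf"))  # count failures as missed target
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(one_user, range(CONCURRENT_USERS))
        all_latencies = [latency for user in results for latency in user]
    within_target = sum(l <= LATENCY_TARGET_SECONDS for l in all_latencies) / len(all_latencies)
    print(f"{within_target:.0%} of requests met the {LATENCY_TARGET_SECONDS}s target")
```

The useful output is not a pass/fail but the comparison against the documented capacity plan: if the plan says the site must handle a given concurrency at a given latency, the test has to be run at least to that level.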

However, Sebelius had little direct oversight of the project and certainly wasn’t responsible for the day-to-day project management. The website design was managed and overseen by the Centers for Medicare and Medicaid Services, which directly supervised the construction of the federal website. Regardless, Sebelius is likely updating her resume today and considering alternatives.

What does this mean for a CIO? If you’re going through a large-scale project, and many carriers are, you won’t know everything the project manager knows, even though your neck is on the line if the project fails. Large-scale projects require a different level of management than day-to-day operations.

Areas to focus on include:

  • Set realistic time frames. Don’t underestimate the amount of time it will take to implement the project. A lot of carriers want to hear that implementation of a policy admin system can be done in 6 to 12 months, and while there are some examples of that being true, it’s more likely that your solution will take longer. Plan carefully, add contingencies, and if you end up with a choice between launching late or launching with less functionality than was initially planned, you’re usually better off taking the time to do it right. People are much less forgiving of a poorly executed project than of a late one.

  • Manage the project with multiple, aligned work streams. Large projects generally require multiple work streams. We often see carriers divide the project into streams such as data, workflow, rating, and documents. This allows each team to focus its efforts. However, you have to continuously monitor that the streams are aligned. Communication across multiple work streams is critical.

  • Communication is a key success factor for large projects, yet it is often an afterthought, or worse, not planned at all. Communication across project teams is necessary to assure the functionality is aligned as planned. It is also critical when it comes to managing scope creep. When the team clearly understands the priorities, they’re better able to make tradeoffs early on. Clearly setting expectations around the deliverables, and then continuing to manage those expectations as the project moves forward, is an important piece of the communication plan, especially when faced with optimistic delivery dates, changing requirements, or staffing constraints.

  • Focus on the worst-case scenario. Be skeptical when all is going smoothly. Insist on regular checks on the project and take red flags seriously. Realistic monitoring of project progress, and analysis of the underlying factors driving the use of contingency, will help identify issues early on. Make sure not to just look backwards at what has occurred, but to focus on readiness for future stages. Some carriers benefit from having third parties come in and conduct project health checks, looking objectively across the project for subtle indicators of potential issues.

In the end, Sebelius was accountable for the results of the healthcare.gov implementation, and her resignation should be a warning for carriers in similar situations. Take a look at the governance you’ve put in place for your large projects. Now may be a good time to consider adding some additional oversight.

Is the insurance industry facing a Cyber-Cat? Thousands of websites at risk to heartbleed bug…

No, I’m not referring to an animated cat in an app, but rather to yesterday’s announcement of the Heartbleed bug, which by some estimates affects the security of over 50% of the Internet. The bug affects the OpenSSL package and is believed to have been present since 2011. It affects the way the package handles heartbeat messages, hence the moniker given to the bug. There are already tools in use that exploit the bug and provide access to recent user data on compromised servers.

There have been security alerts before, with many large brands facing fines and media inquiries about their losses, but this bug potentially affects hundreds of thousands of websites and many businesses globally. So why characterise this as a catastrophe, and why would insurers be interested?

In the last two to three years, with the cost of data breaches growing significantly, businesses have been offsetting the risk of a breach or loss through cyber liability insurance covers. Whilst the practice and cover are arguably in their infancy, their popularity suggests that this sort of event could constitute a significant liability to insurers globally offering this cover. Further, the event has some characteristics in common with other events requiring catastrophe response:
  • Many insured are at risk.
  • The event will likely draw the attention of governments and regulators.
  • Swift response will mitigate further loss.
There are some significant differences here, though. Most notably, in the event of hail, storm, or flooding, the insured are likely aware whether their assets are affected; they may not know the extent of the loss, but they know whether they need to claim. Increasingly, risk aggregation and modelling tools are helping carriers and brokers understand the likely impact of catastrophe events. In this case, however, the insured may not know whether they have been compromised, since the bug allowed intrusions that would not be logged by the affected systems.

The advice is to determine whether OpenSSL is used; if so, the server has been vulnerable, may have been compromised, and should be patched immediately. The full statement regarding the bug is available at http://heartbleed.com/, and it is also covered at http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/, which contains some useful advice. Further coverage is available from Reuters and The Guardian. As noted on heartbleed.com, the Apache and Nginx web servers typically use the OpenSSL library and account for 66% of the Internet according to Netcraft’s April 2014 Web Server Survey. Google says that it is not affected; Yahoo has reported that it is working to fix the affected services on its side.

As always, communication and collaboration are crucial to managing these events. Insurer clients of Celent may like to read Celent’s case study on combining internal and external data to respond to a catastrophe.
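As a minimal illustration of the “determine whether OpenSSL is used” advice, the sketch below reports the OpenSSL release a Python runtime is linked against and flags the range generally cited as vulnerable (1.0.1 through 1.0.1f). Note the caveat in the comments: vendors often backport fixes without changing the version string, so a match is a prompt to investigate, not proof of exposure.

```python
# Minimal sketch: report the OpenSSL release this Python runtime is linked
# against and flag the range generally cited as vulnerable to Heartbleed
# (1.0.1 through 1.0.1f). Vendors often backport fixes without changing the
# version string, so a match here warrants investigation, not a verdict.
import ssl

VULNERABLE_RELEASES = {"1.0.1" + suffix for suffix in ["", "a", "b", "c", "d", "e", "f"]}

def openssl_release() -> str:
    # ssl.OPENSSL_VERSION looks like "OpenSSL 1.0.1e 11 Feb 2013".
    parts = ssl.OPENSSL_VERSION.split()
    return parts[1] if len(parts) > 1 else ssl.OPENSSL_VERSION

if __name__ == "__main__":
    release = openssl_release()
    print(f"Linked OpenSSL release: {release}")
    if release in VULNERABLE_RELEASES:
        print("Potentially vulnerable to Heartbleed: patch, then reissue keys and certificates.")
    else:
        print("Not in the commonly cited vulnerable range (verify with your vendor).")
```

The same check matters for every web-facing server, not just application runtimes, which is why the scale of the exposure resembles a catastrophe footprint more than a single-site breach.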

Celent Predictions for 2014

It’s clear that my colleagues and I see 2014 as something of a tipping point, a watershed for established and new technologies to take hold in the insurance industry. I’ll try to summarise them succinctly here. Expect to see reports on these topics in the near future. Celent’s 2014 predictions focus on:
  • The increasing importance and evolution of digital
  • The rise of the robots, the sensor swarm and the Internet of Things
  • An eye to the basics
The first topic area is labelled digital but encompasses novel use of technology, user interfaces, evolving interaction, social interaction (enabled by technology), and ye olde customer centricity. Celent predicts vendors will market core systems as customer centric again, but this time meaning digital customer centricity. Celent expects core system user interfaces to acquire more social features, along with deeper investment in user interfaces leveraging voice, gesture, expression, and eye movements. A specific digital UI example is the widespread adjustment of auto damage claims (almost) entirely through photos. In addition, gamification for both policyholders and brokers will be adopted, or will increase in use among early adopters. Celent further predicts greater investment in digital, and that comprehensive digitisation projects will start to drive most of the attention and budgets of IT.

The second topic I’ve called robots and sensors; while still digital, it attracts a significant amount of attention and specificity of its own. The merger, or evolution, of the Internet with the Internet of Things accelerates, with devices contributing ever more data. Celent predicts this rise of the Internet of Things, or the sensor swarm, will push usage based insurance policies into other lines of business, beyond the telematics based auto policies that UBI is currently synonymous with. Celent further predicts that the quantified self movement, and humans with sensors, will in 2014 yield the first potentially disruptive business model for health insurance using this data. As an aside, the increasing use of automation, robotics, and AI will see broader adoption in the insurance industry. For those reading my tweets, Celent predicts 2014 will see drones used for commercial purposes. I hope we won’t have the need, but I wonder if we’ll see drones rather than helicopters capturing information about crisis-stricken regions in 2014.

The final topic I’ve called the basics. Celent predicts insurers will continue to focus heavily on improving performance of the core business, a good counterbalance to the hype around digital and a good pointer to where to focus digitisation efforts. At Celent we have noted a pragmatic interest in the cloud from insurers, and we predict increasing complexity in hybrid cloud models, to the benefit of the industry. Finally, a little tongue in cheek, Celent suggests the industry will at last find a business case for insurers adopting big data outside of UBI. Avid readers of the blog will be happy to see we haven’t predicted an apocalypse for 2014.

A special thanks to Jamie Macgregor, Juan Mazzini, Donald Light and Jamie Bisker for their contributions.

Thoughts from Insurance Technology Congress 2013, London

Insurers and vendors met in London to discuss insurance technology on the 24th and 25th of September this year. The audience mostly consisted of those with an interest in the London market and Lloyd’s, although there were representatives from general insurers in the UK too.

I was glad to see that the tone of the meeting had shifted. In years past there has been a theme of technology and modernisation being necessary but too difficult. This is a market that has seen some high profile and expensive failures in IT along with successes. This week I heard again the call to action, the need to modernise, but there was a much clearer sense of optimism, a way forward.

There are still very large, expensive projects in the market, with Jim Sadler, CIO of XChanging, giving a colourful view of the latest deployment on behalf of the market. Alongside these are independent initiatives demonstrating the value of standards and cooperation amongst competitors in the market. A panel discussing the eAccounting initiative, Ruschlikon, led by XL Group’s Simon Squires, gave a surprisingly engaging and transparent story of how a group of insurers and brokers collaborated and delivered to market technology that fundamentally improved their operations and speed of response to the insured. Genesis offered another example of a group of insurers coming together to fix an issue that, again, slowed down the market and affected customer service. In the course of the proceedings, the architect of Genesis remarked that the best thing for the project would be for it to be superseded by something that worked better, but that this wasn’t a reason not to do it.

Throughout the discussions there was a theme of automating where human interaction didn’t add value, but not automating for the sake of automation. There were discussions about delivering smaller projects, doing it quicker, collaborating, and adopting standards where this didn’t affect competitive advantage and not doing so harmed customer service. These are themes I expect we’ll see repeated at next week’s Celent event in San Francisco.

As before, and for the last few decades, there was a sense of a need to modernise, to attract new talent, to move the market forward. This year there was a real sense of optimism, sample projects that have moved quickly and gained adoption, a way forward.

Celent honors twenty-four Model Insurers for 2012 – Will your insurance company be next in 2013?

Celent announced the winners of its sixth annual Celent Model Insurer awards on Thursday, January 26, 2012 in Boston, MA (http://www.celent.com/reports/event-presentations-celent-2012-insurance-innovation-insight-day-featuring-celents-model). On hand were nearly 150 insurers and technology vendors. Celent Model Insurer Award nominations are open to insurers across the globe and recognize insurers who have successfully demonstrated key best practices in the use of technology, within the product and policyholder life cycle and in IT infrastructure and management, that a “model insurer” would use. Each award winner’s technology initiative is presented as a Model Insurer Component: a component of a theoretical model insurer’s IT systems and practices.

Representing insurance technology “best of the best”, the 2012 Celent Model Insurer Awards honored projects in eleven categories. There were common themes in the award winning projects: an emphasis on innovation and emerging technologies like mobile, telematics, and geospatial risk management tools; a continuation of insurers using SOA/Web Services and ACORD standards as insurers build lasting, integrated solutions; and the increased digitalization of processes as insurers go green and automate process flows.

This year Celent received well over 80 submissions, each representing a different technology initiative. Twenty-three Model Insurer Components and one Model Insurer of the Year were named. Nationwide Insurance won Celent’s Model Insurer of the Year recognition for its successful Catalyst Program, a five year consolidation of two $1 billion commercial property & casualty companies. Nationwide’s Catalyst program was successful because of several key best practices, including: strong IT/business alignment; the use of a multi-phase roadmap with “builds” in flight simultaneously; the successful adaptation of the IT department from being a maintenance organization to a systems deployment factory; and the use of effective change management throughout the organizations to ensure business readiness.

For more information on Celent’s 2012 Model Insurer Awards, please visit http://www.celent.com/reports/model-insurer-2012-case-studies-effective-technology-use-insurance. Our media partner, Insurance Networking News, has posted a slideshow of the event on their website at http://www.insurancenetworking.com/gallery/celent-innovation-insight-day-recap-29868-1.html.

Nominations for Celent Model Insurer Awards are accepted year-round. We will accept nominations for 2013 beginning February 2, 2012. The deadline for nominations is November 1, 2012. Will your recently implemented project be recognized as a Model Insurer in 2013? Submit and find out!

Temporary IT, or, Where Does Technology Go When It Retires?

I recently reviewed an IT project that had been submitted for an award, only to learn that between the time of the submission and the time of the judging, the IT project in question had been “retired.” The context forced me to ask whether such a project could still be considered award-worthy, but the more general question is whether a retired system can be called a success.

Some IT projects are destined for retirement the day they are conceived. They are the temporary measures put in place to achieve a certain minimum level of functionality “until.” Until the entire core system is replaced. Until a vendor offering provides the function in its next upgrade. Until the technology available in the marketplace can duplicate this custom code. Until someone comes up with a better/cheaper/faster way of doing it. These projects live short, meaningful lives. They go into production quickly, do their job, and then gracefully make an exit six months, a year, or two years later when the permanent solution is ready. Such projects include things like a custom agent portal that is eventually eclipsed by a full policy administration replacement. Or a simple rating calculator that is maintained outside the legacy system and used solely to provide real-time quotes to agents until the legacy system has been modernized. Or hard-wired integration between a billing and claims solution maintained while both systems are being upgraded.

These projects are a success not despite the fact that they are eventually retired, but because they are eventually retired. They have done their job and helped the organization advance to the next stage. In fact, the true disaster is when a tech initiative that was always meant to be a temporary solution evolves into a critical system that lives for ten years, cobbled together like Frankenstein’s IT, preventing the company from moving on to a better, more modern solution. This happens more often than we’d like, and so even projects with short lifespans need to be taken seriously.

The truth is ALL technology is temporary technology. Every system will eventually be replaced by the next system. And any system, no matter how modern and “permanent” it is upon implementation, will become legacy technology if it is not constantly kept up to date.

We can learn from these temporary IT projects. Smart organizations approach such initiatives warily, making sure that it will be easy to turn them off when it is time to move on. So should be the case with every project. Smart organizations consider how employees will be impacted when a temporary IT project is no longer available. So should be the case with every project. Smart organizations have a next step in mind when they start a temporary IT project. So should be the case with every project.

If insurers treat every IT initiative as if it were temporary, they will be less likely to end up with irreplaceable legacy systems. Part of IT planning, even for major systems that have long life spans, is to consider how the system will be kept up to date and how it will eventually be retired.

Can Google Buzz teach insurers a few lessons about social networking?

Google tends to get a lot of press simply because it’s Google, although it’s fair to say Google has got a few things wrong with some of its products. Anyone remember Orkut, or all the hype around Google Wave? One thing is certain: Google has got a lot right about the launch of Google Buzz. This may be particularly interesting as some insurers and financial advice web sites move to create their own social networks. Let’s examine first what Google got right, and then what we can learn from where they went wrong.

Firstly, and I think this is key, Google leveraged an existing network. Google Buzz is built on Google Mail. This immediately gives a population of customers. In addition, this product already holds many of the customer’s frequently contacted friends. This means there’s little set up involved and the network has been swiftly established. Secondly, Buzz leverages existing assets and relationships. It’s linked to Flickr, YouTube, Google Reader, Google Maps, and others. This means that customers can continue using existing and familiar tools and gain extra value from Buzz. Thirdly, Buzz came out with a programming interface to allow third parties to start integrating. It also leveraged existing networks and APIs, allowing it to integrate with the customer’s other applications very quickly. Lastly, Google have been very quick to respond, not just with words but with meaningful change to the product, even though it is only a week old at the time of writing. In an interview, Google described having a war room set up with developers and product owners listening to customers in real time and making key decisions about whether and how to change the product.

However, Google have got two things wrong. The very public and ill-thought-through impact on privacy has been the key concern. Customers could easily and accidentally disclose information about who they frequently emailed and contacted, information that was previously private. This is of concern to cheating partners and political activists alike. Google have already done much to address this concern, and what was an own goal is now being applauded as a swift response by advocates. We’ve learned today that the privacy issue has sparked some class action lawsuits in the US. The second thing that could have gone better is the programming interface, which is currently read-only. I would expect a further increase in adoption once tool authors can create updates to Buzz directly; for me this constitutes a huge opportunity missed.

What does this mean in financial services? Some simple guidelines:
  1. Leverage existing assets – both information you have and public information. Google have asked their customers to volunteer their Twitter ID. This information provides Google with an already public list of their customers’ friends. Unless you have a key unique selling point, consider leveraging existing networks rather than building your own – for example Twitter or Facebook.
  2. Link your network to support the free public feeds from Flickr, Twitter, etc.; a minimal sketch of consuming such a feed follows this list.
  3. There are successful social networks that operate without a programming interface. Very few companies have offered open programming interfaces with insurance or financial data – wesabe.com is one such example, offering read-only transaction data. In other domains, allowing third party developers and tools vendors to build applications for a website has sped up adoption. Even an API limited to posting updates and managing communication and friends should have the same effect.
  4. On launch, set up a war room. Most of the feedback will arrive in the application’s infancy, and its survival depends on identifying the issues and opportunities, prioritising these, and visibly acting on them.
  5. Finally – get the security and privacy right. The two go hand in hand. Getting it a bit wrong and fixing it quickly, as Google have, can earn you forgiveness, but customers will likely expect more from financial services organisations.
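For guideline 2, here is a minimal sketch of consuming a public Atom feed and surfacing its most recent entries. The feed URL is a hypothetical placeholder; substitute whichever public feed you actually want to surface alongside your own network’s content.

```python
# Minimal sketch of consuming a public Atom feed and listing recent entries.
# The feed URL is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/public/activity.atom"  # hypothetical feed
ATOM_NS = "{http://www.w3.org/2005/Atom}"

def latest_entries(url: str, limit: int = 5):
    """Fetch an Atom feed and return (updated, title) pairs for recent entries."""
    with urllib.request.urlopen(url, timeout=10) as response:
        root = ET.fromstring(response.read())
    entries = []
    for entry in root.findall(f"{ATOM_NS}entry")[:limit]:
        title = entry.findtext(f"{ATOM_NS}title", default="(untitled)")
        updated = entry.findtext(f"{ATOM_NS}updated", default="")
        entries.append((updated, title))
    return entries

if __name__ == "__main__":
    for updated, title in latest_entries(FEED_URL):
        print(f"{updated}  {title}")
```

Pulling in feeds that customers already maintain elsewhere is a much cheaper way to make a new network feel alive than asking them to re-create content from scratch.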

Hardware Outsourcing vs. People Outsourcing

I’ve been doing research for several reports on the topic of IT outsourcing, some about utilizing cloud computing for hardware and some about working with vendors to handle consulting and development. While these two areas are conceptually very different, the approach and business values are quite similar.

One misconception about both is that they are used for “replacement.” By this I mean that an insurer uses cloud computing to replace their data center or an insurer utilizes an IT service provider to replace their IT organization. While in some instances this might be true, it is rarely the case.

The other misconception is that an insurer uses cloud computing or consulting services simply to lower costs. Lower cost might be one reason, but it shouldn’t be the only reason. Many CIOs, however, do approach IT outsourcing primarily for the perceived cost benefit, and Celent sees this as a mistake. In many cases, some or all of the long-term costs might actually be higher. This does not mean a cost-sensitive insurer should avoid IT outsourcing, but rather that it should proceed with an outsourcing project while looking at the overall business value associated with it.

The added business value is the other area of similarity between hardware and development outsourcing. Both help a company increase capacity, one with increased server capacity and the other with increased human capacity. Both help a company access capabilities it didn’t have before: cloud computing provides rapid server deployment and failover (among other things), while development outsourcing provides resources with skill sets that did not exist in house. And, finally, both work best when thought of as a long-term strategy that complements the existing IT organization, not just as a temporary measure or a replacement for existing resources.

The takeaway is that any organization looking at IT outsourcing–whether for hardware, software, or people–should focus not on cost but on long term business value. Organizations that only care about cost are often disappointed by the outcome. Organizations that have a strategy to bring new capabilities and business value to users will be successful.

Attacking Business Complexity

This week, Celent is pleased to feature an article from guest contributor John Boochever, who leads Oliver Wyman’s Global practice focused on strategic IT and operational issues across financial sectors.

For senior executives facing the turbulence of today’s financial crisis, reducing the cost base of their business operations now sits squarely at the top of the agenda. After decades of product and service variation, channel diversification, and geographic and operational expansion, all supported by layers upon layers of technology, many institutions are finally compelled to deal with a fundamental reality: their businesses are overly complex for the value they generate. Not only is this excess complexity not valued by customers, it actually impedes value delivery by limiting the sales force’s ability to respond, increasing service and fulfillment costs, compounding operational risk, and making the organization more unwieldy to manage.

The siren call for a simpler “core business” approach, incorporating elements of modular design and industrial engineering, is being heard across the industry. But when senior executives take the first steps toward “dialing down” complexity, they rapidly come up against three immutable features that overshadow their ability to make change in their environment:
  • Complexity is structural, deeply embedded in the business and operating models of their institutions.
  • Poorly understood network effects across functions and businesses create linkages and interdependencies that compound complexity.
  • There is a general lack of transparency about which features of complexity are required to generate value and which are not.

Eliminating complexity requires a “front-to-back” approach that identifies and addresses the root causes of complexity and all of its network effects. As an example, “eliminating 20% of non-profitable products” has limited impact if it is not followed through with a systemic simplification of the supporting operations and IT infrastructure. By the same token, introducing new middleware to make the IT architecture more “service oriented” is a waste of investment if the institution’s operating model is not built around modular services at all. Financial institutions have to not just cut but eradicate complexity to regain their focus and flexibility, and sustain efficiency. The long-term rewards of eliminating complexity include a radically simplified operating model, an improved client experience, and a dramatically reduced cost structure.

John Boochever leads Oliver Wyman’s Global practice focused on strategic IT and operational issues across financial sectors, and can be reached at john.boochever@oliverwyman.com.