Is the insurance industry facing a Cyber-Cat? Thousands of websites at risk from the Heartbleed bug…

No, no – I'm not referring to an animated cat in an app, but rather to yesterday's announcement of the Heartbleed bug, which by some estimates affects the security of over 50% of the Internet. The bug affects the OpenSSL package and is believed to have been present since 2011. It affects the way the package handles heartbeat messages, hence the moniker. There are already tools in use that exploit the bug and provide access to recent user data on compromised servers. There have been security alerts before, with many large brands facing fines and media inquiries about their losses, but this bug potentially affects hundreds of thousands of websites and many businesses globally. So why characterise this as a catastrophe, and why would insurers be interested? In the last two to three years, with the cost of data breaches growing significantly, businesses have been offsetting the risk of a breach or loss through Cyber Liability Insurance cover. While the practice and the cover are arguably in their infancy, their popularity suggests that this sort of event could constitute a significant liability to insurers globally offering this cover. Further, the event has some characteristics in common with other events requiring a catastrophe response:
  • Many insured are at risk.
  • The event will likely draw the attention of governments and regulators.
  • Swift response will mitigate further loss.
There are some significant differences here, though. Most notably, in the event of hail, storm or flooding the insured are likely to know whether their assets are affected; they may not know the extent of the loss, but they are likely to know whether they need to claim. Increasingly, risk aggregation and modelling tools are helping carriers and brokers understand the likely impact of catastrophe events. In this case, however, the insured may not know whether they have been compromised, since the bug allowed intrusions that would not be logged by the affected systems. The advice is to determine whether OpenSSL is used; if so, the server has been vulnerable, may have been compromised, and should be patched immediately. The full statement regarding the bug is available at http://heartbleed.com/, and it is also covered at http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/, which contains some useful advice. Further coverage is available from Reuters and The Guardian. As noted on heartbleed.com, the Apache and nginx web servers typically use the OpenSSL library and account for 66% of the Internet according to Netcraft's April 2014 Web Server Survey. Google says that it is not affected; Yahoo has reported that it is working to fix the affected services on its side. As always, communication and collaboration are crucial to managing these events. Insurer clients of Celent may like to read Celent's case study on combining internal and external data to respond to a catastrophe.
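For readers who want a quick first check, the sketch below is an illustrative example only, not an official detection tool: it looks at the locally installed OpenSSL version, since releases 1.0.1 through 1.0.1f are the ones publicly reported as vulnerable, with 1.0.1g carrying the fix. Note that some Linux distributions backport fixes without changing the version letter, so any result here should be confirmed against your vendor's security advisories.

```python
# Illustrative sketch only: flag locally installed OpenSSL builds whose version
# string falls in the range reported as vulnerable (1.0.1 through 1.0.1f).
# Distributions sometimes backport the fix without changing this string, so
# treat the result as a prompt to check vendor advisories, not a verdict.
import re
import subprocess

VULNERABLE_RANGE = re.compile(r"OpenSSL 1\.0\.1([a-f])?(\s|$)")

def local_openssl_version() -> str:
    # 'openssl version' is the standard way to print the installed release
    return subprocess.check_output(["openssl", "version"], text=True).strip()

if __name__ == "__main__":
    version = local_openssl_version()
    if VULNERABLE_RANGE.search(version):
        print(f"{version}: within the reported vulnerable range - patch to 1.0.1g or later")
    else:
        print(f"{version}: outside the reported vulnerable range (verify against vendor advisories)")
```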

Celent Predictions for 2014

It's clear that my colleagues and I see 2014 as something of a tipping point, a watershed for established and new technologies to take hold in the insurance industry. I'll try to summarise the predictions succinctly here; expect to see reports on these topics in the near future. Celent's 2014 predictions focus on:
  • The increasing importance and evolution of digital
  • The rise of the robots, the sensor swarm and the Internet of Things
  • An eye to the basics
The first topic area is labelled digital, but it encompasses novel use of technology, user interfaces, evolving interaction, social interaction (enabled by technology) and ye olde customer centricity. Celent predicts vendors will market core systems as customer centric again, but this time meaning digital customer centricity. Celent expects core system user interfaces to acquire more social features, along with deeper investment in user interfaces leveraging voice, gesture, expression and eye movement. A specific digital UI example is the widespread adjustment of auto damage claims handled (almost) entirely through photos. In addition, gamification for both policyholders and brokers will be adopted, or will increase in use among the early adopters. Celent further predicts greater investment in digital, and that comprehensive digitisation projects will start to drive most of the attention and budgets of IT.

The second topic I've called robots and sensors; while still part of digital, it is attracting a significant amount of specific attention. The merger, or evolution, of the Internet with the Internet of Things accelerates, with devices contributing ever more data. Celent predicts that this rise of the Internet of Things, or sensor swarm, will push usage-based insurance (UBI) into other lines of business, beyond the telematics-based auto policies that UBI is currently synonymous with. Celent further predicts that the quantified self movement, and humans with sensors, will in 2014 yield the first potentially disruptive business model for health insurance using this data. As an aside, automation, robotics and AI will see broader adoption in the insurance industry. For those reading my tweets, Celent predicts 2014 will see drones used for commercial purposes. I hope we won't have the need, but I wonder whether we'll see drones rather than helicopters capturing information about crisis-stricken regions in 2014.

The final topic I've called the basics. Celent predicts insurers will continue to focus heavily on improving the performance of the core business – a good counterbalance to the hype around digital and a good pointer to where to focus digitisation efforts. At Celent we have noted a pragmatic interest in the cloud from insurers, and we predict increasing complexity in hybrid cloud models, to the benefit of the industry. Finally, and a little tongue in cheek, Celent suggests that the industry will at last find a business case for insurers adopting big data outside of UBI. Avid readers of the blog will be happy to see we haven't predicted an apocalypse for 2014. A special thanks to Jamie Macgregor, Juan Mazzini, Donald Light and Jamie Bisker for their contributions.

Thoughts from Insurance Technology Congress 2013, London

Insurers and vendors met in London on the 24th and 25th of September this year to discuss insurance technology. The audience mostly consisted of those with an interest in the London market and Lloyd's, although there were representatives from general insurers in the UK too. I was glad to see that the tone of the meeting had shifted. In years past there has been a theme of technology and modernisation being necessary but too difficult. This is a market that has seen some high-profile and expensive failures in IT, along with successes. This week I heard again the call to action and the need to modernise, but there was a much clearer sense of optimism, a way forward.

There are still very large, expensive projects in the market, with Jim Sadler, CIO of XChanging, giving a colourful view of the latest deployment on behalf of the market. Alongside these are independent initiatives demonstrating the value of standards and cooperation amongst competitors in the market. A panel discussing the eAccounting initiative, Ruschlikon, led by XL Group's Simon Squires, gave a surprisingly engaging and transparent account of how a group of insurers and brokers collaborated and delivered to market technology that fundamentally improved their operations and speed of response to the insured. Genesis offered another example of a group of insurers coming together and collaborating to fix an issue that, again, slowed down the market and affected customer service. In the course of the proceedings the architect of Genesis remarked that the best thing for the project would be for it to be superseded by something that worked better, but that this wasn't a reason not to do it.

Throughout the discussions there was a theme of automating where human interaction didn't add value, but not automating for the sake of automation. There were discussions about delivering smaller projects, delivering them more quickly, and collaborating and adopting standards where doing so didn't affect competitive advantage and where not doing so harmed customer service. These are themes I expect we'll see repeated at next week's Celent event in San Francisco. As before, and for the last few decades, there was a sense of a need to modernise, to attract new talent, to move the market forward. This year there was a real sense of optimism, sample projects that have moved quickly and gained adoption, a way forward.

Celent honors twenty-four Model Insurers for 2012 – Will your insurance company be next in 2013?


Celent announced the winners of its sixth annual Celent Model Insurer awards on Thursday, January 26, 2012 in Boston, MA (http://www.celent.com/reports/event-presentations-celent-2012-insurance-innovation-insight-day-featuring-celents-model). On hand were nearly 150 insurers and technology vendors. Celent Model Insurer Award nominations are open to insurers across the globe and recognize insurers who have successfully demonstrated key best practices in the use of technology within the product and policyholder life cycle, and in the IT infrastructure and management practices that a "model insurer" would use. Each award winner's technology initiative is presented as a Model Insurer Component: a component of a theoretical model insurer's IT systems and practices.

Representing insurance technology “best of the best”, the 2012 Celent Model Insurer Awards honored projects in eleven categories. There were common themes in the award winning projects: an emphasis on innovation and emerging technologies like mobile, telematics, and geospatial risk management tools; a continuation of insurers using SOA/Web Services and ACORD standards as insurers build lasting, integrated solutions; and the increased digitalization of processes as insurers go green and automate process flows.

This year Celent received well over 80 submissions, each representing a different technology initiative. Twenty-three Model Insurer Components and one Model Insurer of the Year were named. Nationwide Insurance won Celent's Model Insurer of the Year recognition for its successful Catalyst Program, a five-year consolidation of two $1 billion commercial property & casualty companies. Nationwide's Catalyst program succeeded because of several key best practices, including: strong IT/business alignment; the use of a multi-phase roadmap with "builds" in flight simultaneously; the successful adaptation of the IT department from a maintenance organization into a systems deployment factory; and the use of effective change management throughout the organization to ensure business readiness.

For more information on Celent's 2012 Model Insurer Awards, please visit http://www.celent.com/reports/model-insurer-2012-case-studies-effective-technology-use-insurance. Our media partner, Insurance Networking News, has posted a slideshow of the event on their website at http://www.insurancenetworking.com/gallery/celent-innovation-insight-day-recap-29868-1.html.

Nominations for Celent Model Insurer Awards are accepted year-round. We will accept nominations for 2013 beginning February 2, 2012. The deadline for nominations is November 1, 2012. Will your recently implemented project be recognized as a Model Insurer in 2013? Submit and find out!

Temporary IT, or, Where Does Technology Go When It Retires?

I recently reviewed an IT project that had been submitted for an award, only to learn that between the time of the submission and the time of the judging, the IT project in question had been "retired." The context forced me to ask whether such a project could still be considered award-worthy, but the more general question is whether a retired system can be called a success.

Some IT projects are destined for retirement the day they are conceived. They are the temporary measures put in place to achieve a certain minimum level of functionality "until." Until the entire core system is replaced. Until a vendor offering provides the function in their next upgrade. Until the technology available in the marketplace can duplicate this custom code. Until someone comes up with a better/cheaper/faster way of doing it. These projects live short, meaningful lives. They go into production quickly, do their job, and then gracefully make an exit six months, a year, or two years later when the permanent solution is ready.

Such projects include things like a custom agent portal that is eventually eclipsed by a full policy administration replacement. Or a simple rating calculator that is maintained outside the legacy system and used solely to provide real-time quotes to agents until the legacy system has been modernized. Or hard-wired integration between a billing and claims solution maintained while both systems are being upgraded. These projects are a success not despite the fact that they are eventually retired, but because they are eventually retired. They have done their job and helped the organization advance to the next stage.

In fact, the true disaster is when a tech initiative that was always meant to be a temporary solution evolves into a critical system that lives for ten years, cobbled together like Frankenstein's IT, preventing the company from moving on to a better, more modern solution. This happens more often than we'd like, and so even projects with short lifespans need to be taken seriously.

The truth is ALL technology is temporary technology. Every system will eventually be replaced by the next system. And any system, no matter how modern and "permanent" it is upon implementation, will become legacy technology if it is not constantly kept up to date.

We can learn from these temporary IT projects. Smart organizations approach such initiatives warily, making sure that it will be easy to turn them off when it is time to move on. So should be the case with every project. Smart organizations consider how employees will be impacted when a temporary IT project is no longer available. So should be the case with every project. Smart organizations have a next step in mind when they start a temporary IT project. So should be the case with every project.

If insurers treat every IT initiative like it is a temporary one then insurers will be less likely to end up with irreplaceable legacy systems. Part of IT planning, even for major systems that have long life spans, is to consider how the system will be kept up-to-date and how it will eventually be retired.
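As a purely illustrative aside, the kind of simple, standalone rating calculator mentioned above might be little more than a small rules table and a function, kept outside the legacy system so agents can get an indicative real-time quote. The base rate, factors and field names in this minimal Python sketch are invented for illustration and do not represent any actual rating plan.

```python
# Purely illustrative standalone rating calculator: a tiny rules table kept
# outside the legacy system to give agents an indicative real-time quote.
# All rates and factors below are invented for illustration only.
BASE_RATE = 500.0
TERRITORY_FACTOR = {"urban": 1.25, "suburban": 1.00, "rural": 0.90}
AGE_FACTOR = {"16-24": 1.60, "25-64": 1.00, "65+": 1.10}

def indicative_premium(territory: str, age_band: str, clean_record: bool) -> float:
    premium = BASE_RATE * TERRITORY_FACTOR[territory] * AGE_FACTOR[age_band]
    if clean_record:
        premium *= 0.95  # illustrative 5% clean-record discount
    return round(premium, 2)

if __name__ == "__main__":
    # Example: suburban driver aged 25-64 with a clean record -> 475.0
    print(indicative_premium("suburban", "25-64", clean_record=True))
```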

Can Google Buzz teach insurers a few lessons about social networking?

Google tends to get a lot of press simply because it's Google, although it's fair to say Google has got a few things wrong with some of its products – anyone remember Orkut, or all the hype around Google Wave? One thing is certain: Google have got a lot right with the launch of Google Buzz. This may be particularly interesting as some insurers and financial advice websites move to create their own social networks. Let's examine first what Google got right, and then what we can learn from where they went wrong.

Firstly, and I think this is key, Google leveraged an existing network. Google Buzz is built on Google Mail. This immediately provides a population of customers, and the product already holds many of each customer's frequently contacted friends. This means there's little setup involved and the network has been swiftly established. Secondly, Buzz leverages existing assets and relationships. It's linked to Flickr, YouTube, Google Reader, Google Maps and others, so customers can continue using existing and familiar tools and gain extra value from Buzz. Thirdly, Buzz came out with a programming interface to allow third parties to start integrating. It also leveraged existing networks and APIs, allowing it to integrate with customers' other applications very quickly. Lastly, Google have been very quick to respond, not just with words but with meaningful changes to the product, even though it is only a week old at the time of writing. In an interview, Google described having a war room set up with developers and product owners listening to customers in real time and making key decisions about whether and how to change the product.

However, Google have got two things wrong. The very public and ill-thought-through impact on privacy has been the key concern. Customers could easily and accidentally disclose information about who they frequently emailed and contacted – information that was previously private. This is of concern to cheating partners and political activists alike. Google have already done much to address this concern, and what was an own goal is now being applauded by advocates as a swift response. We've learned today that the privacy issue has sparked some class action lawsuits in the US. The second thing that could have gone better is the programming interface, which is currently read-only. I would expect a further increase in adoption once tool authors can create updates in Buzz directly; for me this constitutes a huge opportunity missed.

What does this mean in financial services? Some simple guidelines:
  1. Leverage existing assets – both information you have and public information. Google have asked their customers to volunteer their Twitter IDs, which provides Google with an already public list of each customer's friends. Unless you have a key unique selling point, consider leveraging existing networks such as Twitter or Facebook rather than building your own.
  2. Link your network to support the free public feeds from Flickr, Twitter, etc. (see the sketch after this list).
  3. There are successful social networks that operate without a programming interface, and very few companies have offered open programming interfaces to insurance or financial data – wesabe.com is one example, offering read-only transaction data. In other domains, allowing third-party developers and tool vendors to build applications for a website has sped up adoption. Even an API limited to posting updates and managing communication and friends should have the same effect.
  4. On launch, set up a war room. Most of the feedback will arrive in the application's infancy, and its survival depends on identifying the issues and opportunities, prioritising them and visibly acting on them.
  5. Finally – get the security and privacy right; the two go hand in hand. Getting it a bit wrong and fixing it quickly, as Google have done, can earn you forgiveness, but customers will likely expect more from financial services organisations.
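By way of illustration only, here is one way the public-feed guideline above could be tried out: a short Python sketch, using only the standard library, that pulls the latest entry titles from a public RSS or Atom feed so they could be surfaced inside your own network. The feed URL is a placeholder, not a real endpoint.

```python
# Illustrative sketch: fetch a public RSS/Atom feed and list the latest entry
# titles. The URL below is a placeholder; swap in any public feed you are
# entitled to use.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/public/feed.xml"  # placeholder public feed

def latest_titles(url: str, limit: int = 5) -> list[str]:
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # Both RSS (<item><title>) and Atom (<entry><title>) expose <title>
    # elements; the first one encountered is typically the feed's own title.
    titles = [el.text for el in root.iter() if el.tag.split("}")[-1] == "title"]
    return titles[1:limit + 1]

if __name__ == "__main__":
    for title in latest_titles(FEED_URL):
        print(title)
```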

Hardware Outsourcing vs. People Outsourcing


I’ve been doing research for several reports on the topic of IT outsourcing, some about utilizing cloud computing for hardware and some about working with vendors to handle consulting and development. While these two areas are conceptually very different, the approach and business values are quite similar.

One misconception about both is that they are used for “replacement.” By this I mean that an insurer uses cloud computing to replace their data center or an insurer utilizes an IT service provider to replace their IT organization. While in some instances this might be true, it is rarely the case.

The other misconception is that an insurer uses cloud computing or consulting services to lower costs. Lower cost might be one reason, but it shouldn't be the only reason. Many CIOs, however, do approach IT outsourcing primarily for the perceived cost benefit, and Celent sees this as a mistake. In many cases, some or all of the long-term costs might actually be higher. This does not mean a cost-sensitive insurer should avoid IT outsourcing, but rather that it should proceed with an outsourcing project while looking at the overall business values associated with it.

The added business values are the other area of similarity for hardware and development outsourcing. Both help a company increase capacity, one with increased server capacity and the other with increased human capacity. Both help a company access new capabilities that it didn't have earlier: cloud computing provides rapid server deployment and failover (among other things), while development outsourcing provides resources with skill sets that did not exist in house. And, finally, both work best when thought of as a long-term strategy that will complement the existing IT, not just as a temporary measure or as a replacement for existing resources.

The takeaway is that any organization looking at IT outsourcing–whether for hardware, software, or people–should focus not on cost but on long term business value. Organizations that only care about cost are often disappointed by the outcome. Organizations that have a strategy to bring new capabilities and business value to users will be successful.

Attacking Business Complexity

This week, Celent is pleased to feature an article from guest contributor John Boochever, who leads Oliver Wyman's Global practice focused on strategic IT and operational issues across financial sectors.

For senior executives facing the turbulence of today's financial crisis, reducing the cost base of their business operations now sits squarely at the top of the agenda. After decades of product and service variation, channel diversification, and geographic and operational expansion, all supported by layer upon layer of technology, many institutions are finally compelled to deal with a fundamental reality: their businesses are overly complex for the value they generate. Not only is this excess not valued by customers, it actually impedes value delivery by limiting the sales force's ability to respond, increasing service and fulfillment costs, compounding operational risk, and making the organization more unwieldy to manage. The siren call for a simpler "core business" approach, incorporating elements of modular design and industrial engineering, is being heard across the industry. But when senior executives take the first steps toward "dialing down" complexity, they rapidly come up against three immutable features that overshadow their ability to make change in their environment:
  • Complexity is structural, deeply embedded in the business and operating models of their institutions.
  • Poorly understood network effects across functions and businesses create linkages and interdependencies that compound complexity.
  • There is a general lack of transparency about which features of complexity are required to generate value and which are not.
Eliminating complexity requires a "front-to-back" approach that identifies and addresses the root causes of complexity and all of its network effects. As an example, "eliminating 20% of non-profitable products" has limited impact if it is not followed through with a systemic simplification of the supporting operations and IT infrastructure. By the same token, introducing new middleware to make the IT architecture more "service oriented" is a waste of investment if the institution's operating model is not built around modular services at all. Financial institutions have to not just cut but eradicate complexity to regain their focus and flexibility, and to sustain efficiency. The long-term rewards of eliminating complexity include a radically simplified operating model, an improved client experience and a dramatically reduced cost structure.

John Boochever leads Oliver Wyman's Global practice focused on strategic IT and operational issues across financial sectors, and can be reached at john.boochever@oliverwyman.com.

Overcoming Fear as a Barrier to Change

At Celent, we often find ourselves helping insurance carriers implement a process of change, whether it's selecting a new policy administration system, process reengineering, or restructuring the IT organization. Change means more than just a new technology or a new process; it also requires a shift in corporate culture. Even when the IT side of a change goes well, the people side can fail. No matter how good a new system is, the project isn't a success if employees can't or won't use it.

There are many reasons employees resist change. Annoyance ("learning a new system is difficult and distracts me from my real job") and skepticism ("the last new system failed, so why trust this one") are two of them. But the biggest barrier, and the most difficult to overcome, is fear. New technology and new, more efficient processes make employees fear that their jobs will become redundant and be eliminated. And when employees are afraid, they will fight change as hard as possible.

I recently spoke with the leadership at an insurance carrier who boasted they had not laid anyone off in the history of the company. My initial impulse was to assume this meant they were putting loyalty above building an efficient business. In the US, it's sometimes taken for granted that thriving as a corporation means routine layoffs as operational efficiencies change. But this company instead invested a great deal of time and effort in retraining employees rather than letting them go. Far from being a barrier to change, this corporate attitude succeeded in taking fear out of the equation. Even in a difficult economic time, employees at this company understand that new systems and new processes don't mean layoffs. While annoyance and skepticism might still be around (and, in fact, might be increased by entrenched training and memories of previous unsuccessful projects), there is less fear. Employees can look at change as an opportunity to gain new skills; end users can provide feedback and participate in training without worrying that they are making themselves obsolete. And support and participation from end users is the often overlooked critical change factor that determines a project's success.

I was happy to see this challenge to common wisdom producing such positive results. While not every company will be willing to dedicate itself to such extreme employee loyalty, there is an excellent lesson for everyone. It is often assumed that to remain nimble and efficient, new technologies and processes must go hand in hand with staff reductions or replacements. But, at least for one carrier, a long-standing culture of stability has allowed it to overcome fear and embrace change.

Software Application Testing in Insurance, Part IV: Best Practices

Previous posts about testing: Topic 1, Automated Testing; Topic 2, Getting Test Data; Topic 3, Test Environment. This post is Topic 4: Best Practices in Application Testing.

It's been a while since I've blogged about this subject, but I think the previous three posts could use some follow-up. After talking about the need for testing, getting the test data, and setting up a test environment, we need to actually do the testing. The best practices for software application testing are not easy to set up and maintain, often requiring one or two full-time IT resources with a test development specialty (often called a Software Test Engineer, or STE). Not many technology vendors follow these best practices, let alone insurers, and the expense and effort might not be practical or possible for every insurance company. However, by understanding the best practices an insurer can at least take steps in the right direction. The ideal scenario is an automated system that is able to clean itself to a standard state before and after each test. For example, if the QA team wants to run manual tests for software application ABC, they would run a script (a minimal sketch follows the list below) that would:
  • Drop all the tables in the database, recreate the tables in the database for ABC, and populate it with the same set of start data.
  • Clear out any existing application code, then get and install the latest application code from development.
  • In extreme cases, the entire test server OS will be reinstalled from scratch, though that is likely unnecessary for this level of application testing.
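The sketch below shows one hypothetical shape such a reset script could take, using SQLite purely as a stand-in for whatever database the application actually uses; schema.sql, start_data.sql and deploy_latest.sh are invented placeholder names for the real schema, standard start data and deployment step.

```python
# Hypothetical reset script: rebuild the ABC test database and redeploy the
# application so every manual test run starts from the same known state.
# SQLite and the file names are stand-ins for the real environment.
import subprocess
import sqlite3

def rebuild_database(db_path: str = "abc_test.db") -> None:
    conn = sqlite3.connect(db_path)
    try:
        # Drop every existing application table...
        tables = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type='table' AND name NOT LIKE 'sqlite_%'"
        ).fetchall()
        for (table,) in tables:
            conn.execute(f"DROP TABLE IF EXISTS {table}")
        # ...then recreate the schema and load the standard start data.
        conn.executescript(open("schema.sql").read())
        conn.executescript(open("start_data.sql").read())
        conn.commit()
    finally:
        conn.close()

def redeploy_application() -> None:
    # Clear out the old build and install the latest code from development.
    subprocess.check_call(["./deploy_latest.sh"])

if __name__ == "__main__":
    rebuild_database()
    redeploy_application()
```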
Later, if a different QA team wants to run manual tests for software application XYZ, they would run a similar process. Both teams are guaranteed a stable, repeatable base from which to begin their testing. By recreating the database and the application each time the tests are run, there is no need to worry about maintaining the database in a certain way, and multiple users can work with the same servers. Preparation for running the automated system tests should work in a similar manner, with the reminder that the order of tests shouldn't matter; that means it might be necessary to rebuild the database between some automated system tests.

In the case of unit tests, the ideal test environment goes one step further. Each unit test (and in many cases each automated system test) should contain its own code to add the data it needs to the database. Since unit tests are intended to be very focused, the typical test just needs a few rows of data, and after the test is complete it should clean out the data it just added. Since these tests are written by developers, there should be an application-specific API, written by the IT resources devoted to test development, to help developers do this quickly.

It's a lot of effort, and each IT group needs to determine how far it is willing to go to achieve the best possible test practices. As with the previous topics, however, once the initial setup is complete, following the best practices becomes easy and natural.
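To make the unit-test pattern concrete, here is a small self-contained sketch using Python's unittest, with an in-memory SQLite table standing in for the application's test database: each test inserts only the rows it needs in setUp and removes them again in tearDown, which is the add-then-clean-out discipline described above.

```python
# Self-contained sketch of per-test data setup and teardown. The in-memory
# SQLite database stands in for the application's real test database.
import sqlite3
import unittest

class PolicyLookupTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.conn = sqlite3.connect(":memory:")
        cls.conn.execute("CREATE TABLE policy (id INTEGER PRIMARY KEY, state TEXT)")

    def setUp(self):
        # Insert just the few rows this focused test needs.
        self.conn.execute("INSERT INTO policy (id, state) VALUES (1, 'OH')")
        self.conn.commit()

    def tearDown(self):
        # Clean out the data the test added so the next test starts clean.
        self.conn.execute("DELETE FROM policy WHERE id = 1")
        self.conn.commit()

    def test_policy_state_lookup(self):
        (state,) = self.conn.execute(
            "SELECT state FROM policy WHERE id = 1"
        ).fetchone()
        self.assertEqual(state, "OH")

if __name__ == "__main__":
    unittest.main()
```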