Rethinking the role of the intercap

The trend-naming fashion of capital letters in the middle of words continues. I believe those “InterCaps”—also known as “BumpyCaps” and “CamelCaps”—are mostly a marketing trick intended to make terms sound important. I find them annoying. The hot example of late is FinTech. Plus its close cousins, BankTech, InsurTech, and RegTech. They’re popping up everywhere, including within the hallowed halls of Celent. We are all guilty of putting a new veneer on something that has been around for ages.

What does that capital T in Tech imply, and why do the terms get such rapt attention? Is applying technology to the business of financial services new, and more worthy of our attention today than it was years ago? Is how we manage new technology fundamentally changed? I don’t think so. Maybe the point is to let us collectively off the hook for pursuing technology change so casually (was that it?) for the last 50 years. I can imagine the bank or insurance CIO, late in his/her career, saying, “Hey, if we had FinTech 30 years ago, this place might look a damn sight different by now!” Right, that’s what we were missing: technology startups! Youngsters in hoodies!

The truth behind technology and the financial services industry requires no such defense. Changing the world through application of technology didn’t depend on the arrival of startling new tools, or dorm room genius, as helpful as those might be in today’s world. It required a risk/reward shift. As an industry, we didn’t change because we didn’t have to. Our existence was not threatened by new consumer behaviors. Our livelihoods were not at risk from upstart competitors. We took a hard look at the costs and benefits of new technology, and behaved accordingly. Which meant…changing…slowly.

But something is certainly different today. I believe that existential threats are emerging for our industry. We are now at risk. I’m firmly convinced that relationships between consumers and their financial providers are changing, with the industry’s participation or without it. There is a new dynamism, and it is clear that the entire ecosystem is feeling the impact.

Instead of looking at FinTech and all the other Techs with an annoyed editor’s eye, maybe I should embrace the way intercaps communicate something important. They’re a stylistic irritation. But they’re also a visual cue that helps us rethink technology. And that is sorely needed in these times of powerful disruption.

The rise and fall (and rise) of Artificial Intelligence

Artificial intelligence has been around nearly as long as humans have been able to think about themselves, about thought, and about what they do. Empathy is wired into us – some more than others – but we are all capable of thinking from another’s point of view. This capacity leads us to anthropomorphize things that aren’t human, to imbue things in our daily lives with human qualities like moods, characteristics and personality. When we build puppets, robots, and models that look sort of human, it is easy for us to assign them greater power, ability and promise than is really there. For marketers in other fields, having consumers attribute ‘magical’ properties to their products would be a dream come true, but for artificial intelligence it is a nightmare – one the industry has expended funds marketing against.

Artificial intelligence has delivered many great tools which today we take for granted. Our phones listen to us and understand our requests in the context of our calendar, our cameras recognise faces and social networks tell us who those faces belong to, machines translate words from one language to another (although don’t get the translations tattooed just yet), and the list goes on. We chuckle at the mistakes these learning and adaptive systems make, we see the huge strides and investment, and we expect a new human-like intelligence to emerge in the short term.

Around the middle of every decade since the 60s there has been a peak in excitement for AI, a frustration with its lack of progress, and a reduction in funding – the AI winters, as they are called. In the eighties it was LISP machines, in the nineties it was expert systems. Now in the twenty-tens (I thought it was teenies but that’s a kids’ show apparently) we are seeing a resurgence of AI, a blending of machine learning, predictive modelling and cognitive computing, along with self-driving cars. This raises some rare and interesting questions:
  • Are we headed for a new AI winter?
  • Or an AI apocalypse?
  • Also, will I still be cleaning my home in 2020?
It is certainly true to say that the set of tasks we can expect software and physical computing systems to do is vastly increased compared to just a decade ago, and massively so since the 60s. Doing all the things humans can do, living in our society, empathising and understanding us in that broad context is still well beyond computers – but engaging with us in specific, well-defined domains such as our calendar or what we would like to buy from the shop is well within their grasp today. Previously difficult tasks such as searching a database for information, reading it from a screen and keying it into another screen are now entirely possible – see the earlier blog post on bots. Having a drone fly itself around an obstacle to reach an objective is still very hard. Having a vehicle drive itself on the road is in fact easier, albeit most humans don’t benefit from lidar sensors, ultrasonics and eyes in the back of their head (alright, bumper).

It is good to see AI on the rise again – I have loved the topic ever since getting into programming and getting involved in a cognitive psychology course some years ago. I recall writing an expert system in Pascal back in the 90s. I am concerned, as the insurance industry should be, about a new AI winter.

Self-driving cars and vehicles have the potential to make the roads safer for all. We will, when we see them, imbue them with more power than they have – this is human nature. We will, in the not too distant future, hear people say things like, “the car likes to give cyclists a lot of room on the road” or “the car prefers to take this corner at a fair speed” – imbuing a complex machine of sensors and programming with preferences, desires and likes – human qualities. When the first death comes, we will ask how it could do such a thing. When an automated car is put in a position where it must decide between a set of actions – each leading to injury – we will hear people discuss why it chose to do what it did. People may say, “it did the best it could” or worse, “no person would ever have done that; this is why machines shouldn’t be able to choose.” The latter of course reveals the human construct, an unspoken contract – our expectation that smart or intelligent systems will operate like us, share our values and our culture, and that we can predict their actions in our context. This is the greatest threat to AI and always has been – the expectation, the contract, that the new intelligence will be like human intelligence. Some winters are due in part to that contract being broken, to these systems not living up to the expectation and making inhuman mistakes.

There is a set of tools available now that are not intelligent, but they are smart and they are powerful. We would be remiss in our duty to our customers and shareholders if we did not leverage them. Manage expectations about these powerful tools and understand the very real limits that exist on them. If we can do this we may benefit from the AI boom and avoid another AI winter.

Will we see an AI apocalypse? Ironically it’s not the human-like intelligence that may be our greatest threat but simpler intelligences. A human-like intelligence could empathise, could act in accordance with values and could be relatively predictable (in a human way). There are many stories across science fiction of smart robots that act like insects and replicate – that only make copies of themselves – and that pose a great threat to any civilisation. They are not intelligent; they don’t want to kill off all life in the galaxy – they just turn all the available resources into copies of themselves, which would have that effect. We are, frankly, much closer to building that threat (with drones, 3D printers, etc.) than a super-intelligence that decides all human life is worthless. For now though, I expect these things to stay firmly in the space of science fiction. I include this discussion here because it demonstrates a key difference between smart with unintended consequences and ‘intelligent’ – a lesson worth bearing in mind for those adopting AI.

Finally – will we see robots cleaning our homes by 2020? Well, the Roomba is out there and sort of does that. Stairs and steps are still a huge challenge to robots. Frankly, differentiating furniture, pets, clutter, magazines, rubbish, dust and recycling in a moving environment is still a very complex issue. As in insurance, I think smart things will make cleaning easier and assist those who invest, but there’ll be a role for human intelligence in ensuring the pets aren’t recycled and the customer ultimately gets the service they expect.
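To make the “smart, not intelligent” distinction concrete, here is a minimal sketch of the kind of tool I mean – a small decision tree that flags claims for human review. The claims data and feature names are invented for illustration; the point is only that such a tool replays a learned pattern within a narrow, well-defined domain, and nothing more.

    # A minimal sketch of a "smart, not intelligent" tool.
    # The claims data and feature names are invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [claim_amount, days_since_policy_start, prior_claims]
    X = [
        [500, 400, 0],
        [12000, 15, 2],
        [800, 700, 1],
        [15000, 10, 3],
        [300, 900, 0],
        [9000, 30, 2],
    ]
    # 1 = refer to a human investigator, 0 = pay automatically
    y = [0, 1, 0, 1, 0, 1]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # The model has no values, culture or context -- it only replays
    # the statistical pattern in its training data.
    print(model.predict([[11000, 20, 2]]))  # -> [1], flag for review

It does one narrow job well; expecting anything more of it is exactly the broken contract described above.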

Robotics, bots and chocolate teapots

Increasingly in operational efficiency and automation circles we’re hearing about bots and robotics. As a software engineer in days past and a recovering enterprise architect, I have given up biting my tongue and repeatedly note that “we have seen it all before.” I’ve written screen scrapers that get data out of screens, written code to drive terminal applications, and even hunted around user interfaces to find buttons to press. The early price comparison websites over a decade ago used these techniques to do the comparison. These techniques work for a while but are desperately fragile when someone changes the name of a button, a screen, or a screen flow.

However, they can help. I recall a while ago a manager lamenting that ‘the solution’ was about as useful as a chocolate teapot. A useful 10 minutes hunting for this video of a chocolate teapot holding boiling water for one whole pot of tea made the point for me. Sometimes all you need is one pot of tea.
Tea poured from a chocolate teapot

So it’s not new, some bots may be fragile, and with my “efficiency of IT spend” hat on (the one typically worn by enterprise architects) stitching automation together by having software do what people do is an awful solution – but as a pragmatist, sometimes it’s good enough.

Things have moved on. Rather than a physical machine running this with a ghost apparently operating mouse and keyboard, we have virtual machines, and monitoring is a lot better than it used to be. Further, machine learning and artificial intelligence libraries are now getting robust enough to contribute meaningfully smart or learning bots to the mix that can do a bit more than rote button pressing and reading screens. In fact this is all reminiscent of the AI dream of multi-agent systems and distributed artificial intelligence, where autonomous agents collaborated on learning and problem-solving tasks amongst other things. The replacement of teams of humans working on tasks with teams of bots directly aligns with this early vision. The way these systems are now stitched together owes much to the recent work on service oriented architecture, component orchestration, and modern approaches to monitoring distributed Internet-scale applications.

For outsourcers it makes a great deal of sense. The legacy systems are controlled and unlikely to change, the benefits are quick, and if these bots do break, a team looking after many bots across the estate can fix them swiftly. It may not be as elegant as SOA purists would like, but it helps them automate and achieve their objectives.

The language frustrates me though, albeit “bots” is better than chocolate teapots. I’ve heard “bot” refer to a chunk of code to run, a machine learning model, and a virtual machine running the code. I’ve even heard discussion comparing the number of staff saved to the number of bots in play – I can well imagine operations leads in the future including bot efficiency in their KPIs. Personally, I’d rather we discussed them for what they are – virtual desktops, screen scraper components, regression models, decision trees, code, bits of SQL where appropriate, etc. – rather than bucket them together, but perhaps I’m too close to the technology. In short, “bots” may not be a well-defined term, but the collection it describes is another useful set of tools, becoming increasingly robust, to add to the architect’s toolkit.
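To show what that fragility looks like in practice, here is a minimal bot sketch using Selenium; the URL and element names are invented. The bot reads a value off a screen and presses a button, exactly as a human operator would – and it breaks the moment someone renames that button.

    # A minimal screen-scraping bot sketch (URL and element names invented).
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import NoSuchElementException

    driver = webdriver.Chrome()
    driver.get("https://quotes.example.com/motor")
    try:
        # Read a value off the screen, as a human operator would...
        premium = driver.find_element(By.ID, "premium-amount").text
        # ...then hunt for the button and press it.
        driver.find_element(By.NAME, "accept-quote").click()
        print(f"Captured premium: {premium}")
    except NoSuchElementException:
        # The chocolate teapot moment: a renamed button or redesigned
        # screen, and the bot is broken until someone repairs it.
        print("Screen changed underneath the bot; manual repair needed.")
    finally:
        driver.quit()

For an outsourcer with a team watching many such bots, that except branch is a quick fix; for anyone else it is the start of the fragility described above.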

Living with the Internet of Things (and crowd funding)

Earlier this week some users of the Wink smart home hub found that their smart home hub was more useful as a door stop or brick than as a hub. A fix is being worked on and rolled out to customers, but for me this looks like the teething problems of the still nascent Internet of Things movement, and one of the hurdles Apple is trying to jump with the Apple Watch.

Earlier this month I received a portable handheld scanner from Dacuda. It’s not unusual for me to receive gadgets in the post, but this one was particularly interesting to me as I had been one of the Kickstarter funders of the item and have been following its creation with some interest. It piqued my interest particularly because I’d seen the technology almost two decades ago in a research lab but not seen it come to market at a reasonable price – a scanner that one moves over the page while software builds a picture of the underlying document. This isn’t the first crowdfunded item I’ve bought. My keys have a Tile attached to them and I’m still wearing the original Pebble wrist watch (with e-ink display). I guess this firmly places me as an early adopter in the Internet of Things, wearables and crowdfunding space. I don’t have a Wink hub, although it’s sort of appealing, but it’s not available in the UK yet.

So far though it hasn’t been all clear pastures and dreams ideally realised. The Internet of Things has its teething problems. Let’s take the Tile for instance, a small device that emits a bluetooth and short range wifi signal so you can track its location from a phone or tablet, thus never losing it. I used to have 3 of them and now have 2 – that’s right, I lost one. I was rushing out the door, the school run running a little behind schedule, and forgot my phone. Somewhere on the brief journey I dropped the Tile and what it was attached to. Had I had my phone with me it would have given me the location of the last place it connected to the Tile; as it was, it told me the last time it saw the Tile was at home. No matter, in theory if I retraced my steps I would come in range and be alerted that it was found. This didn’t work either, so I assume it was picked up. Since the battery lasts two years, perhaps someone with the app will go near it and it may yet find its way home – but not yet. Part user error and part an unfortunate series of events perhaps, but another technology found fallible and a dream not quite realised.

The Pebble has been more successful. The fact I answer the phone when it rings is largely down to my smart watch rather than the phone these days, and the wrist-borne notifications are hugely helpful. I use the Misfit app on it to tell me I’m not doing enough exercise, and a Withings smart body analyser at home to let me know the end result of not having done enough exercise – all great fun! I may still invest in the Apple Watch. I have a standing desk so do stand, something Misfit on my Pebble doesn’t track, and I feel I want to be recognised digitally for this at least.

The little handheld scanner is more a work in progress. My son is somewhat fascinated when it works and hugely interested in the errors it makes and where they are made – such is life as an early adopter. More teething issues there.
No doubt though we as a population are moving to a world where anything we buy could be connected, where we can buy a $50 hub that controls our lighting from an app and its failure is covered in the global (technology) press, and where we can fund and follow the development of gadgets we’ve dreamt of owning for a couple of decades (even if the software needs a little more work).

So what does this have to do with insurance? The fact is the Internet of Things appears to be running apace, smart homes are being tried out by the early adopters, and bugs are being squashed. Did you know that with the Wink hub, the app on your phone, and a $40 Quirky+GE water sensor you can get alerted in real time to escape of water events? Ever been out of the house and come home to find the kitchen, bathroom or basement flooded? Indeed, just yesterday Karen pointed out this article suggesting insurers are getting involved with smart homes. There’s a lot of buzz around health and life insurance, in part driven by the Apple Watch launch. I’m looking forward to Apple doubling down on the HomeKit API, or someone credible getting there first; I’m looking forward to the same boom around the Internet of Things, and to insurers handing out moisture sensors to home owners. I’m looking forward to prevention and intervention products, rather than services sold after a loss. Perhaps we just need to squash a few more bugs first.

Engaging the NAIC on Emerging Insurance Technologies

“If the regulators aren’t with you, expect insurance innovation to take longer and cost more.” This comment surfaces repeatedly in Celent’s research. We believe it is true and have seen it occur in the past (credit scoring, predictive analytics, telematics). To address this issue, we at Celent have begun to proactively engage regulators on emerging technology topics.

Last week, I presented at the fall annual meeting of the National Association of Insurance Commissioners. Addressing the Property & Casualty Committee, the topic was “Emerging Technologies and Their Potential to Impact the Insurance Industry”. I observed that regulators are eager to receive information on this topic. This is understandable, as budget constraints make it very difficult to divert resources from the day-to-day crush of filings and approvals to concentrate on the future. However, as I mentioned in the session, it does the industry absolutely no good if the first time a regulator learns about a new technology is in a new filing!

Celent is tracking several technologies with the promise to change the insurance proposition in important, fundamental ways. For example, digital capabilities allow customer engagement to shift from periodic (only at time of renewal or a claim) to continuous (daily lifestyle suggestions). Another trend we see is a movement from “pay as you rate” to “pay how you use”. The introduction of telematics in the Auto line is the best example of this. Several case studies from around the world were used to illustrate how, in other regulatory environments, technology is being applied to insurance. The digital customer experience platform built by Tokio Marine was shown as an example of continuous customer engagement. AXA’s use of public and private data sources to change the FNOL reporting process was offered as a case of a transition from reactive to proactive claims management.

Celent will continue to be involved in briefing regulators on these issues. We encourage insurance technology providers to do the same. We are all on this journey together and will get there faster and more effectively if we communicate actively.

A Recipe for Digital Innovation

At each of the five Celent Innovation Roundtables held in the last several months, innovation practitioners consistently identified culture change as a significant success factor. A particular challenge, poor communication between technologists and their business partners, is often cited as a barrier. The Second Machine Age by MIT professors Erik Brynjolfsson @erikbryn and Andrew McAfee @amcafee offers some help. Their explanation of digital innovation made a big impression on me as the clearest description I have found so far. The approach is simple: “digital information…is built on multiple layers”. It is a “recipe” of different automation solutions mixed together. That is, look at a list of digital technologies, pick a few, combine them in unique ways so that they work together, and deliver new value.

This description led me to revisit some Celent insurance innovation case studies and rethink how best to explain them. The first, the AXA claims example (Visualizing the London Riots at AXA UK, http://www.celent.com/reports/visualising-london-riots-axa-uk), outlined how the insurer combined data from public police records, media reports, and its internal systems to predict which of its insureds might suffer a loss during the multi-day rioting in the U.K. in 2011. AXA “layered” successive sources of digital data, then added analytic algorithms to produce a new and valuable tool designed to proactively identify at-risk insureds (mainly small businesses exposed to looting). All of these technologies existed on their own, in isolation, until they were combined to yield new insights which helped avoid losses.

The second study is from Tokio Marine & Nichido Fire Insurance Co., Ltd. They were recognized as a Celent Model Insurer for their One Time Insurance product (Model Insurer 2012: Case Studies of Effective Technology Use in Insurance, http://www.celent.com/reports/model-insurer-2012-case-studies-effective-technology-use-insurance). They combined geo-location, text messaging, and data prefill services to deliver real-time insurance offers to subscribers. As a prospect drives to the airport, their mobile phone receives a text from the insurer with an offer for travel insurance. Similarly, texts are sent as golfers arrive for their tee times, skiers approach the lifts, etc. It is the combination, or layering, of these technologies in a unique manner that creates the innovative service.

The value of this explanation is not only academic. Layering strikes me as a useful tool to explain how all of this “digital stuff” can fit together. The recipe and layering metaphors succinctly describe digital in non-technical, accessible terms. They can be used with any audience to illustrate how the whole can be greater than the sum of the parts. I also see value in using layering to generate new ideas. My thought is that, in an interactive session, a group of participants could create a list of technologies, data sources, etc. and then brainstorm different combinations from them. Our continuing research illustrates that there is no one prescription for innovation, but there are guideposts to follow. The use of the layering metaphor to improve communication and as a technique for brainstorming is one such guide.
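To illustrate the layering idea in the simplest possible terms, here is a toy sketch in the spirit of the AXA example; all of the data and field names are invented. Each layer is unremarkable on its own, and the value appears only when the layers are combined.

    # A toy sketch of "layering" digital data sources (all data invented).
    # Layer 1: public incident data (police records, media reports).
    incidents = [
        {"postcode": "E1", "severity": "high"},
        {"postcode": "N7", "severity": "low"},
    ]

    # Layer 2: the insurer's own book of business.
    policies = [
        {"policy_id": "P-001", "postcode": "E1", "type": "small business"},
        {"policy_id": "P-002", "postcode": "SW9", "type": "household"},
    ]

    # Layer 3: a simple analytic rule joining the two layers.
    def at_risk(policies, incidents):
        hot = {i["postcode"] for i in incidents if i["severity"] == "high"}
        return [p for p in policies if p["postcode"] in hot]

    # None of the layers is novel alone; combined, they yield a new,
    # actionable list of insureds to contact proactively.
    print(at_risk(policies, incidents))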

Thoughts from Insurance Technology Congress 2013, London

Insurers and vendors met in London to discuss insurance technology on the 24th and 25th of September this year. The audience mostly consisted of those with an interest in the London market and Lloyd’s, although there were representatives from general insurers in the UK too. I was glad to see that the tone of the meeting had shifted. In years past there has been a theme of technology and modernisation being necessary but too difficult. This is a market that has seen some high profile and expensive failures in IT, along with successes. This week I heard again the call to action, the need to modernise, but there was a much clearer sense of optimism, a way forward.

There are still very large, expensive projects in the market, with Jim Sadler, CIO of XChanging, giving a colourful view of the latest deployment on behalf of the market. Alongside these are independent initiatives demonstrating the value of standards and cooperation amongst competitors in the market. A panel discussing the eAccounting initiative, Ruschlikon, led by XL Group’s Simon Squires, gave a surprisingly engaging and transparent story of how a group of insurers and brokers collaborated and delivered to market technology that fundamentally improved their operations and speed of response to the insured. Genesis offered another example of a group of insurers coming together and collaborating to fix an issue that, again, slowed down the market and affected customer service. In the course of the proceedings the architect of Genesis mentioned that the best thing for the project would be for it to be superseded by something that worked better, but that wasn’t a reason not to do it.

Throughout the discussions there was a theme of automating where human interaction didn’t add value, but not automating for the sake of automation. There were discussions about delivering smaller projects, doing it quicker, collaborating and adopting standards where this didn’t affect competitive advantage and not doing so harmed customer service. These are themes I expect we’ll see repeated at next week’s Celent event in San Francisco. As before, and for the last few decades, there was a sense of a need to modernise, to attract new talent, to move the market forward. This year there was a real sense of optimism, sample projects that have moved quickly and gained adoption, a way forward.

New Challenges require a New Mindset for Insurance

Celent conducted another successful Peer Networking Event (PNE), this time in Atlanta, Georgia. The event was well attended by insurers from around the area and even had representation from a bank. The PNEs are designed to bring together insurers to discuss topics that they find of interest, either due to immediate concerns or future direction. The structure of the forum allows for open and candid discussions between the participants.

The two topics discussed during this PNE were emerging technologies and the architectural concerns of incorporating and integrating these new technologies into the existing environments carriers must deal with today. Celent provided its perspective and insight into these areas and the group engaged in a lot of interactive dialog. Carriers were interested in what others are doing with respect to telematics, customer sentiment and the use of external data. There was a growing concern expressed about how to deal with the large amounts of varying data and how to incorporate that information into the business decisioning process. For example, one carrier wanted to know if anyone had experience with data aggregators to help deal with the Big Data challenge that is beginning to hit the insurance industry.

Another concern expressed was how to maximize the user experience for policyholders, agents, and CSRs (Customer Service Representatives). The insurers said they face a challenge with their ability to integrate across their systems to provide the level of experience that users have come to expect from Google, Facebook and Amazon. They also discussed the significant advantages available to specialty insurers from leveraging more customer data to better underwrite risks.

There was a high-value discussion regarding the need for IT to educate the business on the art of the possible. Regarding emerging technologies, IT needs to better understand the business and take a seat at the table to help drive the business and help it understand what is truly possible, what is still just a concept, and the true business impact of the emerging technologies that are so hot in the press.

Celent proposed that in the future insurance IT landscape, all that carriers will own is the architecture and the information. This generated good debate around the roles of the enterprise architect and the business architect. Only three of the carriers in attendance have a formal, mature business architecture practice. Others described their business architects as really business SMEs (subject matter experts). The insurers also observed that IT architects and insurers rarely talk about human capital. Carriers need to develop an IT human capital plan related to IT architecture skills. A central theme of the day was that the role of the insurance IT architect is definitely changing as we move forward.

One carrier presented their EA (Enterprise Architecture) journey. They have been moving away from siloed, line-of-business architectures to a true enterprise architecture, responsible for the enterprise, not just a single line of business. They created an EA roadmap and established an EA governance group that has been quite effective. The surprising aspect of their efforts was the speed (six months) with which they established and matured their EA governance group. Some of the key reasons for this are that the EA governance committee consists of senior executives from IT and the business, and all projects must go through a practical EA review. The governance process enhances project deliverables and has not become a bottleneck for project delivery. Their focus is on their value proposition and how they can help drive their company to achieve its business goals.

In the afternoon, another carrier presented their system modernization efforts and the journey they have made over the last couple of years. As with most carriers, they had a myriad of systems and a lot of manual processes. They found it took longer than expected to get the first line of business up, but new lines now take only 5-9 months (reduced from 18 months previously). They have rationalized many of their systems and continue moving forward on improving the back end and introducing portals and improved customer experience on the front end. A key lesson learned was to fix underwriting first and then focus on the back end process systems, such as claims and finance. Other lessons learned included:

  • Need 100% commitment from the business
  • Change/fix the process, not the system
  • Define your requirements based on the new business process
  • Decide what you want to do, then pick the tool (not vice versa)
  • Define requirements up front before selecting an implementation partner
  • Be realistic about data conversion time and effort
  • Dedicate a project manager from underwriting full time
  • Do not convert policy data! Convert policies at time of renewal.
  • Allow projects to fail
  • Define your requirements well before working with a vendor; otherwise, they cannot understand what you want

The PNE confirmed to all the carriers in the room that they are all struggling with variations of the same issues. It also confirmed that these new challenges cannot be faced with the old insurance mindset or culture, and it provided practical steps that have been taken to make the needed transitions.

The next PNE is scheduled for October 26, at RSA Canada in Toronto. The two topics for discussion will be Insurance innovation and Big Data in insurance. The event is open to all carriers. Check the Celent site (www.celent.com) soon for event and registration details. Based on the last several PNEs, you won’t want to miss it!

Question & Answer: Celent Insurance Webinar: Emerging Insurance Technologies: Life, Annuities, and Pensions Industry Edition

Jamie MacGregor, Nicolas Michellod and I presented a webinar, Emerging Insurance Technologies: Life, Annuities, and Pensions Industry Edition, on May 31st. The webinar had an active Q&A session at the end, and as a result Nicolas, Jamie and I did not have time to answer all the questions. This post provides our answers to all the questions: those asked and answered during the webinar, as well as those asked but not answered.

Q: How are Insurers leveraging Social Media & Mobility in their business model and what are some of the major challenges faced in this area?

This was asked during the webinar.

A: From the European perspective, there are different ways to leverage social media and social media data in the insurance business. For life insurers, the most popular is to use it as a marketing tool: mining social media data, trying to shape their reputation, and using it to communicate information about products. This has been used by life insurers often enough that we can say it’s becoming common. What we are seeing more and more is life insurers trying to use the existing data on social media platforms to better detect insurance fraud or better assess underwriting risks. This could be done using specific mining tools, but the difficulty for insurers now is to be sure that they can turn the data into appropriate information for their business use. It is coming.

On the mobile side, from the research we have done in North America, there has not been as much demand from life insurance policyholders as in, say, P&C. But consumers would like transactional capabilities such as paying their bill, or at least knowing their bill is due or that there is a potential for lapse. They would like communication with or from their insurance company. On the producer side, insurance companies are making the move to offer capabilities to their producers. It’s not highly prevalent, but it is increasing. From Celent research in 2011, we found that of the top 100 life insurance companies, only 12 were offering producers some functionality on mobile technology. Most of that was marketing or informational capability, not transactional. However, doing a quick review this past week, we found that many more insurers now offer producers mobile capabilities; many are still marketing oriented, but transactional functionality like illustrations, quotes, needs analysis, and even eApplications is increasingly being seen. Celent’s yearly CIO research also shows that insurers view mobile capabilities for producers as a need-to-have in the short term.

Regulatory changes are also causing insurers to begin investigating bold steps in both the mobile and social media spaces.

Q: What are the main drivers of technology adoption in Asia, and which emerging technologies are mostly being adopted? Does this geography show the same behavior as the US and Europe, or are there differences?

This was asked during the webinar.

A: Geographies do not show the same behavior in terms of technology adoption. Our report provides our view on the level of adoption of emerging technologies for each region that we cover: North America, EMEA, Asia and Latin America. The report can be found on the Celent website: http://www.celent.com/reports/emerging-insurance-technologies-life-annuities-and-pensions-industry-edition-2012. We suggest you read the report and see if it makes sense for you; if you have any further questions, let us know and we can set up a call.

It depends on the technologies that affect specific lines of business. For example, we talked about hedging technologies, the tools that allow for better hedging of the financial position for variable annuities. The market that is most developed in variable annuities is the US market, so in the US these technologies have a higher adoption rate. In Europe, the UK has a more mature variable annuity market than, say, continental Europe, where the products are not so popular, so there is a varying level of adoption of these technologies within Europe.

Looking particularly at Asia Pacific, the emerging technologies that tend to get traction are the following:

– Mobile technology: Asia Pacific is the largest mobile market in the world, and the growth is still strong. Access to the Internet through a mobile device is increasing fast. Many Asia Pacific insurance companies are currently working on providing mobility solutions to agents, and some of them are also planning to provide mobile solutions to policyholders for self-service, and to prospects for purchasing simple insurance products.

– Virtualization: insurers in developed markets already have clear strategies in place for virtualization and have deployments in place for the proportion of their estates able to take advantage. In some emerging markets, strategies are less well developed, although many will be reviewing their options currently.

– Business Rules Management: Many Asia Pacific insurance companies have adopted business rules management in recent years, and many companies that don’t currently have BRM in place are planning to implement a BRM solution within the next three years, in order to achieve some degree of automation, increase efficiency, and reduce headcount cost. The drivers behind this are the increasing cost of insurance professionals, such as underwriters, and the growth in business volumes.

Two additional points on Latin America and Asia, when looking at the technology coming from there. Because these less mature markets don’t have the older legacy technologies seen in the older markets of the world, some of the new technologies are being adopted wrapped as part of greater policy administration systems. The general difference is that the more mature markets tend to concentrate on component-based solutions from the outset rather than out-of-the-box solutions, whereas the fresher new markets have the luxury of taking more from a single source because of the greenfield nature of their businesses.

Q: Most of the emerging technologies seem to be in the efficiency and expense control quadrant. There are no proven/high-priority technologies in the 4th quadrant (U/W). Why? How about business rules – don’t they qualify?

This was asked during the webinar.

A: Working towards efficiency and expense control is a real concern for insurers. The insurer is trying to balance revenues that are down with making sure the cost structure allows for strong profits. When it comes to the liability management quadrant, Celent distinguishes liability management from broader efficiencies and cost savings because these technologies are quite specific to managing the risk presented by underwriting business. Technologies that identify and apply claims data to improve actuarial tables and underwriting rules, that identify potential ways to protect an insurer from fraud, or that support the identification of high-risk groups are often large endeavors involving highly unstructured data or highly manual processes. As a result, the data available is difficult to apply to the business processes, and for many insurers this is an area that has been under discussion for a while but not implemented. Automated underwriting is a prime example.

In the case of business rules, we include them as a technology in the efficiency and expense control quadrant. Adoption varies across geographies. Insurers are using modern business rules management systems to capture, manage, and parameterize business rules. Not many insurers have succeeded in fully externalizing business rules, providing a mechanism for reuse, and managing them separately from core code, but they are trying, by implementing BRM tools alongside their core system upgrades. We feel it is an efficiency and expense control technology because it allows for the efficiency of reusing process rules across the enterprise.
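As a crude illustration of what externalizing a rule means – the rule and thresholds below are invented – the rule lives as data that a business analyst could maintain, while the engine that evaluates it sits unchanged in code:

    # A toy sketch of externalized business rules (rules and values invented).
    # The rules are data -- they could live in a table or file that an
    # analyst maintains -- while the engine below never changes.
    RULES = [
        {"field": "age", "op": "gt", "value": 65, "action": "refer"},
        {"field": "sum_assured", "op": "gt", "value": 500000, "action": "refer"},
    ]

    OPS = {"gt": lambda a, b: a > b, "lt": lambda a, b: a < b}

    def evaluate(application, rules):
        for rule in rules:
            if OPS[rule["op"]](application[rule["field"]], rule["value"]):
                return rule["action"]
        return "auto-accept"

    print(evaluate({"age": 40, "sum_assured": 750000}, RULES))  # -> refer

Changing an underwriting threshold then means editing a row of data rather than redeploying core code, which is where the reuse and efficiency benefits come from.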

Q: Are there legal/regulatory complications of using data from social media for U/W and claims? How authentic is the data available on social media platforms, and what are the privacy issues when insurers try to access the personal data of insureds?

This was asked during the webinar.

A: In Europe, we are not aware of any restrictions on using data people have offered voluntarily on social media sites, forums, blogs, etc. We think it might be a concern for the future, but currently it means that anyone can access the data and use it the way they want; people are adults and responsible for their own actions. They post under their own choice; whether it affects their relationship with companies or insurance companies is another question.

The authenticity of data on social media platforms is very important, as are the privacy issues raised when insurers try to access that data. We see more and more insurance companies trying to launch specific innovative products in this space; AXA, for example, launched a reputation protection product in France. The idea is that if there is wrong information about a person on a social media platform, AXA will try to force the social network or forum owner to erase the data about the client. If the company refuses, then AXA will flood the internet with only positive information about the insured. This shows that trusting information on the internet is still in its infancy; sometimes one can rely only on data that is known to be accurate.

Q: What level of savings can a Tier 2 insurer expect from upgrading to a modern PAS?

A: First of all, let us clarify how Celent defines a Tier 2 insurer, using our five-tier definition:

Tier 5: Insurers under US$100 million in premium

Tier 4: Insurers with US$100 million to US$499 million in premium

Tier 3: Insurers with US$500 million to US$999 million in premium

Tier 2: Insurers with US$1 billion to US$4.9 billion in premium

Tier 1: Insurers with US$5 billion in premium or more

In general – at least in Europe – a majority of Tier 2 life insurers are companies with a presence in different countries. They have therefore had to adapt to market changes when expanding cross-border, either via acquisitions or organically. This has led them to run complex IT infrastructures and application landscapes. While some of them are trying to find ways to simplify their IT infrastructure, it remains difficult to:

– Quantify the intangible benefits of a modern PAS, especially around speed-to-market and automation,

– Determine what part of the savings derives from which budget position, as life insurers generally lack advanced management accounting capabilities.

In other words, while insurers understand that investment in modern core systems such as a PAS with state-of-the-art product configuration tools will allow them to improve many aspects of their business, they often struggle to attach hard-dollar benefits to them. On the other hand, if the replacement of a legacy PAS also implies a transformation of the infrastructure, they are generally able to quantify the indirect savings linked to the overall information system changes. In conclusion, the level of savings a Tier 2 insurer can expect from upgrading to a modern PAS depends on the existing information system, the objectives it is trying to achieve with the new PAS (serve a single line of business or several, replace multiple systems, share specific components across geographies, etc.), and the type of transformation project accompanying the PAS implementation.

For more on the subject, Celent has published two research reports that indirectly try to answer part of the question:

Capturing the Strategic Value of IT: A Review of IT Investment Evaluation Methods

The Business Case for Modern Policy Administration Systems

Q: COTS product adoption vs. customized solutions – what is the latest trend? Any specific business process to highlight? For example, policy administration may have matured over time, so might we see more insurance carriers adopting COTS products for PAS?

A: Last year Celent published a report reviewing the main trends in the build vs. buy debate in the P/C sector (The Build Vs. Buy Debate: An Update from the Insurance System Landscape), and we think the trend in the life insurance space has followed the same path. However, it is important to mention that there are differences across geographies. Indeed, while North America and UK based life insurers tend to prioritize COTS, continental European insurers still think that bespoke systems (internal development or development with an external partner) remain the best approach, although preferences are slowly shifting toward a best-of-breed approach (assembling different components purchased from IT vendors on the market).

Q: What are the emerging technology solutions that enable insurers to provide a unique customer experience for improved retention and up-sell/cross-sell?

A: These can be found in the Growth & Retention quadrant. Increasingly, we are having conversations with insurers about how to improve the customer experience. Although adoption of these technologies is not high currently, ‘next best action’ for improving product take-up and ‘top-ups’, and sentiment analysis for better understanding the likelihood of surrender are two of the technologies being considered seriously by some insurers. However, classic issues in our industry such as the current capacity to execute, legacy landscapes and intermediation creating a communication barrier between the insurer and consumer are impacting the pace of their adoption.

Q: Would you say the agent mobile support is based on market problems versus what the carriers are able to offer? If an agent could use a mobile device for more, would they, or do they just want it for marketing support?

A: As stated above, insurers are offering producers a mix of marketing support and transactional capabilities in their mobile applications today. Celent hears from insurers regularly that producers are asking for expanded mobile capabilities because of market dynamics. The iPad is increasingly seen as a ‘must have’ item in the sales process. Even older traditional insurers whose agents do not use laptops are hearing that those agents want to do more and more on a tablet. It might start out with marketing or training materials, but the requests extend to CRM, illustrations, needs analyses, and access to policyholder data. Access to more transactional capabilities is occurring most often through the browser, so that the application is platform agnostic. For many insurers, however, their back office technology is a larger challenge than getting a producer to use the technology if offered. For example, a desktop illustration system cannot be used on a mobile device. If an insurer does not find an answer to that problem soon, the market will quickly move past it, and it will lose sales to the insurer who can offer mobile illustration capabilities.

We hope we captured all of the questions. Thank you and we look forward to your being on our webinars in the future!

Emerging Technologies in General Insurance

The Celent webinar yesterday, Emerging Technologies in General Insurance, was very well attended and there were more questions than time allowed. Thank you to everyone who was able to attend and who contributed. If you didn’t get a chance to join us, the recording is posted at http://www.celent.com/node/29767

Below are the answers to the questions that were still outstanding at the end of the session:

Q: Is there such a thing as an Insurance Carrier Fraud Maturity Model?

· No, but great idea and don’t be surprised if you see one in the upcoming Celent report on fighting insurance fraud!

Q: For consuming telematics data, is this something that a carrier should do standalone or are there industry schemas such as IBM’s IIW that add value in this regard?

· We have observed a difference of approach between US carriers and insurers in the UK. Typically in the US, larger carriers are building out the infrastructure and models themselves to capture, analyse, keep and use customer telematics data. In the UK, the preference is to use partners or the vendors of the devices to gather the data and do the analysis on the insurer’s behalf, sharing the results of the analysis for use by the insurer. It’s worth noting there are already efforts underway at the UK and German associations of insurers to discuss a common format for this data, to allow the information to be shared between vendors at time of renewal, although I don’t believe they’re reviewing industry warehouse models such as IIW. There is no one-size-fits-all approach, and insurers using various approaches are meeting with success, although Celent expects some common standards to emerge for sharing this data between insurers, agents, brokers and even customers, should they request it.
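No such common format existed at the time of writing, but a shared schema would likely start from something as simple as the summary record sketched below; the field names are invented for illustration, and either an insurer or a device vendor could produce and exchange such a record at renewal.

    # A hypothetical telematics summary record; all field names are
    # invented to illustrate what a shared renewal-time format might carry.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TelematicsSummary:
        policy_id: str
        period_start: date
        period_end: date
        miles_driven: float
        night_driving_pct: float        # share of miles driven at night
        harsh_braking_per_100mi: float

    record = TelematicsSummary(
        policy_id="P-001",
        period_start=date(2013, 1, 1),
        period_end=date(2013, 12, 31),
        miles_driven=8200.0,
        night_driving_pct=0.12,
        harsh_braking_per_100mi=1.4,
    )
    print(record)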

Q: What carriers are best-in-class when it comes to Big Data? What technologies do they use?

· Big Data is still more of a buzzword than a reality for carriers today. The larger insurers have Big Data programs/pilots underway due to the amount of data that they hold. Smaller carriers are considering Cloud options, and mid-size carriers for the most part are watching the results of the other two.

Q: Isn’t expense control closely associated with underwriting efficiency? What adoption of emerging technologies do you see with respect to underwriting?

· Expense control covers all insurance functions and processes, as well as the technologies that support/automate them. Analytics has been used by a lot of carriers to make UW decisions more effective and efficient. We expect the use of social data to play a great role with respect to UW over the next 1-3 years, along with the continued use and maturing of analytics in the UW process and decisioning.

Q: Why hasn’t the notion of insurance-focused open source taken off?

· Open source has taken off in many carriers and is in use in varying degrees and levels (operating systems, libraries, ESBs, portals; and to a lesser extent applications). Analysts do not typically include open source solutions in their reports (as separate from other non-open source vendor solutions) for several reasons. First, the analyst process of evaluating solutions starts with a vendor, their profile, their implementations, customer references, etc. Second, vendors often use open source as a component or even a core of their applications that are included in most vendor reports.

· Many carriers prefer working with a vendor rather than developing solutions internally and thus select a vendor solution over open source.

Q: Many of the new data sources create privacy “surprises” when consumers intuit that a commercial organization knows what it knows and puts it to use, even if it benefits the consumer. Policy is of course lagging technologies but it will evolve unpredictably. California has limits on telematics data use, for example. How the insurance industry implements emerging technologies has a public relations component and potential for igniting a very fragmented state-by-state way that data can be used. What does Celent see as far as some uses being especially dangerous from a brand perspective and liable to be shut down by regulation?

· Pricing and eligibility decision rules must be filed with the states. Whether in file-and-use or preapproval jurisdictions, all regulators expect insurers to declare these parameters. Using any non-approved data source to price or determine eligibility should be strictly off limits.