Archives for February 2010

Moderated Commenting Is Now Supported

After several complaints from readers that it was difficult or impossible to comment on the Celent blog, we’ve embraced a social web approach and have opened the comments up to everyone. Comments can be posted without a WordPress account, though they will be moderated by the Celent analyst team and may not appear on the site immediately. We invite you to share your thoughts and join the discussion.

Temporary IT, or, Where Does Technology Go When It Retires?

I recently reviewed an IT project that had been submitted for an award, only to learn that between the time of the submission and the time of the judging, the IT project in question had been “retired.” The context forced me to ask whether such a project could still be considered award-worthy, but the more general question is whether a retired system can be called a success.

Some IT projects are destined for retirement the day they are conceived. They are the temporary measures put in place to achieve a certain minimum level of functionality “until.” Until the entire core system is replaced. Until a vendor offering provides the function in its next upgrade. Until the technology available in the marketplace can duplicate this custom code. Until someone comes up with a better/cheaper/faster way of doing it.

These projects live short, meaningful lives. They go into production quickly, do their job, and then gracefully make an exit six months, a year, or two years later when the permanent solution is ready. Such projects include a custom agent portal that is eventually eclipsed by a full policy administration replacement; a simple rating calculator maintained outside the legacy system and used solely to provide real-time quotes to agents until the legacy system has been modernized; or hard-wired integration between a billing and a claims solution maintained while both systems are being upgraded.

These projects are a success not despite the fact that they are eventually retired, but because they are eventually retired. They have done their job and helped the organization advance to the next stage. In fact, the true disaster is when a tech initiative that was always meant to be a temporary solution evolves into a critical system that lives for ten years, cobbled together like Frankenstein’s IT, preventing the company from moving on to a better, more modern solution. This happens more often than we’d like, so even projects with short lifespans need to be taken seriously.

The truth is that ALL technology is temporary technology. Every system will eventually be replaced by the next system. And any system, no matter how modern and “permanent” it is upon implementation, will become legacy technology if it is not constantly kept up to date.

We can learn from these temporary IT projects. Smart organizations approach such initiatives warily, making sure it will be easy to turn them off when it is time to move on. So should be the case with every project. Smart organizations consider how employees will be affected when a temporary IT project is no longer available. So should be the case with every project. Smart organizations have a next step in mind when they start a temporary IT project. So should be the case with every project.

If insurers treat every IT initiative as if it were temporary, they will be less likely to end up with irreplaceable legacy systems. Part of IT planning, even for major systems with long life spans, is to consider how the system will be kept up to date and how it will eventually be retired.

Can Google Buzz teach insurers a few lessons about social networking?

Google tends to get a lot of press simply because it’s Google, although it’s fair to say Google has got a few things wrong with some of its products – anyone remember Orkut, or all the hype around Google Wave? One thing is certain: Google has got a lot right about the launch of Google Buzz. This is particularly interesting as some insurers and financial advice web sites move to create their own social networks. Let’s examine first what Google got right and then what we can learn from where it went wrong.

Firstly, and I think this is key, Google leveraged an existing network. Google Buzz is built on Google Mail, which immediately gives it a population of customers. In addition, that product already holds many of each customer’s frequently contacted friends, so there is little setup involved and the network was swiftly established.

Secondly, Buzz leverages existing assets and relationships. It is linked to Flickr, YouTube, Google Reader, Google Maps and others, which means customers can continue using existing, familiar tools and gain extra value from Buzz.

Thirdly, Buzz came out with a programming interface to allow third parties to start integrating. It also leveraged existing networks and APIs, allowing it to integrate with customers’ other applications very quickly.

Lastly, Google has been very quick to respond not just with words but with meaningful change to the product, even though it is only a week old at the time of writing. In an interview, Google described having a War Room set up with developers and product owners listening to customers in real time and making key decisions about whether and how to change the product.

However, Google has got two things wrong. The very public and ill-thought-through impact on privacy has been the key concern. Customers could easily and accidentally disclose information about whom they frequently emailed and contacted – information that was previously private. This is of concern to cheating partners and political activists alike. Google has already done much to address this concern, and what was an own goal is now being applauded as a swift response by advocates. We’ve learned today that the privacy issue has sparked some class action lawsuits in the US. The second thing that could have gone better is the programming interface, which is currently read only. I would expect a further increase in adoption once tool authors can post updates to Buzz directly; for me this constitutes a huge missed opportunity.

What does this mean in financial services? Some simple guidelines:
  1. Leverage existing assets – both information you hold and public information. Google has asked its customers to volunteer their Twitter IDs, which gives Google an already public list of each customer’s friends. Unless you have a key unique selling point, consider leveraging existing networks rather than building your own – for example Twitter or Facebook.
  2. Link your network to support the free public feeds from Flickr, Twitter, and the like (see the feed-reading sketch after this list).
  3. There are successful social networks that operate without a programming interface, and very few companies have offered open programming interfaces to insurance or financial data – wesabe.com is one example, offering read-only transaction data. In other domains, allowing third-party developers and tools vendors to build applications for a website has sped up adoption. Even an API limited to posting updates and managing communications and friends should have the same effect.
  4. On launch, set up a War Room. Most of the feedback will arrive in the application’s infancy, and its survival depends on identifying the issues and opportunities, prioritising them, and visibly acting on them.
  5. Finally, get the security and privacy right; the two go hand in hand. Getting it a bit wrong and fixing it quickly, as Google has, can earn you forgiveness, but customers will likely expect more from financial services organisations.
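To make guideline 2 concrete, here is a minimal sketch of what pulling a customer’s existing public feed into your own network might look like. The feed URL, the field names, and the use of Python are illustrative assumptions on my part; this is not Google’s, Twitter’s, or any vendor’s actual API, just a generic Atom feed reader.

```python
# Minimal sketch: read entries from a public Atom feed a customer volunteers,
# rather than asking them to re-create content inside your own network.
# The URL below is a placeholder, not a real endpoint.
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def fetch_public_feed(feed_url):
    """Fetch an Atom feed and return its entries as simple dicts."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    entries = []
    for entry in tree.getroot().iter(ATOM_NS + "entry"):
        link_el = entry.find(ATOM_NS + "link")
        entries.append({
            "title": entry.findtext(ATOM_NS + "title", default=""),
            "updated": entry.findtext(ATOM_NS + "updated", default=""),
            "link": link_el.get("href", "") if link_el is not None else "",
        })
    return entries

if __name__ == "__main__":
    # Placeholder URL: substitute whatever public feed the customer volunteers.
    for item in fetch_public_feed("https://example.com/customer/public.atom"):
        print(item["updated"], item["title"], item["link"])
```

The point is less the code than the principle: read what the customer already publishes instead of asking them to set everything up again from scratch.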

PoCs – Protecting your investment with the open road test

In a recent conversation with a vendor, I was asked if there was any way in which we [Celent] could encourage or support the standardization of request for information documents sent out by insurers. Think ACORD Standards for RFIs! At the heart of her question was a sense of deep frustration with an evaluation process that inhibits vendors from highlighting or differentiating themselves. I have some sympathy with this view.

IT departments have become very good at comparing the data from RFIs/RFPs in two dimensions. A structured, transparent evaluation process is an important step in choosing what is likely to be a significant IT investment in the case of a core system replacement. The evaluation process can be used to generate hard facts highlighting the differences in system functionality, company, and delivery options.

For many insurers, the governance process requires a presentation to the project sponsors of a detailed quantitative analysis of vendor submissions, along with rankings and weightings. This is all well and good, but this output can be incorrectly granted the status of a statistically sound, scientific evaluation. I have seen projects where a vendor was chosen because it had two more points in the final analysis on paper.
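A hypothetical scorecard shows how fragile such a lead can be. The criteria, weights, and scores below are invented purely for illustration and do not come from any real evaluation; the point is that a roughly two-point gap on paper can disappear, or even reverse, when a defensible amount of weight moves from one criterion to another.

```python
# Hypothetical RFI scorecard: invented criteria, weights, and scores,
# used only to show how fragile a small points gap on paper can be.
criteria = ["functionality", "technology", "delivery", "company"]

weights = {"functionality": 0.40, "technology": 0.25,
           "delivery": 0.20, "company": 0.15}

scores = {
    "Vendor A": {"functionality": 88, "technology": 72, "delivery": 74, "company": 80},
    "Vendor B": {"functionality": 76, "technology": 80, "delivery": 82, "company": 76},
}

def weighted_total(vendor, w):
    """Sum of criterion scores multiplied by their weights."""
    return sum(scores[vendor][c] * w[c] for c in criteria)

for vendor in scores:
    print(vendor, round(weighted_total(vendor, weights), 1))
# Vendor A 80.0, Vendor B 78.2 -- a lead of roughly two points.

# Move ten points of weight from functionality to delivery and the
# "winner" on paper changes.
alt_weights = dict(weights, functionality=0.30, delivery=0.30)
for vendor in scores:
    print(vendor, round(weighted_total(vendor, alt_weights), 1))
# Vendor A 78.6, Vendor B 78.8 -- the ranking has reversed.
```

If a ranking can flip on a reasonable change of weights, that is a signal to look beyond the spreadsheet, which is exactly where the proof of concept comes in.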

Most of us would not buy a car based solely on a review of features and functions on the manufacturer’s website or in its brochure. We’d be out there driving, weighing the real experience of the open road, the handling of corners, and the comfort of the seats. All these factors are almost impossible to quantify in a brochure but are vital in influencing our final choice.

And it’s similar in the world of core system evaluation. The proof of concept is a vital addendum to the evaluation process. This is the time to put the vendor to the test. This stage of the evaluation should let the buyer see how easy the product configurator really is to use, how well the vendor understands the requirements, how responsive and organised the vendor is, and how much or how little coding is required.

And in Celent’s view, proofs of concept should be paid for. Payment shows commitment on the part of the insurer, and it encourages the vendor to be focused and committed to this important step. Being paid also allows the vendor to justify allocating top staff who would otherwise be committed to paid implementations. A recent evaluation project paid four-figure sums to both short-listed vendors for a three-month proof of concept.

The proof of concept is also a great way to garner more buy-in from the business users and to generate some excitement about what the change could bring to the organization. Adding the open road test to the brochure comparison will undoubtedly help the insurer make a more robust decision.

2.24.10: 2010 US Insurance CIO Survey: Pressures, Priorities, and Practices

Celent senior analyst Donald Light

This event is free to Celent clients and the media. Non-clients can attend for a fee of USD 249. Celent will contact non-clients after they register for credit card information. Please click here for more information.

Repairing A Jumbo Jet While In Flight

We have daily discussions with insurers about how to balance the tasks involved in maintaining a legacy system environment with those required for strategic system renewal. It brings to my mind the analogy of “fixing a plane in mid-flight.” Most IT shops have neither the capacity nor the skill sets to do both with their in-house staff. These needs have created a large market for ITO, BPO, and system integration services. In observing this expanded sourcing model in action, it occurs to me that the whole organizational design of information technology areas is undergoing a change.

Most “shops” were built on a manufacturing design. The work was handled like this: “You, Mr./Ms. Insurance Business Person, tell me what you want (give me the specifications) and I will have my craftspersons (programmers) build it. We will then test it and deliver it to your ‘door.’”

Many efficiency and effectiveness improvements have been realized by insurance IT areas adopting production techniques such as TQM, Six Sigma, Lean, and the like. However, instead of a manufacturing paradigm, the multiple-vendor and relationship management challenges faced in 2010 bring to my mind more of an air traffic controller analogy: multiple planes in the air with the common objective of getting to the destination safely (first) and on time (second).

This would all be a cute analyst musing except that the skills involved with delivering in these two environments are very different. Manufacturing requires a technician’s precision; air traffic control requires a detailed understanding of flight (but you don’t have to be a pilot) and a keen sense of anticipation of what is likely to happen next. The people in the tower must be able to guide without their hands on the controls, must be able to see several steps ahead, and must have an effective guidance system.

For executive leadership in an insurance company, the analogy suggests a different search profile for a CIO. It is not necessary to fill these positions with the best technician, but it is critical to have someone in place with the “soft skills” required to coordinate effectively.

(BTW, why are “soft” skills so hard to develop?)

Hardware Outsourcing vs. People Outsourcing

I’ve been doing research for several reports on the topic of IT outsourcing, some about utilizing cloud computing for hardware and some about working with vendors to handle consulting and development. While these two areas are conceptually very different, the approach and business values are quite similar.

One misconception about both is that they are used for “replacement.” By this I mean that an insurer uses cloud computing to replace their data center or an insurer utilizes an IT service provider to replace their IT organization. While in some instances this might be true, it is rarely the case.

The other misconception is that an insurer uses cloud computing or consulting services simply to lower costs. Lower cost might be one reason, but it shouldn’t be the only reason. Many CIOs, however, do approach IT outsourcing primarily for the perceived cost benefit, and Celent sees this as a mistake. In many cases, some or all of the long-term costs might actually be higher. This does not mean a cost-sensitive insurer should avoid IT outsourcing, but rather that it should proceed with an outsourcing project while looking at the overall business value associated with it.

The added business value is the other area of similarity between hardware and development outsourcing. Both help a company increase capacity, one with increased server capacity and the other with increased human capacity. Both help a company access capabilities it didn’t have earlier: cloud computing provides rapid server deployment and failover (among other things), while development outsourcing provides resources with skill sets that did not exist in house. And, finally, both work best when thought of as a long-term strategy that complements the existing IT organization, not just as a temporary measure or a replacement for existing resources.

The takeaway is that any organization looking at IT outsourcing, whether for hardware, software, or people, should focus not on cost but on long-term business value. Organizations that care only about cost are often disappointed by the outcome. Organizations that have a strategy to bring new capabilities and business value to users will be successful.