Initially, the technical problems were thought to be confined to scalability issues, given the heavy traffic. It turned out there were a variety of other problems as well: the site rejected valid passwords, served up blank drop-down menus, and crashed repeatedly. There were database problems and integration issues, and after millions of visits on the first day, only six people made it all the way through enrollment. New contractors were brought in to fix the problems, which added to the cost overruns. The cost ceiling began at $93M, was raised to $292M, and today the site is estimated to have cost around $500M. To be fair, this was an extremely complex project: 6 complex systems, 55 contractors working in parallel, and 5 government agencies involved; the site is used in 36 states and covers roughly 4,500 insurance plans from 300 insurers.
There were a number of contributing factors to the technical problems. No single contractor managed the entire project, and coordination across the multiple vendors was lacking. There were a number of last-minute changes, and the project was managed using a waterfall methodology, which can make it difficult to respond to changes quickly. Testing was inadequate: not only did the system fail to perform according to design, it didn't scale to the level anticipated. The team clearly knew what the load would be, but the load testing never validated the capacity plan.
However, Sebelius had little direct oversight of the project and certainly wasn't responsible for day-to-day project management. The website design was managed and overseen by the Centers for Medicare and Medicaid Services, which directly supervised the construction of the federal website. Regardless, Sebelius is likely updating her resume today and considering alternatives.
What does this mean for a CIO? If you're running a large-scale project – and many carriers are – you won't know everything the project manager knows, even though your neck is on the line if the project fails. Large-scale projects require a different level of management than day-to-day operations.
Areas to focus on include:
· Set realistic time frames. Don't underestimate the amount of time it will take to implement the project. A lot of carriers want to hear that implementation of a policy admin system can be done in 6–12 months, and while there are some examples of that being true, it's more likely that your solution will take longer. Plan carefully, add contingencies, and if you end up with a choice between launching late or launching with less functionality than was initially planned, you're usually better off taking the time to do it right. People are much less forgiving of a poorly executed project than a late one.
· Manage the project with multiple, aligned work streams. Large projects generally require multiple work streams. We often see carriers divide the project into streams such as data, workflow, rating, and documents, which allows each team to focus its efforts. However, you have to continuously verify that the streams stay aligned. Communication across multiple work streams is critical.
· Communication is a key success factor for large projects, yet it is often an afterthought – or worse, not planned at all. Communication across project teams is necessary to ensure the functionality stays aligned as planned. It is also critical when it comes to managing scope creep: when the team clearly understands the priorities, they're better able to make tradeoffs early on. Clearly setting expectations around the deliverables, and continuing to manage those expectations as the project moves forward, is an important piece of the communication plan – especially when faced with optimistic delivery dates, changing requirements, or staffing constraints.
· Focus on the worst-case scenario. Be skeptical when all is going smoothly. Insist on regular project checks and take red flags seriously. Realistic monitoring of project progress, and analysis of the underlying factors driving contingency use, will help identify issues early on. Don't just look backwards at what has occurred – focus on readiness for future stages. Some carriers benefit from having third parties conduct project health checks, looking objectively across the project for subtle indicators of potential issues.
In the end, Sebelius is responsible for the results of the healthcare.gov implementation and her resignation should be seen as a red flag for carriers in similar situations. Take a look at the governance you’ve put in place for your large projects. Now may be a good time to consider adding some additional oversight.
As part of a research effort into building business analysis skills, I spoke this week with the manager of a business analyst department at a major U.S. bank. He described their approach to improving requirements collection, and it struck me as an effective and practical method worth passing along.
Many of the models for building business analysis skills are top-down initiatives, planned and executed as part of a wider improvement program. These are often driven from a learning and development department or a special training area within the IT organization (see the Celent report Building a Better Business Analyst – Transforming the Enterprise). This bank's approach was more "bottom-up" and grew out of a focus on their software testing process. They improved the rigor of their business requirements documentation by automating their test scripting, planning, and test case development process. Under the revised methodology, business analysts must gather requirements in a structured format that can be automatically uploaded into their test automation software, which then generates the test scripts, plans, and cases. This brings increased consistency, control, and structure to what was previously a very ad hoc process.
This practical development approach is valuable in that it delivers skill and process enhancements as part of the day-to-day activities of software development. For those looking for a different strategy for improving requirements gathering, getting there through automated testing may help. For those who have taken the same approach, I would be interested in knowing what the results have been.
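The bank's actual tooling isn't named, but the core idea – structured requirements feeding automatic test-case generation – can be sketched in a few lines. The field names (req_id, given/when/then) and the mapping to test cases below are illustrative assumptions, not the bank's schema:

```python
# Hypothetical sketch: structured requirements driving test-case generation.
# Field names and the REQ -> TC naming convention are assumptions for
# illustration, not a real test-automation product's format.

requirements = [
    {"req_id": "REQ-001",
     "given": "an account with balance 100",
     "when": "the user withdraws 40",
     "then": "the balance is 60"},
    {"req_id": "REQ-002",
     "given": "an account with balance 100",
     "when": "the user withdraws 150",
     "then": "the withdrawal is rejected"},
]

def generate_test_cases(reqs):
    """Turn each structured requirement into a test-case record."""
    cases = []
    for r in reqs:
        cases.append({
            "test_id": r["req_id"].replace("REQ", "TC"),
            "title": f"{r['when']} -> {r['then']}",
            "steps": [f"Given {r['given']}",
                      f"When {r['when']}",
                      f"Then {r['then']}"],
        })
    return cases

for case in generate_test_cases(requirements):
    print(case["test_id"], "-", case["title"])
```

The discipline comes less from the generator itself than from what it forces upstream: a requirement that can't be expressed in the structured format gets pushed back to the analyst before it ever reaches testing.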
- Drop all the tables in the database, recreate the tables in the database for ABC, and populate it with the same set of start data.
- Clear out any existing application code, then get and install the latest application code from development.
- In extreme cases, the entire test server OS will be reinstalled from scratch, though that is likely unnecessary for this level of application testing.