Thomas Doughty
VP, Information Systems
Prudential
Tom Doughty did his homework on laptop encryption projects before he chose McAfee's Endpoint Encryption product (formerly SafeBoot) and built the proprietary database and distribution mechanism to support it. The solution fit all his criteria and works well, but careful planning of the rollout, with rigorous test cases, regression testing and overall quality assurance, played no small role in the ease of the deployment.
BTN: Can you describe the mobile computing environment at Prudential and the drivers behind the project?
Tom Doughty: A primary driver for the project was to take controls against data loss on laptops that were already technically effective and evolve them to meet changing external expectations. Those included the implications of state statutes and regulations, but more importantly customer and other external expectations around encryption. It's important to us to stay ahead of those requirements with our technical solutions and to be proactive.
How many laptops or mobile devices are we talking about?
At present we support about 18,000 laptops domestically, and at the time of the deployment it was about 15,500, so it’s grown pretty steadily since then.
What were your requirements when you were looking for a solution?
Some of the key selection criteria included effective central management of the keys and the endpoints; it's important to look at these as enterprise solutions and not just at what works most effectively on the endpoint in a vacuum. Transparency to the user base was probably first and foremost: day-to-day use, the first-time experience when the machine and disk were encrypted, and, most importantly, the deployment process all had to be unnoticeable to our users. Scalability was also critically important. The back-end infrastructure we built to accommodate what was initially 15,500 users was expected to grow, so understanding and being comfortable with the ability to multiply that user base without significantly re-engineering the back end mattered to us. Finally, full-disk encryption, as opposed to file-based, was a core selection criterion, in order to fully satisfy the safe-harbor language in the state statutes and some of the draft federal legislation.
How did the implementation go? Can you describe it?
We had certain expectations that we engineered for. I did a lot of research and compared notes with peer companies across industries, and that set the stage for our expectations, some of which could be engineered out and some of which we communicated to the business units we support as part of the experience involved in achieving this level of protection. We expected at least a one-to-two percent failure rate among laptops, and that there would be more laptops we would have to rebuild, or at least address through the helpdesk, once we kicked off the process. We had even heard of real-world, peer-company cases where that failure rate exceeded 20 percent in accelerated, event-driven deployments, as opposed to our position of being able to deploy less reactively. I also thought the user experience during the first encryption pass would be pretty painful in terms of performance while the disk was encrypting itself, and that, given this population of machines, it would take the better part of two months to deploy. Those were the baseline expectations we had prepared ourselves for.
But despite those expectations, and I really had a lot of good people working on the project and supporting it, none of the issues we wanted to be careful about materialized. During the deployment period, our failure rate was statistically lower than what the laptop population in general would normally experience over that time frame.
It was a good illustration of the transparency of the product once it was deployed. ...Within the first few days of deployment, we were able to decide to be even more aggressive in our deployment timeframe, wrap up the schedule early, and get, at that time, all 15,500 machines encrypted within the space of about 30 days.
How long has it been since the deployment, and what has your experience been with management of the system?
The core of the deployment started last June and July, and it's been running the better part of a year now. It has gone through full cycles of not only adding users but also having users with dormant machines need their keys refreshed. The process has been pretty positive. We had some questions, which we successfully worked through with the provider, around the structure of the database: keeping machines that are still active in the database and moving machines that are no longer active out of it. Those were pretty readily rectified, and the end result is that the user base has had minimal impact, if any, in using their laptops. The reviews have been pretty uniformly positive across the board, in terms of both the cleanliness of the distribution and the ongoing transparency.
What about the key management, are you happy with that?
We have been happy with the key management structure so far. We use a structure where, after a certain period of inactivity, the key expires even for the authorized user; machines have to stay synchronized with the enterprise back end on an ongoing basis. For a dormant user coming back on and getting refreshed, we designed the process from the start as a tier-one helpdesk experience. That was another area where I anticipated some degree of user friction when the inactivity period we defined first rolled around after deployment, but it didn't come to pass.
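The mechanism described above, a key that lapses after a defined period without synchronizing to the back end, and a helpdesk path for dormant users, can be sketched roughly as follows. This is a minimal illustration only: the 60-day window, function names and return values are assumptions for the sake of the example, not Prudential's or McAfee's actual design.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed inactivity window; the real policy period is not stated in the interview.
EXPIRY_WINDOW = timedelta(days=60)

def key_is_expired(last_sync: datetime, now: Optional[datetime] = None) -> bool:
    """Treat the disk key as expired when the machine has not
    synchronized with the enterprise back end within the window."""
    now = now or datetime.utcnow()
    return (now - last_sync) > EXPIRY_WINDOW

def handle_login(last_sync: datetime) -> str:
    """Route a returning user: a normal unlock, or a tier-one
    helpdesk-assisted key refresh for a dormant machine."""
    return "helpdesk_refresh" if key_is_expired(last_sync) else "unlock"
```

The design choice worth noting is that the expiry check runs against the last successful back-end synchronization, so an actively used, regularly connected machine never hits the refresh path.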
Was there a significant staff learning curve when it came to supporting this?
There was a fair amount. There is a proprietary database involved, and we built a fair amount of infrastructure and a distribution mechanism to accommodate this. The important part of the message is that most of that learning was able to take place during the testing, regression and QA process, before we began the deployment in production.
What advice would you give to peers considering such a project?
I would say: regress and understand every configuration option you have against every endpoint build you plan to support, before deployment. That sounds like something you would do for any software deployment, but taking meaningful test and QA populations, and including your distribution strategy and mechanisms in that plan rather than limiting your learning to a lab environment, was critical in this case. These are the same principles you would apply to any good project that puts software on thousands of endpoints in a short timeframe, but this one has the worst-case potential to turn your revenue producers' machines into bricks. Really, the advice is to be confident enough in your test plan's completeness that your deployment decisions are solid, particularly around the pace and timeframe of the deployment. ...I'd say it really is all about completeness of test cases.

(c) 2008 Bank Technology News and SourceMedia, Inc. All Rights Reserved. http://www.banktechnews.com http://www.sourcemedia.com