No banker wants to think about having to recover operationally from a disaster, natural or otherwise. Hurricane Andrew's two-state assault and downtown flooding in Chicago were two reminders this past year of the importance of being prepared for the worst.
Like an insurance policy you hope you'll never have to use, banks have contingency plans for dealing with such circumstances. Many have contracts with business continuity services, like Comdisco Disaster Recovery Services, Rosemont, Ill., or SunGard Recovery Services Inc., Wayne, Pa., that operate data recovery centers, also called hot sites, around the country.
Preventive medicine. But disaster recovery--or disaster preparedness, more accurately--begins at the bank. Ensuring the integrity of the data center, for instance, reduces the risk of ever needing an off-site facility to keep critical operations running. Building a new data center, or overhauling an existing one to make room for additional systems, gives bankers an excellent chance to practice preventive medicine.
Several banks spent much of 1992 moving their data centers from downtown locations at their headquarters to new facilities in other towns. U.S. Bank, for example, the Portland, Ore.-based lead bank subsidiary of $19 billion-asset U.S. Bancorp, opened its new data center, called Columbia Center, in suburban Gresham, Ore. On the East Coast, Chase Manhattan Bank, with $97 billion in assets, consolidated several processing centers into a super data center at its new quarters at MetroTech Center, in Brooklyn.
Other banks grappled with blending the data center components of merged banks into one center. Society Corp., for example, the Cleveland-based bank holding company, is putting the finishing touches on its acquisition of Ameritrust, Society's former cross-town rival. Combining the two institutions' data center applications provided an opportunity to install new data security measures and to choose the better of the systems each bank already had in place.
A closer look at all three of the above examples suggests some preventive measures for banks considering new data centers.
Quake proof. A hallmark of U.S. Bank's new Columbia Center is its resistance to earthquake damage.
"People don't usually think of Oregon as having a strong possibility of earthquakes, but historically there have been some large ones," notes Timothy Meier, the bank's senior vice-president of information services. "They're spread apart farther in time than ones in California and other areas, but there's definitely the risk of earthquakes here."
The building is designed to withstand earthquakes 30% stronger than those of Seismic Zone 4 magnitude--the designation for regions subject to quakes at the upper end of the Richter scale. Oregon is considered a Seismic Zone 2B region, meaning a moderate quake is more likely to occur than a severe one.
The mainframes and other computers that handle account transactions, leasing, mortgage, and computerized banking operations are literally tied down to prevent them from bumping into each other if the floors move. Simply bolting them down would not absorb enough of the shock.
Telecommunications networks using fiber optics enter Columbia Center from two directions, approximately 90 degrees apart. One reason is to have a back-up link to other bank facilities. The other is to improve the odds that at least one fiber optic line survives a quake: whichever direction an earthquake fault runs, one of the two lines will lie roughly parallel to it and therefore is unlikely to be severed. A single line running perpendicular to the fault would be far more likely to be cut.
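That reasoning can be made concrete with a toy model. The sketch below is purely illustrative--nothing U.S. Bank runs--and the 30-degree "at risk" tolerance is an assumption; it simply checks, over random fault orientations, whether a single entry line or a pair of lines 90 degrees apart would survive.

```python
import random

# Toy model: a line is considered "at risk" when it crosses the fault close
# to a right angle; a line running roughly parallel to the fault survives.
# The 30-degree tolerance is an illustrative assumption, not an engineering figure.
AT_RISK_TOLERANCE_DEG = 30.0

def angle_between(line_deg: float, fault_deg: float) -> float:
    """Acute angle (0-90 degrees) between a line and the fault."""
    diff = abs(line_deg - fault_deg) % 180.0
    return min(diff, 180.0 - diff)

def at_risk(line_deg: float, fault_deg: float) -> bool:
    """True when the line crosses the fault at close to a right angle."""
    return angle_between(line_deg, fault_deg) > 90.0 - AT_RISK_TOLERANCE_DEG

random.seed(1)
trials = 100_000
single_survives = 0
pair_survives = 0
for _ in range(trials):
    fault = random.uniform(0.0, 180.0)
    if not at_risk(0.0, fault):                              # one entry line
        single_survives += 1
    if not (at_risk(0.0, fault) and at_risk(90.0, fault)):   # two lines, 90 degrees apart
        pair_survives += 1

print(f"single entry line survives: {single_survives / trials:.1%}")
print(f"dual 90-degree entry survives: {pair_survives / trials:.1%}")
# Because the two lines sit 90 degrees apart, at most one can ever be
# near-perpendicular to the fault -- so the dual entry survives every trial.
```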
"We've never had that kind of redundancy before," says Meier, "but we do have a hot site arrangement with Comdisco with lines that go there in the event of an emergency, and we test that frequently."
Other standard contingency measures in place include water storage for cooling the large computers and sufficient diesel fuel for the generators to run the building for three days.
Chase Brooklyn. There's plenty of backup built into Chase Manhattan's MetroTech center. The bank occupies two buildings at the complex, which is also home to, among others, the Securities Industry Automation Corp. (SIAC), which processes trade orders executed on the major New York securities exchanges.
The emphasis on contingency planning in New York has less to do with the threat of earthquakes than with keeping communication lines open. Chase's buildings at MetroTech are connected to facilities in Manhattan and more distant points by wide area networks (WANs) and local area networks (LANs) that are accessible from either building.
"Normally what happens in a high-rise fire is you lose a floor," says Douglas T. Williams, senior vice-presilock us out of the other building, and we can run all the systems from there."
Users of workstations and PCs can sign on to the appropriate network from anywhere in the facility, in the event their normal desk or workplace is incapacitated. In the event of a more localized problem, such as a breakdown in a network or telecommunications switch, secondary units automatically kick in.
Wired for sound. One of Chase's buildings at MetroTech is wired with fiber optic lines, and the other easily could be. The entire facility is wired with copper cables for data communication, and access to those lines is ample.
"We've wired the facility to be able to connect into the network every 81 square feet," says Thomas Fogarty, vice-president in charge of the technology services center at MetroTech. "Everywhere you go, you'll see a fixture that enables you to connect a communications box to the LAN and voice lines, and we can run fiber into those boxes if needed."
That capability is important when it becomes necessary to move a department from one location on a floor to another for space or strategic reasons. But it also would be useful in quickly resuming operations in a department affected by an equipment failure.
MetroTech, like Columbia Center, also uses two communications lines from the outside, one of which could serve as a backup if necessary. Electrical power is supplied from separate substations, in case one is incapacitated. (In August 1990, a fire in a substation near New York City's South Street Seaport left much of the financial district without normal electrical service for hours--days, in some cases.) Back-up water cooling systems are in place, and hot sites are available in the event of more widespread damage.
To help keep an eye on network operations worldwide, Chase's data center has a state-of-the-art command center that monitors network availability. Giant screens display schematics representing normal operations as well as trouble spots. When a glitch does occur, says Fogarty, network software specialists often can fix the problem from MetroTech. Larger problems would require on-site attention.
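The logic behind such a display can be sketched simply. What follows is a minimal illustration of the polling loop a command center automates--not Chase's actual software. The node names and addresses are hypothetical, and a real command center would watch far richer telemetry than simple reachability.

```python
import socket

# Hypothetical nodes on the bank's wide area network; names, addresses,
# and ports are illustrative only.
NODES = {
    "metrotech-core": ("10.0.1.1", 23),
    "manhattan-gw":   ("10.0.2.1", 23),
    "bournemouth-gw": ("10.1.1.1", 23),
}

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; treat success as 'link up'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll_network() -> list[str]:
    """Return the trouble spots -- nodes that failed the reachability check."""
    return [name for name, (host, port) in NODES.items()
            if not reachable(host, port)]

if __name__ == "__main__":
    down = poll_network()
    if down:
        print("trouble spots:", ", ".join(down))   # flag on the wall display
    else:
        print("all monitored links up")
```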
A similar facility in England, originally designed to handle operations for Chase's U.K. business, is evolving into a more sophisticated data center capable of handling more international operations. Though smaller, the Bournemouth facility, located about 100 miles from London, serves almost as a twin to MetroTech. In fact, capabilities are being built into the U.K. center that would enable it to quickly accommodate trading operations in the event of a breakdown in Chase's London trading center. Down the road, note Williams and Fogarty, a similar arrangement may take shape at MetroTech.
Managers' scrutiny. Senior management at Society Bank, Cleveland, is closely watching data center consolidation take place and is emphasizing contingency planning throughout the process, notes Ed Napolean, vice-president and manager of corporate information security at the bank.
"We shut down the primary Ameritrust data center in Cleveland in October," says Napolean. "We've scheduled a hot site test for Dec. 21, where we will test the recovery capability of not only our old applications, but all the new systems we've brought on board."
As of October, the bank was building redundant communications pathways into Society's data center, for a total of three. "We could lose two out of three of our telecommunications providers and still maintain 66% of our primary capability," says Napolean. "We did this to get rid of the single point of failure syndrome, where if you lose one telecom provider, you're dead."
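One way to reconcile those figures: each of the three pathways is provisioned to carry roughly two-thirds of the center's primary capacity on its own, so even the last surviving path delivers 66%. A minimal sketch of that reading--the per-path fraction is an assumption made to fit the quoted numbers, not a figure from the bank:

```python
# Assumed reading of the quoted figures: each of the three pathways is sized
# to carry about two-thirds of primary capacity on its own, which reconciles
# "lose two out of three" with "maintain 66%".
PATHWAYS = 3
PER_PATH_FRACTION = 0.66   # fraction of primary capacity one pathway can carry

def capacity_after_failures(failed: int) -> float:
    """Fraction of primary capacity surviving `failed` pathway outages (capped at 100%)."""
    surviving = max(PATHWAYS - failed, 0)
    return min(1.0, surviving * PER_PATH_FRACTION)

for failed in range(PATHWAYS + 1):
    print(f"{failed} pathway(s) down -> "
          f"{capacity_after_failures(failed):.0%} of primary capacity")
```

Under that assumption, losing one pathway leaves full capacity, losing two leaves the quoted 66%, and only losing all three takes the center dark.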
Particular attention was paid to bringing trust processing applications from Ameritrust on line in Society's data center, notes Napolean.
"The trust application systems that Ameritrust had were big--bigger even than Society's, I think," he recalls. "We were very careful to make sure we had network requirements and disaster recovery plans in place, because we didn't want to disrupt those customers."
Under a renegotiated hot site arrangement with SunGard--the absorption of Ameritrust's systems roughly doubled the cost of that service--backup for the bank's automated teller machine network is now provided as well. Testing of that contingency system is to begin in early 1993.
As for the rest of the combined operations, the pieces already are in place for handling emergencies, and regular testing is under way.
"Senior management insists that we test our recovery capability this year," says Napolean. "They felt that so much change has occurred this year that they wanted to know contingency plans were in place instead of the attitude that 'so much has happened, we can't possibly run tests now."
Today's disaster recovery systems and technology for preventing mishaps are not inexpensive. Putting up a substantial building is costly to begin with, and adding redundant networks and other systems only inflates that cost. But falling behind the curve on disaster preparedness could prove more costly still.