Survey suggests federal network complexity is limiting data center consolidation

The Federal Data Center Consolidation Initiative affects every government agency and is monitored closely on Capitol Hill. But after some impressive initial gains, the effort is beginning to show signs of slowing. One survey released this week hints at some of the reasons consolidation is so difficult, namely the increasing complexity of federal networks.

According to a survey by Brocade, federal networks are getting too complex, and that may be keeping FDCCI from meeting its pending deadlines.

FDCCI began as a realistic and reasonable effort to get a handle on the number of government data centers. Back in 1998, there were 432 data centers serving the entire federal government. That’s actually quite a lot. It means having to power, protect, patch and maintain a lot of real estate. But even that number grew. By 2009, there were more than 1,100 data centers serving the government, and plans were in place to bring even more online.

To stop that rapid growth, then-U.S. chief information officer Vivek Kundra created FDCCI. The goal was to close more than 800 data centers by 2015, putting the government below its 1998 footing in terms of data centers.
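
The arithmetic behind that target is easy to check. A quick back-of-the-envelope calculation, using only the figures cited above, shows why closing 800-plus centers would put the government under its 1998 footprint:

```python
# Back-of-the-envelope check of the FDCCI target, using the figures cited above.
baseline_1998 = 432      # federal data centers in 1998
inventory_2009 = 1_100   # "more than 1,100" data centers by 2009
planned_closures = 800   # FDCCI goal: close more than 800 by 2015

remaining = inventory_2009 - planned_closures
print(f"Remaining after planned closures: ~{remaining}")          # ~300
print(f"Below the 1998 footprint? {remaining < baseline_1998}")   # True
```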

Initially, the FDCCI went well, with more than 400 data centers closed pretty quickly. However, once the low-hanging fruit were eliminated, the effort slowed down, and it’s now an open question whether FDCCI will reach its original goals by deadline. Congress even got into the act, with worried senators introducing a bill in November that would write FDCCI into law and require agencies to report on their data center closure plans.

So why the slowdown? If this were astronomy, there would be some undiscovered dark matter working against FDCCI. That counterforce may be federal network complexity, if the results of a new survey put out by MeriTalk and Brocade are any indication.

I spoke with Daemon Morrell, Brocade’s director of systems engineering, about the survey, its results and what they mean for FDCCI. He wasn’t at all surprised that the effort to close federal data centers has stalled. “The complexity of some of those federal data centers today doesn’t really lend itself to consolidation,” he said.

The reason for that may be the networks themselves, which are increasing in complexity beyond many agencies’ ability to fully manage them, much less consolidate the data centers on the backend. According to the survey, which was conducted with a large group of federal agency administrators, 54 percent said their network complexity has increased in the past year and 68 percent believe it will continue to increase over the next three years. More shocking, a full 94 percent said that over the past year they have experienced complexity-driven downtime that impacted their agency’s mission.

When asked what was driving the complexity, Morrell put the blame squarely on two main factors: the quest for greater mobility by allowing employees to bring their own devices, and the proprietary point solutions bolted on to handle the resulting mobility problems, which quickly become new legacy systems. “One really drives the other,” he said. “What we are seeing is that agencies have problems with their mobility programs and then vendors come in with these quick-hit, proprietary solutions that fix the small problem but don’t mesh with anything else.”

One thing Brocade recommends to reduce that complexity is the use of nonproprietary protocols, such as IP or Ethernet fabrics, for every solution. (Spanning Tree is also a nonproprietary protocol, but it is no longer being widely adopted.) Morrell said that in a perfect world, standards would make navigating data center consolidation much like driving a car across the country: a trip governed by a set of universal rules, like driving on the right side of the road, red meaning stop and signs being written in English. By contrast, consolidation today is like trying to make that same drive across 50 states, each with a different set of rules. For FDCCI, proprietary rules and standards make data center consolidation much harder because agencies have to decide which program to keep and which to scrap, whereas universal standards would make a merger much simpler.
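
To make the standards point concrete, here is a minimal sketch, purely illustrative and using hypothetical protocol and vendor names rather than anything from the survey, of why a merger is easier when both data centers speak open standards: shared and standard protocols carry over, while proprietary one-offs become rework.

```python
# Toy model of merging two data center networks: protocols both sides share
# carry over cleanly, and anything in the open-standards set can simply be
# adopted by the other side, while proprietary one-offs have to be replaced
# or retired. All names here are hypothetical illustrations, not survey data.

OPEN_STANDARDS = {"IP", "Ethernet", "BGP", "OSPF"}

dc_a = {"IP", "Ethernet", "OSPF", "VendorXFabric"}
dc_b = {"IP", "Ethernet", "BGP", "VendorYStacking"}

carries_over = dc_a & dc_b                   # common ground for the merged center
rework = (dc_a ^ dc_b) - OPEN_STANDARDS      # proprietary one-offs to replace or retire

print(f"Carries over cleanly: {sorted(carries_over)}")
print(f"Needs rework before consolidation: {sorted(rework)}")
```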

It seems the old problem of what to do with stovepipe systems didn’t die off in the ’80s and ’90s. No wonder FDCCI has hit a bit of a roadblock.

Using open standards may not be a silver bullet, but it should go a long way toward reducing complexity. In addition to moving to open standards, respondents suggested a few other things that might improve federal networks and reduce complexity, such as adding bandwidth (44 percent), increasing redundancy (28 percent) and increasing virtual networking/software-defined networking (22 percent).
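
That last suggestion, software-defined networking, attacks complexity by moving policy out of individual devices and into one central place. A toy sketch of the idea, in plain Python rather than any real controller API, is below:

```python
# Illustrative-only sketch of the SDN idea: one controller holds the policy,
# and every switch derives its behavior from it, instead of each device
# carrying its own hand-maintained configuration. Names are hypothetical.

POLICY = {
    ("finance-subnet", "public-internet"): "deny",
    ("finance-subnet", "records-app"): "allow",
}

class Switch:
    def __init__(self, name: str):
        self.name = name

    def handle(self, src: str, dst: str) -> str:
        # Every switch consults the same central policy; change it once,
        # and the whole network follows.
        return POLICY.get((src, dst), "deny")

edge = Switch("edge-01")
print(edge.handle("finance-subnet", "records-app"))      # allow
print(edge.handle("finance-subnet", "public-internet"))  # deny
```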
