Editor's note: In his latest book, The Art of the Data Center, Cisco Press author Douglas Alger gives readers a behind-the-scenes look at some of the world's most innovative data center designs. One service provider's facility in Sweden -- built in an underground bunker and festooned with manmade waterfalls and a 687-gallon saltwater aquarium -- seems to resemble a James Bond villain's headquarters more than a data center. Another provider's data center runs entirely on solar power. In Spain, a 1920s chapel hosts one of Europe's largest supercomputing centers. Internet superpowers such as Facebook, Yahoo! and eBay also give readers a look behind the curtain -- well, make that 'firewall.'
Alger, who also works as an IT architect for Cisco Systems, recently spoke with SearchCloudProvider.com about some of the notable cloud data center designs he saw during his research.
Did you observe any big differences in data center designs between the facilities that hosted cloud services and those that did not?
Douglas Alger: There are certainly a handful of [data centers] in the book that do support a cloud service, whether it is private or public. I definitely made a point to ask those different folks, 'How did this influence your design, and how did you opt to go about this?' Cisco is a good example of this and is one I am obviously pretty familiar with. They have been going through an interesting transition over the last few years, and in their case, they've got a private cloud. They initially had separate data centers dedicated to different functions: production data centers doing all their mission-critical activity; development data centers doing engineering, R&D, working on things for the future; and some other environments [that sometimes] were part of entities that they picked up through acquisition but maybe were not fully integrated with some of their other environments. WebEx is a good example of this: They had all of these different rooms, and they were distributed all over the place.
For the last several years, they have been going through a significant consolidation effort. They have been starting to build out what they call multi-tenant data centers. The data center in Allen, Texas, that is profiled in the book is really their first -- the latest and greatest of how they are looking to do this. It is connected to another data center about 15 miles away, in Richardson, Texas, and it's an active-active configuration, so they are constantly in contact with one another.
The discussion was: How did they go about this? What special provisions did they have to make for this -- to support all of these different groups that would be drawing out of these overall resource pools -- rather than, 'Here's an individual box that is supported by this group in its own space'? It was interesting to talk to them and find out the technology that is in play and how it did influence the design.
In the chapter on IBM, the engineer you spoke with played up the importance of modularity in cloud services. Why is this model such a big deal to IBM?
Alger: Generally, the advantage of modularity is that if you define an increment of infrastructure, it becomes repeatable: you can roll it out, and it allows you to scale up, or even scale down, if you need to. You can also introduce change into those individual modules.
That modularity, because you've got some repeatability, allows you to get in there and deploy so much faster. That was the case with IBM: we talked about how much quicker they felt they could deploy infrastructure compared [with] conventional models. IO -- another one of the colocation facilities and, as it happens, a modular data center manufacturer -- also said they felt they could roll things out faster and at a lower cost than a conventional build.
One of the repeated themes of the book is energy efficiency, but it was unclear whether cloud helps or hinders it. Green House Data spoke about the energy savings they experienced in cloud, but Terremark later said it didn't really lower energy usage for them. What's happening here?
Alger: I think it depends. Part of it is a scale issue, and part of it is a utilization issue. There is a certain point at which, if you are going to create a resource pool that people are going to be drawing from, you need to build out that resource pool. You are probably not going to do it with just one machine; you are going to put in a certain amount to provide the capacity to draw from.
I think for some folks who have a server environment, there is going to be a transitional period as you move away from where you used to be. If you had a big physical build-out and you are trying to move much more to this resource pool [model], there is going to be a certain period of time [to adjust]. I think there is a little bit of that in play as you go from one to the next; they have to coexist for a while, or you have to get enough of a seed in place for [a shared pool] to be able to grow.
Also, it ultimately depends. The technology can get you only so far; it still comes down to how you are using it. Someone can use virtualization and cloud and still manage to be relatively inefficient. Obviously, that is not normally going to be the case. But do you have the model [deployed] in such a way that these resources come together and multiple people draw off them, which drives up the utilization? Because if you're using virtualization technology but there's no willingness to share those resources … the technology in and of itself will not necessarily get you there. It is a piece of how you get there.
This was first published in November 2012