New service needs drive changes to telecom data center architecture

Like all enterprises, network operators have long deployed and operated large data centers to manage their businesses. Yet the big difference between enterprises and network operators of any type is that operators also use their data center architecture and infrastructure to provide services, and those services then shape infrastructure investment.

Driving the changes in traditional telecom data center architecture is the need for network operators to concentrate on their primary service priorities: content delivery, mobile services and cloud services. Existing telecom data center models tend to be based on two totally independent architectures and support two largely independent sets of applications.

  • The "traditional IT" part of the telecom operator's commitment runs operations and business services (OSS/BSS) applications. OSS/BSS systems started out as tools to track manual provisioning processes and account for the large number of network assets. As networks transitioned to smart devices with management interfaces, OSS/BSS processes were integrated with network operations centers (NOCs) to control the network. They were also integrated with customer service and order entry portals to support the creation of service orders and changes. Data centers that support these functions look a lot like enterprise data centers.
  • As voice services evolved from switch-hosted to computer-hosted, service features like voicemail were implemented on service delivery platforms (SDPs) that were more elements of the network than of the data center. For some service providers, SDPs have proliferated out of control, resulting in poor utilization and operations cost overruns.

Telecom data center architecture model to evolve in three phases

Because competition from over-the-top (OTT) companies will keep service prices low and revenue margins thin, telecom data center architecture problems can't be allowed to hinder the growth of any of the three new service priority areas.

Neither the OSS/BSS model nor the SDP model is viewed as the optimal target for data center evolution. Mobile, content and cloud services are already being offered by OTT competitors, and a standard infrastructure is already deployed to support them. The OTT data center model is based largely on cloud computing, blade servers and a high level of data center network integration (a fabric that connects all servers and storage with the Internet). While telecom operators might be free to change this, most see no reason to tinker with a model that's achieved wide success in the Internet service space.

Operators transition to a three-phase telecom data center architecture evolution

While the overall telecom data center architecture strategy needs to evolve, that evolution is likely to occur in three distinct phases.

Phase one: Deploy blade-server farms using generic servers that run Linux. The first phase in network operators' telecom data center architecture evolution is to support cloud computing and early content needs by deploying blade server farms that use generic servers likely running Linux rather than the more traditional "minicomputer" multiprogramming platforms designed to support OSS/BSS systems. This type of server farm can be used to host content and Web-based service features. It can also provide for cloud computing and virtual machine hosting services based on the Infrastructure as a Service (IaaS) model.
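
As a rough illustration of what phase-one IaaS hosting on a generic Linux blade can look like, the sketch below uses the libvirt Python bindings to define and start a guest virtual machine from an XML description. It is a minimal sketch under stated assumptions: the libvirt-python package is installed, KVM is available, and the VM name, disk path and sizing are hypothetical placeholders rather than any operator's actual configuration.

# Minimal sketch: define and boot a guest VM on a generic Linux/KVM blade.
# Assumes the libvirt Python bindings (libvirt-python) are installed; the
# VM name, disk image path and sizing below are hypothetical placeholders.
import libvirt

VM_XML = """
<domain type='kvm'>
  <name>tenant-vm-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/tenant-vm-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

def provision_vm():
    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        domain = conn.defineXML(VM_XML)      # register the guest definition
        domain.create()                      # boot the guest
        print("Started", domain.name())
    finally:
        conn.close()

if __name__ == "__main__":
    provision_vm()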

In this phase, interconnecting data centers, storage networking and WAN connectivity are the paramount network requirements. The cross-integration of server elements and the wide distribution of storage assets across large server farms are less of a factor because the applications are more siloed at this point.

Larger telecom operators appear united in the view that the future of their cloud computing opportunity is to offer a higher-level service than IaaS. Enterprises want to use the cloud to offload and back up their mission-critical applications, which is something that's easier in a workflow model based on service-oriented architecture (SOA).

This requires software components that run on a larger pool of servers, with component integration taking place through the data center network in a Platform as a Service (PaaS) model. Operators also believe that hosting Software as a Service (SaaS) offers the best overall revenue/profit balance, especially for SMBs.
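
As a hedged sketch of what PaaS-style, network-mediated component integration looks like, the snippet below composes two independently hosted service components over HTTP using only Python's standard library. The endpoint URLs and JSON fields are hypothetical placeholders, not a specific operator or TMF API.

# Sketch of SOA-style composition: a workflow step calls two separately hosted
# service components across the data center network and merges their results.
# The endpoint URLs and payload fields are hypothetical placeholders.
import json
import urllib.request

BILLING_COMPONENT = "http://billing.dc.example.net/rate"
PROVISIONING_COMPONENT = "http://provisioning.dc.example.net/activate"

def call_component(url, payload):
    """POST a JSON payload to a component endpoint and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

def order_workflow(order):
    """Compose two components into one customer-facing service operation."""
    rating = call_component(BILLING_COMPONENT,
                            {"customer": order["customer"], "service": order["service"]})
    activation = call_component(PROVISIONING_COMPONENT, {"service": order["service"]})
    return {"order": order, "price": rating.get("price"), "status": activation.get("status")}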

Over time, operators expect to integrate OSS/BSS elements from their existing architecture into this picture to improve operational efficiency. The standards process has been migrating toward component models; the TM Forum (TMF) offers both its next-generation OSS (NGOSS) framework and OSS/J, a Java-based OSS model that's highly compatible with standard servers and open source tools.

Phase two: Migrate to fabric-based interconnection of storage and servers. The need to reuse features between content and mobile services, and to exploit cloud architectures to create all services, encourages building each service from functional components (what you might call "feature atoms") linked to other atoms through workflow processing and inter-system communication.

The concept of service feature atomization gives operators the same ability to compose and reuse elements that SOA and componentization provide in the software space. The combination of OSS/BSS and feature reuse is likely to be the largest driver of change for telecom data center networking, and it sets the stage for the migration to fabric-based server and storage interconnection.
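
One way to picture a "feature atom" is as a small, reusable function that a lightweight workflow runner chains with other atoms to form a complete service. The sketch below is purely illustrative; the atom names and context fields are hypothetical, not drawn from any operator's catalog.

# Illustrative sketch of feature atomization: reusable service features ("atoms")
# are chained by a simple workflow runner to build different services.
# The atoms and context fields are hypothetical examples.

def authenticate(ctx):
    ctx["authenticated"] = ctx.get("subscriber_id") is not None
    return ctx

def locate_content(ctx):
    ctx["cache_node"] = "edge-cache-07"    # placeholder content-delivery decision
    return ctx

def record_usage(ctx):
    ctx["billed"] = True                   # placeholder hand-off to OSS/BSS
    return ctx

def run_service(atoms, ctx):
    """Apply each feature atom in order; the same atoms can be reused by other services."""
    for atom in atoms:
        ctx = atom(ctx)
    return ctx

# Two different services composed from overlapping sets of atoms.
video_service = [authenticate, locate_content, record_usage]
mobile_data_service = [authenticate, record_usage]

result = run_service(video_service, {"subscriber_id": "A1234", "asset": "movie-42"})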

Another important force is the adoption of a more IT-like model of high availability rather than a telco-like model. Traditionally, telecom operators have employed very high-reliability components and focused on raising device mean time between failures (MTBF) with internal redundancy. Commercial servers tend to substitute failover for per-device reliability as a means of securing high availability. This also creates a demand for data center networking that can reconnect users and resources quickly and make important data available to a whole farm of servers so that any server can be used for backup if needed.
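
The failover idea can be reduced to a very small sketch: instead of depending on one high-MTBF device, a request is simply retried against a pool of interchangeable replicas, any of which can serve it because the fabric gives them all access to the same data. The hostnames below are hypothetical.

# Minimal failover sketch: try each interchangeable replica in turn rather than
# relying on a single high-reliability device. Hostnames are hypothetical.
import urllib.request
from urllib.error import URLError

REPLICAS = [
    "http://app-01.dc.example.net/health",
    "http://app-02.dc.example.net/health",
    "http://app-03.dc.example.net/health",
]

def fetch_with_failover(urls, timeout=2.0):
    """Return the first successful response, failing over to the next replica on error."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (URLError, OSError) as exc:
            last_error = exc             # this replica is unreachable; try the next
    raise RuntimeError("all replicas failed: %s" % last_error)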

Phase three: Connect data centers into modular clouds. Once data centers have evolved "atomically," or within themselves, they will become the connected elements of a cloud. The third phase of the data center architecture evolution is to connect independent data centers into modular clouds. It is not yet clear how far or fast this last phase will advance. Network operators haven't convincingly answered the question of which data center resources should be dispersed and which should be consolidated.

Most mobile, content and cloud services opportunities can be realized through a single data center per metro area, if reasonable measures to assure availability are taken. In areas of high population density, or where natural disaster risks could isolate a data center, dispersing IT to multiple sites makes sense. The issue is balancing the complexity of connecting a large pool of servers to cooperate in building services for customers against the risk of a single point of failure. It seems likely that in the densest areas (between 60 and 90 of the approximately 250 metro areas defined in the U.S., for example), operators will build multiple data centers and connect them via cloud technology.
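
A back-of-the-envelope calculation illustrates the availability side of that trade-off; the figures below are assumptions chosen for the arithmetic, not operator data.

# Illustrative availability arithmetic (inputs are assumptions, not measured data):
# if one metro data center is available 99.9% of the time, two independent sites
# that can each carry the full load are unavailable only when both are down.
single_site = 0.999
both_down = (1 - single_site) ** 2        # assumes the sites fail independently
two_sites = 1 - both_down
print("one site : %.4f%% available" % (single_site * 100))   # 99.9000%
print("two sites: %.4f%% available" % (two_sites * 100))     # 99.9999%

Whether that extra availability is worth building and interconnecting a second site is exactly the dispersal-versus-consolidation question operators have yet to settle.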

The data center evolution will make service delivery increasingly data center based, like the OTT model. Being able to compose atomic service features into services depends on the fabric connectivity within modular data centers of the cloud and across those modules via the WAN.

Operators serving highly industrialized areas are likely to move fastest through these phases, and these same operators are likely to have the greatest dispersal of data center assets in the final phase. But the overall trend is nearly universal, and when it's over, the network operator data center will be totally transformed.

About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his SearchTelecom.com networking blog, Uncommon Wisdom.

This was first published in June 2011
