Feature

Exploring OpenFlow scalability in cloud provider data centers

Many cloud providers wrestling with network operations at scale are curious about whether OpenFlow can scale with them.

The software-defined networking (SDN) protocol OpenFlow provides an interface that allows a central control plane to directly program the forwarding state of switches.

But the migration to OpenFlow represents a big paradigm shift for most cloud providers. "We tend to think of networking as little pieces of a jigsaw puzzle to assemble any way we want," said Brad McConnell, principal architect at Rackspace. "It takes time to adjust to big changes."

Whether OpenFlow will trigger a fundamental shift in networking remains to be seen, but "it can definitely help in areas such as mobility of virtual machines (VMs) and, by default, solves some orchestration problems customers have on their network today," McConnell said.

OpenFlow scalability: The protocol

Scalability isn't really an issue for OpenFlow because it's so simple, according to Nick McKeown, an OpenFlow and SDN pioneer and professor in Stanford University's Electrical Engineering and Computer Science departments.

"Once the control plane decides how it wants packets to be forwarded by each switch, it just uses OpenFlow to program the switches," McKeown explained. "The task of programming the switches is quite scalable, and many protocol choices would be just fine. Programming the switches isn't the challenging part. The control plane has a much more difficult job of calculating and deciding what forwarding state to put into the switches."

The SDN consistency model has yet to be proven, according to Brent Salisbury, lead network engineer at the University of Kentucky. "You can either run OpenFlow reactively, proactively or a combination of both. That's the crux -- how much state do you distribute, how much do you centralize? It's an exciting problem to have."
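The reactive mode works roughly like the Ryu sketch below: packets that miss the flow table are punted to the controller as packet-in events, the controller learns where MAC addresses live (state held centrally), and it pushes rules back down so later packets stay in hardware. The sketch assumes a table-miss rule punting to the controller is already installed. Every packet-in costs a controller round-trip, which is exactly where the centralize-versus-distribute question bites.

```python
# Reactive learning switch, sketched with Ryu: central state plus
# on-demand rule installation, in contrast to the proactive sketch above.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ethernet, packet
from ryu.ofproto import ofproto_v1_3

class ReactiveSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mac_to_port = {}  # centralized state: MAC -> port, per switch

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)

        # Learn the source; look up the destination, flooding on a miss.
        self.mac_to_port.setdefault(dp.id, {})[eth.src] = in_port
        out_port = self.mac_to_port[dp.id].get(eth.dst, ofp.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]

        if out_port != ofp.OFPP_FLOOD:
            # Install a flow so the rest of this conversation bypasses us.
            match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                          match=match, instructions=inst))

        # Forward the packet that triggered the event.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))
```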

OpenFlow scalability at the controller layer

OpenFlow controllers at the scale larger cloud providers need in their data centers will require careful engineering.

"It's challenging to build any control plane for a large multi-tenant data center, regardless of the technology," McKeown said. "The amount of state, the number of VMs, the number of tenant policies, the number of service-level agreements, the number of flows … will create a challenge for the control plane -- SDN or not, virtualization or not -- particularly when VMs and workloads are moving around."

Rackspace has worked with OpenFlow for a few years and has had it in production for nearly a year, and McConnell views controller scalability as an area with "room for improvements across the board."

"With controllers, it's a matter of choosing how much of your data center you want to put at risk at one time," explained McConnell. "For us, the question is: should a controller scale forever, or should there be a cutoff to the number of endpoints below that it manages, where we say that's as large as we're willing to make a domain?"

Federation of controller clusters also needs to work seamlessly. "If we put X number of servers beneath this one domain, then another cluster will automatically be set up once we reach that number. But the services we deliver don't work that way; they don't think along those boundaries, so we need true federation, where a virtual service can stitch together endpoints that are controlled by different controller clusters," McConnell explained.

OpenFlow or not, cloud providers will need "federation between controllers for scale, because central controllers can run out of horsepower sooner or later -- depending on how state is being determined. At some point, we'll need some sort of federation between east-west controllers to distribute state for use-case-specific consistency," Salisbury said.
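A toy model of these sizing and federation questions might look like the Python sketch below. The ControllerDomain class, the MAX_ENDPOINTS cutoff and the stitch_service step are hypothetical names invented for illustration -- no shipping controller exposes this API -- but they capture the two halves: automatic cluster spawning at the cutoff, which McConnell says works, and cross-domain stitching, which he says is missing.

```python
# Hypothetical sketch: endpoints are packed into controller domains up to
# a deliberate cutoff; services that span domains expose the need for
# east-west federation between controllers.
from dataclasses import dataclass, field

MAX_ENDPOINTS = 2048  # illustrative per-domain cutoff ("as large as
                      # we're willing to make a domain")

@dataclass
class ControllerDomain:
    name: str
    endpoints: set = field(default_factory=set)

domains = [ControllerDomain("cluster-0")]

def place(endpoint):
    """Put an endpoint in the first domain with room, spawning a new
    cluster at the cutoff -- the automatic part that already works."""
    for d in domains:
        if len(d.endpoints) < MAX_ENDPOINTS:
            d.endpoints.add(endpoint)
            return d
    d = ControllerDomain(f"cluster-{len(domains)}")
    domains.append(d)
    d.endpoints.add(endpoint)
    return d

def stitch_service(service_id, endpoints):
    """The missing part: a virtual service whose endpoints land in
    different domains needs controllers to exchange state east-west."""
    touched = {place(ep).name for ep in endpoints}
    if len(touched) > 1:
        print(f"service {service_id} spans {sorted(touched)}: "
              "federation required")
```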

OpenFlow scalability: Switch hardware challenges remain

Network switch manufacturers are still reacting to the emergence of OpenFlow, leaving many hardware challenges unresolved. Switch silicon and software need to be redesigned, and TCAM utilization needs to be improved. Some vendors still haven't even offered commercial support for a robust OpenFlow agent.

Switch silicon

"Current switch chips -- from folks like Broadcom, Intel, Marvell, Mellanox -- are all pretty good. They have high capacity and ample features for most data centers," McKeown said. "Really, all they need is the ability to do line-rate forwarding at 10 Gbps, which they all do, with reasonable forwarding tables that are mostly okay in this generation, but will be much better in the next. They also need features like equal-cost multipath routing, which is well-supported these days. Newer chips do things like VXLAN as well."

But network virtualization is "much easier using an overlay," McKeown added, so "you don't need hardware support for virtualization."

As recently as last year, according to Rackspace's McConnell, a provider that wanted to run overlays and network virtualization in hardware -- for the performance it provides -- couldn't buy a merchant silicon ASIC that natively supported the tunneling protocols used by leading overlay vendors.

"But now that merchant silicon is on the way that actually does NVGRE and VXLAN in hardware, it gives us more options to figure out how they can actually integrate into an SDN domain comprised of software switches already doing that encapsulation," McConnell said. "Today, it all comes down to, what do I buy if I want to deploy these tomorrow? That's a question that didn't have good answers until new silicon started shipping."

Switch software

Many switch manufacturers will continue to maintain complex software on their switches, with features that aren't necessary in an OpenFlow environment.

"For a while the vendors will continue to sell boxes that are mostly far too complex, with far too much old software inside," McKeown said. "But this is ultimately a losing proposition -- the writing is on the wall for the boxes that simply add an OpenFlow interface and declare they have SDN."

Vendors need to consider "reducing the complexity and moving all the control functionality up and out of the box into the control plane," he said.

TCAM

Another OpenFlow hardware challenge is ternary content addressable memory (TCAM), which is a form of memory that can do rapid lookups for line-rate switch forwarding. Most commercial OpenFlow switches use TCAM to manage flow tables, but TCAM is expensive and power-hungry, which limits the number of OpenFlow flows a switch can handle.

"If you consider the typical amount of TCAM on a top-of-rack switch, the old standard answer was that with OpenFlow, you could get about 2,000 flows on a switch," McConnell said. "But if that switch is managing complex policies on, say, 48 downstream servers -- and if they're virtualized it's even worse -- you'll run out of TCAM space long before you populate all of the ports on the switch."

Another area where McConnell sees room for improvement is the rate at which flows can be calculated and loaded into the data plane. "If failover paths aren't preloaded, the time necessary to automatically repair around a fault might not be as consistent as some traditional protocols," he said.

"Startups addressing this from a greenfield perspective -- not trying to keep all the old protocols working the way they were -- might be able to give us some big gains in both flow space and the rate at which they can be programmed," McConnell noted.

OpenFlow agent support

One of the fundamental challenges, according to Salisbury, is getting OpenFlow agent support from a manufacturer today. "For some of us wanting to take early advantage of OpenFlow-driven services on the edges of our networks, it's difficult if the kit doesn't support OpenFlow," he said.

The lack of hybrid OpenFlow and native forwarding support is also problematic. "To enable incremental pathways to deployment of SDN integration, we need hybrid functions built into the switch," Salisbury explained. "For example, either pipeline hybrid interactions that vendors bake into their firmware or OpenFlow Normal, which can redirect packets from the OpenFlow pipeline back into the normal L2-L3 forwarding pipeline."
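In code, the NORMAL escape hatch can be as small as one low-priority wildcard rule, as in the Ryu-style sketch below. Note that support for the reserved NORMAL port is optional in the OpenFlow specification, which is exactly why Salisbury has to ask vendors for it.

```python
def install_normal_fallback(dp):
    """Send anything the OpenFlow tables don't claim back to the switch's
    traditional L2/L3 forwarding pipeline via the reserved NORMAL port."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch()  # wildcard: matches everything
    actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                  match=match, instructions=inst))
```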

This was first published in May 2013
