
Virtualization basics for cloud providers

Editors, SearchCloudProvider.com

Editor's note: Cloud computing and virtualization are intricately intertwined, with the latter playing a crucial role in giving data center operators the ability to deliver multi-tenant cloud services.

In "Chapter 1: Virtualization" of Designing network services for the cloud: Delivering business-grade cloud services

    Requires Free Membership to View

, authors Huseni Saboowala, Muhammad Abid and Sudhir Modali go over the fundamentals of virtualization and its role in creating cloud services. In addition to covering virtualization basics, such as a definition and brief history of the technology, the chapter also delves into the mechanics of server virtualization and its components, network virtualization, virtualization-aware networks and storage virtualization.


Virtualization: The essentials

The journey toward the cloud begins with virtualization. Virtualization has emerged as the key disruptive technology that has catalyzed and enabled data centers to deliver cloud services. Compute, networks and storage form the three infrastructure pillars of today's data center. This chapter explores the abstraction of these vital resources, with a deep dive into server, network and storage virtualization. The fluidity of the infrastructure brought about by the virtualization of these key data center resources is fundamental to enabling the cloud.

The idea of virtualization is not new; it has been around since the days of the mainframe. But more recently, the term has gained a broader, more inclusive connotation beyond server virtualization. We begin this chapter by seeking a generic definition of virtualization, while examining the basic concepts and history associated with it.

Virtualization basics

Virtualization can be defined as the abstraction of physical resources into logical units, such that a single physical resource can appear as many logical units and multiple physical resources can appear as a single logical unit. The primary motivation behind virtualization is to hide the physical characteristics and irrelevant details of these resources from their end users. Thus, each user gets the illusion of being the lone user of that physical resource (one-to-many virtualization), or multiple physical resources appear as a single virtual resource to the user (many-to-one virtualization).

One-to-many virtualization

Consider the familiar example of virtualizing an x86 server, in which software called a virtual machine monitor, or hypervisor, allows multiple virtual machines (VMs) to run on the same physical server. Each VM emulates a physical computer by creating a separate operating system environment. The ability to run multiple VMs means that we can now simultaneously run multiple operating systems on the same underlying physical machine. The operating system running inside each VM gets the illusion that it is the only operating system running on that host server. One physical machine has effectively been divided into many logical ones.
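
To make the one-to-many pattern concrete, here is a minimal Python sketch that lists the guest VMs sharing a single physical host. It assumes the libvirt Python bindings and a local QEMU/KVM hypervisor; the connection URI is an assumption, and the example is illustrative rather than taken from the book.

    # Minimal sketch: enumerate the VMs running on one physical host.
    # Assumes the libvirt Python bindings ("pip install libvirt-python")
    # and a local QEMU/KVM hypervisor; the URI below is an assumption.
    import libvirt

    # Open a read-only connection to the local hypervisor.
    conn = libvirt.openReadOnly("qemu:///system")

    # Each domain is one guest VM carved out of the same physical server.
    for dom in conn.listAllDomains():
        state, max_mem_kb, _, vcpus, _ = dom.info()
        print(f"VM {dom.name()}: {vcpus} vCPUs, "
              f"{max_mem_kb // 1024} MiB max memory, "
              f"active={bool(dom.isActive())}")

    conn.close()

Each domain reported here believes it owns the hardware, while the hypervisor multiplexes the single physical machine among them.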

Many-to-one virtualization

A load balancer, which sits in front of a group of Web servers, is a classic example of many-to-one virtualization. The load balancer hides the details about the multiple physical Web servers and simply exposes a single virtual IP (VIP). The Web clients that connect to the VIP to obtain the Web service have the illusion that there is a single Web server. Many physical Web servers have been abstracted into one logical Web server.
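
As a rough illustration of the many-to-one pattern, the Python sketch below stands in for a load balancer: clients connect to one listening address (playing the role of the VIP), and each request is forwarded round-robin to one of several hidden backends. The backend addresses and the listening port are hypothetical, and error handling is omitted for brevity.

    # Minimal sketch of a round-robin HTTP load balancer: many physical
    # web servers are abstracted behind one virtual endpoint.
    # Backend addresses and the listening port are hypothetical.
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The pool of real web servers hidden behind the single endpoint.
    BACKENDS = itertools.cycle([
        "http://10.0.0.11:8080",  # hypothetical backend 1
        "http://10.0.0.12:8080",  # hypothetical backend 2
        "http://10.0.0.13:8080",  # hypothetical backend 3
    ])

    class VIPHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Pick the next backend and proxy the request to it; the
            # client only ever sees the one address this server binds.
            backend = next(BACKENDS)
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Clients connect here, as if to a single web server.
        HTTPServer(("0.0.0.0", 8000), VIPHandler).serve_forever()

Production load balancers add health checks, session persistence and smarter scheduling, but the abstraction is the same: many servers, one logical address.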

Virtualization: A brief history

The concept of virtualization has been around since the 1960s, when IBM implemented it to logically partition mainframe computers into separate VMs. This partitioning enabled mainframes to run multiple applications and processes at the same time, which improved their utilization. Such multitasking allowed better leveraging of those expensive investments.

Over the next two to three decades, the need for virtualization declined as inexpensive PCs and servers became available. In addition, client/server applications became prevalent, and the trend shifted toward distributed computing. Furthermore, the universal adoption of Windows and Linux led to the emergence of x86 servers as the dominant compute platform. Unlike mainframes, however, these servers were not designed for virtualization. To enable the virtualization of x86 servers, specialized software called a hypervisor was developed by VMware, Citrix, Microsoft and other companies.

The definition of the term virtualization has evolved beyond server virtualization into a significantly broader context. Today, it represents any type of abstraction in which a process is removed from its physical operating environment. Therefore, virtualization can be applied to other areas of IT, such as storage, networks, applications, services and desktops. This chapter focuses on server virtualization, network virtualization and storage virtualization, which collectively form the foundation of today's virtualized data center. The sections that follow explore these diverse forms of virtualization, starting with the most familiar one: server virtualization.

Excerpted from Designing network services for the cloud: Delivering business-grade cloud services, by Huseni Saboowala, Muhammad Abid and Sudhir Modali (ISBN: 1-58714-294-5). Copyright 2013, Cisco Press. All rights reserved.

Download this free PDF to continue reading this chapter excerpt about virtualization basics and cloud service delivery from the book Designing network services for the cloud: Delivering business-grade cloud services.

About the book:
In this book, Cisco Systems experts demonstrate how to rapidly qualify and deploy next-generation network infrastructure and services in a cloud-centric world. Topics include key trends in infrastructure, along with various services and their drivers. Key differences in certifying on-premises and cloud-based platforms and services are also discussed at length. The book also takes an in-depth look at leading validation and benchmarking approaches, presenting best practices, guidance on selecting the right test equipment and expert advice on creating efficient test plans.

About the authors:
Huseni Saboowala holds a bachelor's degree in electronics engineering from Bombay University, Mumbai, India, and a master's degree in software engineering from Kansas State University, Manhattan, Kan. He currently works as a technical leader in unified communications, cloud computing and security at Cisco. Huseni lives in Fremont, Calif.

Muhammad Abid holds a bachelor's degree in electrical engineering from the City University of New York and an executive master's degree in technology management from Stevens Institute of Technology in New Jersey. Muhammad has 15 years of experience working for small and medium-sized enterprises as well as a global service provider. Currently, he works in the areas of unified communications, cloud computing and security at Cisco. Muhammad lives in San Jose, Calif.

Sudhir Modali has 15 years of experience in networking, and he has worked in customer support, testing, technical marketing and product management. Sudhir has worked with diverse market segments -- ranging from enterprises to data center operations and cloud service providers -- on technologies including Ethernet, ATM, frame relay, VoIP, video, switching, routing and MPLS. He lives in Milpitas, Calif.

This was first published in January 2013
