Saturday, August 23, 2014

Is VMware's NSX a SDN, NFV or NV?



Next week is VMworld 2014.  Two weeks ago, there was already a lot of traffic on the internet about this event.  People are waiting to see what new products VMware is going to introduce and how these products can help solve their business or technical problems at work.

I believe vSphere 6 will be announced.  Both vSAN and VVols will be hot topics.  Integration of Docker and VMware will be another hot topic, as people are saying Docker will replace VMs and VMware will be saying otherwise.

Many people are also talking about the sessions and hands-on labs on NSX.  This got me to look into what NSX is.

Acronyms
The title of this blog has lots of acronyms:
  • SDN – Software Defined Network
  • NFV – Network Function Virtualization
  • NV – Network Virtualization
  • NSX – like ESX, it is a VMware product name.
Anyone in the IT industry has heard these acronyms at some point and can probably say what each one stands for.  But do we really understand what they are?

SDN – Software Defined Networking
SDN is a widely used term.  When I typed “What is SDN” into my favorite search engine, I got 36,300,000 hits.

Most articles define SDN as an architecture that separates the network control plane from the forwarding plane, with the control plane generally centralized.  In other words, a central controller decides how traffic should be forwarded, and the switches simply apply those decisions.
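As a rough mental model only (not how any particular SDN controller is implemented), the split can be sketched as a controller with the global view pushing forwarding decisions into simple switches; all class and method names below are made up for illustration.

    # Conceptual sketch of the SDN control/forwarding split.
    # Nothing here maps to a real controller API; names are invented.

    class Switch:
        """Forwarding plane: only matches packets against installed flow entries."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}          # dst_ip -> output port

        def install_flow(self, dst_ip, out_port):
            self.flow_table[dst_ip] = out_port

        def forward(self, dst_ip):
            return self.flow_table.get(dst_ip, "punt-to-controller")

    class Controller:
        """Control plane: has the global view and programs every switch."""
        def __init__(self, switches):
            self.switches = switches

        def program_route(self, dst_ip, path):   # path = [(switch, out_port), ...]
            for switch, out_port in path:
                switch.install_flow(dst_ip, out_port)

    s1, s2 = Switch("s1"), Switch("s2")
    ctrl = Controller([s1, s2])
    ctrl.program_route("10.0.0.5", [(s1, 2), (s2, 7)])
    print(s1.forward("10.0.0.5"))   # 2 -- the decision came from the controller, not the switch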


NFV – Network Function Virtualization
Network Function Virtualization, as the name suggests, is the virtualization of network functions.  To virtualize means to abstract from the physical.  Network function often refers to Layer 4 to Layer 7 functions such as firewalls, load balancers, DNS or IDS/IPS.  A quick reference for the OSI layers can be found here


Network Virtualization
Network virtualization is the abstraction of the physical network into logical segments using network overlay/tunneling technologies.  VXLAN, NVGRE and STT are good examples of network overlay technologies.


Image source: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-729383.doc/_jcr_content/renditions/white-paper-c11-729383-07.jpg

With VXLAN as the network overlay, tunnels are established between the VTEPs (VXLAN Tunnel End Point).

After reading all these, what is your answer to the title of this blog post: “Is VMware's NSX a SDN, NFV or NV?”

To me the answer is: VMware NSX is all three.  While these are 3 distinct terms, they are interrelated.  All 3 technologies have the same purpose of solving the networking demands of the contemporary data center.

VMware NSX
NSX was officially announced last year at VMworld 2013.  During the announcement there was one presentation slide that caught the whole world’s attention (well, part of the tech world maybe): the slide listing the companies that support NSX.  Cisco was missing from that slide.  For a long time Cisco’s Nexus 1000V virtual switch has worked in vSphere as a distributed virtual switch option.  A few months after VMware introduced NSX, Cisco announced Application Centric Infrastructure (ACI).  These are 2 different approaches to solving problems in the contemporary data center.




This picture is from a blog by Brad Hedlund, engineering architect for VMware’s Networking and Security Business Unit (NSBU).  It is the best way to understand what NSX is: just as ESX virtualized the compute platform, NSX virtualizes the network.

VMware has good articles describing what NSX is here and here, and I am not going into the details in this post.

VMware NSX comes in 2 flavors:

  • NSX for multi-hypervisor
  • NSX for vSphere

NSX can integrate with OpenStack.  Scott Lowe has a nice blog series on NSX/NVP, and this particular post talks about NSX and OpenStack integration.

VMware NSX components
According to this article by Hatem Naguib, there are 5 basic components of NSX:

  • Controller Cluster
  • Hypervisor vSwitches
  • Gateways
  • Ecosystem partners
  • NSX Manager

Also, in another VMware document, the VMware NSX data sheet, the key features of NSX are:

  • Logical Switching – Reproduce the complete L2 and L3 switching functionality in a virtual environment, decoupled from the underlying hardware
  • NSX Gateway – L2 gateway for seamless connection to physical workloads and legacy VLANs
  • Logical Routing – Routing between logical switches, providing dynamic routing within different virtual networks
  • Logical Firewall – Distributed firewall, kernel-enabled line-rate performance, virtualization- and identity-aware, with activity monitoring
  • Logical Load Balancer – Full-featured load balancer with SSL termination
  • Logical VPN – Site-to-site and remote-access VPN in software
  • NSX API – RESTful API for integration into any cloud management platform (a sketch of what a call might look like follows below)
From this we can see that portions of NSX meet the requirements of SDN, NFV and NV.
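Since the data sheet only says the API is RESTful, the following is just a hedged sketch of what driving NSX from a script could look like.  The manager address, credentials, resource path and JSON payload are all invented for illustration and are not taken from the actual NSX API reference.

    # Hypothetical sketch of calling a RESTful API such as the one NSX exposes.
    # The URL, path and JSON body below are invented for illustration only;
    # consult the real NSX API documentation for actual resources and fields.
    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"   # assumption: NSX Manager address
    AUTH = ("admin", "password")                       # assumption: basic auth in a lab

    # Create a logical switch (illustrative resource name and payload).
    payload = {"display_name": "web-tier", "transport_zone_id": "tz-1"}
    resp = requests.post(
        NSX_MANAGER + "/api/logical-switches",          # hypothetical path
        json=payload,
        auth=AUTH,
        verify=False,                                   # lab only: self-signed certificate
    )
    resp.raise_for_status()
    print(resp.json())

The point is simply that anything a cloud management platform can do against NSX boils down to HTTP calls like this, which is what makes the platform scriptable.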

NSX is a big topic and I will dig deeper in the future, but this is my preparation for next week’s VMworld 2014.

Sunday, August 17, 2014

VXLAN in the contemporary data center



What is a Contemporary data center?
A contemporary data center is a virtualized data center.  At first only the servers were virtualized.  Virtualizing the servers alone changed the data center from a static environment to a dynamic one where servers running as virtual machines can be provisioned and deleted, as well as moved from one physical machine to another.  The contemporary data center has become a dynamic/elastic environment.

Later on, storage virtualization made the data center even more dynamic/elastic: besides the virtual machines, the data can be moved around as well.

Being able to move virtual machines from one physical server to another is very useful.  However, the limitation is that these physical servers have to be connected to the same flat (layer-2) network.

Running multiple virtual servers on a physical server allows for multi-tenancy.  VLANs are a good way to isolate traffic among the various tenants.  The number of VLANs in a network is limited by the 12-bit VLAN ID field to 4096 values, of which VLAN 0 and VLAN 4095 are reserved, so only 4094 usable VLANs can exist in a given layer-2 network.

To cope with the increased demand on the network from the virtualized data center, the industry has come up with 3 different ways to alleviate these problems.  The most discussed technologies are:
  • Network Virtualization
  • Network Function Virtualization and
  • Software Defined Networking

I will describe and compare these 3 technologies in another post.  This post will focus on VXLAN, which is one form of network virtualization.

What is Network Virtualization?
Virtualization is the abstraction or decoupling of something from its physical entity.  In the case of network virtualization, it is the ability to abstract networking from the physical network.

How does network virtualization abstract from the physical network?  One way is to use a network overlay, where tunnels between endpoints are created on top of the existing physical network.  The most common tunneling protocols are (a short comparison sketch follows the list):

  • VXLAN (Virtual Extensible LAN)
  • Network Virtualization using Generic Encapsulation (NVGRE)
  • Stateless Transport Tunneling (STT)
  • Network Virtualization Overlays (NVO3)
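For quick reference, the first three differ mainly in the outer transport they ride on and in the size of the tenant identifier they carry.  The sketch below only records those rough facts from memory; double-check the relevant drafts/RFCs before relying on the numbers.

    # Rough comparison of the overlay encapsulations listed above (from memory).
    overlays = {
        "VXLAN": {"outer_transport": "UDP (IANA port 4789)",      "tenant_id_bits": 24},  # VNI
        "NVGRE": {"outer_transport": "GRE (key field reused)",    "tenant_id_bits": 24},  # VSID
        "STT":   {"outer_transport": "TCP-like stateless header", "tenant_id_bits": 64},  # context ID
    }

    for name, info in overlays.items():
        print(f"{name:6s} rides on {info['outer_transport']}, "
              f"{info['tenant_id_bits']}-bit tenant identifier")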
Benefits of Network Overlay
The very first problem a network overlay helps solve is extending the layer-2 domain across layer-3 subnets.  As a result, physical servers are no longer confined to a single flat layer-2 network for virtual machines to move around in.  With traffic tunneled between endpoints, it also helps isolate traffic among tenants.

Each overlay network has its own network ID, which lifts the roughly 4K VLAN limitation.  Furthermore, different tenants in the same data center can use the same private IP addresses.

What is VXLAN?
VXLAN (Virtual Extensible LAN) is a network tunneling technology that encapsulates a native Ethernet frame inside a UDP packet and transports it over an IP network.

It was jointly developed by VMware, Arista Networks and Cisco.  The latest specification, which has moved to RFC status, also has contributions from Storvisor, Broadcom, Citrix and Red Hat.  The title of the IETF document is “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”.  The technology was first announced at VMworld 2011, and since then there have been tons of articles about it in technical magazines and blogs.

VXLAN Terminology
The best way to understand a technology, in my opinion, is to start from the terminology being used.  It provides a framework for what the important elements of the technology are.  At the very least, we can type these keywords into our favorite search engine and start researching.

The following are, in my opinion, the essential basic terms used in the VXLAN world:
  • Encapsulation
  • VTEP
  • VNI
  • IP Multicast 
Encapsulation
The term encapsulation is used in object-oriented programming as well as in data communication, and the idea is the same in both cases: put one object inside another object and send it to a destination.

In the case of VXLAN, a layer-2 frame is carried as the payload of a UDP packet, and IP is used to reach the destination.  Upon reaching the destination, the packet is de-capsulated.

I have a picture taken from the Cisco website that not only details the individual fields of a VXLAN packet but also explains the concept of encapsulation with color.  The yellow portion is the “original L2 frame”, and it is placed as the payload of a UDP packet, highlighted in blue.


Image source: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-729383.doc/_jcr_content/renditions/white-paper-c11-729383-02.jpg
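To make the header layout concrete, here is a minimal sketch that builds the 8-byte VXLAN header by hand and glues it in front of an inner Ethernet frame.  The MAC addresses and frame bytes are dummies, and a real implementation would of course also build the outer Ethernet/IP/UDP headers (or let the kernel or switch ASIC do it).

    # Minimal sketch: construct the 8-byte VXLAN header and wrap an inner L2 frame.
    # Dummy data throughout; only the header layout follows the VXLAN spec
    # (flags byte with the I bit set, 24-bit VNI, reserved fields zero).
    import struct

    VXLAN_UDP_PORT = 4789          # IANA-assigned destination port for VXLAN

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Return VXLAN header + original L2 frame (this becomes the UDP payload)."""
        flags = 0x08               # 'I' flag: a valid VNI is present
        # Header: flags(1) + reserved(3) + VNI(3) + reserved(1) = 8 bytes.
        header = struct.pack("!B3s3sB", flags, b"\x00\x00\x00",
                             vni.to_bytes(3, "big"), 0)
        return header + inner_frame

    # A fake "original L2 frame": dst MAC, src MAC, EtherType, payload.
    inner = bytes.fromhex("ffffffffffff") + bytes.fromhex("005056000001") \
            + b"\x08\x00" + b"hello"
    udp_payload = vxlan_encapsulate(inner, vni=5001)
    print(udp_payload.hex())       # 08 00 00 00 | 00 13 89 | 00 | <inner frame>

De-capsulation is simply the reverse: strip the first 8 bytes, read the VNI, and hand the remaining bytes back to the layer-2 domain identified by that VNI.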

VTEP
As described above, packets are encapsulated at the source and de-capsulated at the destination.  The VTEP (VXLAN Tunnel End Point) is the entity that performs the encapsulation and de-capsulation.

The VTEP plays a vital role in VXLAN operation.  It is between these end points that the tunnel is created, so that the “original L2 frame” can be transported back and forth, achieving layer-2 communication over a layer-3 (IP) infrastructure even when the endpoints are in different IP subnets.  This addresses the constraint mentioned earlier in this post, that vMotion of a virtual machine is limited to physical machines in the same layer-2 network.

A VTEP can be implemented in the virtual switch of a hypervisor, or it can live on a physical networking device such as a switch or router.

VNI
VXLAN is about layer-2 segments, as its name suggests: extensible.  Traditional VLANs are limited by the 12-bit VLAN ID to roughly 4K segments per network.  With VXLAN this is expanded to about 16 million logical segments, through the use of a 24-bit VNI or VNID (VXLAN Network Identifier) that uniquely identifies a logical segment within a VXLAN network.

Each device in the VXLAN network is uniquely identified by the combination of its VNI and its MAC address.
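The jump from thousands to millions of segments is just ID-field arithmetic, and the quick check below also shows the kind of (VNI, MAC) key a VTEP's forwarding table ends up using.  The table entries are invented for illustration.

    # ID-space arithmetic: 12-bit VLAN ID versus 24-bit VNI.
    print(2 ** 12)      # 4096 VLAN IDs (of which a couple are reserved)
    print(2 ** 24)      # 16,777,216 possible VNIs, i.e. ~16 million segments

    # Because the VNI scopes the MAC address, two tenants can reuse the same MAC
    # (or IP) without colliding; a VTEP table is keyed on the pair.
    forwarding_table = {
        (5001, "00:50:56:00:00:01"): "remote VTEP 10.20.10.11",
        (5002, "00:50:56:00:00:01"): "remote VTEP 10.20.10.12",   # same MAC, different segment
    }
    print(forwarding_table[(5001, "00:50:56:00:00:01")])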

IP Multicast
VXLAN operates over a layer-3/IP network, with tunnels created between VTEPs.  The VXLAN specification uses IP multicast to simulate a layer-2 broadcast in order to find the location of the destination device.  The protocols involved are IGMP, which the VTEPs use to join a multicast group, and PIM, which the routers use to build the multicast distribution tree.  I will have to find out which multicast configurations are most common in practice.
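As a small illustration of the group-join step only (not of a real VTEP implementation), the snippet below uses a plain UDP socket to join a multicast group; setting IP_ADD_MEMBERSHIP is what causes the host's IP stack to send the IGMP membership report.  The group address and port are arbitrary examples.

    # Join an IP multicast group from Python; the kernel emits the IGMP report.
    # Group/port values are arbitrary examples, not anything VXLAN-specific.
    import socket
    import struct

    GROUP = "239.1.1.1"          # example administratively-scoped multicast group
    PORT = 30000                 # arbitrary UDP port for the demo

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # mreq = group address + local interface (0.0.0.0 lets the kernel pick).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    print("joined", GROUP, "- any datagram sent to the group now reaches this socket")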

Cisco has a proprietary implementation that uses unicast to perform this function; it is called unicast mode.

Putting the terminology together to see how VXLAN works
Knowing the terminology is like knowing the alphabet.  Now we are going to make a sentence from those letters.


Image source: http://blogs.vmware.com/vsphere/files/2013/05/Learning-1.jpg

The above diagram, which I found on the VMware blog, explains the VXLAN operation very well:
  • VTEP is implemented in VMware’s vSphere Distributed Switch.
  • Each VTEP has an IP address, and in this case they are on the same subnet.
  • There is a layer-3 infrastructure network.
  • The 2 VTEPs are members of a multicast group, and in this case IGMP (Internet Group Management Protocol) is used to join it.
  • The VNID is 5001.
When the VM on the left wants to communicate with the VM on the right (a minimal sketch of this flood-and-learn logic follows the list):
  • The VM sends out a destination-unknown, broadcast or multicast frame.
  • The VTEP on the left (IP = 10.20.10.10) encapsulates this layer-2 frame into a UDP packet and sends it out to the multicast group.
  • The other VTEPs in the multicast group (in this case there is only one) receive the packet, de-capsulate it and flood it on their local layer-2 domain.
  • In this process, the VNI and the MAC address of the VM on the left are learned by the VTEPs.
  • The VM on the right receives the frame from the VM on the left and replies.
  • The reply is sent from the VTEP on the right to the VTEP on the left as a unicast packet, since the MAC address and VNI have been learned.
  • At this point the MAC addresses and VNI of both VMs have been learned by the VTEPs, and from here onward traffic between the two VMs is carried as IP unicast between the 2 VTEPs.
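Here is a minimal, hedged sketch of that flood-and-learn behaviour in plain Python.  It models only the table logic described above (learn the source on receive, flood when the destination is unknown, unicast once it is known) and none of the actual packet handling; the right VTEP's IP address is assumed for the example.

    # Toy model of VXLAN flood-and-learn, matching the walk-through above.
    # It only tracks the (VNI, MAC) -> remote-VTEP-IP table; no real packets are sent.

    class ToyVtep:
        def __init__(self, ip, multicast_group):
            self.ip = ip
            self.group = multicast_group          # list of peer VTEPs in the group
            self.table = {}                       # (vni, mac) -> remote VTEP IP

        def send_from_local_vm(self, vni, src_mac, dst_mac):
            dest = self.table.get((vni, dst_mac))
            if dest is None:
                # Unknown destination: encapsulate and send to the multicast group.
                for peer in self.group:
                    peer.receive(vni, src_mac, dst_mac, from_vtep=self.ip)
                return "flooded to multicast group"
            return f"unicast to VTEP {dest}"

        def receive(self, vni, src_mac, dst_mac, from_vtep):
            # Learn where the source MAC lives (delivery to local VMs is omitted here).
            self.table[(vni, src_mac)] = from_vtep

    left = ToyVtep("10.20.10.10", multicast_group=[])
    right = ToyVtep("10.20.10.11", multicast_group=[])     # assumed IP for the right VTEP
    left.group.append(right)
    right.group.append(left)

    print(left.send_from_local_vm(5001, "MAC-A", "MAC-B"))   # flooded; right learns MAC-A
    print(right.send_from_local_vm(5001, "MAC-B", "MAC-A"))  # unicast back to 10.20.10.10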

This is a brief overview of VXLAN in the contemporary data center. 

Tuesday, August 5, 2014

Stepping out of our comfort zone.



This blog post is long overdue.

At some point I hope to transform this blog into a technical blog.  Before that, I would like to get into the habit of publishing something at least twice a month.

Last May I had the opportunity to do a 45-minute presentation at a VMware User Group meeting (VMUG Los Angeles).  It was the very first public presentation of my life.  The topic was “A Beginner’s Journey to Puppet”.

I did the presentation because I wanted to step out of my comfort zone and push myself to do something different from my day-to-day work as a software developer.

In September 2013, Scott Lowe (@scott_lowe) had the idea of putting the user back in the VMware User Group (http://blog.scottlowe.org/2013/09/18/putting-the-user-back-in-vmware-user-group/).  It was a wonderful idea, and Scott was willing to donate his precious time to help 5 people speak at a VMUG.

I totally agree with Scott that users should give back to VMUG.  I once told my VMUG leader: “I know some and you know some, let’s share what we know”.  This is the only way we can move forward in our careers.  It also echoes my previous blog post about engaging in a community so we can advance our knowledge, skills and/or careers.

After contemplating for a day, I contacted Scott to express my desire to give back to the community.  Picking a topic to present was not easy for me, considering that I do not work on VMware technology on a day-to-day basis.

After a topic was selected, Scott suggested I use a mind map to organize my ideas.  It turned out to be very useful for my VCAP-DCD preparation as well, and using a mind map to organize my ideas has been a useful tool for me ever since.

We went through a few iterations of refining the content of the mind map.  In this process I learned how to approach putting a presentation together.  What I present has to get the attention of the audience; in other words, I must think from the audience's point of view: why do I need to listen to you?

From the final draft of the mind map, we translated the points into PowerPoint slides.  Scott was nice enough to spend an evening with me so I could do a “dry run” of the presentation, and he gave me constructive comments and pointers.

The presentation went well and I got some positive feedback from the audience right after.

If you ask me whether I will do this again?

I will answer yes.  It is not because I like standing in front of a group, but because I enjoy sharing what I know with others.