Sunday, December 27, 2015

Go Go Go: Golang is the way for me to go

It all started with this picture.

What do you see?

I see a happy environment with 2 essential elements: all the characters are happy, and there is food.  After all, this is what I have been trying to pursue all these years: a happy working environment and being able to provide food for the family.

With these 2 things (a happy working environment and food for the family) in mind, I have decided to learn this "new" language. And of course, I started by writing a Hello World program.  Even though it is a simple program, it provides some insight into what this language looks like.  In this blog post I am not going to dig into the language itself.  If you are interested, I will be presenting "Introduction to Go" on Dec 30 at 8:00 pm EST for Commitmas vBrownBag.  Once I get a link to the presentation, I will update the post to reference it.  Registration for the event is here.
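
For reference, here is the classic Hello World in Go.  Even these few lines show the package declaration, an import from the standard library and the main entry point:

```go
package main

import "fmt"

// main is the entry point of every executable Go program.
func main() {
	fmt.Println("Hello, World!")
}
```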

So what are we to talk about here?

The purpose of this post is to give a high-level glimpse of Go.

Go was developed in response to specific problems that Google encountered in software development and deployment.  Rob Pike, one of the 3 original designers of Go along with Robert Griesemer and Ken Thompson, defined Go as a:

  • Compiled
  • Concurrent
  • Garbage-collected
  • Statically typed language developed at Google around 2007 for efficiency, scalability and productivity

These 16 words summarize what Go is, cleverly and precisely.

Efficient, Scalable and Productive

The Go language was created with the goal of being efficient, scalable and productive.  Google's infrastructure is huge, and some of the software that runs this infrastructure is also huge.  Just building a software image may take up to 45 minutes.  Imagine a one-line change to fix a critical bug that needs to be deployed immediately.  For the developer, making the change and building the software already takes 45 minutes, followed by testing, another production build and then deployment.  The turnaround time is measured in hours.

Compiled

I don't know about Google's infrastructure, but one of the advantages of Go is that it works on Windows, Mac and Linux.  Once a program is written, it can be compiled and run on any of the 3 platforms mentioned above.  This brings up another point about Go: it is a compiled language.  Python, which makes up the bulk of OpenStack, is not compiled; it needs an interpreter to run the code.  Compiled means the toolchain generates an executable, which is then run.  Compiled programs generally run faster than interpreted ones.
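
As a small illustration of the cross-platform story, the same source below can be built for any supported platform by setting the GOOS and GOARCH environment variables before running go build (the exact platform list depends on the version of the Go toolchain you have installed):

```go
// The same source cross-compiles for different platforms, for example:
//   GOOS=linux   GOARCH=amd64 go build hello.go
//   GOOS=windows GOARCH=amd64 go build hello.go
//   GOOS=darwin  GOARCH=amd64 go build hello.go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS and runtime.GOARCH report the platform the binary was built for.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```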

Garbage-collected

One of the built-in features of Go is garbage collection, which is about memory management for the language.  The designers of Go saw that, in the case of C, a lot of coding goes into managing the allocation and freeing of memory.  If the language can take over this mundane but necessary task, developers can spend their time more effectively on the features themselves.  Another built-in memory management feature of Go is how the stack is handled.  Stack size in Go is dynamic: if more memory is needed for the stack, a new block of memory is allocated and used for the stack, so the developer does not need to worry about a stack overflow crashing the program.
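
A minimal sketch of both points: memory is allocated without ever being explicitly freed, and a deeply recursive call works because the goroutine stack grows on demand (the recursion depth of 100,000 is just an arbitrary illustration):

```go
package main

import "fmt"

// churn allocates memory in a loop and never frees it explicitly;
// the garbage collector reclaims each slice once it is unreferenced.
func churn() {
	for i := 0; i < 1000; i++ {
		_ = make([]byte, 1<<20) // 1 MB, dropped immediately
	}
}

// depth recurses deeply; the goroutine stack starts small and grows
// automatically, so there is no fixed-size stack to overflow here.
func depth(n int) int {
	if n == 0 {
		return 0
	}
	return 1 + depth(n-1)
}

func main() {
	churn()
	fmt.Println("recursed to depth", depth(100000))
}
```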

Static Type

A type describes how memory is used.  A type can be an integer, a string, etc.  Go is said to be type safe, meaning the compiler checks how variables are used at compile time and will not allow the developer to assign a string to a variable that was declared as an integer.  This can be done in Python, and the designers of Go decided that it is "dangerous", so types are diligently checked at compilation time.
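
A tiny sketch of what the compiler enforces; the commented-out line would be rejected at compile time rather than failing at run time:

```go
package main

import "fmt"

func main() {
	var count int = 42
	fmt.Println(count)

	// The following line would not compile: the compiler refuses to
	// assign a string to a variable declared as an int.
	// count = "forty-two"
}
```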

Concurrent

This is a famous feature of Go, taking advantage of the multiple processors of modern hardware.  I read somewhere on the web that a task done with a simple for loop took 42 ms to complete, while the same task written in Go with concurrency took only 14 ms.  This is a deep topic that I will dig into more in the future; for now, from a high level, just know that Go has the ability to execute work concurrently and thus faster, fulfilling the design goal of efficiency.
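
To give a flavor of the concurrency support (just a minimal sketch, not a benchmark), the loop below fans work out to goroutines and waits for all of them with a sync.WaitGroup:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	results := make([]int, 10)

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) { // each unit of work runs in its own goroutine
			defer wg.Done()
			results[n] = n * n
		}(i)
	}

	wg.Wait() // block until every goroutine has called Done
	fmt.Println(results)
}
```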

A very developer friendly language

Besides Rob Pike's explanation of Go, I have found that Go is really a very developer friendly language.  A Go function is able to return multiple values, and this makes debugging much easier.
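
A typical example of the multiple-return-value idiom is the (value, error) pair returned by many standard library functions:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.Atoi returns both the parsed value and an error.
	n, err := strconv.Atoi("123")
	if err != nil {
		fmt.Println("conversion failed:", err)
		return
	}
	fmt.Println("parsed:", n)
}
```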

Go has lots of built-in packages for modern-day applications, such as JSON processing, networking and web serving.  This makes the developer's job much easier, with no need to re-invent the wheel.
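
As a small sketch using only the standard library, the snippet below serves a JSON response over HTTP; the struct, port and URL path are made up just for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// status is an example payload encoded to JSON with struct tags.
type status struct {
	Service string `json:"service"`
	Healthy bool   `json:"healthy"`
}

func main() {
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status{Service: "demo", Healthy: true})
	})
	fmt.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```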

Go has a rich ecosystem of tools as well as built-in testing and benchmarking support.  This helps to validate local function modifications.  The testing and benchmarking support is also in line with DevOps and Agile programming practices.
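
A minimal sketch of the built-in testing and benchmarking support; assuming this lives in a file whose name ends in _test.go, `go test -bench=.` runs both the test and the benchmark (the Sum function is defined inline here only to keep the example self-contained):

```go
package demo

import "testing"

// Sum is the function under test (normally defined in a non-test file).
func Sum(a, b int) int { return a + b }

func TestSum(t *testing.T) {
	if got := Sum(2, 3); got != 5 {
		t.Errorf("Sum(2, 3) = %d, want 5", got)
	}
}

func BenchmarkSum(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Sum(2, 3)
	}
}
```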

Golang is the way for me to go

This is only a glimpse of the Go language.  There is much more to it, but after looking at this language for almost a month, I have decided that if I were to learn any new language, Go (or Golang) is the one I am going to pick up, because it aligns with my programming philosophy and practice.
          

Monday, November 30, 2015

Nothing published this month - Did I stop learning?



Last night when I applied for the 2016 VMware vExpert program, I realized that I did not publish anything for the month of November.  This is the exact opposite of November 2014, when I participated in the “30 blogs in 30 days” challenge.


If you read the description of this blog, it said – “A blog to share security, networking and cloud related technology information as @vCloudernBeer picked up on his search for his destiny in the cloud”.    

Well does it mean that I did not pick up anything this month?  

The answer is NO (pardon the double negative).

What have I done?

Actually, I am learning more and thus I do not have time to write.  Blogging is important, and it helps me to truly understand a given subject.  It is also fun to write and to share ideas, knowledge and experiences with the community.  It is for sure a win for me, and I hope it is also a win-win for me and my audience.

Another reason that I did not publish any posts this month is that, being a software developer for networking equipment, I am trying to find a job that is cloud related.  There are so many things/skills to pick up, and I figure that I have “talked” enough and it is time for me to “do something”.  I need to work on some projects, and I need to be proficient in using GitHub for open source project collaboration.

I told my wife that contributing to open source in my situation is just like trying to learn how to play a musical instrument from YouTube.  Everyone has the impression that because it is open source, you just dig in, look at the source code and then contribute.  What I have found is that understanding the code and even making changes is not so difficult, since I am a software developer.  What I find difficult is creating a testing environment to test my changes.  Unit testing is very important for software developers.  I am a true supporter of the “Test Driven Development” methodology.  In practice I may not be able to follow TDD strictly, but I try to test my code as thoroughly as possible.  It is very important for me to create a test environment.  I need to pick up more operator skills to augment my software development skills.

Instead of writing, I tried to follow the step-by-step guides that are available on the web to create test environments for different projects.  I ran into equipment problems and found that my home lab is not quite ready for a stable open source development and testing environment.  This weekend, I will be converting my Dell T-110, which has 16 GB of RAM and runs Windows 10, from a hard disk to an SSD.  After that conversion I am going to dual boot this machine with Windows and Ubuntu 14.04.   Well, 16.04 is coming and I had better get my environment up and running soon.

Commitmas

In the month of December, I will not be publishing many blog posts because I will be participating in a community-driven event - #commitmas - in which I will learn along with the community how to be more proficient in using GitHub.  If you are also interested, take a look here and consider joining this community event to learn, to teach, or both at the same time.

vBrownBag (as always) is nice enough to host a series of special podcasts on different aspects and pro-tips of using GitHub.  I will be giving a presentation on Dec 30 - Bringing it all Together - Intro to Go. Do join me and see how I do presenting a boring subject.  You can register for the podcast here.







Sunday, October 25, 2015

Things good to know before the OpenStack Tokyo Summit.

OpenStack has a 6-month release cycle, and each release is given a name associated with the location where the OpenStack Summit is held, following the sequence of the English alphabet.  The 12th OpenStack release is named Liberty and became officially available on Oct 15, 2015.

OpenStack Summit Tokyo will be held Oct 27 - Oct 30 in Tokyo, Japan.  This weekend, a large portion of the OpenStack community is heading to Tokyo with excitement.  Even though I am not able to attend this summit in Japan, I am excited to see what new technology innovations will be announced as well as the future direction of OpenStack.  (Note: the OpenStack Summit actually has 2 parts.  One part is the conference and the other is the design summit, where the open source community gathers to discuss and shape the direction and features of the next OpenStack release.)

What's new in the Liberty release?

Liberty is the first release with the "Big Tent" approach.

Over the weekend, I had a chance to take a look at what's new in the Liberty release.  The most comprehensive and detailed description of what's new in Liberty is of course the release notes from OpenStack.  If that is too detailed, Nick Chase (@NickChase) has a good article on 53 things that are new in Liberty.  OpenStack also has a very informative web page that describes the Liberty release.  There are 5 categories listed on this page as new features in Liberty:
  1. Enhanced Manageability
  2. Simplified Scalability
  3. Extensibility to Support New Technologies
  4. Container Management
  5. Orchestration
The categories are all self-explanatory.  It is, however, interesting to see that container management is a category by itself and is not under "Extensibility to Support New Technologies".

In this post let me highlight a few things that interest me the most.

Most Useful

For me, the new interface to display network topology, the Curvature Network Topology interface in Horizon, is the most useful; it helps to visualize what the network looks like.  A sample display of this new interface is:
image source: https://www.openstack.org/software/liberty/

Hot Trend

There are 2 topics that are widely discussed in the IT industry; these 2 hot technologies are opening up new use cases and helping consumers either save money or deploy services faster and more easily.  These 2 hot trends are:
  1. Docker container
  2. NFV
OpenStack provides a natural infrastructure for these 2 technologies as well as a platform to bring out their core features.

Docker Container

Docker containers need an orchestration engine to make them powerful; options include Docker Swarm, Kubernetes, CoreOS fleet (which builds on etcd and systemd), and Mesos.

At the OpenStack Vancouver summit there was a one-day special track on containers, featuring Project Magnum, which initially worked with Kubernetes in OpenStack; the Container Orchestration Engine (COE) support in Magnum has since expanded to Docker Swarm and Mesos.

In July, Google joined the OpenStack Foundation and should be able to bring its expertise on containers and Kubernetes to the Magnum project.  I have a blog post on this subject. Also, the introduction of libnetwork brings new networking options to Docker containers.

There are 2 new projects related to Docker Containers in the Liberty release:

NFV

The service provider industry is embracing NFV because it provides agility and huge cost-saving advantages.  AT&T is said to have saved big on capex with NFV. There is also the OPNFV project, which is driving the advancement of open source NFV functionality and stability.

VMware announced vCloud NFV at VMworld Europe 2015, and Cisco also has its own NFV offering.

Similar to Docker containers, NFV needs an infrastructure, and OpenStack is able to fill this need.

Interestingly, the NFV effort in OpenStack sits under the Nova project instead of under Neutron, as one might expect.

Security

Role Based Access Control (RBAC) is added to both Heat and Neutron to provide better security on resource management and usage.

Security is essential to all projects, and if OpenStack wants to break into the enterprise market, security is an important element that needs to be addressed.  OpenStack already has a security group to handle security-related issues.  Any checked-in bug fix can carry a SecImpact keyword to flag the OpenStack security group to look at potential security risks introduced by the code check-in.  There is also Project Bandit, which can check for security issues in Python code, the language OpenStack is written in.

Looking Ahead

I am excited to listen to the keynotes and the YouTube recordings of the various presentations from the OpenStack Tokyo summit.

It will be interesting to see the future direction of OpenStack as decided by the community at the OpenStack Design Summit.

Another thing that I am looking forward to is visiting Japan and trying out the local ramen and the beer.
image source: Gary Kevorkian of Cisco, at the OpenStack Tokyo Summit.


Wednesday, October 14, 2015

How are OVS, OVN, OVSDB and OpenFlow related?

We have seen the terms OVS, OVN, OVSDB and OpenFlow in various technology articles.  Do you know how they are related?  Or are they related at all?

Well, they all start with the letter "O"; this would be great for a Sesame Street episode where we learn words that start with the letter "O".

They all start with the letter O because they all have "open" in their names.

With "open" in their name, does that make them related?

OVS, OVN, OVSDB and OpenFlow are related, but not because they have "open" in their names.

Before we look at how they are related, let us take an overview of what they are and some of the key concepts we need to know.  We can only take a high-level look at OVS, OVN, OVSDB and OpenFlow in this post, as each of them could fill one or more blog posts on its own.


OVS


OVS is short for Open vSwitch.  It is an open source, software-based virtual network switch that can be deployed on hypervisors such as KVM or on white-box switching hardware.   Detailed descriptions of OVS can be found on its homepage and GitHub page.

A list of OVS features can be found here.  The latest version is release 1.5.

OVS is heavily used in OpenStack, as KVM is the most deployed hypervisor for OpenStack and OVS is the default virtual switching module in many KVM-based deployments.

This diagram from the OVS homepage describes what OVS is:
image source: http://openvswitch.org/

As you can see, both OpenFlow and OVSDB are related to OVS.  We will discuss this in more detail later in this post.

 

OVN


OVN stands for Open Virtual Network and is pronounced as "oven".  This is why the logo looks like the front of an oven.  It is a sub-project within OVS.

Back in March 2015, I wrote an article on OVN.  Please refer to that post for OVN details.

OVSDB

OVSDB is a protocol: the Open vSwitch Database Management Protocol. It is used to manage an Open vSwitch and is the management plane for Open vSwitch. It is defined in RFC 7047.

The heart of OVSDB is the database, or schema, that defines the configuration of the OVS as well as QoS policies. A management module uses JSON-RPC as the transport to communicate with the ovsdb-server module of OVS. (Note: JSON = JavaScript Object Notation, and JSON-RPC is a remote procedure call protocol encoded in JSON format.)
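
As a rough sketch of what the wire protocol looks like (assuming a local ovsdb-server listening on the usual Unix socket path, which may differ on your system), the snippet below sends a list_dbs JSON-RPC request and prints the raw reply; in practice you would use a proper OVSDB client library instead:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Assumed default socket path for ovsdb-server; adjust for your install.
	conn, err := net.Dial("unix", "/var/run/openvswitch/db.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A JSON-RPC request asking the server which databases it holds
	// (method and message format per RFC 7047).
	request := `{"method":"list_dbs","params":[],"id":0}`
	if _, err := conn.Write([]byte(request)); err != nil {
		log.Fatal(err)
	}

	// Read the raw JSON-RPC response; a real client would decode the JSON.
	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(buf[:n]))
}
```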

OpenFlow

OpenFlow is a protocol.  It is one form of control plane for Open vSwitch and is specified by the Open Networking Foundation.

The main idea is to set up a flow table that defines the action for a particular flow.  The basic action types are forward and drop, with other optional/recommended actions such as flood, enqueue or modify-field.
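
To make the match-plus-action idea concrete, here is a small, purely illustrative sketch of a flow table lookup; the field names and values are made up and are nothing like the real OpenFlow wire format:

```go
package main

import "fmt"

// FlowEntry is an illustrative match-plus-action pair, not real OpenFlow.
type FlowEntry struct {
	InPort int
	DstMAC string
	Action string // e.g. "output:2", "flood", "drop"
}

// lookup returns the action of the first matching entry; unmatched
// packets fall through to a default action (here: drop).
func lookup(table []FlowEntry, inPort int, dstMAC string) string {
	for _, e := range table {
		if e.InPort == inPort && e.DstMAC == dstMAC {
			return e.Action
		}
	}
	return "drop"
}

func main() {
	table := []FlowEntry{
		{InPort: 1, DstMAC: "00:00:00:00:00:02", Action: "output:2"},
		{InPort: 2, DstMAC: "00:00:00:00:00:01", Action: "output:1"},
	}
	fmt.Println(lookup(table, 1, "00:00:00:00:00:02")) // output:2
	fmt.Println(lookup(table, 3, "ff:ff:ff:ff:ff:ff")) // drop
}
```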

This diagram explains the Flow Table:
 

Putting OVS, OVN, OVSDB and OpenFlow together

This diagram summarizes the relationship between OVS, OVSDB and OpenFlow:

At this time, OVN is part of OVS and uses the same ovsdb-server and ovs-vswitchd components:

Summary

Both OVS and OVN are open source switching projects, with OVSDB as the management plane and OpenFlow used to program flows from the controller down to the OVS.
                             

Tuesday, September 22, 2015

A Paradigm Shift coming to the networking arena

In Wikipedia "Paradigm Shift" is defined as "a change in the basic assumptions, or paradigms, within the ruling theory of science".  It is defined by Thomas Kuhn, in his influential book The Structure of Scientific Revolutions

It has been adopted in the business world to describe a "fundamental change in an individual's or a society's view of how things work in the world".  One classic example of a Paradigm Shift in the business world is how the Japanese automaker Toyota changed its car manufacturing process, making it able to adjust to external demands or changes and thus making Toyota a major threat to the Big 3 U.S. automakers.


DevOps is a Paradigm Shift in the IT industry and is becoming a popular agile software deployment methodology.

Then what is a Paradigm Shift in the networking arena?

I think most of us will think that Software Defined Networking (SDN) is a Paradigm Shift for the networking arena.  

Well if you think this way you are only half correct. I am sure you will agree with me after reading this post.

What is SDN?

Different people have different definitions of what Software Defined Networking is.  I have a blog post that defines what SDN is.  This TechTarget article describes SDN as "an umbrella term encompassing several kinds of network technology aimed at making the network as agile and flexible as the virtualized server and storage infrastructure of the modern data center."

Overlay technologies such as VXLAN, STT or NVGRE are sometimes considered a form of SDN.  In this blog post we will look at SDN as the separation of the control and data planes, with a centralized SDN controller programming the traffic flow on the physical network devices.


 image source: https://www.sdxcentral.com/wp-content/uploads/2013/08/sdn-framework.jpg

In this SDN model, there is the concept of:
  • Northbound Interface - Interface between the business application and the SDN controller
  • Southbound Interface - Interface between the SDN controller and physical network device
Both the southbound and northbound interfaces have a set of APIs.

 

Southbound API

OpenFlow is the most common protocol used on the Southbound Interface to manage the flows that dictate how packets are moved from source to destination. (Note: OVSDB is the configuration management protocol used by the SDN controller to configure the Open vSwitch running on the physical network device.)

Northbound API

The beauty of SDN is that it abstracts the physical networking devices with software, making the network programmable in response to external changes.  The Northbound API is the channel for network applications to interface with the SDN controller.  This article is a good primer on the Northbound API.

The Paradigm Shift - IBN

Separating the control and forwarding planes in SDN is not exactly a fundamental change in how networking is done.  The true change in how networking is done is the concept of Intent-Based Networking (IBN).  The article "Intent: Don't Tell Me What to Do! (Tell Me What You Want)" by David Lenrow has a good description of what Intent-Based Networking is and describes it with these characteristics:
  • Intent is invariant
  • Intent is portable
  • Intent is compose-able
  • Intent scales out, not up
  • Intent provides context
Intent-Based Networking is another abstraction on top of the physical network, where the network application only specifies its intent and does not specify how to achieve that intent.  This is similar to a declarative language, where the user only specifies the end result.  One example of the declarative approach is Puppet, the configuration management tool, where the user only lists out the end state of the device that he/she wants to manage.
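
As a purely illustrative sketch (the structure and field names below are invented, not taken from any real controller API), an intent describes what the application wants and leaves the controller to work out the flows:

```go
package main

import "fmt"

// Intent captures what the application wants, not how to achieve it.
// A controller would be responsible for translating this into device
// configuration and flow rules.
type Intent struct {
	Source      string // logical group, e.g. "web-tier"
	Destination string // logical group, e.g. "db-tier"
	Service     string // logical service name, e.g. "mysql"
	Allow       bool
}

func main() {
	// The application states only its desired end result.
	i := Intent{Source: "web-tier", Destination: "db-tier", Service: "mysql", Allow: true}
	fmt.Printf("intent: allow=%v %s -> %s (%s)\n", i.Allow, i.Source, i.Destination, i.Service)

	// Contrast with a flow-level rule a controller might derive from it,
	// which is tied to addresses, ports and protocols:
	fmt.Println("derived flow (example): tcp dst-port 3306 from 10.0.1.0/24 to 10.0.2.0/24, action allow")
}
```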

This is a Paradigm Shift in networking, as we are shifting from the how to the what when network applications interface with the SDN controller.

 

The Advantages of Intent-Based Networking

There are several advantages for Intent-Based Networking:

Portability: Workloads in the infrastructure tend to move around, and in the case of Docker containers, applications come and go rapidly and the same application may be provisioned on different physical hosts.  By specifying only the what and not the how, intent makes the application more agile, or in other words more portable.

Composability: By specifying the intent, the operator or developer of the network application does not need to know the protocol, network attributes or vendor.  "It is possible to provide an integrated system where multiple, discrete SDN services are offered, while resolving and avoiding potential conflicts over shared resources such as forwarding table", as described in David Lenrow's more recent article on this subject.

Security: With a traditional SDN Northbound API, it is possible for an attacker to manipulate flow creation or deletion. In the Intent-Based Networking model, the Northbound API only specifies the what and not the how, thus making it safer.

Currently this Intent-Based Networking concept is still under development, but it is gaining support from the following well-known networking bodies:
  • The Open Network Foundation
  • Open Source SDN Boulder Project
  • OpenDaylight Network Intent Composition
  • Open Networking Lab
  • OpenStack
  • OPNFV
  • European Telecommunication Standards Institute (ETSI)

Further Reading on this subject


"Intent: What. Not How"

Could Intent Modeling Save the NFV Business Case?“,

Intent Models in NFV: More than “Useful”,

Diving Deeper into Intent Models for NFV

Reference:
"Intent: Don't Tell Me What to Do! (Tell Me What You Want)." SDxCentral. N.p., 12 Feb. 2015. Web. 22 Sept. 2015.
"Intent-Based Networking Seeks Network Effect." SDxCentral. N.p., 18 Sept. 2015. Web. 22 Sept. 2015.
"What Is Software-defined Networking (SDN)? - Definition from WhatIs.com." SearchSDN. N.p., n.d. Web. 22 Sept. 2015.
Wikipedia. Wikimedia Foundation, n.d. Web. 22 Sept. 2015. 
"What Is a Paradigm Shift? Definition and Meaning." BusinessDictionary.com. N.p., n.d. Web. 22 Sept. 2015.  

Tuesday, September 15, 2015

Docker Global Hack Day #3

The Event

September 16, 2015 is Docker Global Hack Day.  It starts at 4:00 pm Pacific Time and ends on September 21 (the following Monday). Local Docker meetup groups will host the event, and it will start with one hour of live streaming with different speakers.  I believe food and drinks are provided as well. The program at the local meetup will end at 9:00 pm, but the projects continue until Monday, September 21.

The local meetup for Los Angeles is held at Ticketmaster

This is a global event where people all around the world submit ideas for Docker-related projects in these 3 areas:
  1. Docker Plugins
  2. Docker Plumbing – runC, Notary, etc.
  3. Docker Freestyle – must use features from the latest Docker releases including Engine and other Docker OSS projects
List of submitted projects can be found here.

My submitted project

My goal after VMworld 2015 is to learn the Go language, and this Docker hack day is just perfect for me, as Docker as well as Kubernetes are mostly written in Go.  Earlier I wrote an article, "A New Chapter in Docker Networking", and the project that I am going to submit to Docker Global Hack Day is libnetwork related.

Hack Title

Utility to display traffic counter per container

Brief abstract of the project

This project is to provide a utility that displays the transmit and receive counters for each container by tapping into libnetwork.  This will be a good debugging tool, providing visibility into the traffic pattern of each container and showing whether there is anything abnormal.
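
As a starting point for the hack, here is a minimal sketch of the core idea that only reads the Linux per-interface counters from sysfs; it assumes you already know the host-side veth interface name that belongs to a container (the name below is hypothetical), and it does not touch libnetwork yet:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"strconv"
	"strings"
)

// readCounter reads one statistic (e.g. rx_bytes or tx_bytes) for a
// network interface from the Linux sysfs tree.
func readCounter(iface, stat string) (uint64, error) {
	data, err := ioutil.ReadFile("/sys/class/net/" + iface + "/statistics/" + stat)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	// Hypothetical host-side veth interface paired with a container.
	iface := "veth1234abc"

	rx, err := readCounter(iface, "rx_bytes")
	if err != nil {
		log.Fatal(err)
	}
	tx, err := readCounter(iface, "tx_bytes")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: rx=%d bytes, tx=%d bytes\n", iface, rx, tx)
}
```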

My expectation of this project

This project should be able to give me a jump start on Go and Docker.  Since this is a very simple project, I do not expect to win nor expect anyone to join me, but it can push me to focus and deliver the project before Monday, September 21.

Apart from getting a jump start on Go and Docker, I hope I can meet different people in the Docker community.

Thursday, September 10, 2015

My first VMworld - It was simply awesome



Last week I was able to attend the renowned number 1 IT conference that everyone talks about - VMworld.

I arrived on Sunday afternoon and left Thursday evening.

It was a wonderful experience and when I left the conference I wrote this on Twitter:

 

In short, this summarizes my first VMworld.  I had good experiences, memories and friendships.

Before I went I had no idea what I would encounter at the conference.  I worried about being alone in a BIG crowd, but the Twitter community immediately made sure that I would not be. I had written a blog post setting my theme for VMworld 2015 – Experience.

I was to experience VMware in 3 ways:
  1. Technologies
  2. People
  3. Community
As I reflect on the conference this is what I experienced at VMworld.


Technologies of VMware

There are tons of blog posts on what was announced at VMworld 2015. I like this post the most, which highlighted the following points:
  1. VMware public cloud gets vCloud Air SQL, Site Recovery Mgr Air and object storage
  2. Working with Nvidia's Grid 2.0 on virtual desktop
  3. VMware Integrated OpenStack rev 2.0 (based on Kilo release)
  4. VMware vSphere Integrated Container and Photon Platform
  5. vSphere storage driver for ClusterHQ Flocker
It is clear that VMware is getting into the container space under the umbrella of Cloud-Native Apps, which also includes technologies such as Docker containers as well as DevOps.  At VMworld 2015 there was a 3-day DevOps mini conference, DevOps @ VMworld, held at the Hang Space with keynotes and hands-on training.  There was also the Developer Day and Hackathon on Wednesday.  Participants in the hackathon were given a free one-day pass to VMworld (if they had not registered to attend VMworld) and a $600 credit for vCloud Air.

People of VMware

For me this is the best part of VMworld.  At the conference, people of the VMware community get together to talk and to exchange ideas and concepts.  Friendships are built at the conference, and this is why we have such a strong VMware community.  Even as a first-time attendee I could already feel the bonding effect among the attendees.

The VMunderground event on Sunday, which began at 1:00 pm, was just amazing. People at this event mingled well and everyone was extremely friendly.  I am a nobody in the VMware community, yet I was able to meet some VMware “hot shots” and was greeted with friendly smiles and warm conversations.

Every day, the Partner Exchange was very crowded and it was difficult to move around.  Still, people were very cordial and would give others the right of way if they were on a collision course in a tight space.  After all, there were 23,000 people at this conference.

I also attended a few parties hosted by different vendors and I was able to interact with different people.  I was able to encourage one person to start blogging.

Some of the people I met I recognized not by their faces but by their Twitter handles. And in one instance, I had a chance to tell the story of my Twitter handle - vCloudernBeer (ping me on Twitter if you want to know the story).

Community of VMware

There are 2 specific communities that I am involved in. The first one is VMUG and the second one is vBrownBag.

VMUG is the best place to learn, network and share. This year VMUG had a lounge on the Moscone West 2nd floor instead of just a booth at the Partner Exchange.  Attendees were able to play games, relax and talk to different people at the VMUG Lounge.  The staff from VMUG headquarters were all very friendly, capable and organized. I took a picture at the VMUG Lounge, where they provided a photo booth for members to capture memories.

The vBrownBag crew was busy from Monday to Thursday.  Lots of people signed up for a 10-minute slot to present different topics.  Some were rookies like me and some were VMware veterans. The complete schedule of all the presentations can be found here.

My presentation was on Wednesday, and you can view it on YouTube with the slide deck posted here. Comments on the presentation are welcome, as I want to improve my presentation skills.  My presentation topic was "Microsegmentation - a perfect fit for Microservices security", which somehow aligned with one of the topics highlighted by VMware at VMworld 2015.  Before the presentation I had written blog posts on this subject.  One of these blog posts (A new chapter in Docker Networking) was featured in Docker Weekly.  This was encouraging and elevated my confidence during the presentation, because being featured in Docker Weekly validated my presentation content.  After all, this was VMworld and not just any event.

Thanks to James Brown (@jbcompvm), I was able to take part in another vBrownBag slot, talking about Virtual Design Master as a participant and sharing my experience and the benefits from the contest. It took the form of a panel discussion.  English being my second language, this was a bit challenging for me, as the session was not scripted nor based on PowerPoint slides that we could expand on or reference. It was free-form question and answer. If interested, you can watch it here.

With these efforts I am now a proud owner of this polo shirt


It was simply awesome

Yes, it was simply awesome. I did not attend a single session nor try any Hands-on Lab at VMworld, but I had a great time interacting with different people and felt the bonding effect of the conference.  Of course I brought home a whole bunch of new t-shirts and other swag.

Next year VMworld will be held in Las Vegas and I am looking forward to experiencing the technologies, people and communities of VMware again. 

Monday, August 24, 2015

Microsegmentation – a perfect fit for Microservices security

In my last post, we explored a new chapter in Docker networking. With the new and still-under-development libnetwork, Docker containers are now able to take advantage of 3rd-party networking and security solutions.  One of these security solutions is Microsegmentation.

Microsegmentation is not a new concept, but implementing it was not feasible until network virtualization became mature.  Both VMware and Cisco offer Microsegmentation solutions.

What is Microsegmentation?

Segmentation is a security principle used to group entities within a network into one unit and to apply rules/policies to control the traffic in and out of the segment. Usually, this is done by a firewall.  Microsegmentation provides a way to define rules/policies at a finer granularity, sometimes down to a single Docker container.

Underlying Principles
An important principle for Microsegmentation is Zero Trust, which simply means that within a network, nothing is trusted.  The traditional IT security model assumes that attacks on an IT infrastructure come from the "outside".  There are the familiar tools such as the DMZ, intrusion detection/prevention systems and the antivirus software that companies spend a lot of money on to stop hackers from entering the system. Once inside the system, not much is done to check the traffic inside the perimeter.

Perimeter-based security measures are not enough for modern IT infrastructure, especially when the workload is in the cloud, where the perimeter is not very well defined.  It is important to employ a Zero Trust model to provide the maximum level of security.

Major components for effective Microsegmentation
I read an article by Scott Lowe (I cannot find where it is.  I will put in the URL once I find it).  According to that article, there are 3 main components that define an effective Microsegmentation implementation:
  1. Network independent policy definition
  2. Centralized policy definition repository
  3. Distributed policy enforcement
1. Network independent policy definition
Traditional firewall rules use the 5-tuple to define allow or deny rules.  That is enough when a service runs as a monolithic process.  To make Microsegmentation more granular, the rules and policies have to go beyond network attributes.  One example of a network-independent definition can be the type of OS kernel or even the patch level of the same OS.
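
Purely as an illustration (the structure and field names below are invented, not taken from any vendor's product), a network-independent policy might mix traditional tuple fields with attributes of the workload itself, such as its image or OS patch level:

```go
package main

import "fmt"

// Policy mixes traditional network match fields with workload attributes;
// every field name here is illustrative only.
type Policy struct {
	// Traditional 5-tuple style attributes.
	Protocol string
	DstPort  int

	// Network-independent attributes about the workload itself.
	ContainerImage string
	OSKernel       string
	PatchLevel     string

	Allow bool
}

func main() {
	p := Policy{
		Protocol:       "tcp",
		DstPort:        443,
		ContainerImage: "shop/payment-service",
		OSKernel:       "linux-3.19",
		PatchLevel:     "2015-08",
		Allow:          true,
	}
	fmt.Printf("allow=%v %s/%d for image %s (%s %s)\n",
		p.Allow, p.Protocol, p.DstPort, p.ContainerImage, p.OSKernel, p.PatchLevel)
}
```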

2. Centralized policy definition repository
Virtual machines and/or Docker containers can move around.  To be effective, Microsegmentation has to be able to define the security policy in a central location so that no matter where the segment moves, the policy can be easily retrieved.

3. Distributed policy enforcement
The enforcement point of the security policy has to be as close to the source as possible so that it will not be a bottleneck when applying the security policy. 

Microservices opens up Security risk

Microservices architecture is perfect for cloud-native applications: agile and able to scale in and out depending on need.  On the other hand, Microservices architecture opens up security risks that need to be addressed:
  • Frequent and short life span
  • Increase of East-West traffic
  • Services are not as isolated
Frequent and short life span
How is a short life span a security risk? It is a security risk when the security rules cannot keep track of when and on which host machine the services are deployed.  Over time the ACLs, which are a common form of security rule for segmentation, can become unmanageable and thus create security risk. This problem is aggravated when the Microservices have short and frequent life spans. A Docker container can be provisioned in a matter of milliseconds.  We need security measures that keep up with this pace; otherwise there will be windows of security exposure.

Increase of East-West traffic
Traffic between client and server is defined as North-South traffic, and traffic between servers is defined as East-West traffic.  Why would an increase in East-West traffic present a new security threat?  The traditional security model assumes that attacks on an IT infrastructure come from the "outside"; not much is done to stop hackers from hopping from server to server.  The increase in East-West traffic amplifies this security risk, which needs to be addressed.

Services are not as isolated
Microservices architecture breaks a monolithic service into smaller services.  In the monolithic model, the components of a service run within the same process, so it is much easier to provide service isolation.  The traffic to a monolithic service is also much easier to control.  With Microservices, the components of a service no longer run within a single process; the individual processes that work together to perform a service need to communicate with each other.  Traditional firewall rules use the networking 5-tuple as the basis for allow or deny decisions.  Now with Microservices, using only networking attributes to set up allow or deny rules is not enough, making it difficult for the IT infrastructure administrator to use traditional firewall rules to provide service segmentation.

If Docker containers are used to implement the Microservices architecture, we have the problem of all containers sharing the same OS kernel.  In this way, individual microservices are not truly isolated from each other.  Lots of development is being done on Docker security, but at least for now, it is a security risk that we have to address.

Microsegmentation and Microservices fits right in

Microsegmentation and Microservices are a perfect match.  How?

What Microsegmentation offers for security is just what a Microservices architecture implemented in the form of Docker containers needs. With network virtualization as the foundation of Microsegmentation, the security policy is ready to secure the rapidly provisioned Microservices.

The 3 major components of effective Microsegmentation help mitigate the security risks that Microservices open up.

Network independent policy definition
In Microsegmentation, security rules are not limited to the traditional 5-tuple networking attributes; thus we can define a Docker container as a single segment, with the enforcement point of the policy applied to the networking interface of the Docker container.  This can effectively monitor and control the East-West traffic of the various Microservices.

Even if a single Docker container is compromised, the hacker will not be able to break out of the Docker container, because the security rule defined for that single Microservice will stop the unexpected traffic.  Depending on the implementation of Microsegmentation, once a Docker container is found to have violated the defined security rule, it can be killed and respawned.

Centralized policy definition repository and distributed policy enforcement
Microservices architecture allows an application to be cloud native.  A common characteristic of cloud-native applications is that they can be provisioned quickly and on demand.  As these Microservices come and go, they can be provisioned on different host machines.  For the purpose of balancing resources, these Microservices might be moved around to different host machines during their life span.   Microsegmentation is able to adapt to the dynamic and elastic nature of cloud-native applications with distributed policy enforcement.  The policy defined in a centralized location by the IT infrastructure administrator is enforced regardless of where the Microservices are provisioned.