Monday, September 11, 2017

Book review of "Mastering Python Networking"

Last month I had a chance to get a hold of the book “Mastering Python Networking” by Eric Chou from Packt Publishing.

I worked as a software developer for a networking company, writing value-added firmware on top of a hardware-based switching and routing engine. Even with that in-depth knowledge and experience, I still find this book very useful.

Below are the table of contents and a brief summary of the book, taken from the Packt Publishing site:

Table of Contents
  1. Review of TCP/IP Protocol Suite and Python Language
  2. Low-Level Network Device Interactions
  3. API and Intent-Driven Networking
  4. The Python Automation Framework - Ansible Basics
  5. The Python Automation Framework - Ansible Advance Topic
  6. Network Security with Python
  7.  Network Monitoring with Python - Part 1
  8. Network Monitoring with Python - Part 2
  9. Building Network Web Services with Python
  10. OpenFlow Basics
  11.  Advanced OpenFlow Topics
  12. OpenStack, OpenDaylight, and NFV
  13. Hybrid SDN

What You Will Learn
  • Review all the fundamentals of Python and the TCP/IP suite
  • Use Python to execute commands when the device does not support the API or programmatic interaction with the device
  • Implement automation techniques by integrating Python with Cisco, Juniper, and Arista eAPI
  • Integrate Ansible using Python to control Cisco, Juniper, and Arista networks
  • Achieve network security with Python
  • Build Flask-based web-service APIs with Python
  • Construct a Python-based migration plan from a legacy to scalable SDN-based network.

This book is written in a very logical manner, covering the basics before moving on to the more advanced topics. It integrates networking and Python automation and shows the reader how to build a lab environment to try out what is covered in the book. This hands-on approach adds value because the book is not just theory. We engineers like to get our feet wet and try things out ourselves.

This book is pretty comprehensive, as it covers automation of networking devices from Cisco, Juniper, and Arista Networks. The three main areas are:

Automation with Python/Ansible
Two chapters are dedicated to this topic. They cover the basics of Ansible and then move on to more advanced topics, using programming techniques to make Ansible more powerful and useful in automating the network. They also cover Ansible Vault and show how we can write customized modules.
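
As a taste of what that looks like, here is a minimal playbook of my own (not an example from the book). The ios_config module is a standard Ansible module, but the "cisco_routers" host group is a made-up placeholder:

```yaml
---
# Back up the running configuration of a group of Cisco IOS devices.
# "cisco_routers" must be defined in your inventory; credentials come
# from your inventory or from Ansible Vault.
- name: Backup running configuration
  hosts: cisco_routers
  gather_facts: no
  tasks:
    - name: Save the running config to a local backup directory
      ios_config:
        backup: yes
```

Runs of this kind are also where Ansible Vault comes in, keeping the device credentials encrypted alongside the playbook.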

Network Security with Python
Security is also an essential element that a network engineer has to deal with. One chapter of this book is dedicated to the different tools that can be used to automate day-to-day network security tasks, including packet sniffing, port scanning, searching syslog, and writing Access Control Lists (ACLs) with Ansible. This chapter also introduces the Python tool Scapy.
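
The book does all of this in Python, with Scapy among other tools. Purely to illustrate the idea behind one of those tasks, a connect-style port scan, here is a minimal sketch in Go; the isOpen helper is my own, not from the book:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// isOpen performs the simplest form of port scan, a "connect scan":
// if a TCP connection to addr succeeds within the timeout, the port is open.
func isOpen(addr string, timeoutMS int) bool {
	conn, err := net.DialTimeout("tcp", addr, time.Duration(timeoutMS)*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Start a throwaway listener so the demo has one known-open port.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	fmt.Println("listener port open:", isOpen(ln.Addr().String(), 500))
	fmt.Println("port 1 open:", isOpen("127.0.0.1:1", 500))
}
```

Real scanners (and the book's Scapy examples) go well beyond this, but the core idea is the same: try to reach a port, record whether it answers.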

Network Monitoring with Python
Two chapters are dedicated to network monitoring. They first introduce the various Python-based tools for network monitoring and then move on to more detailed topics: how we can better visualize the network with Graphviz, how to parse NetFlow with Python, and the use of AWS-based Elasticsearch for the ELK stack.
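
To give a flavor of the Graphviz piece: a network diagram is just a plain-text DOT file that the dot tool renders to an image. This tiny sketch of my own (the device names are made up, not from the book) draws a two-router lab:

```dot
graph lab {
    // Two routers joined by a point-to-point link, two hosts each.
    r1 -- r2 [label="10.0.0.0/30"];
    r1 -- h1;
    r1 -- h2;
    r2 -- h3;
    r2 -- h4;
}
```

Rendering is one command, e.g. `dot -Tpng lab.dot -o lab.png`, which is what makes Graphviz attractive for generating network maps from scripts.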

Python is a powerful and easy-to-use language for web-based applications. One chapter of this book describes how to build network web services with Python, and some readers may find this useful.

The last four chapters of this book are about the nearly matured technology of SDN. Emphasis is put on Open vSwitch/OpenFlow, and then the book briefly touches on the SDN ecosystem, such as OpenStack and OpenDaylight, with instructions on how to try out OpenStack Neutron.

This book ends with a chapter on moving forward with a hybrid SDN, mixing the legacy network with the newer technology of SDN.

Overall, I highly recommend this book for all network engineers, and to a certain degree for software developers who want to get into the field of networking.

Tuesday, April 4, 2017

Container Runtime Interface in Kubernetes 1.6

Kubernetes 1.6 was released March 26, 2017.

What’s new in Release 1.6

According to the blog post from Kubernetes, this release focuses on scale and automation. Mirantis has a very good “What’s new in Kubernetes 1.6” article, which lists the following categories of major changes:
  • DaemonSet rolling updates
  • Kubernetes Federation
  • Authentication and access control improvement
  • Scheduling changes
  • Container Runtime Interface is now the default
  • Storage improvements
  • Networking Improvements
  • Other Changes
“Other changes” is the catch-all category for changes that are also important. For all the changes in release 1.6, check out the release notes on GitHub.

Kubernetes also has a blog post describing release 1.6.

Container Runtime

Kubernetes is a container orchestration engine. For containers to run on a host, the host needs a container runtime. Back in release 1.0, Kubernetes only supported the Docker container runtime, runc. In release 1.3, rkt was added. In release 1.5, the Container Runtime Interface was added to allow Kubernetes to support a wider range of container runtimes by integrating them with the kubelet on a node. The Container Runtime Interface shipped in release 1.5 as alpha, and the Docker container runtime remained the default. With this interface, supporting a new container runtime no longer requires deep integration into the kubelet source code.

What is Container Runtime Interface?

In brief, the Container Runtime Interface is an abstraction layer allowing the kubelet to interface with any container runtime. Before release 1.5, without this interface, adding container runtime support required code changes to the kubelet source itself.

This diagram explains how the Container Runtime Interface works:


The Container Runtime Interface interacts with the kubelet using the gRPC protocol. This blog post from Kubernetes has a more detailed description of the Container Runtime Interface. Like any open source project, the GitHub repository usually has good documentation on the subject.
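
To make the abstraction concrete, here is a rough Go sketch of the idea. This is my own drastically simplified mirror, not the real CRI, which is defined as a gRPC service in protobuf in the Kubernetes repository:

```go
package main

import "fmt"

// RuntimeService is a trimmed-down, hypothetical mirror of the CRI idea:
// the kubelet talks to any container runtime through one interface.
type RuntimeService interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, image string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime stands in for a real runtime (Docker, rkt, cri-o, ...).
type fakeRuntime struct{ nextID int }

func (r *fakeRuntime) RunPodSandbox(name string) (string, error) {
	r.nextID++
	return fmt.Sprintf("sandbox-%d", r.nextID), nil
}

func (r *fakeRuntime) CreateContainer(sandboxID, image string) (string, error) {
	r.nextID++
	return fmt.Sprintf("container-%d", r.nextID), nil
}

func (r *fakeRuntime) StartContainer(containerID string) error {
	fmt.Println("started", containerID)
	return nil
}

func main() {
	// The "kubelet" side only ever sees the interface,
	// so swapping runtimes needs no kubelet code changes.
	var rt RuntimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("nginx-pod")
	c, _ := rt.CreateContainer(sb, "nginx:latest")
	rt.StartContainer(c) // prints "started container-2"
}
```

In the real system the interface boundary is a gRPC connection rather than an in-process Go interface, but the decoupling is the same.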

The Container Runtime Interface is turned on as the default behavior in Kubernetes 1.6 even though it is still in beta status. Besides runc and rkt, these container runtimes are currently in development to work with CRI:

  • cri-o: OCI conformant runtimes.
  • rktlet: the rkt container runtime.
  • frakti: hypervisor-based container runtimes.
  • docker CRI shim.

Thursday, December 29, 2016

A project on Data Plane Development Kit

Project Title

A VPP plugin utilizing Intel® DPDK and QuickAssist Technology to perform hardware-assisted compression operations.

Project Description

The Intel® QuickAssist Technology is a powerful hardware-based solution for performing crypto and compression operations. QuickAssist offloads these operations from the CPU to the 89XX communications chip. A VPP plugin that utilizes the QuickAssist feature for data compression can be used as a graph node and called by the packet processing graph.

The Acceleration Enhancements for DPDK already have the ability to perform cryptographic operations either in software or in hardware, depending on the capabilities of the processor the code is running on. This project will add compression to the Acceleration Enhancements for DPDK, limited to hardware-based operation at this time, with software-based compression to be added later, similar to the crypto counterpart.

This VPP plugin will use the compression feature to be added to the Acceleration Enhancements for DPDK to perform the compression operation.
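
Since the DPDK compression API does not exist yet, here is only a sketch in Go of the dispatch idea: pick a hardware path when a QAT device is present, otherwise fall back to software. DEFLATE stands in for the software path, and all names here are hypothetical, not DPDK APIs:

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
)

// Compressor abstracts the compression back end, mirroring the idea of
// picking a hardware (QAT) or software path at runtime.
type Compressor interface {
	Compress(src []byte) ([]byte, error)
}

// softwareCompressor is the CPU fallback path, using DEFLATE.
type softwareCompressor struct{}

func (softwareCompressor) Compress(src []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.DefaultCompression)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(src); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// pickCompressor would probe for a QAT device; this sketch always falls back.
func pickCompressor(hwAvailable bool) Compressor {
	if hwAvailable {
		// A real implementation would return a QAT-backed compressor here.
	}
	return softwareCompressor{}
}

func main() {
	c := pickCompressor(false)
	payload := bytes.Repeat([]byte("IP payload compression saves bandwidth. "), 20)
	out, err := c.Compress(payload)
	if err != nil {
		panic(err)
	}
	fmt.Printf("in=%d bytes, out=%d bytes\n", len(payload), len(out))

	// Round-trip check: inflate back to the original.
	back, _ := io.ReadAll(flate.NewReader(bytes.NewReader(out)))
	fmt.Println("round trip ok:", bytes.Equal(back, payload))
}
```

The crypto PMDs in DPDK follow the same hardware-first, software-fallback shape, which is why the project description calls compression "similar to the crypto counterpart".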

Below are two diagrams that I got from the Internet and included in the proposed project on Intel Developer Mesh for clarity:


Project Use Case

IP payload compression (RFC 3173), which helps save bandwidth, can make good use of this hardware-assist technology to speed up compression operations.

Getting Started

To start this project, I first looked at what Intel’s QuickAssist Technology is. Of course, Google was the first place I went. I found this article from Admin Magazine very helpful in getting started. It gave me a general understanding of the features and a high-level understanding of how QuickAssist Technology works. One thing I like about this article is its block diagrams; it also explains how QuickAssist Technology can boost the performance of different use cases, and NFV is, of course, what I care about the most.

Next, this web site provides tons of useful resources; most of all, it contains the source files for the Linux driver for the hardware that supports QuickAssist Technology, as well as the Programmer’s Guide and the Cryptographic and Compression API Reference Manual.

This article is also a good resource for understanding the use cases of QuickAssist Technology and network functions.

With this in place, I downloaded the DPDK source code and the DPDK Programmer's Guide, as well as the Getting Started Guide.

Initially, my idea for this proposed project was that DPDK already had the QAT-related PMD (Poll Mode Driver) and a sample program, and thus it should be simple to make "some" modifications to add compression support. Both the crypto PMD and the dpdk-qat sample application have useful documentation.

With all the source code and documentation, I was all ready to go.

Hitting a Road Block

To my surprise, everything currently in the DPDK code base is geared specifically toward crypto, with no mention of compression. I found myself hitting a roadblock. I am not sure whether I should modify the existing crypto QAT driver or come up with a separate compression QAT driver. This is a design decision, and being new to DPDK, I am not quite ready to write a new Poll Mode Driver from scratch. I need more time and information.

Reaching out to the DPDK Community for help

I then reached out to the DPDK developer mailing list, but unfortunately this was over the Christmas holiday, and only after three days did I get one reply, saying that compression is not currently supported and pointing me to a DPDK QAT documentation page.

Another day passed with still no further reply to my query, so I sent another email to the mailing list, this time mentioning that I had not gotten any help there. That afternoon, I got a reply from an Intel program manager. He told me his team works on the crypto portion of QAT and kept me engaged over email. After I explained where I was coming from, he said most of his team members were on vacation; when they come back in January 2017, they will see if they have the resources and priority to get this going, and he is happy to include me in making this work.

Project Scope needs adjustment

It seems that the scope of this project is bigger than I expected; it will take a longer time and more collaboration with the DPDK community to complete.

Current Status of this Project

While waiting for the DPDK design decision and collaboration, I am currently looking at building a VPP plugin.

This is as far as I was able to get by December 30, 2016. There is no code to show and nothing to demo yet.

However, even though I was not able to finish this project before the end of 2016, I am going to complete it in 2017. I will update the status as I move along.

Are you interested in working on this project?

If you are interested, join me in this project and help take it to completion. Contact me at the Intel Dev Mesh website or ping me on Twitter: @vCloudernBeer.

Monday, September 12, 2016

Programming made simple.

With the advent of DevOps, more and more people are finding the need to do basic programming.  While the need for "non-programmers" to program is mounting, I see that a lot of people look at programming as a tall mountain and find this hurdle difficult to overcome.

The intent of this blog post is to make programming an easy task for "non-programmers". It is about programming in general and is not specific to any programming language.

All programming languages can be simplified into 3 basic operations or ingredients.  These are the building blocks for all programming languages, be it a simple "Hello World" or software as complicated as artificial intelligence.

The 3 basic building blocks are:
  1. Assignment
  2. "If-then-else" / conditional statement
  3. Iterations


Assignment

Most computer programs have to deal with data manipulation.  Often a block of memory is reserved, called a variable.  This variable should be given a meaningful name, to remind even the author of the program what the variable is for. Some programmers are lazy and name their variables x, y, and/or foo instead of meaningful names such as return_code, username, etc.  (As a side point, when developing a software program all the logic seems so obvious, but three months down the road, in the middle of the night, we might be scratching our heads asking: why did I write this logic?)

Anyways, getting back to the topic, assignment is when we assign a value to a variable.  Depending on the programming language, some variables are very strict about their type.  If a variable is reserved/declared as an integer, we can only assign a number value to it.  There are other programming languages, such as Python, where the type is not checked by the interpreter/compiler in the same way.  Type checking is another big topic that we can look into; in this post we will just concentrate on the 3 building blocks of computer programming.

Example of variable assignment in go:

var string1 string
string1 = "Hello World!" 
fmt.Printf("I just wrote my first program: %q\n", string1)

Depending on the context, the variable "string1" may not be a good name to use, as it does not reveal how the variable is being used.

If-then-else or conditional statement

This is called the conditional statement. Different programming languages have different syntax for it, but the concept is the same.  Depending on the condition (or the value of certain variables), the program will execute different sets of logic.

The format is:

if condition A {
 do something
} else {
 do some other thing
}

The "else" part is where condition A is not true.  Or we can check the condition the other way around:

if condition A is not true {
 do something
} else {
 do some other thing
}

For example:

if _, err := checkMonthIndexSize(i); err != nil {
	fmt.Printf("\n%s\n", err)
} else {
	fmt.Printf("\nSlice is initialized correctly (len = %d)\n", i)
}

The above logic is written in Go and is only for demonstration purposes.  It is a more complex construct of the if-then-else logic, where we execute a function and, depending on its return value or error condition, decide whether to print an error message or an informational message.

The if statement checks the return value of the function checkMonthIndexSize().  I have not included the function, as I am only trying to illustrate the "if-then-else". The function checkMonthIndexSize() returns a value, which we choose to ignore, and an error, using the built-in error-handling feature of the Go language.

In fact, this is taken from the demo program that I wrote for one of the episodes of "30-days-in-commitmas 2015".  If you are interested, you can find the podcast here on YouTube.


Iterations

Programming usually involves doing the same operation multiple times, sometimes a number of times that depends on the value of a variable.

Most programming languages have the "for-loop" and the "while-loop".

Example of a for-loop:

for i := 6; i < 12; i++ {
	fmt.Printf(" [%d] := %c\r\n", i, string1[i])
}

Usually a "for-loop" will do 3 things:
  • Assign a value to a variable, in this case i := 6
  • Check a certain condition, in this case whether the value of the variable is less than 12
  • Increment the value of i by one, in this case i++ (which can also be written i = i + 1)

Example of a while-loop

while ( condition is true) {
 print a "dot" on the screen
}

This is good for a user interface.  Let's say we are doing a file transfer using FTP: we can see the progress as dots are printed to the screen, so we know the file transfer is still happening.  (An over-simplified example of file-transfer logic, for illustration only.)

So we can see that value assignment and condition checking are also used inside the for-loop and the while-loop.
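
Putting the three building blocks together, here is a small, complete Go program (my own example, runnable as-is) that uses an assignment, a conditional, and a loop:

```go
package main

import "fmt"

// classify is the conditional building block: branch on the data.
func classify(s string) string {
	if len(s) > 5 {
		return "long"
	}
	return "short"
}

func main() {
	// 1. Assignment: bind a meaningful name to a value.
	greeting := "Hello World!"

	// 2. Conditional: act differently depending on the value.
	fmt.Printf("%q is a %s string\n", greeting, classify(greeting))

	// 3. Iteration: visit each character in turn.
	for i := 0; i < len(greeting); i++ {
		fmt.Printf(" [%d] := %c\n", i, greeting[i])
	}
}
```

Every program you will ever write, however large, is built out of these same three moves.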

Happy programming

Of course there is more to programming, but this post is trying to help you get over the barrier and start programming.  You can see these 3 basic elements in very complex programs.

Of course there are data structures and object-oriented approaches, plus other features, for complex programs, but we can use these 3 basic building blocks to start.  We have got to start somewhere.

So: Happy Programming!

Monday, July 4, 2016

A step closer to the cloud

A few weeks back I blogged about "Being relevant even considered irrelevant", where I mentioned that my employer did not consider what I have done and accomplished as relevant to them.  The company I worked for has not been doing too well selling its products and is thus at the point of making the organization leaner.  We saw people from different departments in the company being laid off every month.

Once I found out my employer did not consider my skills and knowledge in server and network virtualization, OpenStack, and Docker containers, as well as my involvement in the community, relevant to them, I knew my days in the company were numbered. Sure enough, just before the quarter ended, I was given a package and laid off after working for that company for 21 years.

Community is awesome

Nice things came out of this unhappy event, and I learned once again that the community is awesome.  Once I tweeted about my layoff, I got lots of responses from the community trying to help me find a new job.  Some asked me to send them my resume, and some said very kind words about me and how I would be useful to different teams.  I did not expect this to happen. So, anyone willing to pick me up to join your team?

Crisis vs Opportunity

Being out of a job is a crisis to me. In Chinese, the word for crisis comprises two characters: danger and opportunity.  This is exactly my situation.  Even though being laid off is the danger, or something bad for me, the other side of the coin is opportunity.  For the past few years I have had opportunities to pick up knowledge and experience in a variety of technologies and, more importantly, soft skills.  (I have blogged about the importance of soft skills here.)

This layoff is in fact an opportunity for me to be one step closer to what I have been pursuing: a place in the cloud.

Accelerates my journey to the cloud

Usually, when one is out of a job, one feels lost, empty, and worried because of the financial burden.  This is not my case, for two reasons.  The first is my religious belief, which gives me peace of heart.  The second is that I have been doing all this "extra" work learning about various technologies, and being involved in the community, sharing what I have picked up.

This unfortunate event just accelerates the process of finding my destiny in the cloud: getting a day job that is cloud related.

What about my soft skills?

Over these few years, besides technologies, I have also developed some soft skills for myself, including:
  • Writing skill
  • Smiling face
  • Presentation skill 
  • People skill
The question is how do I convey these skills on my resume and/or LinkedIn profile?

Things to do in the coming few weeks

Over the weekend I sat down to think and pray about what to do next.  I wrote down all the possible areas that I have been learning about and could potentially get myself into.  Here is a mind map to help me visualize the different areas, just like a map for pursuing opportunities:

In the coming days there are a few things that I plan on doing:
  • Prepare my VMUG user presentation - "A survey on SDN Technologies"
  • Start learning Amazon Web Services in preparation for the vBrownBag podcast in September.  I will be going over section 3 of the AWS Solution Architect certification exam objectives.
  • Continue to learn new technologies as they move forward in the various areas listed above.
  • Start contributing to some open source projects.
  • Build a decent home-lab so I can play with different technologies as well as having a development/testing platform for the open source projects that I contribute to.
  • Of course, look for a job and preferably cloud related.

Stay Tuned for my update

Stay tuned and see what happens in my quest to reach my destiny: the cloud, which is my next job.

Tuesday, June 28, 2016

Is it a good thing to be - "Jack of All Trades"?

I believe the phrase "Jack of All Trades" usually has a bad connotation, especially when "master of none" goes with it.

These few years, if you have read my blog, you will see that I am exactly in this position.

In my journey to the cloud, I have found that I have to pick up lots of new technologies every day, as this industry is moving at a high pace.  Often I find that I have to get back to basics on various topics such as YAML, JSON, systemd, SELinux, Linux bridges, etc.  Yet there are other things that come in different flavors.  Linux has Debian and Red Hat/CentOS/Fedora.  Containers have Docker, CoreOS, etc. Container orchestration has Kubernetes, Mesos, etc.  SDN has OpenFlow, VMware NSX, Cisco ACI, etc.  I have been thinking that what I have been doing is making me a Jack of all trades, which in turn means a master of none.  This cannot be good. :(

Well, some people will say: oh, you are just becoming a generalist. This is a little comforting, but still I feel that something is not right with this label.  While contemplating this, I recently had an idea: instead of thinking of myself as a generalist, I am in fact being comprehensive.  What triggered this idea is that recently I found that all the different things I have been trying to pick up are starting to come together.  I can see the relationships between the different pieces.

My background is as a software developer in networking, and I can see that containers can be used for Network Function Virtualization.  I also found that some networking equipment vendors are already running containers on their routers, which use Linux as the operating system.  Each feature can be deployed as a single container, and this makes bug fixing easier.  To patch the software for one bug, we can just stop one container, load in the new one, and restart that container while the other features remain operational.

Lately, I have not blogged much, and even when I blog, the articles are not technical in nature.  So let me update you on what I have been doing lately.

In May I had a one-hour podcast with vBrownBag on Ansible and PKI (Public Key Infrastructure).  I did not know I could talk for one hour non-stop.  One thing that amazes me is that even when something went wrong with my demo, I was still able to keep talking and troubleshoot at the same time.  It was fun.  If you are interested, go to this link.

I have been using Ubuntu for some time and am very comfortable with it.  Last week, as Fedora 24 went GA, I reloaded my desktop with Fedora.  There are two main reasons I wanted to try out Fedora: the first is that I wanted to play with systemd, and the second is that most Linux deployments in the enterprise are Red Hat based.

Sometimes being a Jack of all trades does pay off.  For the upcoming VMUG, the original theme was containers, and I was going to give a user presentation on containers with a demo on how to spin up a single container.  The theme changed because we eventually found a sponsor, and they are in the storage area, so we changed the theme of the meeting to NSX and storage.  We were able to find an NSX expert to cover the VMware portion of the meeting.  With that, I was able to gear my presentation toward NSX, and now I will be talking about "A survey on SDN technologies" so that attendees without much networking background can pick up some SDN basics and get more from the in-depth VMware NSX material.

In the meantime, I am adding a new area to my bag of tricks: Amazon Web Services.  I will be presenting at vBrownBag in September, covering exam objective section 3 of the Amazon Web Services Solution Architect (associate level) certification.  Well, this means I am going to sit for this certification as well.  I never thought I would get into the public cloud.

As you can see, I am really all over the place learning different things and am truly a Jack of All Trades.

Hopefully, everything will come together real soon and I will be able to find a job that is cloud related. My journey to the cloud has not been easy, but I have learned a lot and gotten to know different people, both online on social media and in person.

So I would say it is good to be a "Jack of All Trades", if we are able to connect the dots.


Sunday, June 12, 2016

Being relevant even considered as irrelevant


The TV series “Person of Interest” is one of my favorite shows.  Even when my kids have an exam the next day, I will still watch this show, whereas I usually abstain from watching any TV the day before my kids have a test or exam.  This TV show often shows the view from CCTV cameras watching different people walking on the street, with labels such as “asset”, “threat”, or “irrelevant” assigned to them.

Last week at work, I had the unfortunate and sad experience of being marked as irrelevant in terms of my skills and knowledge in virtualization and cloud technologies, as well as my presentation skills and ability to engage with various audiences. Ironically, if you read my blog you will see that these past few years I have been trying to catch up with virtualization and cloud technologies precisely so that I would be relevant in the IT industry.  I was trying not to be the dinosaur, and yet found myself seen as the opposite by my employer.

It was a strange feeling for me. I have been a loyal employee, doing what I am told, and on top of that I use my own time to catch up with the latest developments in the virtualization and cloud industry. I work for a networking equipment company, and virtualization and cloud technology is very relevant to what we do.  Our products have to fit into virtualized data centers, or operate on cloud infrastructure.  I know Arista Networks is already using Docker containers in some of their products.  In my mind, my skills in virtualization and cloud technology will be very relevant to my employer one of these days.  There is a Chinese proverb: "An army is maintained in the course of long years, but used in the nick of time. Keep feeding the army and you will find a use for it."  I hope my employer realizes this truth.

While this is sad news to me, I did get tons of support from the community.  I got DMs from various people telling me to stay strong and to keep doing what I am doing, as I am on the right course in the pursuit of my destiny in the cloud.  This was very encouraging to me. And incidentally, I found a few job openings and sent in my resume.

From this incident, I again see that skills and knowledge are important, and yet being involved in a community is equally important.  To be part of the community, everyone has to contribute to it.  My slogan is: "I know some, you know some, let's share what we know".

Moving forward, even though this was somewhat discouraging, I will keep doing what I am doing now, which is to keep learning every day and stay relevant.  Over the past few months I did not blog much because I was busy with the home lab. I think I will get back to blogging more.  In fact, the title of this blog post says it all: "Being relevant even considered as irrelevant".  I know one day I will find my destiny in the cloud, thanks to the continuous support of my family and this community.
