Saturday, November 8, 2014

OpenStack Series: Part 8 – Neutron – Networking Service

As indicated in a previous post, compute, storage and networking are the three main building blocks of OpenStack.

In the beginning of OpenStack, networking was handled within the Nova project as nova-network.  It served its purpose while OpenStack was still in its infancy.  Later, as OpenStack matured, the need for a more flexible and "powerful" networking module arose.  Nova-network was found to be limited in the network topologies it could support, and, most of all, it could not utilize third-party solutions.  Nova-network can only use Linux bridge, a limited set of network types and iptables to provide network services for the hypervisors in Nova.  The capabilities of nova-network can be found here.

A new and separate project was incubated and then integrated into OpenStack for networking.  It was initially named Quantum, but due to a conflict with an existing commercial product, the project was renamed Neutron.  Some of the older OpenStack documents still reference Quantum as the networking service.

Neutron Components

The OpenStack Wiki describes Neutron as:
A standalone service that often deploys several processes across a number of nodes. These processes interact with each other and other OpenStack services. The main process of the OpenStack Networking service is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of plug-ins for additional processing. 

The OpenStack Networking components are:
neutron server (neutron-server and neutron-*-plugin)
This service runs on the network node to service the Networking API and its extensions. It also enforces the network model and IP addressing of each port. The neutron-server and plugin agents require access to a database for persistent storage and access to a message queue for inter-communication.
plugin agent (neutron-*-agent)
Runs on each compute node to manage the local virtual switch (vSwitch) configuration. The plug-in that you use determines which agents run. This service requires message queue access. Optional depending on the plug-in.
DHCP agent (neutron-dhcp-agent)
Provides DHCP services to tenant networks. This agent is the same across all plug-ins and is responsible for maintaining DHCP configuration. The neutron-dhcp-agent requires message queue access.
L3 agent (neutron-l3-agent)
Provides L3/NAT forwarding for external network access of VMs on tenant networks. Requires message queue access. Optional depending on plug-in.
network provider services (SDN server/services)
Provide additional networking services to tenant networks. These SDN services might interact with the neutron-server, neutron-plugin, and/or plugin-agents through REST APIs or other communication channels.
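To make the division of labor above concrete, here is a toy Python sketch of how an API front end like neutron-server hands tenant requests to a configured plugin. The class and method names are illustrative only, not actual Neutron code:

```python
# Toy sketch of the neutron-server / plugin split.
# All class and method names here are illustrative, NOT real Neutron code.

class FakePlugin:
    """Stands in for a neutron-*-plugin back end (e.g. Open vSwitch, NSX)."""
    def __init__(self):
        self.networks = {}

    def create_network(self, name):
        net_id = "net-%d" % (len(self.networks) + 1)
        self.networks[net_id] = {"id": net_id, "name": name, "status": "ACTIVE"}
        return self.networks[net_id]

class ApiServer:
    """Stands in for neutron-server: exposes the API, delegates to a plugin."""
    def __init__(self, plugin):
        # In a real deployment the plugin is chosen by configuration.
        self.plugin = plugin

    def handle(self, action, **kwargs):
        # neutron-server accepts the tenant request, then passes it on
        # to the plugin, which does the actual back-end work.
        return getattr(self.plugin, action)(**kwargs)

plugin = FakePlugin()
server = ApiServer(plugin)
net = server.handle("create_network", name="tenant-net")
print(net["id"], net["status"])
```

The point of the sketch: the API surface stays the same no matter which plugin is wired in, which is why third-party back ends can be swapped underneath Neutron.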


Neutron API
The Neutron API allows users to define:

  • Network – an isolated L2 segment, analogous to a VLAN in the physical networking world.
  • Subnet – a block of v4 or v6 IP addresses and associated configuration state.
  • Port – a connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. It also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
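These three resources map directly onto the Networking v2.0 REST API. As an illustration, the sketch below builds example request bodies for each (the field values and the `NETWORK_UUID` placeholder are examples, not output from a real deployment):

```python
import json

# Example request bodies for the three core Neutron resources,
# as POSTed to the Networking v2.0 REST API.

# An isolated L2 segment:
network_req = {"network": {"name": "demo-net", "admin_state_up": True}}

# A block of IP addresses attached to that network:
subnet_req = {"subnet": {
    "network_id": "NETWORK_UUID",   # UUID returned by the network create call
    "ip_version": 4,
    "cidr": "10.0.0.0/24",          # the block of addresses for the subnet
}}

# A connection point for a single device, e.g. a VM's NIC:
port_req = {"port": {
    "network_id": "NETWORK_UUID",
    "name": "vm1-nic",
}}

# These bodies would be sent to /v2.0/networks, /v2.0/subnets
# and /v2.0/ports respectively.
print(json.dumps(subnet_req))
```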
Neutron API Extension
With API extensions, users are able to define additional networking functions via Neutron plugins.  This diagram shows the relationship between the Neutron API, the Neutron API extensions and a Neutron plugin, where the plugin interfaces with an SDN controller - OpenDaylight.


Neutron Plug-ins
A plugin is the interface between Neutron and back-end technologies such as SDN controllers, Cisco equipment and VMware NSX, so that consumers of Neutron can take advantage of this third-party networking equipment or software.

Popular plug-ins include:
  • Open vSwitch
  • Cisco UCS/Nexus
  • Linux Bridge
  • Nicira Network Virtualization Platform
  • Ryu OpenFlow Controller
  • NEC OpenFlow
A comprehensive list of Neutron plug-ins can be found here and here.

One plugin that is not tied to a particular third-party vendor, but is very important, is the ML2 (Modular Layer 2) plugin introduced in the Havana release. This plugin allows concurrent operation of mixed network technologies in Neutron.

Without ML2, Neutron can provide only one type of Layer-2 service because the plugin operation is monolithic.  ML2 introduces the concepts of type drivers and mechanism drivers.  ML2 by itself could fill a blog post of its own; will post more in the coming days.
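As a rough illustration of the type/mechanism split, an ML2 configuration file (ml2_conf.ini) might enable several of each at once. The values below are examples only; which drivers are actually available depends on the deployment:

```
[ml2]
# Type drivers define HOW a network segment is realized.
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan

# Mechanism drivers define WHICH back end implements it.
# Several can coexist - that concurrency is the point of ML2.
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vxlan]
vni_ranges = 1:1000
```

With a monolithic plugin, choosing Open vSwitch meant excluding everything else; with ML2, Open vSwitch and Linux bridge nodes can serve the same cloud side by side.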

More to come
There is a lot more to talk about for Neutron; will cover more Neutron-related topics in the future.  For now, this IBM document is a good comprehensive article on Neutron.

Related Post:
OpenStack Series Part 1: How do you look at OpenStack?
OpenStack Series Part 2: What's new in the Juno Release?
OpenStack Series Part 3: Keystone - Identity Service
OpenStack Series Part 4: Nova - Compute Service
OpenStack Series Part 5: Glance - Image Service
OpenStack Series Part 6: Cinder - Block Storage Service
OpenStack Series Part 7: Swift - Object Storage Service
OpenStack Series Part 9: Horizon - a web based UI Service
OpenStack Series Part 10: Heat - Orchestration Service
OpenStack Series Part 11: Ceilometer - Monitoring and Metering Service
OpenStack Series Part 12: Trove - Database Service
OpenStack Series Part 13: Docker in OpenStack
OpenStack Series Part 14: Sahara - Data Processing Service
OpenStack Series part 15: Messaging and Queuing System in OpenStack
OpenStack Series Part 16: Ceph in OpenStack
OpenStack Series Part 17: Congress - Policy Service
OpenStack Series Part 18: Network Function Virtualization in OpenStack
OpenStack Series Part 19: Storage Polices for Object Storage
OpenStack Series Part 20: Group-based Policy for Neutron
"Chapter 24. Networking Architecture." Document ATOM. N.p., n.d. Web. 05 Nov. 2014.


  1. Thanks for the wonderful blog. Is the purpose of Neutron to provide only the network API services, with the actual packet-switching decisions made by the plugins?

    You have mentioned that the neutron-server runs on the network node, but the picture shows it running on the controller node. Does the neutron-server interact with any of the services running on the controller node?
