Kubernetes networking solutions

Networking is a vast topic. There are many ways to set up networks and connect devices, pods, and containers. Kubernetes can't be opinionated about it. The high-level networking model of a flat address space for Pods is all that Kubernetes prescribes. Within that space, many valid solutions are possible, with various capabilities and policies for different environments. In this section, we'll examine some of the available solutions and understand how they map to the Kubernetes networking model.

Bridging on bare metal clusters

The most basic environment is a raw bare metal cluster with just an L2 physical network. You can connect your containers to the physical network with a Linux bridge device. The procedure is quite involved and requires familiarity with low-level Linux network commands such as brctl, ip addr, ip route, ip link, nsenter, and so on. If you plan to implement it, this guide can serve as a good start (search for the With Linux Bridge devices section): http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/.
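A minimal sketch of what that involves follows, assuming the bridge-utils and iproute2 tools; the interface names, the container PID, and the addresses are placeholders for illustration, and the exact steps vary by distribution and container runtime:

# Create a bridge and attach the physical NIC to it (eth0 is a placeholder;
# in practice you would also move eth0's IP address to br0)
brctl addbr br0
brctl addif br0 eth0
ip link set br0 up

# Create a veth pair and attach the host end to the bridge
ip link add veth-host type veth peer name veth-cont
brctl addif br0 veth-host
ip link set veth-host up

# Move the container end into the container's network namespace
# (12345 stands for the container's PID) and configure it there
ip link set veth-cont netns 12345
nsenter -t 12345 -n ip addr add 192.168.1.10/24 dev veth-cont
nsenter -t 12345 -n ip link set veth-cont up
nsenter -t 12345 -n ip route add default via 192.168.1.1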

Contiv

Contiv is a general-purpose network plugin for container networking. It can be used directly with Docker, as well as with Mesos, Docker Swarm, and of course Kubernetes, via a CNI plugin. Contiv is focused on network policies, which overlap somewhat with Kubernetes' own network policy object. Here are some of the capabilities of the Contiv net plugin:

  • Supports both libnetwork's CNM and the CNI specification
  • A feature-rich policy model to provide secure, predictable application deployment
  • Best-in-class throughput for container workloads
  • Multi-tenancy, isolation, and overlapping subnets
  • Integrated IPAM and service discovery
  • A variety of physical topologies:
    • Layer2 (VLAN)
    • Layer3 (BGP)
    • Overlay (VXLAN)
    • Cisco SDN solution (ACI)
  • IPv6 support
  • Scalable policy and route distribution

Integration with application blueprints, including the following:

  • Docker compose
  • Kubernetes deployment manager

Other capabilities include the following:

  • Built-in east-west service load balancing for microservices
  • Traffic isolation for storage, control (for example, etcd/consul), network, and management traffic

Contiv has many features and capabilities. I'm not sure if it's the best choice for Kubernetes due to its broad surface area.

Open vSwitch

Open vSwitch is a mature software-based virtual switch solution endorsed by many big players. The Open Virtualization Network (OVN) solution lets you build various virtual networking topologies. It has a dedicated Kubernetes plugin, but it is not trivial to set up, as demonstrated by this guide: https://github.com/openvswitch/ovn-kubernetes.

Open vSwitch can connect bare metal servers, VMs, and pods/containers using the same logical network. It actually supports both overlay and underlay modes.

Here are some of its key features:

  • Standard 802.1Q VLAN model with trunk and access ports
  • NIC bonding with or without LACP on upstream switch
  • NetFlow, sFlow, and mirroring for increased visibility
  • QoS (Quality of Service) configuration, plus policing
  • Geneve, GRE, VXLAN, STT, and LISP tunneling
  • 802.1ag connectivity fault management
  • OpenFlow 1.0 plus numerous extensions
  • Transactional configuration database with C and Python bindings
  • High-performance forwarding using a Linux kernel module
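
To give a feel for how Open vSwitch is driven, here is a minimal overlay sketch using the standard ovs-vsctl tool; the bridge name, port name, and remote IP are assumptions chosen for illustration, not a complete Kubernetes setup:

# Create an OVS bridge (br-int is just a conventional name)
ovs-vsctl add-br br-int

# Add a VXLAN tunnel port pointing at another host (192.168.1.20 is a placeholder)
ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.1.20

# Inspect the resulting configuration
ovs-vsctl show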

Nuage networks VCS

The Virtualized Cloud Services (VCS) product from Nuage Networks provides a highly scalable, policy-based Software-Defined Networking (SDN) platform. It is an enterprise-grade offering that builds on top of the open source Open vSwitch for the data plane, along with a feature-rich SDN controller built on open standards.

The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.

In addition, all of the VCS components can be installed in containers. There are no special hardware requirements.

Canal

Canal is a mix of two open source projects: Calico and Flannel. The name Canal is a portmanteau of the two project names. Flannel, by CoreOS, is focused on container networking, and Calico is focused on network policy. Originally they were developed independently, but users wanted to use them together. The open source Canal project is currently a deployment pattern that installs both projects as separate CNI plugins. However, Tigera, a new company formed by Calico's founders, is now shepherding both projects and has plans for tighter integration.

The following diagram demonstrates the present status of Canal and how it relates to container orchestrators such as Kubernetes or Mesos:

[Figure: Canal]

Note that when integrating with Kubernetes, Canal doesn't use etcd directly anymore. Instead it relies on the Kubernetes API server.

Flannel

Flannel is a virtual network that gives a subnet to each host for use with container runtimes. It runs a flanneld agent on each host that allocates a subnet to the node from a reserved address space stored in etcd. Forwarding packets between containers and, ultimately, hosts is done by one of several backends. The most common backend uses UDP over a TUN device and tunnels through port 8285 by default (make sure it's open in your firewall).

The following diagram describes in detail the various components of Flannel, the virtual network devices it creates, and how they interact with the host and the pod via the docker0 bridge. It also shows the UDP encapsulation of packets and how they are transmitted between hosts:

[Figure: Flannel]

Other backends include the following:

  • vxlan: Uses in-kernel VXLAN to encapsulate the packets.
  • host-gw: Creates IP routes to subnets via remote machine IPs. Note that this requires direct layer2 connectivity between hosts running Flannel.
  • aws-vpc: Creates IP routes in an Amazon VPC route table.
  • gce: Creates IP routes in a Google compute engine network.
  • alloc: Only performs subnet allocation (no forwarding of data packets).
  • ali-vpc: Creates IP routes in an alicloud VPC route table.
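
The backend is selected in Flannel's network configuration, which flanneld reads from etcd. Here is a minimal sketch that picks the vxlan backend, assuming the etcd v2 API and Flannel's default key of /coreos.com/network/config; the subnet range is just an example:

etcdctl set /coreos.com/network/config '{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}'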

Calico project

Calico is a versatile virtual networking and network security solution for containers. Calico can integrate with all the primary container orchestration frameworks and runtimes:

  • Kubernetes (CNI plugin)
  • Mesos (CNI plugin)
  • Docker (libnetwork plugin)
  • OpenStack (Neutron plugin)

Calico can also be deployed on-premises or on public clouds with its full feature set. Calico's network policy enforcement can be specialized for each workload, making sure that traffic is controlled precisely and that packets always go from their source to vetted destinations. Calico can automatically map network policy concepts from orchestration platforms to its own network policy. Calico is also the reference implementation of Kubernetes' network policy.
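
For reference, this is the kind of Kubernetes network policy object that Calico enforces. The name, labels, and port below are made up for illustration; the policy allows only Pods labeled role: frontend to reach Pods labeled role: db, and only on port 6379:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379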

Romana

Romana is a modern cloud-native container networking solution. It operates at layer 3, taking advantage of standard IP address management techniques. Whole networks can become the unit of isolation, as Romana uses Linux hosts to create gateways and routes to them. Operating at layer 3 means that no encapsulation is needed. Network policy is enforced as a distributed firewall across all endpoints and services. Hybrid deployments across cloud platforms and on-premises deployments are easier, as there is no need to configure virtual overlay networks.
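
Conceptually, a routed layer 3 approach boils down to ordinary routing table entries on each host instead of tunnels. The following sketch (with made-up pod subnets and host IPs, not Romana's actual tooling) illustrates the idea:

# On host A (192.168.1.10), pods on host B are reached through a plain route
ip route add 10.112.2.0/24 via 192.168.1.11

# On host B (192.168.1.11), the reverse route points back at host A's pod subnet
ip route add 10.112.1.0/24 via 192.168.1.10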

Romana claims that their approach brings significant performance improvements. The following diagram shows how Romana eliminates a lot of the overhead associated with VXLAN encapsulation:

[Figure: Romana]

Weave net

Weave net is all about ease of use and zero configuration. It uses VXLAN encapsulation under the covers and a micro DNS on each node. As a developer, you operate at a higher abstraction level. You name your containers, and Weave net lets you connect to them and use standard ports for services. That helps you migrate existing applications into containerized applications and microservices. Weave net has a CNI plugin for interfacing with Kubernetes (and Mesos). On Kubernetes 1.4 and higher, you can integrate Weave net with Kubernetes by running a single command that deploys a DaemonSet:

kubectl apply -f https://git.io/weave-kube

The Weave net pods on every node will take care of attaching any new pod you create to the Weave network. Weave net supports the network policy API as well, providing a complete yet easy-to-set-up solution.
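
Once the DaemonSet is deployed, you can check that a Weave net pod is running on every node. To the best of my knowledge, the manifest above creates a DaemonSet called weave-net in the kube-system namespace, so a quick sanity check looks like this:

kubectl get daemonset weave-net -n kube-system
kubectl get pods -n kube-system -l name=weave-net -o wide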
