Driver

A driver owns a network and is responsible for making the network work and managing it. The network controller provides an API to configure a driver with specific labels/options that are not directly visible to the user; libnetwork passes them through transparently so that the drivers can handle them directly. Drivers can be either in-built (such as bridge, host, or overlay) or remote (from plugin providers), covering various use cases and deployment scenarios.

The driver owns the network implementation and is responsible for managing it, including IP Address Management (IPAM). The following figure explains the process:

Driver

The following are the in-built drivers, all of which implement a common driver interface (sketched after this list):

  • Null: This driver exists primarily to provide backward compatibility with the old docker --net=none option, for the case when no networking is required.
  • Bridge: This driver provides a Linux-specific bridging implementation.
  • Overlay: The overlay driver implements networking that can span multiple hosts, using network encapsulation such as VXLAN. We will do a deep dive on two of its implementations: a basic setup with Consul and a Vagrant setup to deploy the overlay driver.
  • Remote: This driver provides a means of supporting drivers over a remote transport, so that custom drivers can be written as needed.
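
Every driver, whether in-built or remote, plugs into libnetwork by implementing that common driver interface, and the driver-specific labels/options mentioned earlier reach it as an opaque options map. The following is a simplified sketch of the contract; the method set and signatures are abridged from libnetwork's driverapi package and should be treated as illustrative rather than exact:

// Driver is a simplified sketch of the contract a libnetwork driver
// fulfills; the real driverapi interface has more methods and richer
// argument types (interface info, join info, and so on).
type Driver interface {
  // CreateNetwork provisions a network with the given ID; driver-specific
  // labels/options are passed through untouched in the options map.
  CreateNetwork(nid string, options map[string]interface{}) error
  DeleteNetwork(nid string) error
  // CreateEndpoint sets up the plumbing (for example, a veth pair)
  // for a single endpoint on the network.
  CreateEndpoint(nid, eid string, options map[string]interface{}) error
  DeleteEndpoint(nid, eid string) error
  // Join and Leave attach and detach a container's sandbox to and from
  // an endpoint.
  Join(nid, eid, sboxKey string, options map[string]interface{}) error
  Leave(nid, eid string) error
  // Type returns the driver's name, such as "bridge" or "overlay".
  Type() string
}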

Bridge driver

A bridge driver is a wrapper around a Linux bridge that acts as the network for libcontainer. For each endpoint, it creates a veth pair: one end is moved into the container and the other end is attached to the bridge. The following data structure represents a bridge network:

type driver struct {
  config      *configuration       // driver-wide configuration
  network     *bridgeNetwork       // the active default bridge network
  natChain    *iptables.ChainInfo  // NAT iptables chain managed by the driver
  filterChain *iptables.ChainInfo  // filter iptables chain managed by the driver
  networks    map[string]*bridgeNetwork // all bridge networks, keyed by ID
  store       datastore.DataStore  // persistent store for network state
  sync.Mutex                       // protects concurrent access to the driver
}
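
The veth plumbing described above can be reproduced with the github.com/vishvananda/netlink package that libnetwork itself builds on. The following is a minimal sketch, assuming a docker0 bridge already exists and using illustrative interface names; it is not the driver's actual code path:

package main

import (
  "log"

  "github.com/vishvananda/netlink"
)

func main() {
  // Create a veth pair: veth0 stays on the host, veth1 is the
  // container-side peer (both names are illustrative).
  veth := &netlink.Veth{
    LinkAttrs: netlink.LinkAttrs{Name: "veth0"},
    PeerName:  "veth1",
  }
  if err := netlink.LinkAdd(veth); err != nil {
    log.Fatal(err)
  }
  // Attach the host end to the docker0 bridge, mirroring what the
  // bridge driver does for each new endpoint.
  link, err := netlink.LinkByName("docker0")
  if err != nil {
    log.Fatal(err)
  }
  br, ok := link.(*netlink.Bridge)
  if !ok {
    log.Fatal("docker0 is not a bridge")
  }
  if err := netlink.LinkSetMaster(veth, br); err != nil {
    log.Fatal(err)
  }
  if err := netlink.LinkSetUp(veth); err != nil {
    log.Fatal(err)
  }
  // The peer end (veth1) would then be moved into the container's
  // network namespace, for example with netlink.LinkSetNsPid.
}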

The following are some of the actions performed by the bridge driver (the IP forwarding step is sketched after this list):

  • Configuring iptables
  • Managing IP forwarding
  • Managing port mapping
  • Enabling bridge net filtering
  • Setting up IPv4 and IPv6 on the bridge
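
Enabling IP forwarding, for example, comes down to flipping a kernel sysctl on the host so that bridged containers can reach external networks. The following is a minimal sketch of that single step; the driver's own implementation handles more cases and manages the related iptables rules alongside it:

package main

import (
  "log"
  "os"
)

// enableIPForwarding turns on IPv4 forwarding by writing to the same
// procfs entry that the bridge driver manages.
func enableIPForwarding() error {
  return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
  if err := enableIPForwarding(); err != nil {
    log.Fatalf("failed to enable IP forwarding: %v", err)
  }
}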

The following diagram shows how the network is represented using docker0 and veth pairs to connect endpoints with the docker0 bridge:

Bridge driver

Overlay network driver

The overlay network in libnetwork uses VXLAN, along with a Linux bridge, to create an overlay address space and support multi-host networking:

const (
  networkType  = "overlay" // driver name registered with libnetwork
  vethPrefix   = "veth"    // prefix for generated veth interface names
  vethLen      = 7         // length of generated veth interface names
  vxlanIDStart = 256       // first VXLAN ID (VNI) handed out
  vxlanIDEnd   = 1000      // last VXLAN ID (VNI) handed out
  vxlanPort    = 4789      // IANA-assigned VXLAN UDP port
  vxlanVethMTU = 1450      // 1500 minus the VXLAN encapsulation overhead
)

type driver struct {
  eventCh      chan serf.Event     // events from the Serf cluster
  notifyCh     chan ovNotify       // join/leave notifications
  exitCh       chan chan struct{}  // shutdown signalling
  bindAddress  string
  neighIP      string
  config       map[string]interface{}
  peerDb       peerNetworkMap      // per-network database of remote peers
  serfInstance *serf.Serf          // Serf agent used for peer discovery
  networks     networkTable
  store        datastore.DataStore // persistent store for network state
  ipAllocator  *idm.Idm
  vxlanIdm     *idm.Idm            // allocates VXLAN IDs from the range above
  once         sync.Once
  joinOnce     sync.Once
  sync.Mutex
}
Overlay network driver
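
Under the hood, the overlay driver creates a VXLAN interface per network segment, with a VXLAN ID (VNI) allocated from the vxlanIDStart to vxlanIDEnd range shown above, and enslaves it to a per-network Linux bridge. The following is a minimal sketch of that step using the github.com/vishvananda/netlink package, with illustrative interface names; the real driver additionally wires up Serf-based peer discovery and the sandbox plumbing:

package main

import (
  "log"

  "github.com/vishvananda/netlink"
)

func main() {
  // Per-network bridge to which the VXLAN interface and the containers'
  // veth ends are attached (names are illustrative).
  br := &netlink.Bridge{LinkAttrs: netlink.LinkAttrs{Name: "ov-br0"}}
  if err := netlink.LinkAdd(br); err != nil {
    log.Fatal(err)
  }
  // VXLAN interface carrying the overlay traffic: VNI 256 is the first
  // ID in the vxlanIDStart..vxlanIDEnd range, 4789 is the IANA-assigned
  // VXLAN UDP port (vxlanPort), and the MTU leaves room for the
  // encapsulation headers (vxlanVethMTU).
  vxlan := &netlink.Vxlan{
    LinkAttrs: netlink.LinkAttrs{Name: "vxlan1", MTU: 1450},
    VxlanId:   256,
    Port:      4789,
    Learning:  true,
  }
  if err := netlink.LinkAdd(vxlan); err != nil {
    log.Fatal(err)
  }
  // Enslave the VXLAN interface to the bridge and bring everything up.
  if err := netlink.LinkSetMaster(vxlan, br); err != nil {
    log.Fatal(err)
  }
  if err := netlink.LinkSetUp(vxlan); err != nil {
    log.Fatal(err)
  }
  if err := netlink.LinkSetUp(br); err != nil {
    log.Fatal(err)
  }
}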