Containers and orchestration

Containers are a natural progression in the isolation and virtualization of hardware. Where a virtual machine allows a bare-metal server to host multiple guest operating systems, increasing the utilization of that server's resources, containers behave similarly but are much more lightweight, portable, and scalable. Containers typically run a Linux or, in some cases, Windows operating system, but much of the bloat and unused components are stripped out: the container image handles traditional startup, shutdown, and task management, and not much else. Developers then add the specific libraries or language runtimes they need and deploy their code and configuration directly into the container image. That image is treated as the deployment artifact and published to a registry, from which the orchestration service deploys or updates it across the fleet of containers already running in the cluster.
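As a sketch of this workflow, assuming a hypothetical Node.js service, a minimal Dockerfile might layer the application on top of a slimmed-down base image; the file and image names here are illustrative, not from the text:

```dockerfile
# Start from a minimal Linux base image that provides only the runtime
FROM node:20-alpine

WORKDIR /app

# Install only the application's declared dependencies
COPY package*.json ./
RUN npm ci --omit=dev

# Add the application code and configuration to the image
COPY . .

# The built image itself becomes the deployment artifact
CMD ["node", "server.js"]
```

The resulting image would then be tagged and pushed to a registry (for example, `docker build -t registry.example.com/myapp:1.0 .` followed by `docker push registry.example.com/myapp:1.0`), where the orchestration service can pull it to deploy or update the fleet.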

Because of their lightweight nature, containers are an ideal technology for running in a cloud environment and for decomposing an application into microservices. Containers can scale within a node to maximize that instance's resources, and across many hosts that form a cluster, all maintained and scheduled by the orchestration service. For file storage, a container can use local ephemeral filesystem locations, or, if the data needs to persist, files can be stored in a persistent volume that is mounted into, and accessible by, the containers that use it. To make container placement and scaling easier, some orchestration tools use pods: containers coupled together to act as a single unit, instantiated together and scaled together.
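As an illustrative sketch, assuming Kubernetes as the orchestration service, a pod manifest can couple two containers that are scheduled as one unit and share a persistent volume; the names `web`, `log-shipper`, `shared-data`, and `example-pvc` are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    # Two containers instantiated and scaled together as a single unit
    - name: web
      image: registry.example.com/myapp:1.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-shipper
      image: registry.example.com/shipper:1.0
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    # Persistent storage that outlives any single container in the pod
    - name: shared-data
      persistentVolumeClaim:
        claimName: example-pvc
```

Both containers mount the same volume at `/data`, so files written by one are visible to the other, and because the volume is backed by a persistent volume claim, the data survives container restarts.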
