Appendix A. Use Cases

This section contains a small selection of use cases from the community, with more technical detail than is typical elsewhere in the guide. Further examples can be found on the OpenStack website (https://www.openstack.org/user-stories/).

NeCTAR

Who uses it: Researchers from the Australian publicly funded research sector. Use spans a wide variety of disciplines, with instance purposes ranging from running simple web servers to using hundreds of cores for high-throughput computing.

Deployment

Using OpenStack Compute Cells, the NeCTAR Cloud spans eight sites with approximately 4,000 cores per site.

Each site runs a different configuration, as a resource cell in an OpenStack Compute cells setup. Some sites span multiple data centers, some use storage off the compute nodes with a shared file system, and some use storage on the compute nodes with a non-shared file system. Each site deploys the Image Service with an Object Storage back-end. A central Identity Service, Dashboard, and Compute API service are used. Logging in to the Dashboard triggers a SAML login through Shibboleth, which creates an account in the Identity Service with an SQL back-end.
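For illustration, here is a minimal sketch of how a cells topology of this kind might be expressed in nova.conf, using Grizzly-era option names (NeCTAR ran pre-release cells code, so the syntax used in production likely differed; the cell names are hypothetical):

    # nova.conf on the central API cell (hypothetical cell name)
    [DEFAULT]
    compute_api_class = nova.compute.cells_api.ComputeCellsAPI
    [cells]
    enable = True
    cell_type = api
    name = api-cell

    # nova.conf in each of the eight resource cells (hypothetical cell name)
    [cells]
    enable = True
    cell_type = compute
    name = site1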

Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and approximately 40 GB of ephemeral storage per core.

All sites are based on Ubuntu 12.04, with KVM as the hypervisor. The OpenStack version in use is typically the current stable release, with 5 to 10 percent of the code backported from trunk, along with local modifications.


MIT CSAIL

Who uses it: Researchers from the MIT Computer Science and Artificial Intelligence Lab.

Deployment

The CSAIL cloud is currently 64 physical nodes with a total of 768 physical cores and 3,456 GB of RAM. Persistent data storage is largely outside the cloud on NFS, with cloud resources focused on compute. There are 65 users in 23 projects, and with typical capacity utilization nearing 90 percent, we are looking to expand.

The software stack is Ubuntu 12.04 LTS with OpenStack Folsom from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using FAI (http://fai-project.org/) with Puppet for configuration management. The FAI and Puppet combination is used lab-wide, not only for OpenStack. There is a single cloud controller node, with the remainder of the server hardware dedicated to compute nodes. Because the use case is compute-intensive, the ratio of physical CPU and RAM to virtual CPU and RAM is set to 1:1 in nova.conf. However, hyper-threading is enabled, and because Linux counts each hyper-thread as a CPU, the effective CPU ratio is 2:1 in practice.
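The 1:1 overcommit corresponds to the scheduler allocation ratios in nova.conf. A minimal sketch (the upstream defaults of that era were 16.0 for CPU and 1.5 for RAM):

    # nova.conf: no overcommit of CPU or RAM
    cpu_allocation_ratio = 1.0
    ram_allocation_ratio = 1.0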

On the network side, physical systems have two network interfaces and a separate management card for IPMI management. The OpenStack network service uses multi-host networking with the FlatDHCP network manager.
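A minimal sketch of the corresponding nova-network settings in nova.conf; the interface names here are hypothetical examples:

    # nova.conf: multi-host FlatDHCP, running nova-network on each compute node
    network_manager = nova.network.manager.FlatDHCPManager
    multi_host = True
    # interface names are hypothetical
    public_interface = eth0
    flat_interface = eth1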

DAIR

Who uses it: DAIR is an integrated virtual environment that leverages the CANARIE network to develop and test new information communication technology (ICT) and other digital technologies. It combines digital infrastructure such as advanced networking, cloud computing, and storage to create an environment for developing and testing innovative ICT applications, protocols, and services; performing at-scale experimentation for deployment; and facilitating a faster time to market.

Deployment

DAIR is hosted at two different data centers across Canada: one in Alberta and the other in Quebec. It consists of a cloud controller at each location, although one is designated the "master" controller, which is in charge of central authentication and quotas. This is done through custom scripts and light modifications to OpenStack. DAIR is currently running Folsom.

For Object Storage, each region has a Swift environment.

A NetApp appliance is used in each region for both block storage and instance storage. There are future plans to move the instances off the NetApp appliance and onto a distributed file system such as Ceph or GlusterFS.
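As a sketch, a Folsom-era cinder.conf for a NetApp iSCSI back-end looked roughly like the following; the hostname and credentials are hypothetical placeholders, and the driver's module path has changed in later releases:

    # cinder.conf: NetApp iSCSI driver (Folsom-era path; hypothetical values)
    volume_driver = cinder.volume.netapp.NetAppISCSIDriver
    netapp_server_hostname = netapp.example.org
    netapp_login = admin
    netapp_password = secret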

VlanManager is used extensively for network management. All servers have two bonded 10 Gb NICs that are connected to two redundant switches. DAIR is set up to use single-node networking, where the cloud controller is the gateway for all instances on all compute nodes. Internal OpenStack traffic (for example, storage traffic) does not go through the cloud controller.
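A minimal sketch of VlanManager settings in nova.conf under these assumptions; the bonded interface name and starting VLAN are hypothetical:

    # nova.conf: VLAN networking over the bonded interface (hypothetical values)
    network_manager = nova.network.manager.VlanManager
    vlan_interface = bond0
    vlan_start = 100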


CERN

Who uses it: Researchers at CERN (European Organization for Nuclear Research) conducting high-energy physics research.

Deployment

The environment is largely based on Scientific Linux 6, which is Red Hat compatible. We use KVM as our primary hypervisor, although tests are ongoing with Hyper-V on Windows Server 2008.
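On the compute nodes, this amounts to the standard libvirt/KVM settings in nova.conf; a minimal sketch using Folsom-era option names:

    # nova.conf: libvirt driver with KVM
    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm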

We use the Puppet Labs OpenStack modules to configure Compute, the Image Service, the Identity Service, and the Dashboard. Puppet is used widely for instance configuration, and Foreman is used as a GUI for reporting and instance provisioning.

Users and groups are managed through Active Directory and imported into the Identity Service using LDAP. Command-line interfaces for Nova and euca2ools are available for access.
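A minimal sketch of an LDAP-backed Identity Service in keystone.conf; the server URL and directory suffix are hypothetical stand-ins for the Active Directory details:

    # keystone.conf: LDAP identity back-end (hypothetical values)
    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ad.example.org
    suffix = dc=example,dc=org
    user_tree_dn = ou=Users,dc=example,dc=org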

There are three clouds currently running at CERN, totaling approximately 3,400 Nova compute nodes with approximately 60,000 cores. The CERN IT cloud aims to expand to 300,000 cores by 2015.

