Appendix E. Setting up your AWS GPU

If you want to train or run your NLP pipeline quickly, a server with a GPU will often speed things up. GPUs are especially speedy for training a deep neural network when you use a framework such as Keras (TensorFlow or Theano), PyTorch, or Caffe to build your model. These computational graph frameworks can take advantage of the massively parallel multiplication and addition operations that GPUs are built for.
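To see why GPUs help, consider what a neural network layer actually computes: a large matrix multiplication followed by an elementwise addition. This is exactly the kind of massively parallel multiply-and-add workload the frameworks hand off to the GPU. Here is a minimal NumPy sketch of one layer's forward pass; the layer sizes are illustrative, not from any particular model:

```python
import numpy as np

# One dense layer's forward pass is a big matrix multiply plus a bias add --
# millions of independent multiply-accumulate operations that a GPU can
# execute in parallel (illustrative sizes, chosen only for this example).
batch, n_in, n_out = 64, 512, 256
X = np.random.rand(batch, n_in)   # a batch of 64 input vectors
W = np.random.rand(n_in, n_out)   # layer weight matrix
b = np.random.rand(n_out)         # layer bias vector

activations = X @ W + b           # 64 * 512 * 256 ~= 8.4M multiply-adds
print(activations.shape)          # (64, 256)
```

On a CPU these multiply-adds run a few at a time; a GPU runs thousands of them concurrently, which is where the training speedup comes from.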

A cloud service is a great option if you don’t want to invest the time and money to build your own server. But it’s possible to build a server with a GPU that’s twice as fast as a comparable Amazon Web Services (AWS) instance for roughly what one month of renting that instance would cost. Plus you can store a lot more data with tighter coupling (higher bandwidth) to your machine, and you can often install more RAM than is available on a single AWS EC2 instance.

With AWS you can be up and running quickly, without having to maintain your own storage devices and servers. Plus most cloud services provide preconfigured hard drive images (ISOs) that can get you up and running much more quickly than if you had to configure your own server. For a production system, a cloud provider like AWS or Google Cloud Platform (Azure is still playing catch-up) likely makes sense. For recreation and experimentation, you may want to roll your own.
