Memory tuning

In this section, we look at a common issue and how to address it: convolutional layers require a huge amount of RAM, especially during training, because the reverse pass of backpropagation needs all the intermediate values computed during the forward pass. During inference (that is, when making a prediction for a new instance), the RAM occupied by one layer can be released as soon as the next layer has been computed, so you only need as much RAM as two consecutive layers require.
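To get a feel for the numbers, here is a rough back-of-the-envelope sketch that estimates the activation memory of a single convolutional layer's output; the feature-map count, spatial size, and batch size below are made-up values used purely for illustration.

    # Rough estimate of the activation memory used by one convolutional
    # layer's output (all dimensions are hypothetical).
    feature_maps = 200        # output feature maps produced by the layer
    height, width = 150, 100  # spatial size of each feature map
    bytes_per_value = 4       # 32-bit floats
    batch_size = 100

    per_instance = feature_maps * height * width * bytes_per_value
    per_batch = per_instance * batch_size
    print(f"per instance: {per_instance / 1e6:.0f} MB")   # ~12 MB
    print(f"per mini-batch: {per_batch / 1e9:.1f} GB")    # ~1.2 GB

During training you would sum such per-layer estimates across the whole network, since all activations must be kept at once.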

Nevertheless, during training, everything computed during the forward pass needs to be preserved for the reverse pass, so the amount of RAM needed is (at least) the total amount of RAM required by all layers. If your GPU runs out of memory while training a CNN, here are five things you can try to solve the problem (other than purchasing a GPU with more RAM):

  • Reduce the mini-batch size
  • Reduce dimensionality using a larger stride in one or more layers
  • Remove one or more layers
  • Use 16-bit floats instead of 32-bit floats (this and the stride idea are illustrated in the sketch after this list)
  • Distribute the CNN across multiple devices (see more at https://www.tensorflow.org/deploy/distributed)
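The sketch below shows how a couple of these remedies might look in Keras: a larger stride in the early layers, 16-bit computation via mixed precision, and a smaller mini-batch size. The architecture, layer sizes, and batch size are placeholder assumptions, not a recommendation for any particular task.

    from tensorflow import keras

    # Compute in 16-bit floats while keeping variables in 32-bit
    # (available in TensorFlow 2.4+).
    keras.mixed_precision.set_global_policy("mixed_float16")

    model = keras.Sequential([
        keras.Input(shape=(224, 224, 3)),
        # strides=2 halves the spatial dimensions, shrinking every
        # downstream feature map (and its activation memory)
        keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
        keras.layers.Conv2D(128, 3, strides=2, padding="same", activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        # keep the output layer in float32 for numerical stability
        keras.layers.Dense(10, activation="softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # A smaller mini-batch reduces peak memory during training, e.g.:
    # model.fit(X_train, y_train, batch_size=16)

Each of these changes trades something (accuracy, training time, or model capacity) for memory, so it is worth applying them one at a time and measuring the effect.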