ML Engine

The process of developing a model with the ML Engine is very similar to the one we saw in the Working with the Azure ML service and Implementing analytics on AWS SageMaker sections:

  1. Develop the algorithm locally
  2. Train the model using local data
  3. Publish the model on GCP Cloud
  4. Use ML Engine to perform a more advanced training step, using CPUs, GPUs, or TPUs
  5. Deploy the model on the cloud
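
Before running the commands shown later in this section, the paths and job parameters are usually exported as shell variables. The following is only a sketch: the bucket name, dataset files, and job name are hypothetical placeholders, not values taken from the text:

```shell
# All names below are hypothetical placeholders -- adapt them to your project.
BUCKET_NAME=my-ml-bucket                  # Cloud Storage bucket for remote jobs
MODEL_DIR=output                          # local directory for checkpoints/exports
TRAIN_DATA=$(pwd)/data/train.csv          # local training set
EVAL_DATA=$(pwd)/data/eval.csv            # local evaluation set
JOB_NAME=training_job_1                   # must be unique for each submitted job
REGION=us-central1                        # region where the remote job will run
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_NAME   # remote output location
echo $OUTPUT_PATH
```

The same variables can then be reused for both the local run and the remote job, so the two invocations stay consistent.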

The following diagram shows the workflow proposed by GCP:

ML Engine workflow (source: Google Cloud Platform)

The basic concept of ML Engine is to use the gcloud ml-engine command-line tool and the Google Cloud client libraries to train the model locally and to interact with the cloud through a REST API.

The following example shows how to run training locally and then submit the same job remotely:

$ gcloud ml-engine local train \
    --module-name trainer.task \
    --package-path trainer/ \
    --job-dir $MODEL_DIR \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA


$ gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir $OUTPUT_PATH \
    --runtime-version 1.8 \
    --module-name trainer.task \
    --package-path trainer/ \
    --region $REGION \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA

With GCP's ML Engine, we can perform the same operations locally or remotely. The only difference lies in where we store the data: for remote training, we use a Cloud Storage bucket, which plays much the same role as AWS S3 did in the earlier sections.
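
For a remote job, the training and evaluation files first have to be staged in the bucket, and after training the exported model can be deployed as a prediction service (step 5 of the workflow). The following is a minimal sketch; the bucket, model, and file names are hypothetical placeholders:

```
# Create a bucket and stage the data (names are hypothetical)
gsutil mb -l us-central1 gs://my-ml-bucket
gsutil cp data/train.csv data/eval.csv gs://my-ml-bucket/data/

# Deploy the trained model: create a model resource and a version
# pointing at the exported SavedModel in the job's output directory
gcloud ml-engine models create my_model --regions us-central1
gcloud ml-engine versions create v1 \
    --model my_model \
    --origin $OUTPUT_PATH/export \
    --runtime-version 1.8

# Request online predictions through the REST API via the CLI
gcloud ml-engine predict \
    --model my_model \
    --version v1 \
    --json-instances test_instances.json
```

Once the version is created, the same model can also be queried directly over the REST API from any client library, without going through the gcloud tool.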
