Model deployment

Model deployment is the most important part of the model life cycle. At this stage, the model is fed real-life data and produces results that can support decision making (for example, accepting or rejecting a loan application).

In this chapter, we will build a simple application that combines Spark streaming, the models we exported earlier, and the shared code library we defined while writing the model-training application.

The latest Spark 2.1 introduces Structured Streaming, which is built on top of Spark SQL and allows us to use the SQL interface transparently with streaming data. Furthermore, it brings a strong guarantee in the form of "exactly-once" semantics, which means that events are neither dropped nor delivered multiple times. A streaming Spark application has the same structure as a "regular" Spark application:

import org.apache.spark.sql.SparkSession

object Chapter8StreamApp extends App {

  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("Chapter8StreamApp")
    .getOrCreate()

  script(spark,
         sys.env.get("MODELSDIR").getOrElse("models"),
         sys.env.get("APPDATADIR").getOrElse("appdata"))

  def script(spark: SparkSession, modelDir: String, dataDir: String): Unit = {
    // ...
    val inputDataStream = spark.readStream /* (1) create stream */

    val outputDataStream = /* (2) transform inputDataStream */

    /* (3) export stream */
    outputDataStream.writeStream.format("console").start().awaitTermination()
  }
}

There are three important parts: (1) the creation of the input stream, (2) the transformation of the created stream, and (3) the writing of the resulting stream.
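To make the three parts concrete, here is a minimal sketch that fills in the skeleton with the classic streaming word count from the Spark Structured Streaming documentation. The socket source, host, and port are assumptions chosen purely for illustration; they are not part of the application we build in this chapter:

```scala
import org.apache.spark.sql.SparkSession

object WordCountStreamApp extends App {
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("WordCountStreamApp")
    .getOrCreate()
  import spark.implicits._

  // (1) Create stream: read lines of text from a socket
  //     (localhost:9999 is an illustrative assumption).
  val inputDataStream = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  // (2) Transform the created stream: split lines into words
  //     and compute a running count per word.
  val outputDataStream = inputDataStream.as[String]
    .flatMap(_.split(" "))
    .groupBy("value")
    .count()

  // (3) Write the resulting stream: print each micro-batch
  //     result to the console until the query is stopped.
  outputDataStream.writeStream
    .outputMode("complete")
    .format("console")
    .start()
    .awaitTermination()
}
```

Note that the transformation in step (2) is expressed with the ordinary Dataset API, which is the point of Structured Streaming: the same SQL-style operations work on a streaming source as on a static table.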
