Optimization and interactivity

While the collected data can be used simply to understand the business, the ultimate goal of any data-driven business is to optimize business behavior by making decisions automatically, based on data and models, reducing human intervention to a minimum. The process can be depicted as the following simplified cycle:

Figure 02-4. The predictive model life cycle

The cycle repeats as new information enters the system, and the system's parameters may be tuned to improve overall performance.

Feedback loops

While humans are still likely to be kept in the loop for most systems, the last few years have seen the emergence of systems that can manage the complete feedback loop on their own, ranging from advertising systems to self-driving cars.

The classical formulation of this problem is optimal control theory: an optimization problem that minimizes a cost functional, given a set of differential equations describing the system. An optimal control is a set of control policies that minimizes the cost functional subject to constraints. For example, the problem might be to find a way to drive a car that minimizes its fuel consumption, given that it must complete a given course in no more than a given amount of time. Another control problem is to maximize the profit from showing ads on a website, subject to inventory and time constraints. Most software packages for optimal control are written in other languages such as C or MATLAB (PROPT, SNOPT, RIOTS, DIDO, DIRECT, and GPOPS), but they can be interfaced with Scala.
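
As an illustration, the following is a minimal Scala sketch of the fuel-minimization problem above, discretized to the choice of a single constant speed. The fuel-rate model rate(v) = a + b*v^2 (liters per hour), the constants, and the simple grid search are illustrative assumptions, not part of any of the packages mentioned; a real solver would handle time-varying controls and the underlying differential equations.

object FuelControl {
  def main(args: Array[String]): Unit = {
    val distance = 100.0        // km to cover
    val maxTime  = 2.0          // hours allowed (the time constraint)
    val (a, b)   = (1.0, 0.002) // assumed fuel-rate coefficients

    // total fuel for driving the whole distance at constant speed v:
    // rate(v) * time(v) = (a + b * v^2) * (distance / v)
    def fuel(v: Double): Double = (a + b * v * v) * (distance / v)

    // candidate speeds (km/h) that satisfy the constraint distance / v <= maxTime
    val candidates = (1 to 200).map(_.toDouble).filter(v => distance / v <= maxTime)

    val best = candidates.minBy(fuel)
    println(f"Best speed: $best%.1f km/h, fuel used: ${fuel(best)}%.2f l")
  }
}

In this toy instance the unconstrained optimum speed lies below the feasible range, so the time constraint is binding and the cheapest feasible policy is to drive at the slowest speed that still finishes on time.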

However, in many cases, the parameters of the optimization, the state transitions, or the differential equations are not known with certainty. Markov Decision Processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of the decision maker. In an MDP, we deal with a discrete set of possible states and a set of actions; the rewards and state transitions depend on both the current state and the action taken. MDPs are useful for studying a wide range of optimization problems solved via dynamic programming and reinforcement learning.
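
The following is a minimal value-iteration sketch, the basic dynamic-programming solution method for MDPs mentioned above. The two-state toy MDP, its transition probabilities, and its rewards are illustrative assumptions chosen only to make the example self-contained.

object ValueIteration {
  type State  = Int
  type Action = String

  val states:  Seq[State]  = Seq(0, 1)
  val actions: Seq[Action] = Seq("stay", "move")

  // transition(s, a) returns (nextState, probability) pairs; values are assumed
  def transition(s: State, a: Action): Seq[(State, Double)] = (s, a) match {
    case (0, "stay") => Seq((0, 0.9), (1, 0.1))
    case (0, "move") => Seq((1, 0.8), (0, 0.2))
    case (1, "stay") => Seq((1, 0.9), (0, 0.1))
    case _           => Seq((0, 0.8), (1, 0.2))
  }

  // immediate reward for taking action a in state s; staying in state 1 pays off
  def reward(s: State, a: Action): Double = (s, a) match {
    case (1, "stay") => 1.0
    case _           => 0.0
  }

  def main(args: Array[String]): Unit = {
    val gamma = 0.9                        // discount factor for future rewards
    var v = Map(0 -> 0.0, 1 -> 0.0)        // initial value estimates

    // repeatedly apply the Bellman optimality update to each state
    for (_ <- 1 to 100) {
      v = states.map { s =>
        val best = actions.map { a =>
          reward(s, a) + gamma * transition(s, a).map {
            case (s2, p) => p * v(s2)
          }.sum
        }.max
        s -> best
      }.toMap
    }
    println(s"Optimal state values: $v")
  }
}

The fixed number of sweeps stands in for a proper convergence test; in practice, the iteration stops once the largest change in any state value falls below a tolerance.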
