CHAPTER 5

Recommendation Engine

Figure 5.1 illustrates the logical flow of the recommendation engine. Recommendation engines recommend strategies for risk mitigation and risk contingency.


Figure 5.1 Recommendation engine

Data Collection

Gathering risk data starts with the data collection process shown in Tables 5.1, 5.2, 5.3, and 5.4 (Hahn 2018; Elswick 2016; Phillips and Stawarski 2008).


Table 5.1 Risks dataset

Measure Risk Name           | Risk Category              | Risk Impact Score | Risk Occurrence Probability | Risk Priority Score
Sales amount decrease       | Financial/competitive risk | 0.9               | 0.5                         | 2
Inventory turnover decrease | Inventory risk             | 0.2               | 0.8                         | 9
Inventory decrease          | Inventory risk             |                   |                             |
Sales amount increase       | Inventory risk             |                   |                             |


Table 5.2 Risk mitigation strategies dataset

Measure Risk Name           | Risk Mitigation Strategy                                    | # of Times Mitigation Strategy Taken | # of Times Mitigation Strategy Taken/# of Times Measure Risk Occurred
Sales amount decrease       | Encourage customers to increase spending with your company | 10                                   | 0.9
Inventory turnover decrease | Revise business strategy                                    | 20                                   | 0.3
Inventory decrease          | Performance-based contracts with suppliers                 |                                      |
Sales amount increase       | Improve forecasting models                                  |                                      |


Table 5.3 Interaction matrix

Risk Mitigation Strategy                                    | Mitigation Type
Run new marketing campaign                                  | Avoid
Increase inventory                                          | Avoid
Discontinue product                                         | Control
Acquire new customers                                       | Capture
Keep current customers happy                                | Maintain
Encourage customers to increase spending with your company  | Grow
Win back former customers                                   | Reclaim
Revise business strategy                                    | Control
Performance-based contracts with suppliers                  | Control
Improve forecasting models                                  | Control


Table 5.4 Risk-security strategies

Risk/Mitigation Strategy    | Run New Marketing Campaign | Decrease Inventory | Revise Business Strategy | Performance-Based Contracts With Suppliers | Improve Forecasting Models
Sales amount decrease       | 5                          | 0                  | 2                        | 0                                          | 1
Inventory turnover decrease | 1                          | 5                  | 3                        | 0                                          | 1
Inventory decrease          | 0                          | 0                  | 0                        | 4                                          | 0
Sales amount increase       | 0                          | 0                  | 0                        | 0                                          | 5

Each cell is the number of times the mitigation strategy was taken for the risk.
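To make the datasets concrete, the following minimal sketch (assuming Python with pandas, which the chapter does not prescribe) builds the risk-by-strategy interaction matrix of Table 5.4 as a DataFrame; the later similarity and recommendation sketches reuse this object.

    import pandas as pd

    # Risk-by-mitigation-strategy interaction matrix (counts from Table 5.4).
    # Rows are risks, columns are mitigation strategies, and each cell is the
    # number of times that strategy was taken for that risk.
    interactions = pd.DataFrame(
        {
            "Run new marketing campaign": [5, 1, 0, 0],
            "Decrease inventory": [0, 5, 0, 0],
            "Revise business strategy": [2, 3, 0, 0],
            "Performance-based contracts with suppliers": [0, 0, 4, 0],
            "Improve forecasting models": [1, 1, 0, 5],
        },
        index=[
            "Sales amount decrease",
            "Inventory turnover decrease",
            "Inventory decrease",
            "Sales amount increase",
        ],
    )

    print(interactions)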


Design Algorithm

Recommendation engines can be designed based on risk–risk similarity models or mitigation-strategy similarity models.
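As a minimal sketch of the two similarity models (assuming scikit-learn and the interactions DataFrame from the previous sketch), cosine similarity between rows of the interaction matrix measures how alike two risks are, and the same computation on the transpose measures how alike two mitigation strategies are.

    from sklearn.metrics.pairwise import cosine_similarity
    import pandas as pd

    # Risk-risk similarity: each row of `interactions` describes a risk by the
    # mitigation strategies taken for it, so cosine similarity between rows
    # measures how similar two risks are.
    risk_similarity = pd.DataFrame(
        cosine_similarity(interactions.values),
        index=interactions.index,
        columns=interactions.index,
    )

    # Mitigation-strategy similarity: the same computation on the transpose.
    strategy_similarity = pd.DataFrame(
        cosine_similarity(interactions.T.values),
        index=interactions.columns,
        columns=interactions.columns,
    )

    print(risk_similarity.round(2))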

Identify List of Recommender System Algorithms

Machine learning (ML) algorithms used in recommender systems include the following (a minimal code sketch follows this list):

  • Content-based filtering methods (similarity of item attributes).
  • Collaborative filtering methods (similarity is calculated from interactions, using risk–risk or mitigation-strategy similarity models):
    • K-nearest neighbors.
    • Matrix factorization: stochastic gradient descent.
    • Matrix factorization: alternating least squares.
    • Association rules: Apriori algorithm (items frequently consumed together are connected with an edge in the graph).
    • Neural networks.
  • Deep autoencoders, with multiple hidden layers and nonlinearities that make them more powerful but harder to train, can be used to preprocess item attributes and to combine content-based and collaborative approaches.
  • Neural nets that predict ratings and interactions based on item and user attributes.
  • Deep neural nets that predict the next action based on historical actions and content.
  • Deep autoencoders for collaborative filtering.
  • Ensembles of deep and wide regression to predict ratings.
  • Sequence-based recommenders realized with traditional ML models.
  • Gated Recurrent Unit or Long Short-Term Memory (LSTM) recurrent neural networks.
  • Deep convolutional neural networks.
  • Feed-forward neural nets with the history of purchases one-hot encoded as input, predicting the probabilities of products to be purchased next.
  • A combination of content-based and collaborative approaches.
  • Finalized approach:
    • Train the model using collaborative filtering methods.
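The sketch below illustrates one of the listed collaborative filtering options, a k-nearest-neighbors recommender over the Table 5.4 interaction matrix (assuming scikit-learn and the interactions DataFrame built earlier; the chapter does not prescribe this exact implementation). It recommends mitigation strategies for a risk based on the strategies taken for the most similar risks.

    from sklearn.neighbors import NearestNeighbors

    # Fit a k-nearest-neighbors model on the risk rows of the interaction matrix.
    knn = NearestNeighbors(metric="cosine", n_neighbors=3)
    knn.fit(interactions.values)

    def recommend_strategies(risk_name, top_n=3):
        """Recommend mitigation strategies for a risk from its nearest-neighbor risks."""
        distances, indices = knn.kneighbors(interactions.loc[[risk_name]].values)
        # Skip the first neighbor (the risk itself) and average the neighbors' rows.
        neighbor_rows = interactions.iloc[indices[0][1:]]
        scores = neighbor_rows.mean(axis=0)
        # Drop strategies already taken for this risk, then rank the rest.
        scores = scores[interactions.loc[risk_name] == 0]
        return scores.sort_values(ascending=False).head(top_n)

    print(recommend_strategies("Inventory decrease"))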

Train the Model

Split the data into a training set (80 percent) and a testing set (20 percent).

The testing set is further divided into an observation subset, which is submitted to the system, and a holdout testing subset, which is used to evaluate the system.
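A minimal sketch of this split, assuming the raw interactions are kept as a row-per-event log (a hypothetical layout; the chapter does not specify the storage format):

    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Hypothetical interaction log: one row per (risk, mitigation strategy) event.
    events = pd.DataFrame(
        {
            "risk": ["Sales amount decrease"] * 5 + ["Inventory turnover decrease"] * 5,
            "strategy": [
                "Run new marketing campaign", "Revise business strategy",
                "Improve forecasting models", "Run new marketing campaign",
                "Revise business strategy", "Decrease inventory",
                "Revise business strategy", "Decrease inventory",
                "Improve forecasting models", "Run new marketing campaign",
            ],
        }
    )

    # 80 percent training, 20 percent testing.
    train_set, test_set = train_test_split(events, test_size=0.2, random_state=42)

    # The testing set is split again into an observation subset (submitted to the
    # system) and a holdout subset (used to evaluate the system).
    observation_set, holdout_set = train_test_split(test_set, test_size=0.5, random_state=42)

    print(len(train_set), len(observation_set), len(holdout_set))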

Evaluate the Model

  • The model is evaluated similarly to a classification ML model.
  • Root mean squared error.

See Figures 5.2, 5.3, and 5.4.


Figure 5.2 K-Nearest neighbors and association rules


Figure 5.3 Model versus accuracy


Figure 5.4 Coverage versus recall: Recommendation engine evaluation

Introduce regularization parameters to all algorithms to penalize recommendation of popular items.

  • Both recall and coverage should be maximized so that the recommender is accurate and gives diverse recommendations that let users explore new content.
  • Cold-start items or users do not have enough historical interactions. Attribute similarity (content-based similarity) can be used where collaborative filtering methods fail to generate recommendations.
  • Cold-start problems are reduced when attribute similarity is taken into account: encode the attributes into a binary vector and feed it to the recommender (see the sketch after this list).
  • Items clustered by interaction similarity and items clustered by attribute similarity are often aligned.
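A minimal sketch of the cold-start idea, with hypothetical risk attributes encoded as binary vectors so that content-based similarity can stand in when no interaction history exists:

    import pandas as pd
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical binary attribute vectors for each risk (1 = attribute applies).
    attributes = pd.DataFrame(
        {
            "financial": [1, 0, 0, 1],
            "competitive": [1, 0, 0, 0],
            "inventory": [0, 1, 1, 1],
            "supplier": [0, 0, 1, 0],
        },
        index=[
            "Sales amount decrease",
            "Inventory turnover decrease",
            "Inventory decrease",
            "New risk with no history",  # cold-start item
        ],
    )

    # Content-based similarity between risks; available even for the cold-start risk.
    attribute_similarity = pd.DataFrame(
        cosine_similarity(attributes.values),
        index=attributes.index,
        columns=attributes.index,
    )

    # Most similar known risks to the cold-start risk (excluding itself).
    cold = "New risk with no history"
    print(attribute_similarity[cold].drop(cold).sort_values(ascending=False))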

Evaluation Metrics for Recommendation Engines

The commonly used metrics are listed below (a sketch computing them follows this list):

  • Recall: the proportion of items that a user likes that were actually recommended. The larger the recall, the better the recommendations.
  • Precision: out of all the recommended items, how many the user actually liked. The larger the precision, the better the recommendations.
  • Root mean squared error: measures the error in the predicted ratings. The smaller the root mean squared error, the better the recommendations.
  • Ranking metrics: consider the order of the products recommended:
    • Mean reciprocal rank: evaluates the list of recommendations. The larger the mean reciprocal rank, the better the recommendations.
    • Mean average precision at cutoff k: the larger the mean average precision, the more accurate the recommendations.
    • Normalized discounted cumulative gain: the higher the normalized discounted cumulative gain, the better the recommendations.
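The following is a minimal sketch of these metrics for a single ranked recommendation list, written in plain Python with binary relevance; the item names are illustrative only.

    import math

    def precision_recall_at_k(recommended, relevant, k):
        """Precision@k and recall@k for one ranked recommendation list."""
        hits = len(set(recommended[:k]) & set(relevant))
        return hits / k, hits / len(relevant)

    def reciprocal_rank(recommended, relevant):
        """1 / rank of the first relevant item, or 0 if none was recommended."""
        for rank, item in enumerate(recommended, start=1):
            if item in relevant:
                return 1.0 / rank
        return 0.0

    def ndcg_at_k(recommended, relevant, k):
        """Normalized discounted cumulative gain with binary relevance."""
        dcg = sum(1.0 / math.log2(rank + 1)
                  for rank, item in enumerate(recommended[:k], start=1)
                  if item in relevant)
        ideal = sum(1.0 / math.log2(rank + 1)
                    for rank in range(1, min(len(relevant), k) + 1))
        return dcg / ideal if ideal > 0 else 0.0

    def rmse(predicted, actual):
        """Root mean squared error between predicted and actual ratings."""
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

    # Toy example: ranked strategy recommendations versus what actually helped.
    recommended = ["Revise business strategy", "Improve forecasting models", "Decrease inventory"]
    relevant = ["Improve forecasting models", "Decrease inventory"]
    print(precision_recall_at_k(recommended, relevant, k=3))
    print(reciprocal_rank(recommended, relevant))
    print(ndcg_at_k(recommended, relevant, k=3))
    print(rmse([0.8, 0.3, 0.6], [0.9, 0.5, 0.4]))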

See Figures 5.5 and 5.6.


Figure 5.5 Confusion matrix


Figure 5.6 Evaluation metrics for recommendation engines

  • One algorithm is fine, but ensembles are much more powerful.
  • Balance exploration and exploitation: exploring too much can lead to lower-quality recommendations for some users, whereas limited exploration can lead to suboptimal recommendations for all users.

Model Conclusion

LSTM performed better than the other algorithms for the recommendation engine.

Conclusion

Many recommendation engines are available in the market.

Reinforcement Learning (Q-Learning) for Recommendations

Reinforcement learning has been shown to solve complex problems. Recently, reinforcement learning was used with great success by Google DeepMind to play Atari games (Sutton and Barto 1998).

There are no restrictions on the underlying ML algorithm for Q-Learning. The model can be any regression algorithm; however, deep neural networks dominate Q-Learning and reinforcement learning in general.

One key difference when using a model such as logistic regression for reinforcement learning instead of classification is the data. In classification, the data are prelabeled with the correct class for the model to predict. Reinforcement learning does not have prelabeled data: data are generated through interaction, and they carry a reward signal that should be maximized.
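A minimal sketch of the tabular Q-Learning update on a toy risk-response environment (the states, actions, and reward values are assumptions for illustration; in practice the Q-table is replaced by a deep neural network, as noted above):

    import random
    from collections import defaultdict

    # Hypothetical, tiny environment: states are observed risks, actions are responses.
    states = ["Sales amount decrease", "Inventory decrease"]
    actions = ["Run new marketing campaign", "Performance-based contracts with suppliers"]

    def reward(state, action):
        # Assumed payoffs, purely for illustration: the "right" response pays off.
        good = {
            ("Sales amount decrease", "Run new marketing campaign"),
            ("Inventory decrease", "Performance-based contracts with suppliers"),
        }
        return 1000 if (state, action) in good else -200

    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
    Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

    for _ in range(1000):
        state = random.choice(states)
        # Epsilon-greedy: balance exploration and exploitation.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        r = reward(state, action)
        next_state = random.choice(states)
        best_next = max(Q[(next_state, a)] for a in actions)
        # Q-Learning update: move the estimate toward reward plus discounted future value.
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])

    for state in states:
        print(state, "->", max(actions, key=lambda a: Q[(state, a)]))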

Threat Response Recommendations

Suppose a risk is observed and a list of actions has been taken. This history is fed into the actor network, which decides what the next action should be. The actor produces an ideal response embedding, which is compared with the other response embeddings to find the most similar ones. The best match is recommended for the risk.

The critic judges the actor and helps it find out what is wrong.

For example, the recommender suggests an action for a risk. The action is taken and receives an immediate reward of $1,000; however, the action may also be undone in the future, penalizing the company by $2,000. All future actions and their rewards need to be taken into consideration. See Figure 5.7.
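As a minimal worked illustration of this point, the discounted return of that sequence is negative even though the immediate reward is positive (assuming, say, a discount factor of 0.9 and the $2,000 penalty arriving two steps later):

    # Discounted return G = r_0 + gamma * r_1 + gamma^2 * r_2 + ...
    gamma = 0.9                     # assumed discount factor
    rewards = [1000, 0, -2000]      # $1,000 now, $2,000 penalty two steps later

    g = sum((gamma ** t) * r for t, r in enumerate(rewards))
    print(g)  # 1000 + 0 - 0.81 * 2000 = -620.0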


Figure 5.7 Reinforcement learning algorithm generic process diagram

The network consists of two components: the actor and the critic. Each resembles a different learning type. The actor learns policies (probabilities of which action to choose next), and the critic focuses on rewards (Q-Learning).

First, a set of response embeddings is fed into the actor’s state representation module, where they are encoded. Next, a decision is made in the form of a vector. The action is combined with item embeddings and fed into the critic module, which aims to estimate how good the reward is going to be.

The state module models the complex dynamic risk-response interactions to pursue better recommendation performance.

For a given risk, the network generates a response based on the risk’s state. The risk state, denoted by the embeddings of the n latest responses taken, is used as the input.

The critic network (Q-Learning) is used to estimate how good the reward of the current state and action will be.

Four categories of features have been constructed: risk features and context features as the state features of the environment, and risk-response features and response features as the action features. The four feature categories were input to the deep Q-network to calculate the Q-value. A list of responses was chosen to recommend based on the Q-value, and the user’s action on the response was included in the reward the reinforcement learning agent received.
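A minimal PyTorch sketch of this actor-critic setup is shown below. The layer sizes, embedding dimension, and feature handling are assumptions for illustration, not the exact architecture described here: the actor maps the risk state (the embeddings of the n latest responses) to an ideal response embedding, and the critic concatenates the four feature categories to produce a Q-value.

    import torch
    import torch.nn as nn

    EMB = 16  # assumed embedding size for responses and feature groups

    class Actor(nn.Module):
        """Maps the risk state (embeddings of the n latest responses) to an ideal response embedding."""
        def __init__(self, n_latest=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_latest * EMB, 64), nn.ReLU(), nn.Linear(64, EMB)
            )

        def forward(self, state):      # state: (batch, n_latest * EMB)
            return self.net(state)     # ideal response embedding: (batch, EMB)

    class CriticQ(nn.Module):
        """Deep Q-network: scores a candidate response given the four feature groups."""
        def __init__(self):
            super().__init__()
            # risk + context features (state) and risk-response + response features (action)
            self.net = nn.Sequential(nn.Linear(4 * EMB, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, risk, context, risk_response, response):
            x = torch.cat([risk, context, risk_response, response], dim=-1)
            return self.net(x)         # Q-value: (batch, 1)

    # Toy forward pass with random features; in practice these come from the datasets.
    actor, critic = Actor(), CriticQ()
    state = torch.randn(2, 5 * EMB)
    ideal_response = actor(state)
    q_value = critic(torch.randn(2, EMB), torch.randn(2, EMB), torch.randn(2, EMB), ideal_response)
    print(ideal_response.shape, q_value.shape)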

Rewards can be collected from the recommendation engine’s system log.

Risk Contingency Recommendation

The risk contingency recommendation engine is trained using the same steps as the threat response recommendation engine. See Table 5.5.


Table 5.5 Risk contingency dataset

Measure Risk Name           | Risk Contingency Plan                                       | # of Times Contingency Plan Taken | # of Times Contingency Plan Taken/# of Times Measure Risk Occurred
Sales amount decrease       | Design new products                                         | 10                                | 0.9
Inventory turnover decrease | Reduce price to boost sales                                 | 20                                | 0.3
Inventory decrease          | Create a list of alternative suppliers for inventory items | 5                                 | 0.45
Sales amount increase       | Fall back on overstocked inventory items                    | 7                                 | 0.68
