How to do it...

Let us now see how to proceed with the recipe:

  1. In the case of standard linear regression, we have only one input variable and one output variable:
# Placeholder for the Training Data
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')

# Variables for coefficients initialized to 0
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)

# The Linear Regression Model
Y_hat = X*w1 + w0

# Loss function
loss = tf.square(Y - Y_hat, name='loss')
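
The snippet above only builds the graph; the following is a minimal end-to-end sketch rather than part of the recipe itself. It assumes TensorFlow 1.x (or tf.compat.v1 with v2 behaviour disabled), invents a small synthetic dataset, averages the per-sample squared error with tf.reduce_mean so the whole batch can be fed at once, and trains with plain gradient descent (the learning rate and epoch count are arbitrary):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

# Synthetic data (assumption): y is roughly 3x + 2 plus noise
x_data = np.linspace(0, 10, 50).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.randn(50).astype(np.float32)

X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
Y_hat = X * w1 + w0

# Mean squared error over the whole batch
loss = tf.reduce_mean(tf.square(Y - Y_hat, name='loss'))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        _, l = sess.run([train_op, loss], feed_dict={X: x_data, Y: y_data})
    print(sess.run([w1, w0]))  # should be close to [3.0, 2.0]
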
  2. In the case of multiple linear regression, there is more than one input variable, while the output variable remains one. You can now define the placeholder X with shape [m, n], where m is the number of samples and n is the number of features; the code is then as follows:
# Placeholder for the Training Data
X = tf.placeholder(tf.float32, name='X', shape=[m,n])
Y = tf.placeholder(tf.float32, name='Y', shape=[m,1])

# Variables for coefficients: bias initialized to 0, weights initialized randomly
w0 = tf.Variable(0.0)
w1 = tf.Variable(tf.random_normal([n,1]))

# The Linear Regression Model
Y_hat = tf.matmul(X, w1) + w0

# Multiple linear regression loss function
loss = tf.reduce_mean(tf.square(Y - Y_hat), name='loss')
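
As a quick shape check, the following sketch (an assumption rather than part of the recipe: the sizes, the random data, and the TensorFlow 1.x session are made up for illustration) fixes concrete values for m and n, builds the graph above, and evaluates the loss once:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

m, n = 100, 5  # 100 samples with 5 features each (arbitrary choice)
X_data = np.random.rand(m, n).astype(np.float32)
Y_data = np.random.rand(m, 1).astype(np.float32)

X = tf.placeholder(tf.float32, name='X', shape=[m, n])
Y = tf.placeholder(tf.float32, name='Y', shape=[m, 1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(tf.random_normal([n, 1]))

Y_hat = tf.matmul(X, w1) + w0  # shape [m, 1]
loss = tf.reduce_mean(tf.square(Y - Y_hat), name='loss')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(loss, feed_dict={X: X_data, Y: Y_data}))  # a single scalar
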
  3. In the case of logistic regression, the loss function is defined by the cross-entropy. The output Y now has dimensions equal to the number of classes in the training dataset. With P classes, we have the following:
# Placeholder for the Training Data
X = tf.placeholder(tf.float32, name='X', shape=[m,n])
Y = tf.placeholder(tf.float32, name='Y', shape=[m,P])

# Variables for coefficients: bias initialized to 0, weights initialized randomly
w0 = tf.Variable(tf.zeros([1,P]), name='bias')
w1 = tf.Variable(tf.random_normal([n,P]), name='weights')

# The Logistic Regression Model (logits)
Y_hat = tf.matmul(X, w1) + w0

# Loss function
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Y_hat, labels=Y)
loss = tf.reduce_mean(entropy)
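
To make the logistic-regression step concrete, here is a hedged sketch that is not part of the recipe: the dataset is synthetic, the sizes m, n, and P, the learning rate, and the number of iterations are arbitrary, and TensorFlow 1.x is assumed. It wires the cross-entropy loss into an optimizer and adds an accuracy node based on tf.argmax:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

m, n, P = 200, 4, 3  # samples, features, classes (arbitrary choice)
X_data = np.random.rand(m, n).astype(np.float32)
labels = np.random.randint(0, P, size=m)
Y_data = np.eye(P, dtype=np.float32)[labels]  # one-hot targets, shape [m, P]

X = tf.placeholder(tf.float32, name='X', shape=[m, n])
Y = tf.placeholder(tf.float32, name='Y', shape=[m, P])
w0 = tf.Variable(tf.zeros([1, P]), name='bias')
w1 = tf.Variable(tf.random_normal([n, P]), name='weights')
Y_hat = tf.matmul(X, w1) + w0  # logits, shape [m, P]

entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Y_hat, labels=Y)
loss = tf.reduce_mean(entropy)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Fraction of samples whose predicted class matches the label
correct = tf.equal(tf.argmax(Y_hat, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op, feed_dict={X: X_data, Y: Y_data})
    print(sess.run(accuracy, feed_dict={X: X_data, Y: Y_data}))
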
  4. If we want to add L1 regularization to the loss, the code is as follows (a training sketch using the regularized loss appears after step 5):
lamda = tf.constant(0.8)  # regularization parameter
regularization_param = lamda*tf.reduce_sum(tf.abs(w1))

# New loss
loss += regularization_param
  5. For L2 regularization, we can use the following:
lamda = tf.constant(0.8)  # regularization parameter
regularization_param = lamda*tf.nn.l2_loss(w1)

# New loss
loss += regularization_param
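
Either penalty is used exactly like the plain loss when it comes to optimization. The sketch below is an assumption about the surrounding code rather than part of the recipe (synthetic data, arbitrary sizes and learning rate, TensorFlow 1.x); it attaches the L2-regularized loss of the multiple-regression model to a gradient-descent step. Note that tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so the penalty is lamda times half the squared L2 norm of the weights; the bias w0 is deliberately left unregularized:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

m, n = 100, 5  # arbitrary sizes
X_data = np.random.rand(m, n).astype(np.float32)
Y_data = np.random.rand(m, 1).astype(np.float32)

X = tf.placeholder(tf.float32, name='X', shape=[m, n])
Y = tf.placeholder(tf.float32, name='Y', shape=[m, 1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(tf.random_normal([n, 1]))
Y_hat = tf.matmul(X, w1) + w0

loss = tf.reduce_mean(tf.square(Y - Y_hat), name='loss')
lamda = tf.constant(0.8)  # regularization parameter
loss += lamda * tf.nn.l2_loss(w1)  # adds lamda * sum(w1**2) / 2; w0 is not penalized

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={X: X_data, Y: Y_data})
    print(sess.run(loss, feed_dict={X: X_data, Y: Y_data}))  # regularized training loss

Swapping lamda * tf.nn.l2_loss(w1) for lamda * tf.reduce_sum(tf.abs(w1)) gives the L1 variant from step 4.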