A graph represents low-level computations in terms of the dependencies between operations. In TensorFlow, you first define a graph, and then create a session that executes the operations in the graph.
The way a graph is built, optimized, and executed in TensorFlow allows a high degree of parallelism, distributed execution, and portability, all of which are important properties when building machine learning models.
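To make the define-then-run idea concrete, here is a minimal sketch in plain Python (the class and names are hypothetical, not part of TensorFlow): each node records its inputs when the graph is defined, and nothing is computed until a node is explicitly run, at which point only its dependencies are evaluated.

```python
class Node:
    """A graph node: a function plus the nodes it depends on."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Evaluate dependencies first, then this node, mirroring how a
        # session executes only the subgraph an operation depends on.
        return self.fn(*(n.run() for n in self.inputs))

# Defining the graph performs no computation:
const = Node(lambda: 3.0)
double = Node(lambda x: x * 2, const)

# Computation happens only when a node is run:
print(double.run())  # -> 6.0
```

Because dependencies are explicit, independent subgraphs could in principle be evaluated in parallel or on different devices, which is what TensorFlow's runtime exploits.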
To give you an idea of the structure of a graph produced internally by TensorFlow, the following program produces the computational graph shown in the following diagram:
import tensorflow as tf

# Graph definition: no computation happens yet
const1 = tf.constant(3.0, name='constant1')
var = tf.get_variable("variable1", shape=[1, 2], dtype=tf.float32)
var2 = tf.get_variable("variable2", shape=[1, 2], trainable=False, dtype=tf.float32)
op1 = const1 * var           # element-wise multiplication
op2 = op1 + var2             # element-wise addition
op3 = tf.reduce_mean(op2)    # reduce to a scalar mean

# Execution: the session initializes the variables and runs the graph
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(op3)
This results in the following graph:
Figure 2.3: Example of a computational graph
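The graph above computes the mean of `const1 * variable1 + variable2`. The following eager NumPy equivalent shows the same arithmetic step by step; the initial values `v1` and `v2` are hypothetical, since in the TensorFlow program the variables are filled in by the default initializer when `tf.global_variables_initializer()` runs:

```python
import numpy as np

# Hypothetical initial values standing in for variable1 and variable2
v1 = np.array([[0.5, -0.5]], dtype=np.float32)
v2 = np.array([[1.0, 2.0]], dtype=np.float32)

op1 = 3.0 * v1           # const1 * var  -> [[1.5, -1.5]]
op2 = op1 + v2           # op1 + var2    -> [[2.5,  0.5]]
op3 = float(op2.mean())  # tf.reduce_mean(op2)
print(op3)  # -> 1.5
```

Unlike the NumPy version, where every line executes immediately, the TensorFlow program only describes these operations; the actual arithmetic is deferred until `sess.run(op3)`.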