Index
Symbols
- .assign(), Pretrained Word Embeddings, Assigning Loaded Weights
- .compile(), Sequential model
- .eval(), Tensor Arrays and Shapes
- .evaluate(), Linear Regression
- .fit(), Linear Regression, FeatureColumn, CNN, Sequential model
- .get_variable_value(), FeatureColumn
- .meta checkpoint files, The Saver Class
- .name attribute, Names
- .optimizers, Sequential model
- .run() method, Creating a Session and Running It
- .save(), The Saver Class
- .__enter__(), Constructing and Managing Our Graph
- .__exit__(), Constructing and Managing Our Graph
- <Estimator>.predict(), DNN Classifier
- @property decorator, Class encapsulation
A
- abstraction libraries
- acknowledgments, Acknowledgments
- activation functions, Convolution
- aliases, Installing TensorFlow
- argparse module, tf.app.flags
- arguments
- arrays, Tensor Arrays and Shapes-Name scopes
- .assign(), Pretrained Word Embeddings, Assigning Loaded Weights
- asynchronous training, What Is the Goal of Parallelization?
- as_default() command, Constructing and Managing Our Graph
- attributes
- attributions, Using Code Examples
- autoencoders, Autoencoders
B
- backpropagation, A High-Level Overview
- bag-of-words text classification, Introduction to Word Embeddings
- BasicRNNCell, tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn()
- batch_size, MNIST images as sequences
- Bazel build tool, Building and Exporting, Exporting our model, Required and Recommended Components for TensorFlow Serving
- between-graph replication, Replicating a Computational Graph Across Devices
- bias_variable(), The Model
- binary classification, RNN
- biologically inspired models, Introduction to CNNs
C
- callbacks argument, Sequential model
- casting, Casting
- chains, Introduction to Recurrent Neural Networks
- CIFAR10 dataset, CIFAR10-Simple CIFAR10 Models
- class encapsulation, Class encapsulation
- clusters, Clusters and Servers
- code examples, obtaining and using, Using Code Examples
- command-line arguments, tf.app.flags
- comments and questions, How to Contact Us
- Common Crawl vectors, Pretrained Word Embeddings
- .compile(), Sequential model
- computation graphs (see dataflow computation graphs)
- computer vision
- confusion matrices, DNN Classifier
- constructors, Nodes Are Operations, Edges Are Tensor Objects
- contact information, How to Contact Us
- context managers, Constructing and Managing Our Graph
- continuous (regression) learning, Softmax Regression, Training to predict, contrib.learn
- contrib library, tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn(), Installation
- contrib.layers, FeatureColumn, Homemade CNN with contrib.learn
- contrib.learn
- contrib.learn.Estimator(), Homemade CNN with contrib.learn, CNN
- conv2d(), Convolution, The Model
- convolutional neural networks (CNNs)
- conv_layer(), The Model
- coord.request_stop(), tf.train.Coordinator
- coord.should_stop(), tf.train.Coordinator
- cross entropy, Softmax Regression, MSE and cross entropy, Customization
- customization
D
- data augmentation, Simple CIFAR10 Models
- data frames (tables), FeatureColumn
- data parallelism, Where Does the Parallelization Take Place?
- data types
- dataflow computation graphs
- benefits of, The Benefits of Graph Computations
- constructing and managing, Constructing and Managing Our Graph
- creating, Creating a Graph
- fetches argument, Fetches
- overview of, TensorFlow: What’s in a Name?, Computation Graphs
- replicating across devices, Replicating a Computational Graph Across Devices
- resetting prior to restoring, The Saver Class
- session closing, Creating a Session and Running It
- session creation, Creating a Session and Running It
- deactivate command, Installing TensorFlow
- decorators, Class encapsulation
- deep learning
- advances in, Going Deep
- computer vision, Pre-trained models: state-of-the-art computer vision for all, The Importance of Sequence Data
- data processing in, TensorFlow: What’s in a Name?
- image captioning, Generating rich natural language descriptions for images
- overview of, Going Deep
- sequence data and, The Importance of Sequence Data
- supervised learning, Softmax Regression
- unsupervised learning, Word2vec
- deep learning models (see models)
- deep neural networks (see deep learning)
- dense vector representations, Introduction to Word Embeddings
- Dense(), Sequential model
- dequeuing and enqueuing, Enqueuing and Dequeuing
- design tips
- device placement, Device Placement
- digit_to_word_map dictionary, Text Sequences
- dimensionality reduction, Autoencoders
- discrete (classification) learning, Softmax Regression
- display_cifar(), Loading the CIFAR10 Dataset
- DistBelief, Going Deep
- distributed computing
- distributional hypothesis, Word2vec
- DNN classifier, DNN Classifier
- Docker
- dropout, Dropout, Bidirectional RNN and GRU Cells
- DropoutWrapper(), Bidirectional RNN and GRU Cells
- dtype attribute, Data Types
- dynamic_rnn(), LSTM and Using Sequence Length, Training Embeddings and the LSTM Classifier, Pretrained Word Embeddings, Bidirectional RNN and GRU Cells
E
- early stopping, Sequential model
- edge detectors, Introduction to CNNs
- edges, Nodes Are Operations, Edges Are Tensor Objects
- elements (see TensorFlow elements)
- element_size, MNIST images as sequences
- embedding_matrix, Pretrained Word Embeddings, Bidirectional RNN and GRU Cells
- embedding_placeholder, Pretrained Word Embeddings, Bidirectional RNN and GRU Cells
- enqueuing and dequeuing, Enqueuing and Dequeuing
- .__enter__(), Constructing and Managing Our Graph
- estimators, contrib.learn
- .eval(), Tensor Arrays and Shapes
- .evaluate(), Linear Regression
- .__exit__(), Constructing and Managing Our Graph
- external data, Softmax Regression
F
- feature maps, Convolution
- feature_columns, Linear Regression, FeatureColumn
- feed_dict argument, Softmax Regression, The Input Pipeline
- fetches argument, Fetches
- filters
- .fit(), Linear Regression, FeatureColumn, CNN, Sequential model
- flags mechanism, tf.app.flags
- fully connected neural networks, Introduction to CNNs
- full_layer(), The Model
- functools.wraps(), Class encapsulation
G
- gated recurrent unit (GRU) cells, Bidirectional RNN and GRU Cells
- get_shape(), Tensor Arrays and Shapes
- .get_variable_value(), FeatureColumn
- global_step, Distributed Example
- GloVe embedding method, Pretrained Word Embeddings
- gradient descent optimization, A High-Level Overview, Softmax Regression, The gradient descent optimizer-Gradient descent in TensorFlow, RNN classification, Learning Rate Decay
- GradientDescentOptimizer(), Gradient descent in TensorFlow
- graphs (see dataflow computation graphs)
- gRPC framework, Required and Recommended Components for TensorFlow Serving
- GRUCell(), Bidirectional RNN and GRU Cells
I
- IDE configuration, Hello World
- image captioning, Generating rich natural language descriptions for images
- image classification
- CIFAR10, CIFAR10-Simple CIFAR10 Models
- illustration of, Going Deep
- images as sequences, MNIST images as sequences
- invariance property, Introduction to CNNs
- MNIST, MNIST, MNIST: Take II-The Model
- pretrained models for, Pre-trained models: state-of-the-art computer vision for all
- softmax regression, Softmax Regression, The Noise-Contrastive Estimation (NCE) Loss Function, tf.train.start_queue_runners() and Wrapping Up
- ImageNet project, Creating CNN models with TF-Slim
- IMDb reviews dataset, RNN
- initializers, Tensor Arrays and Shapes, The Model, Bidirectional RNN and GRU Cells, Variable sharing
- input pipeline
- input_data.read_data_sets(), Loading the CIFAR10 Dataset
- input_fn(), FeatureColumn
- integer IDs, Introduction to Word Embeddings
- invariance, Introduction to CNNs
L
- labeled data, Introduction to Word Embeddings
- lambda functions, FeatureColumn
- language models, The Importance of Sequence Data
- layers.convolution2d(), Homemade CNN with contrib.learn
- learn.LinearRegressor(), Linear Regression
- learning rates, Softmax Regression, Gradient descent in TensorFlow, Learning Rate Decay
- learning_rate hyperparameter, Learning Rate Decay
- lifecycle management, Overview
- linear regression, Example 1: linear regression, contrib.learn
- loaded weights, assigning, Assigning Loaded Weights
- local response normalization (LRN), CNN
- logistic regression, Example 2: logistic regression, contrib.learn
- LOG_DIR, MNIST images as sequences, Visualizing the model with TensorBoard
- long short-term memory (LSTM), LSTM and Using Sequence Length-Stacking multiple LSTMs, Bidirectional RNN and GRU Cells
- loss functions
M
- Markov chain model, Introduction to Recurrent Neural Networks
- Matplotlib, Loading the CIFAR10 Dataset
- matrices, Tensor Arrays and Shapes, DNN Classifier
- matrix multiplication, Matrix multiplication
- max_pool_2x2, The Model
- mean squared error (MSE), MSE and cross entropy, Linear Regression
- memory errors, Softmax Regression
- .meta checkpoint files, The Saver Class
- metadata files, Training and Visualizing with TensorBoard
- metainformation, The Saver Class
- mini-batches, Sampling methods
- MINIBATCH_SIZE, Softmax Regression
- MNIST (Modified National Institute of Standards and Technology), MNIST, MNIST: Take II-The Model, MNIST images as sequences
- model.fit(), Class encapsulation
- Model.load_weights(), Autoencoders
- models
- biologically inspired, Introduction to CNNs
- CNN classification of CIFAR10 dataset, Simple CIFAR10 Models
- CNN classification of MNIST dataset, The Model
- customizing loss functions, Customization-Writing your very own op
- evaluating, Softmax Regression
- language models, The Importance of Sequence Data
- measures of similarity in, Softmax Regression, MSE and cross entropy
- optimizing, Optimization-Example 2: logistic regression
- pretrained, Pre-trained models: state-of-the-art computer vision for all
- regression, Training to predict, contrib.learn
- saving, Autoencoders
- saving and exporting, Saving and Exporting Our Model-The Saver Class
- sequential model, Sequential model
- serving, Introduction to TensorFlow Serving-Exporting our model
- softmax regression, Softmax Regression, The Noise-Contrastive Estimation (NCE) Loss Function, tf.train.start_queue_runners() and Wrapping Up
- structuring, Model Structuring and Customization-Class encapsulation
- training, Softmax Regression, Training to predict, The Model, FeatureColumn, CNN, Distributed Computing
- VGG model, Creating CNN models with TF-Slim
- MSE (mean squared error), MSE and cross entropy, Linear Regression, Customization
- multiple computational devices, Device Placement
- MultiRNNCell(), Stacking multiple LSTMs
- multithreading
N
- names and naming
- natural language processing (NLP), Introduction to Word Embeddings
- natural language understanding (NLU), Text summarization, The Importance of Sequence Data
- neuroscientific inspiration, Introduction to CNNs
- nodes, Nodes Are Operations, Edges Are Tensor Objects
- Noise-Contrastive Estimation (NCE), The Noise-Contrastive Estimation (NCE) Loss Function
- normal distribution, Tensor Arrays and Shapes
- normalization, CNN
- np.random.choice(range()), Text Sequences
- NumPy, Fetches, Tensor Arrays and Shapes, Writing your very own op
- num_epochs argument, tf.train.string_input_producer() and tf.TFRecordReader()
O
- objectives, Defining a loss function
- operation instances, Nodes Are Operations, Edges Are Tensor Objects
- operators and shortcuts, Creating a Graph
- ops, Writing your very own op
- optimization
- cross entropy loss, Softmax Regression, MSE and cross entropy, Customization
- gradient descent, Softmax Regression, The gradient descent optimizer-Gradient descent in TensorFlow
- linear regression example, Example 1: linear regression, contrib.learn
- logistic regression example, Example 2: logistic regression, contrib.learn
- loss function, Defining a loss function
- MSE (mean squared error), Linear Regression, Customization
- MSE loss, MSE and cross entropy
- training to predict, Training to predict
- optimizer.minimize(), Gradient descent in TensorFlow
- .optimizers, Sequential model
- overfitting, Introduction to CNNs
P
- padding, Convolution, Text Sequences
- PAD_TOKEN, Pretrained Word Embeddings
- Pandas library, FeatureColumn
- parallelization, Where Does the Parallelization Take Place?, Replicating a Computational Graph Across Devices
- parameter servers, Clusters and Servers
- part-of-speech (POS) tagging, Bidirectional RNN and GRU Cells
- perm argument, Applying the RNN step with tf.scan()
- placeholders
- pooling, Pooling, The Model
- pre-processing, The Input Pipeline
- pre-trained models, Pre-trained models: state-of-the-art computer vision for all, Pretrained models with TF-Slim-Downloading and using a pretrained model
- principal component analysis (PCA), Autoencoders
- processed images, Convolution
- @property decorator, Class encapsulation
- protocol buffers (protobufs), TFRecords, The Saver Class
- PyCharm IDE, Hello World
- Python
R
- random initializers, Tensor Arrays and Shapes, The Model
- read_data_sets() method, Softmax Regression
- recurrent neural networks (RNNs)
- regression problems
- regression(), CNN
- regularization
- ReLU neurons, CNN
- remote procedure call (RPC), Required and Recommended Components for TensorFlow Serving
- RGB images, TensorFlow: What’s in a Name?
- RMSPropOptimizer, RNN classification
- rnn_cell, tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn()
- rnn_step(), tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn()
- .run() method, Creating a Session and Running It
S
- sampling methods, Sampling methods
- .save(), The Saver Class
- saver.save(), The Saver Class
- save_dir, Writing with TFRecordWriter
- saving and exporting, Saving and Exporting Our Model-The Saver Class
- scalars, Setting attributes with source operations, Tensor Arrays and Shapes, Visualizing the model with TensorBoard
- Scikit Flow, High-Level Survey
- scikit-learn, High-Level Survey
- sentiment analysis, RNN
- sequence data, The Importance of Sequence Data, MNIST images as sequences
- (see also text sequences)
- sequential model, Sequential model
- serialization, The Saver Class
- servers, Clusters and Servers
- serving output in production, Introduction to TensorFlow Serving-Exporting our model, Required and Recommended Components for TensorFlow Serving-Some Basic Docker Commands
- sess.run() method, Fetches, Example 1: linear regression
- session.run(), Placeholders
- sessions
- shape argument, Placeholders
- skip-grams, Word2vec-Skip-Grams
- slim.assign_from_checkpoint_fn(), Downloading and using a pretrained model
- softmax regression, Softmax Regression, The Noise-Contrastive Estimation (NCE) Loss Function, tf.train.start_queue_runners() and Wrapping Up
- source operations
- special method functions, Constructing and Managing Our Graph
- square error loss, MSE and cross entropy, Linear Regression, Customization
- stochastic gradient descent (SGD), Sampling methods
- strides argument, Convolution
- supervised learning, Softmax Regression, Text Sequences
- Supervisor, Managed Sessions
- synchronous training, What Is the Goal of Parallelization?
T
- tanh(·), Vanilla RNN Implementation
- tensor (mathematical term), Tensor Arrays and Shapes
- TensorBoard
- Embeddings tab, Checking Out Our Embeddings
- functions of, MNIST images as sequences
- Graphs tab, Visualizing the model with TensorBoard
- Histograms tab, Visualizing the model with TensorBoard
- illustration of, A High-Level Overview
- log verbosity, CNN
- logging summaries, The RNN step
- LOG_DIR (directory), MNIST images as sequences
- model visualization using, Visualizing the model with TensorBoard
- Scalars tab, Visualizing the model with TensorBoard
- tensorboard command, Visualizing the model with TensorBoard
- Word2vec training and visualization, Training and Visualizing with TensorBoard
- TensorFlow
- applications of by Google, Using TensorFlow for AI Systems-Text summarization
- data types supported, Casting
- documentation, The Noise-Contrastive Estimation (NCE) Loss Function
- history of, Going Deep
- IDE configuration, Hello World
- installing, Installing TensorFlow
- key features, A High-Level Overview
- main phases of operation, Graphs, Sessions, and Fetches
- naming of, Nodes Are Operations, Edges Are Tensor Objects
- operators and shortcuts, Creating a Graph, Tensor Arrays and Shapes
- prerequisites to learning, Prerequisites
- tensorflow command, Installing TensorFlow
- TensorFlow elements, TensorFlow Elements
- TensorFlow ops, Writing your very own op
- TensorFlow Serving
- tensorflow.contrib.learn, Writing with TFRecordWriter
- Tensors
- attributes, Setting attributes with source operations
- basics of, Nodes Are Operations, Edges Are Tensor Objects
- data types, Data Types
- flowing data through, Nodes Are Operations, Edges Are Tensor Objects-Name scopes
- names, Names
- optimization, Optimization-Example 2: logistic regression
- placeholders, Placeholders
- purpose of, TensorFlow: What’s in a Name?
- Variables, Variables
- test(), Simple CIFAR10 Models
- test_accuracy, The Model
- text sequences
- text summarization, Text summarization
- TF-Slim
- tf.<operator> methods, Creating a Graph, Nodes Are Operations, Edges Are Tensor Objects, Tensor Arrays and Shapes
- tf.add(), Nodes Are Operations, Edges Are Tensor Objects
- tf.app.flags, tf.app.flags
- tf.app.flags.FLAGS, tf.app.flags
- tf.cast(), Casting
- tf.concat(), Bidirectional RNN and GRU Cells
- tf.constant(), Nodes Are Operations, Edges Are Tensor Objects, Setting attributes with source operations
- tf.contrib.rnn.BasicLSTMCell(), LSTM and Using Sequence Length
- tf.contrib.rnn.BasicRNNCell, TensorFlow Built-in RNN Functions
- tf.contrib.rnn.MultiRNNCell(), Stacking multiple LSTMs
- tf.expand_dims(), Matrix multiplication, Downloading and using a pretrained model
- tf.get_variable(), Variables, Variable sharing
- tf.global_variables_initializer(), Variables, Bidirectional RNN and GRU Cells
- tf.Graph(), Constructing and Managing Our Graph
- tf.InteractiveSession(), Tensor Arrays and Shapes
- tf.linspace(a, b, n), Tensor Arrays and Shapes
- tf.map_fn(), Sequential outputs
- tf.matmul(A,B), Matrix multiplication
- tf.nn.bidirectional_dynamic_rnn(), Bidirectional RNN and GRU Cells
- tf.nn.dynamic_rnn(), TensorFlow Built-in RNN Functions, Text Sequences, LSTM and Using Sequence Length
- tf.nn.embedding_lookup(), Supervised Word Embeddings, Embeddings in TensorFlow
- tf.nn.nce_loss(), The Noise-Contrastive Estimation (NCE) Loss Function
- tf.random_normal(), Tensor Arrays and Shapes
- tf.RandomShuffleQueue, tf.train.QueueRunner and tf.RandomShuffleQueue
- tf.reduce_mean(), MSE and cross entropy
- tf.reset_default_graph(), The Saver Class
- tf.scan(), Applying the RNN step with tf.scan(), tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn()
- tf.Session, Creating a Session and Running It
- tf.SparseTensor(), FeatureColumn
- tf.square(), MSE and cross entropy
- tf.summary.histogram(), Visualizing the model with TensorBoard
- tf.TFRecordReader(), tf.train.string_input_producer() and tf.TFRecordReader()
- tf.train.Coordinator, Coordinator and QueueRunner
- tf.train.exponential_decay(), Learning Rate Decay
- tf.train.import_meta_graph(), The Saver Class
- tf.train.QueueRunner, Coordinator and QueueRunner
- tf.train.replica_device_setter(), Replicating a Computational Graph Across Devices, Distributed Example
- tf.train.Saver(), The Saver Class
- tf.train.shuffle_batch(), tf.train.shuffle_batch()
- tf.train.start_queue_runners(), tf.train.start_queue_runners() and Wrapping Up
- tf.train.string_input_producer(), tf.train.string_input_producer() and tf.TFRecordReader()
- tf.transpose(), Matrix multiplication
- tf.Variable(), Variables, Variable sharing
- tf.variable_scope.reuse_variables(), Variable sharing
- tf.While, tf.contrib.rnn.BasicRNNCell and tf.nn.dynamic_rnn()
- tf.zeros_initializer(), Variable sharing
- TFLearn
- benefits of, TFLearn
- custom CNN model creation, CNN
- epochs and iteration in, CNN
- installing, Installation
- Keras extension for, Keras-Autoencoders
- local response normalization (LRN), CNN
- overview of, High-Level Survey
- pre-trained models with TF-Slim, Pretrained models with TF-Slim-Downloading and using a pretrained model
- RNN text classification using, RNN
- standard operations, CNN
- tflearn.data_utils.pad_sequences(), RNN
- tflearn.DNN(), CNN
- tflearn.embedding(), RNN
- TFRecords, TFRecords
- TFRecordWriter, Writing with TFRecordWriter
- Theano, High-Level Survey, Keras
- three-dimensional arrays, Tensor Arrays and Shapes
- time_steps, MNIST images as sequences
- train/test validation, Softmax Regression
- train_accuracy, The Model
- transformations, FeatureColumn
- truncated normal initializers, Tensor Arrays and Shapes
- TSV (tab-separated values), Training and Visualizing with TensorBoard
- tuning (see optimization)
- type inference, Data Types, Tensor Arrays and Shapes
- typographical conventions, Conventions Used in This Book
V
- Vagrant, What Is a Docker Container and Why Do We Use It?
- Variables
- vectors, Tensor Arrays and Shapes
- versioning, Overview
- VGG model, Creating CNN models with TF-Slim
- virtual environments, Installing TensorFlow, What Is a Docker Container and Why Do We Use It?
- VirtualBox, What Is a Docker Container and Why Do We Use It?
- visualizations, using TensorBoard, A High-Level Overview, MNIST images as sequences, Training and Visualizing with TensorBoard