Index

[A][B][C][D][E][F][G][H][I][J][K][L][M][N][O][P][R][S][T][U][V][W][Z]

A

accuracy testing
ACGAN
activation maps
Adam algorithm
adaptive moment estimation
adversarial attacks
adversarial examples
  adversaries to GANs
  context of
  distributions
  noise
  security implications of
  signal
  training
adversarial loss
adversarial training
  cost functions
  training process
AI ethics
AlphaGo
artificial general intelligence
autoencoders
  GANs and
  generation with
  high-level function of
  overview of
  using
  variational

B

backpropagated errors
back-translation
batch normalization.
    See BN (batch normalization).
BEGAN
BigGAN
binary cross-entropy
binary_crossentropy function
BN (batch normalization)
  computing
  overview of

C

CapsuleNets
categorical_crossentropy function
CGAN (Conditional GAN)
  architecture diagram
  building models
  Discriminator
  generating targeted data
  Generator
  implementing
  outputting sample images
  overview of
  setup
  training
  training models
CGI (computer-generated imagery)
classification
classification loss
classifiers, fully supervised
Cloud AutoML, Google
CNNs (convolutional neural networks)
CNTK (Cognitive Toolkit), Microsoft
colorization
computer vision (CV)
computer-generated imagery (CGI)
Conda
Conditional GAN.
    See CGAN.
confusion matrix
contraction path
conv layer
conv2d() function
Conv2DTranspose
convenience function, Keras
ConvNet (convolutional neural network)
  convolutional filters
  parameter sharing
  visualizing
ConvNets
convolutional filters
convolutional kernel
core Discriminator network
cost functions
covariate shift
cross-entropy loss
CV (computer vision)
CyCADA (Cycle-Consistent Adversarial Domain Adaptation)
cycle-consistency loss
CycleGAN
  adversarial loss
  applications of
  architecture of
    building networks
    Discriminator architecture
    Generator architecture
  augmentations
  building networks
  cycle-consistency loss
  expansions
  identity loss
  image-to-image translation
  object-oriented design of GANs
  running

D

data denoising
data, targeted
Dataset object
datasets
DAWNBench
DCGAN (Deep Convolutional GAN)
  batch normalization
    computing
    overview of
  building
  ConvNet
    convolutional filters
    parameter sharing
    visualizing
  generating handwritten digits with
    implementing Discriminator
    implementing Generator
    importing modules
    model output
    specifying model input dimensions
  history of
  running
decision trees (DT)
decoder network
deconv layer
deconv2d() function
deconvolutional layers
Deep Convolutional GAN.
    See DCGAN.
deep learning algorithms
Deep Learning with Python (Chollet)
deep neural networks (DNNs)
DeepFakes
DeepMind
DeOldify
design of GANs
Discriminator
  architecture of
  building
  confusion matrix
  core network
  Generator vs.
  implementing
  supervised
  training
  unsupervised
Discriminator network
discriminator.trainable setting
distributions
DNNs (deep neural networks)
Docker Hub
domain transferred image
don’t repeat yourself (DRY)
dropout
DRY (don’t repeat yourself)
DT (decision trees)
dynamic range

E

ϵ (epsilon)
earth mover’s distance
Embedding layers
encoder network
ensembles (Ens.)
equalized learning rates
ethics
expanding path
expectation (𝔼)
explicit loss function
explicit objective function

F

fake images
fake labels
false negative
false positive
fast gradient sign method (FGSM)
fc (fully connected) layer
feature detectors
FFDM (full-field digital mammography)
FGSM (fast gradient sign method)
FID (Fréchet inception distance)
filters.
    See convolutional filters.
fit function
foolbox class
Fréchet inception distance.
    See FID.
full-field digital mammography (FFDM)
fully connected (fc) layer
fully supervised classifiers
functional API

G

game setups
GAN Zoo
Ganbreeder app
GANs (Generative Adversarial Networks)
  adversaries to
  autoencoders and
  in action
  in fashion
  in medicine
  Nash equilibrium
  object-oriented design of
  overview of
  practical application of
  training
  why study
Gaussian
GDPR (General Data Protection Regulation)
generation with autoencoders
Generative Adversarial Networks.
    See GANs.
generative modeling
Generator network
  confusion matrix
  Discriminator vs.
  pixel-wise feature normalization in
global variables
Goodfellow, Ian
  adversarial examples
  invention of GANs
  label smoothing recommendation
  SVHN benchmark
Google Earth
GP (gradient penalties)
gradient ascent
gradient descent
gradient-descent-based optimizer
gradients
  penalizing
  sparse
growing layers

H

handwritten digits
  adversarial training
    cost functions
    training process
  building models
  Discriminator
    confusion matrix
    Generator vs.
    implementing
  generating
  Generator
    confusion matrix
    Discriminator vs.
    implementing
  importing statements
  inspecting results
  outputting sample images
  running models
  training algorithms
hyperparameters
hyperrealistic imagery

I

ICLR (International Conference on Learning Representations)
identity loss
ImageNet dataset
images, outputting
image-to-image translation
imitation game
importing
  modules
  statements
Inception network
inception scores
Inception V3 network
inputs, normalizing
InstanceNormalization
interclass
International Conference on Learning Representations (ICLR)
interpolating latent space
intraclass
IS (inception score)
iterative training/tuning

J

JSD (Jensen-Shannon divergence)

K

Keras library
Keras_contrib
keras.datasets
Keras-GAN repository
keras.layers.BatchNormalization function
KL (Kullback–Leibler) divergence
kNN (k-nearest neighbors)

L

labels
lambda_cycle hyperparameter
lambda_id hyperparameter
LAPGAN
Laplacian pyramid
latent space
latent space interpolation
latent vector
latent_vector
layers
Leaky ReLU activation function
learnable classification functions
learning rates
loss function
LR (logistic regression)

M

machine learning (ML)
MAE (mean absolute error)
MammoGAN
maximum likelihood estimation
MaxPool
mean squared deviation
mean squared error (MSE)
mini-batch standard deviation
ML (machine learning)
MM-GAN (Min-Max GAN)
MNIST dataset
mode collapse
modules, importing
MSE (mean squared error)
Multiply layer

N

Nash equilibrium
nearest neighbors (kNN)
network depth
networks
neural networks
NeurIPS, formerly NIPS (Neural Information Processing Systems)
NLP (natural language processing)
noise
noisy labels
Non-Saturating GAN.
    See NS-GAN.

normalizing
  features in Generator
  inputs
np.random.normal
NS-GAN (Non-Saturating GAN)
num_labeled parameter

O

object recognition algorithms
object-oriented design
OCR (optical character recognition)
off-by-one cycle
one-class classifier
one-hot-encoded labels
overall loss
overgeneralization

P

parameters, sharing
PatchGAN architecture
PCA (principal component analysis)
penalizing gradients
PGD (projected gradient descent)
PGGAN (Progressive GAN)
pix2pix
point estimates
predict() method
prediction
preference maximization
principal component analysis (PCA)
ProGAN (Progressive GAN)
progressive growing
projected gradient descent (PGD)
PyTorch Hub

R

random noise vectors
reconstruction loss
regression
regular convolution
relative entropy
ReLU (rectified linear unit) activation
ResNet-50
reward function
RGAN (Relativistic GAN)
RMSE (root mean squared error)
RMSprop
Robust Manifold Defense
ROC curves
rotations

S

saccades
SAGAN (Self-Attention GAN)
sample images, outputting
sample_images() function
sample_interval iterations
saturation
scaling
Self-Attention GAN.
    See SAGAN.
self-supervised machine learning
set_title() method
SGAN (Semi-Supervised GAN)
  architecture diagram
  building models
  datasets
  Discriminator
    core network
    supervised
    unsupervised
  fully supervised classifiers vs.
  Generator
  implementing
  overview of
    architecture
    training objectives
    training process
  setup
  training
    testing accuracy
    training models
SGD (stochastic gradient descent)
sharing parameters
sigmoid activation function
signal
skip connections
sliced Wasserstein distance (SWD)
slow convergence
smoothing layers
soft labels
softmax function
SPADE (GauGAN)
sparse gradients
spectral normalization
st (strides)
standard deviations
statements, importing
stochastic gradient descent (SGD)
StyleGAN
supervised classifiers
supervised Discriminator
supervised loss
SVHN (Street View House Numbers) dataset
SVM (support-vector machine)
SWD (sliced Wasserstein distance)

T

tanh activation function
targeted data
TensorFlow Extended
TensorFlow framework
testing accuracy
TFHub (TensorFlow Hub)
tf.shape list
Theano framework
Towards Data Science
training algorithms
training dataset
translation, image-to-image
transposed convolutions
true distributions
true negative
true positive
truncation trick
Turing test

U

U-Net architecture
unsupervised learning
  autoencoders
  generation using autoencoders
  variational autoencoders
unsupervised loss

V

VAE (variational autoencoder)
variational autoencoders
VGG-19 trained network
vid2vid

W

Wasserstein distance
WGAN (Wasserstein GAN)
WGAN-GP (gradient penalty version)

Z

z_dim variable
zero-sum games
