accuracy testing
ACGAN
activation maps
Adam algorithm, 2nd, 3rd
adaptive moment estimation
adversarial attacks
adversarial examples
adversaries to GANs
context of
distributions
noise
security implications of
signal
training
adversarial loss
adversarial training, 2nd
cost functions
training process
AI ethics
AlphaGo
artificial general intelligence
autoencoders
GANs and
generation with
high-level function of
overview of
using
variational
backpropagated errors
back-translation
batch normalization. See BN (batch normalization).
BEGAN
BigGAN
binary cross-entropy, 2nd
binary_crossentropy function
BN (batch normalization), 2nd, 3rd, 4th, 5th, 6th
computing
overview of
CapsuleNets
categorical_crossentropy function
CGAN (Conditional GAN), 2nd, 3rd, 4th, 5th, 6th
architecture diagram
building models
Discriminator
generating targeted data
Generator, 2nd
implementing
outputting sample images
overview of
setup
training
training models
CGI (computer-generated imagery)
classification
classification loss
classifiers, fully supervised
Cloud AutoML, Google
CNNs (convolutional neural networks), 2nd
CNTK (Cognitive Toolkit), Microsoft
colorization
computer vision (CV)
computer-generated imagery (CGI)
Conda, 2nd
Conditional GAN. See CGAN.
confusion matrix
contraction path
conv layer
conv2d() function
Conv2DTranspose
convenience function, Keras
ConvNet (convolutional neural network)
convolutional filters
parameter sharing
visualizing
ConvNets, 2nd
convolutional filters
convolutional kernel
core Discriminator network
cost functions
covariate shift
cross-entropy loss
CV (computer vision), 2nd
CyCADA (Cycle-Consistent Adversarial Domain Adaptation)
cycle-consistency loss, 2nd
CycleGAN, 2nd, 3rd, 4th, 5th
adversarial loss
applications of
architecture of
building networks
Discriminator architecture
Generator architecture
augmentations
building networks
cycle-consistency loss
expansions
identity loss
image-to-image translation
object-oriented design of GANs
running
data denoising
data, targeted
Dataset object
datasets
DAWNBench
DCGAN (Deep Convolutional GAN), 2nd, 3rd, 4th, 5th
batch normalization
computing
overview of
building
ConvNet
convolutional filters
parameter sharing
visualizing
generating handwritten digits with
implementing Discriminator
implementing Generator
importing modules
model output
specifying model input dimensions
history of
running
decision trees (DT)
decoder network, 2nd, 3rd, 4th, 5th
deconv layer
deconv2d() function
deconvolutional layers, 2nd
Deep Convolutional GAN. See DCGAN.
deep learning algorithms
Deep Learning with Python (Chollet), 2nd
deep neural networks (DNNs), 2nd
DeepFakes
DeepMind
DeOldify
design of GANs
Discriminator
architecture of
building
confusion matrix
core network
Generator vs.
implementing, 2nd
supervised
training
unsupervised
Discriminator network, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th, 10th, 11th, 12th, 13th, 14th, 15th, 16th, 17th
discriminator.trainable setting
distributions
DNNs (deep neural networks), 2nd
Docker Hub
domain transferred image
don’t repeat yourself (DRY)
dropout
DRY (don’t repeat yourself)
DT (decision trees)
dynamic range
ϵ (epsilon)
earth mover’s distance, 2nd, 3rd
Embedding layers, 2nd
encoder network, 2nd, 3rd, 4th, 5th
ensembles (Ens.)
equalized learning rates
ethics
expanding path
expectation (Es)
explicit loss function
explicit objective function, 2nd
fake images
fake labels
false negative
false positive
fast gradient sign method (FGSM)
fc (fully connected) layer
feature detectors
FFDM (full-field digital mammography)
FID (Fréchet inception distance), 2nd, 3rd, 4th, 5th
filters. See convolutional filters.
fit function
foolbox class
Fréchet inception distance. See FID.
FGSM (fast gradient sign method)
full-field digital mammography (FFDM)
fully connected (fc) layer
fully supervised classifiers
functional API, 2nd
game setups, 2nd
GAN Zoo
Ganbreeder app
GANs (Generative Adversarial Networks)
adversaries to
autoencoders and
in action
in fashion
in medicine, 2nd
Nash equilibrium
object-oriented design of
overview of
practical application of
training
why study
Gaussian, 2nd, 3rd, 4th, 5th
GDPR (General Data Protection Regulation)
generation with autoencoders
Generative Adversarial Networks. See GANs.
generative modeling
Generator network, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th, 10th, 11th, 12th, 13th, 14th, 15th, 16th, 17th
confusion matrix
Discriminator vs.
pixel-wise feature normalization in
global variables
Goodfellow, Ian
adversarial examples
invention of GANs
label smoothing recommendation
SVHN benchmark
Google Earth
GP (gradient penalties)
gradient ascent
gradient descent
gradient-descent-based optimizer
gradients
penalizing
sparse
growing layers
handwritten digits
adversarial training
cost functions
training process
building models
Discriminator
confusion matrix
Generator vs.
implementing
generating, 2nd
Generator
confusion matrix
Discriminator vs.
implementing
importing statements
inspecting results
outputting sample images
running models
training algorithms, 2nd
hyperparameters, 2nd, 3rd
hyperrealistic imagery
ICLR (International Conference on Learning Representations), 2nd
identity loss
ImageNet dataset, 2nd, 3rd, 4th, 5th
images, outputting, 2nd
image-to-image translation, 2nd
imitation game
importing
modules
statements
Inception network, 2nd
inception scores, 2nd
Inception V3 network, 2nd
inputs, normalizing
InstanceNormalization
interclass
International Conference on Learning Representations (ICLR), 2nd
interpolating latent space
intraclass, 2nd
IS (inception score), 2nd
iterative training/tuning
JSD (Jensen-Shannon divergence), 2nd, 3rd
Keras library, 2nd
Keras_contrib
keras.datasets
Keras-GAN repository, 2nd
keras.layers.BatchNormalization function
KL (Kullback–Leibler) divergence, 2nd, 3rd, 4th
kNN (k-nearest neighbors)
labels
lambda_cycle hyperparameter
lambda_id hyperparameter
LAPGAN
Laplacian pyramid
latent space, 2nd, 3rd, 4th, 5th
latent space interpolation
latent vector
latent_vector
layers
Leaky ReLU activation function, 2nd, 3rd
learnable classification functions
learning rates
loss function, 2nd, 3rd
LR (logistic regression)
machine learning (ML)
MAE (mean absolute error)
MammoGAN
maximum likelihood estimation, 2nd
MaxPool
mean squared deviation
mean squared error (MSE)
mini-batch standard deviation
ML (machine learning)
MM-GAN (Min-Max GAN)
MNIST dataset, 2nd, 3rd, 4th, 5th, 6th, 7th, 8th, 9th
mode collapse, 2nd
modules, importing
MSE (mean squared error)
Multiply layer
Nash equilibrium, 2nd
nearest neighbors (kNN)
network depth
networks, 2nd, 3rd
neural networks
NeurIPS, formerly NIPS (Neural Information Processing Systems)
NLP (natural language processing)
noise
noisy labels
Non-Saturating GAN. See NS-GAN.
normalizing
features in Generator
inputs
np.random.normal
NS-GAN (Non-Saturating GAN), 2nd, 3rd, 4th
num_labeled parameter
object recognition algorithms
object-oriented design
OCR (optical character recognition)
off-by-one cycle
one-class classifier
one-hot-encoded labels
overall loss
overgeneralization
parameters, sharing
PatchGAN architecture
PCA (principal component analysis)
penalizing gradients
PGD (projected gradient descent)
PGGAN (Progressive GAN), 2nd
pix2pix, 2nd
point estimates
predict() method, 2nd
prediction
preference maximization
principal component analysis (PCA)
ProGAN (Progressive GAN), 2nd, 3rd, 4th, 5th
progressive growing
projected gradient descent (PGD)
PyTorch Hub
random noise vectors, 2nd
reconstruction loss, 2nd
regression
regular convolution
relative entropy
ReLU (rectified linear unit) activation
ResNet-50, 2nd, 3rd
reward function
RGAN (Relativistic GAN)
RMSE (root mean squared error)
RMSprop, 2nd
Robust Manifold Defense
ROC curves, 2nd
rotations
saccades
SAGAN (Self-Attention GAN), 2nd
sample images, outputting, 2nd
sample_images() function, 2nd
sample_interval iterations
saturation
scaling
Self-Attention GAN. See SAGAN.
self-supervised machine learning
set_title() method
SGAN (Semi-Supervised GAN), 2nd, 3rd, 4th
architecture diagram
building models
datasets
Discriminator
core network
supervised
unsupervised
fully supervised classifiers vs.
Generator
implementing
overview of
architecture
training objectives
training process
setup
training
testing accuracy
training models
SGD (stochastic gradient descent), 2nd, 3rd
sharing parameters
sigmoid activation function, 2nd, 3rd
signal
skip connections
sliced Wasserstein distance (SWD), 2nd
slow convergence
smoothing layers
soft labels
softmax function
SPADE (GauGAN)
sparse gradients
spectral normalization
st (strides)
standard deviations
statements, importing
stochastic gradient descent (SGD), 2nd, 3rd
StyleGAN
supervised classifiers
supervised Discriminator
supervised loss
SVHN (Street View House Numbers) dataset
SVM (support-vector machine)
SWD (sliced Wasserstein distance), 2nd
tanh activation function, 2nd, 3rd, 4th
targeted data
TensorFlow Extended
TensorFlow framework, 2nd, 3rd, 4th
testing accuracy
TFHub (TensorFlow Hub), 2nd, 3rd
tf.shape list
Theano framework
Towards Data Science
training algorithms, 2nd
training dataset
translation, image-to-image
transposed convolutions, 2nd, 3rd
true distributions, 2nd
true negative
true positive
truncation tricks
Turing test
U-Net architecture
unsupervised learning
autoencoders
generation using autoencoders
variational autoencoders
unsupervised loss
VAE (variational autoencoder), 2nd, 3rd
variational autoencoders
VGG-19 trained network
vid2vid
Wasserstein distance
WGAN (Wasserstein GAN), 2nd, 3rd, 4th, 5th
WGAN-GP (gradient penalty) version, 2nd