Symbols and Abbreviations

The main symbols and abbreviations used throughout the text are listed as follows.

|·|    absolute value of a real number

‖·‖    Euclidean norm of a vector

⟨·, ·⟩    inner product

1(·)    indicator function

E[·]    expectation value of a random variable

f′(·)    first-order derivative of the function f

f″(·)    second-order derivative of the function f

∂f/∂x    gradient of the function f with respect to x

sign(·)    sign function

Γ(·)    Gamma function

(·)^T    vector or matrix transposition

I    identity matrix

A^{-1}    inverse of matrix A

det(A)    determinant of matrix A

tr(A)    trace of matrix A

rank(A)    rank of matrix A

log(·)    natural logarithm function

z^{-1}    unit delay operator

ℝ    real number space

ℝ^n    n-dimensional real Euclidean space

ρ_{XY}    correlation coefficient between random variables X and Y

Var(X)    variance of random variable X

Pr(A)    probability of event A

N(μ, Σ)    Gaussian distribution with mean vector μ and covariance matrix Σ

U(a, b)    uniform distribution over interval [a, b]

χ²(k)    chi-squared distribution with k degrees of freedom

H(X)    Shannon entropy of random variable X

H_φ(X)    φ-entropy of random variable X

H_α(X)    α-order Renyi entropy of random variable X

V_α(X)    α-order information potential of random variable X

S_α(X)    survival information potential of random variable X

H_Δ(X)    Δ-entropy of discrete random variable X

I(X; Y)    mutual information between random variables X and Y

D_KL(X‖Y)    KL-divergence between random variables X and Y

D_φ(X‖Y)    φ-divergence between random variables X and Y

J_F    Fisher information matrix

J̄_F    Fisher information rate matrix

p(·)    probability density function

κ(·, ·)    Mercer kernel function

K(·)    kernel function for density estimation

K_h(·)    kernel function with width h

G_σ(·)    Gaussian kernel function with width σ

H_κ    reproducing kernel Hilbert space induced by Mercer kernel κ

F_κ    feature space induced by Mercer kernel κ

W    weight vector

Ω    weight vector in feature space

W̃    weight error vector

η    step size

L    sliding data length

MSE    mean square error

LMS    least mean square

NLMS    normalized least mean square

LS    least squares

RLS    recursive least squares

MLE    maximum likelihood estimation

EM    expectation-maximization

FLOM    fractional lower order moment

LMP    least mean p-power

LAD    least absolute deviation

LMF    least mean fourth

FIR    finite impulse response

IIR    infinite impulse response

AR    autoregressive

ADALINE    adaptive linear neuron

MLP    multilayer perceptron

RKHS    reproducing kernel Hilbert space

KAF    kernel adaptive filtering

KLMS    kernel least mean square

KAPA    kernel affine projection algorithm

KMEE    kernel minimum error entropy

KMC    kernel maximum correntropy

PDF    probability density function

KDE    kernel density estimation

GGD    generalized Gaussian density

SαS    symmetric α-stable

MEP    maximum entropy principle

DPI    data processing inequality

EPI    entropy power inequality

MEE    minimum error entropy

MCC    maximum correntropy criterion

IP    information potential

QIP    quadratic information potential

CRE    cumulative residual entropy

SIP    survival information potential

QSIP    quadratic survival information potential

KLID    Kullback–Leibler information divergence

EDC    Euclidean distance criterion

MinMI    minimum mutual information

MaxMI    maximum mutual information

AIC    Akaike’s information criterion

BIC    Bayesian information criterion

MDL    minimum description length

FIM    Fisher information matrix

FIRM    Fisher information rate matrix

MIH    minimum identifiable horizon

ITL    information theoretic learning

BIG    batch information gradient

FRIG    forgetting recursive information gradient

SIG    stochastic information gradient

SIDG    stochastic information divergence gradient

SMIG    stochastic mutual information gradient

FP    fixed point

FP-MEE    fixed-point minimum error entropy

RFP-MEE    recursive fixed-point minimum error entropy

EDA    estimation of distribution algorithm

SNR    signal-to-noise ratio

WEP    weight error power

EMSE    excess mean square error

IEP    intrinsic error power

ICA    independent component analysis

BSS    blind source separation

CRLB    Cramer–Rao lower bound

AEC    acoustic echo canceller
