This recurrence relation is initialized at the final layer as
$$s_i^M = \frac{\partial \hat{F}}{\partial n_i^M} = \frac{\partial (\mathbf{t}-\mathbf{a})^T(\mathbf{t}-\mathbf{a})}{\partial n_i^M} = \frac{\partial \sum_{j=1}^{S^M}(t_j - a_j)^2}{\partial n_i^M} = -2(t_i - a_i)\frac{\partial a_i}{\partial n_i^M} = -2(t_i - a_i)\dot{f}^M(n_i^M). \tag{3.24}$$
Thus, the recurrence relation of the sensitivity matrix can be expressed as
$$\mathbf{s}^M = -2\dot{\mathbf{F}}^M(\mathbf{n}^M)(\mathbf{t}-\mathbf{a}). \tag{3.25}$$
The overall BP learning algorithm is now finalized and can be summarized as the following steps: (1) propagate the input forward through the network; (2) propagate the sensitivities backward through the network from the last layer to the first layer; and, finally, (3) update the weights and biases using the approximate steepest descent rule.
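To make these three steps concrete, the following is a minimal NumPy sketch of one BP iteration for a hypothetical two-layer network (tanh hidden layer, linear output layer); the names W1, b1, W2, b2, and the learning rate alpha are illustrative assumptions, not notation from the text.

```python
import numpy as np

def bp_step(W1, b1, W2, b2, p, t, alpha=0.01):
    # (1) Propagate the input forward through the network.
    n1 = W1 @ p + b1
    a1 = np.tanh(n1)        # hidden layer: f^1 = tanh
    n2 = W2 @ a1 + b2
    a2 = n2                 # output layer: f^2 = linear, so a2 = n2

    # (2) Propagate sensitivities backward. The last layer uses Eq. (3.25),
    # s^M = -2 * F_dot^M(n^M) (t - a); f_dot = 1 for the linear output layer.
    s2 = -2.0 * (t - a2)
    # Recurrence to the hidden layer, with d/dn tanh(n) = 1 - tanh(n)^2.
    s1 = (1.0 - a1**2) * (W2.T @ s2)

    # (3) Update weights and biases with the approximate steepest descent rule:
    # W^m <- W^m - alpha * s^m (a^{m-1})^T and b^m <- b^m - alpha * s^m.
    W2 -= alpha * np.outer(s2, a1)
    b2 -= alpha * s2
    W1 -= alpha * np.outer(s1, p)
    b1 -= alpha * s1
    return W1, b1, W2, b2
```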
3.3 LEVENBERG–MARQUARDT BACKPROPAGATION
While backpropagation is a steepest descent algorithm, the Levenberg–Marquardt algorithm is a variation of Newton’s method, designed for minimizing functions that are sums of squares of nonlinear functions [68, 69].
Newton’s method for optimizing a performance index $F(\mathbf{x})$ is
$$\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{A}_k^{-1}\mathbf{g}_k \tag{3.26}$$
$$\mathbf{A}_k \equiv \nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_k} \tag{3.27}$$
$$\mathbf{g}_k \equiv \nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_k}, \tag{3.28}$$
where $\nabla^2 F(\mathbf{x})$ is the Hessian matrix and $\nabla F(\mathbf{x})$ is the gradient.
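As a quick illustration of Eqs. (3.26)–(3.28), the sketch below performs one Newton iteration; the quadratic test function and the grad/hess callables are assumptions chosen for demonstration, not part of the text.

```python
import numpy as np

def newton_step(x_k, grad, hess):
    g_k = grad(x_k)   # g_k = gradient of F evaluated at x_k, Eq. (3.28)
    A_k = hess(x_k)   # A_k = Hessian of F evaluated at x_k, Eq. (3.27)
    # Solve A_k * dx = g_k instead of forming the explicit inverse.
    return x_k - np.linalg.solve(A_k, g_k)

# Example: F(x) = x1^2 + 2*x2^2 is quadratic, so Newton's method lands on
# the minimizer (the origin) in a single step from any starting point.
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_step(np.array([3.0, -1.0]), grad, hess))  # -> [0. 0.]
```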
Assume that $F(\mathbf{x})$ is a sum-of-squares function:
$$F(\mathbf{x}) = \sum_{i=1}^{N} v_i^2(\mathbf{x}) = \mathbf{v}^T(\mathbf{x})\mathbf{v}(\mathbf{x}) \tag{3.29}$$
then the gradient and Hessian matrix are
$$\nabla F(\mathbf{x}) = 2\mathbf{J}^T(\mathbf{x})\mathbf{v}(\mathbf{x}) \tag{3.30}$$
$$\nabla^2 F(\mathbf{x}) = 2\mathbf{J}^T(\mathbf{x})\mathbf{J}(\mathbf{x}) + 2\mathbf{S}(\mathbf{x}), \tag{3.31}$$