Let's run this method on a couple of examples:
def g(x): return -np.exp(-x)*np.sin(x)
f = np.vectorize(lambda x: max(1-x, 2+x))
print(good_bracket(f, [-1, -0.5, 1]))
print(minimize_scalar(f, bracket=[-1, -0.5, 1], method=parabolic_step))
print(good_bracket(g, [0, 1.2, 1.5]))
True
print(minimize_scalar(g, bracket=[0, 1.2, 1.5], method=parabolic_step))
fun: -0.32239694192707452
nfev: 54
nit: 18
x: 0.78540558549352946
There are two methods already coded for univariate scalar minimization: 'golden', which uses a golden section search, and 'brent', which follows Brent's algorithm (a combination of golden section search and parabolic interpolation):
minimize_scalar(f, method='brent', bracket=[-1, -0.5, 1])
fun: array(1.5)
nfev: 22
nit: 21
success: True
x: -0.5
minimize_scalar(f, method='golden', bracket=[-1, -0.5, 1])
fun: array(1.5)
nfev: 44
nit: 39
success: True
x: -0.5
minimize_scalar(g, method='brent', bracket=[0, 1.2, 1.5])
fun: -0.32239694194483448
nfev: 11
nit: 10
success: True
x: 0.78539816017203079
minimize_scalar(g, method='golden', bracket=[0, 1.2, 1.5])
fun: -0.32239694194483448
nfev: 43
nit: 38
success: True
x: 0.7853981573284226
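Note how 'brent' needs far fewer function evaluations than 'golden' to reach comparable accuracy. To illustrate what the 'golden' method does under the hood, here is a bare-bones sketch of a golden section search (an illustration only, not SciPy's implementation): at each step the interval is shrunk by the inverse golden ratio, keeping the subinterval that must contain the minimum of a unimodal function.

```python
import numpy as np

def golden_section(func, a, b, tol=1e-8):
    """Sketch of a golden section search on [a, b] for a unimodal func."""
    invphi = (np.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    while abs(b - a) > tol:
        # interior probe points, symmetric in the interval
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if func(c) < func(d):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

g = lambda x: -np.exp(-x) * np.sin(x)
x_min = golden_section(g, 0, 1.5)  # close to pi/4, matching the results above
```

The true minimizer of g is x = pi/4 (where sin x = cos x), which agrees with the x values reported by both built-in methods.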