Authors: Neculai Andrei
Publish Date: 2009/07/25
Volume: 54, Issue: 1, Pages: 23-46
Abstract
An accelerated hybrid conjugate gradient algorithm is the subject of this paper. The parameter β_k is computed as a convex combination of β_k^HS (Hestenes and Stiefel, J. Res. Nat. Bur. Stand. 49:409–436, 1952) and β_k^DY (Dai and Yuan, SIAM J. Optim. 10:177–182, 1999), i.e. β_k^C = (1 − θ_k) β_k^HS + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the best direction we know, i.e. the Newton direction, while the pair (s_k, y_k) satisfies the modified secant condition given by Li et al. (J. Comput. Appl. Math. 202:523–539, 2007): B_{k+1} s_k = z_k, where z_k = y_k + (η_k / ‖s_k‖²) s_k, η_k = 2(f_k − f_{k+1}) + (g_k + g_{k+1})^T s_k, s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme that modifies the steplength α_k to improve the reduction of the function values along the iterations. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer. Algorithms 47:143–156, 2008), in which the pair (s_k, y_k) satisfies the classical secant condition B_{k+1} s_k = y_k, as well as some other conjugate gradient algorithms, including Hestenes–Stiefel, Dai–Yuan, Polak–Ribière–Polyak, Liu–Storey, hybrid Dai–Yuan, Gilbert–Nocedal, etc. A set of 75 unconstrained optimization problems with 10 different dimensions is used (Andrei, Adv. Model. Optim. 10:147–161, 2008).
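The two quantities central to the abstract can be sketched in a few lines of Python: the convex combination β_k^C built from the standard Hestenes–Stiefel and Dai–Yuan formulas, and the right-hand side z_k of the modified secant condition of Li et al. This is a minimal illustration, not the paper's algorithm: the denominators d_k^T y_k for β^HS and β^DY are the standard textbook choices, and θ_k is left as a free parameter in [0, 1] (the paper derives it from a Newton-direction condition that is not reproduced here).

```python
def dot(a, b):
    """Inner product of two vectors given as lists of floats."""
    return sum(x * y for x, y in zip(a, b))

def hybrid_beta(g_k, g_k1, d_k, theta_k):
    """Convex combination beta^C = (1 - theta_k)*beta^HS + theta_k*beta^DY.

    g_k, g_k1: gradients at x_k and x_{k+1}; d_k: current search direction.
    theta_k is a placeholder here; the paper computes it from a
    Newton-direction condition.
    """
    y_k = [b - a for a, b in zip(g_k, g_k1)]   # y_k = g_{k+1} - g_k
    denom = dot(d_k, y_k)                      # shared denominator d_k^T y_k
    beta_hs = dot(g_k1, y_k) / denom           # Hestenes-Stiefel
    beta_dy = dot(g_k1, g_k1) / denom          # Dai-Yuan
    return (1.0 - theta_k) * beta_hs + theta_k * beta_dy

def modified_secant_rhs(s_k, g_k, g_k1, f_k, f_k1):
    """z_k = y_k + (eta_k / ||s_k||^2) s_k with
    eta_k = 2(f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k (Li et al., 2007)."""
    y_k = [b - a for a, b in zip(g_k, g_k1)]
    eta_k = 2.0 * (f_k - f_k1) + dot([a + b for a, b in zip(g_k, g_k1)], s_k)
    scale = eta_k / dot(s_k, s_k)              # eta_k / ||s_k||^2
    return [y + scale * s for y, s in zip(y_k, s_k)]
```

The modified secant condition B_{k+1} s_k = z_k reduces to the classical one, B_{k+1} s_k = y_k, whenever η_k = 0, which is the sense in which the 2008 variant cited in the abstract is a special case.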
Keywords: