Optimization and Operations Research (e-book) by
Ritter, K. (editor)

Optimization and Operations Research e-book

436.85 DKK (incl. VAT 546.06 DKK)
Authors Ritter, K. (editor)
Publisher Springer
Published 6 December 2012
Genres Management decision making
Language English
Format PDF
Protection LCP
ISBN 9783642463297
The variable metric algorithm is widely recognised as one of the most efficient ways of solving the following problem: locate x*, a local minimum point of f(x), x ∈ R^n (1). Considerable attention has been given to the study of the convergence properties of this algorithm, especially for the case where analytic expressions are available for the derivatives g_i = ∂f/∂x_i, i = 1, ..., n (2). In particular we shall mention the results of Wolfe (1969) and Powell (1972), (1975). Wolfe established general conditions under which a descent algorithm will converge to a stationary point, and Powell showed that two particular very efficient algorithms that cannot be shown to satisfy Wolfe's conditions do in fact converge to the minimum of convex functions under certain conditions. These results will be stated more completely in Section 2. In most practical problems analytic expressions for the gradient vector g (Equ. 2) are not available and numerical derivatives are subject to truncation error. In Section 3 we shall consider the effects of these errors on Wolfe's convergence properties and will discuss possible modifications of the algorithms to make them reliable in these circumstances. The effects of rounding error are considered in Section 4, whilst in Section 5 these thoughts are extended to include the case of on-line function minimisation where each function evaluation is subject to random noise.
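As a rough illustration of the setting described in the blurb (not taken from the book itself), the sketch below pairs a forward-difference approximation of the gradient g_i = ∂f/∂x_i, the source of the truncation error mentioned above, with a BFGS-type variable metric update. The test function, the step size h, the tolerance, and the backtracking line-search constants are all illustrative assumptions.

import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation of g_i = df/dx_i (subject to truncation error)."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def variable_metric_minimise(f, x0, tol=1e-6, max_iter=200):
    """Locate a local minimiser x* of f via a BFGS update of the inverse Hessian H."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                       # initial metric: the identity
    g = fd_gradient(f, x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # search direction from the current metric
        # simple backtracking line search enforcing a sufficient-decrease condition
        alpha, fx = 1.0, f(x)
        while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * p
        g_new = fd_gradient(f, x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                  # BFGS update of the inverse Hessian approximation
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Illustrative convex test function with minimiser (1, -2).
    f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
    print(variable_metric_minimise(f, [5.0, 5.0]))

Replacing fd_gradient with analytic derivatives corresponds to the case of Equ. (2) above; keeping the finite differences illustrates why the truncation error studied in Section 3 of the book matters in practice.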