Abstract
A Monte Carlo procedure for efficiently estimating the gradient of a generic function in high dimensions is presented. It is shown that, by adopting an orthogonal linear transformation, a new coordinate system can be identified in which a relatively small subset of the variables accounts for most of the variation of the gradient. This property is then exploited in gradient-based algorithms to reduce the computational effort of gradient evaluation in high dimensions. Working in the transformed space, only a few function evaluations, i.e. considerably fewer than the dimensionality of the problem, are required. The procedure is simple and can be used with any gradient-based method. A number of examples are presented to show the accuracy and efficiency of the proposed approach and its applicability to optimization problems solved with well-known gradient-based algorithms such as gradient descent, quasi-Newton methods and Sequential Quadratic Programming.
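The following is a minimal sketch of the general idea described in the abstract, not the paper's exact algorithm: the gradient is approximated from finite-difference directional derivatives taken along a small number of orthonormal directions (far fewer than the problem dimension), which stand in for the few important axes of the identified orthogonal transformation. The function and variable names (`estimate_gradient`, the toy quadratic `f`, the choice of directions) are illustrative assumptions.

```python
# Illustrative sketch (assumed names, not the paper's implementation):
# approximate grad f(x) in dimension n using only m + 1 function evaluations,
# with m << n, by projecting onto m orthonormal directions.
import numpy as np

def estimate_gradient(f, x, directions, h=1e-6):
    """Approximate grad f(x) restricted to span(directions).

    directions: (n, m) matrix with orthonormal columns, m << n.
    Uses m + 1 evaluations of f (one at x, one per direction).
    """
    f0 = f(x)
    # Forward-difference directional derivative along each column.
    d = np.array([(f(x + h * directions[:, i]) - f0) / h
                  for i in range(directions.shape[1])])
    # Recombine the sampled directional derivatives into a gradient estimate.
    return directions @ d

if __name__ == "__main__":
    n, m = 1000, 20                     # problem dimension vs. evaluations used
    rng = np.random.default_rng(0)
    a = rng.normal(size=n)
    a[m:] *= 1e-3                       # gradient variation concentrated in few axes
    f = lambda x: np.sum(a * x**2)      # toy quadratic, true gradient = 2 a x
    x = rng.normal(size=n)

    # Stand-in for the few important axes of the orthogonal transformation;
    # in the paper's setting these would be identified by the Monte Carlo procedure.
    directions = np.eye(n)[:, :m]

    g_hat = estimate_gradient(f, x, directions)
    g_true = 2 * a * x
    print("relative error:",
          np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```

In this toy setting the relative error is small even though only 21 function evaluations are used in a 1000-dimensional space, because the directions that matter have been aligned with the first few coordinates; this is the effect the orthogonal transformation is meant to produce for a generic function.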
| Original language | English |
| --- | --- |
| Pages (from-to) | 172-188 |
| Number of pages | 17 |
| Journal | International Journal for Numerical Methods in Engineering |
| Volume | 81 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 8 Jan 2010 |
Keywords
- gradient estimation
- high dimensions
- Monte Carlo simulation
- optimization techniques