Abstract:
In this thesis, we study a derivative-free trust-region algorithm for large-scale unconstrained optimization that uses the symmetric rank-one (SR1) formula to update the Hessian approximation at every iteration. Central finite differences are used to approximate the gradient of the objective function. Two methods are used to solve the trust-region subproblem: an iterative solution method and a truncated Newton method. The algorithm's performance is tested on a set of problems, and the solutions found by the truncated Newton method and the iterative solution method are compared.
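The two building blocks named above, central finite-difference gradients and the SR1 Hessian update, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation; the function names, the step size `h`, and the skip tolerance `r` are assumptions chosen for the sketch.

```python
import numpy as np

def central_diff_grad(f, x, h=1e-5):
    """Approximate the gradient of f at x with central differences:
    g_i ~ (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def sr1_update(B, s, y, r=1e-8):
    """SR1 update of the Hessian approximation B, given step s and
    gradient difference y.  The update is skipped when the denominator
    (y - B s)^T s is too small, a standard safeguard for SR1."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # skip the update to keep B well defined
    return B + np.outer(v, v) / denom
```

On a quadratic objective the updated matrix satisfies the secant condition `B_new @ s == y` exactly, which is the defining property of the SR1 formula.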