Abstract: In this paper we introduce a class of iterative methods for the solution of monotone variational inequalities. The method can be viewed as an extension of the Levenberg-Marquardt method for unconstrained optimization, or as a generalization of the Douglas-Rachford operator splitting method when applied to monotone variational inequalities. Each iteration of the method consists essentially of solving a system of nonlinear equations. The convergence proof for the presented method is very …
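Since the abstract is cut off in the source, a minimal sketch may help make the iteration concrete. The Python code below implements a generic proximal-point outer loop in which each step solves a regularized system of nonlinear equations by Newton's method, with a Levenberg-Marquardt-style term beta*I keeping the inner solves well conditioned. The operator F, its Jacobian JF, and all parameter values are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def proximal_point(F, JF, x0, beta=1.0, outer_iters=50, tol=1e-8):
    """Proximal-point iteration for the monotone equation F(x) = 0:
    each outer step solves the regularized system
        F(x) + beta * (x - x_k) = 0
    with a few Newton steps; the regularization term beta*I keeps the
    inner Jacobian well conditioned, as in Levenberg-Marquardt."""
    x = x0.astype(float)
    n = x.size
    for _ in range(outer_iters):
        xk = x.copy()
        for _ in range(20):  # inner Newton iterations
            g = F(x) + beta * (x - xk)
            if np.linalg.norm(g) < tol:
                break
            x = x - np.linalg.solve(JF(x) + beta * np.eye(n), g)
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Toy monotone example: F(x) = A x + b with A positive definite.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b
JF = lambda x: A
print(proximal_point(F, JF, np.zeros(2)))  # converges to (1, -1)
```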
Abstract: Mehrotra's recently suggested predictor-corrector variant of the primal-dual interior point method is currently the interior point method of choice for linear programming. In this work the authors give a predictor-corrector interior point algorithm for monotone variational inequality problems. The algorithm is proved to be equivalent to a level-1 perturbed composite Newton method. The computations in the algorithm do not require the initial iterate to be feasible. Numerical results from experiments are presented.
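For background, here is a compact sketch of the Mehrotra-style predictor-corrector step for a standard-form LP (min c^T x s.t. Ax = b, x >= 0) that the abstract takes as its starting point; it is not the authors' variational-inequality algorithm. The dense KKT solve, the step-length rule, and the cubic centering heuristic sigma = (mu_aff/mu)^3 follow the standard textbook scheme, and the starting point need not be feasible, matching the abstract's remark.

```python
import numpy as np

def mehrotra_lp(A, b, c, iters=50, tol=1e-8):
    """Sketch of Mehrotra's predictor-corrector interior point method
    for the standard-form LP:  min c@x  s.t.  A@x = b, x >= 0."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)

    def solve_newton(rc, rb, rxs):
        # Full (unreduced) KKT system in the variables (dx, dy, ds).
        K = np.zeros((2 * n + m, 2 * n + m))
        K[:n, n:n + m] = A.T
        K[:n, n + m:] = np.eye(n)
        K[n:n + m, :n] = A
        K[n + m:, :n] = np.diag(s)
        K[n + m:, n + m:] = np.diag(x)
        d = np.linalg.solve(K, np.concatenate([rc, rb, rxs]))
        return d[:n], d[n:n + m], d[n + m:]

    def step_length(v, dv):
        # Largest alpha in (0, 1] keeping v + alpha*dv >= 0 componentwise.
        neg = dv < 0
        return min(1.0, (-v[neg] / dv[neg]).min()) if neg.any() else 1.0

    for _ in range(iters):
        rc = c - A.T @ y - s          # dual residual
        rb = b - A @ x                # primal residual
        mu = x @ s / n                # duality measure
        if mu < tol and np.linalg.norm(rb) < tol and np.linalg.norm(rc) < tol:
            break
        # Predictor: pure affine-scaling direction.
        dxa, dya, dsa = solve_newton(rc, rb, -x * s)
        mu_aff = ((x + step_length(x, dxa) * dxa)
                  @ (s + step_length(s, dsa) * dsa)) / n
        sigma = (mu_aff / mu) ** 3    # adaptive centering parameter
        # Corrector: recenter and add the second-order term dxa*dsa.
        dx, dy, ds = solve_newton(rc, rb, sigma * mu - x * s - dxa * dsa)
        ap = 0.99 * step_length(x, dx)
        ad = 0.99 * step_length(s, ds)
        x, y, s = x + ap * dx, y + ad * dy, s + ad * ds
    return x, y, s

# Toy LP: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1, x >= 0; optimum x = (0, 1, 0).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x, y, s = mehrotra_lp(A, b, c)
print(np.round(x, 6))
```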
Funding: Project (No. 1027054) supported by the National Natural Science Foundation of China.
Abstract: Proximal point algorithms (PPA) are attractive methods for solving monotone variational inequalities (MVI). Since solving the subproblem exactly at each iteration is costly or sometimes impossible, various approximate versions of PPA (APPA) have been developed for practical applications. In this paper, we compare two APPA methods, both of which can be viewed as prediction-correction methods; the only difference is that they use different search directions in the correction step. By extending the general forward-backward splitting methods, we obtain Algorithm I; in the same way, Algorithm II is proposed by extending the general extra-gradient methods. Our analysis explains theoretically why Algorithm II usually outperforms Algorithm I. For practical computation, we consider a class of MVI with a special structure and choose the extended Algorithm II for implementation, inspired by the Gauss-Seidel iteration's idea of making full use of the latest iterate. In particular, self-adaptive techniques are adopted to adjust the relevant parameters for faster convergence. Finally, numerical experiments on the separable MVI are reported; the results show that the extended Algorithm II is feasible and easy to implement with relatively low computational load.
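To illustrate the flavor of the extra-gradient-based Algorithm II, the sketch below implements the classical extra-gradient method with a simple self-adaptive step-size rule: a prediction step followed by a correction step, with tau shrunk whenever tau*||F(x) - F(y)|| exceeds nu*||x - y||. The adaptation constants, the projection, and the toy problem are assumptions made for illustration; the paper's actual criteria and its Gauss-Seidel-style structured implementation differ.

```python
import numpy as np

def extragradient(F, project, x0, tau=1.0, nu=0.9, iters=1000, tol=1e-8):
    """Extra-gradient method for the MVI: find x* in C with
    (x - x*)^T F(x*) >= 0 for all x in C.  A prediction step y_k is
    followed by a correction step, and the step size tau is adapted
    so that tau*||F(x)-F(y)|| <= nu*||x-y||."""
    x = x0.astype(float)
    for _ in range(iters):
        while True:
            y = project(x - tau * F(x))            # prediction step
            if np.allclose(x, y, atol=tol):
                return y                           # fixed point solves the MVI
            r = tau * np.linalg.norm(F(x) - F(y)) / np.linalg.norm(x - y)
            if r <= nu:
                break
            tau *= 0.7 / max(r, 1.0)               # shrink tau and retry
        x = project(x - tau * F(y))                # correction step
        if r < 0.3:
            tau /= 0.9                             # enlarge tau if too cautious
    return x

# Toy example: affine monotone operator over the nonnegative orthant.
M = np.array([[4.0, 1.0], [-1.0, 4.0]])           # positive definite symmetric part
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q
project = lambda z: np.maximum(z, 0.0)            # projection onto C = R^n_+
print(extragradient(F, project, np.zeros(2)))     # converges to (2/17, 9/17)
```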