Abstract: In this paper, a modified version of the classical Lagrange multiplier method is developed for convex quadratic optimization problems. The method, which evolves from the first-order derivative test for optimality of the Lagrangian function with respect to the primary variables of the problem, decomposes the solution process into two independent stages: the primary variables are solved for first, and the secondary variables, the Lagrange multipliers, are solved for afterward. This innovation reduces the problem to two simpler systems of equations that can be solved independently, one involving only the primary variables and the other only the secondary ones. Solutions obtained for small-sized problems, as a preliminary test of the method, demonstrate that the new method is generally effective in producing the required solutions.
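As a concrete illustration of the two-stage idea described in this abstract, the sketch below solves an equality-constrained convex QP by first eliminating the constraints with a null-space basis (primary variables only) and then recovering the multipliers from the stationarity condition. This is a generic realization of the primary-then-secondary decomposition, not necessarily the authors' exact construction; the function name and the toy data are illustrative.

```python
import numpy as np

def solve_eq_qp_two_stage(Q, c, A, b):
    """Minimize 0.5*x'Qx + c'x subject to Ax = b in two independent stages:
    stage 1 determines the primary variables x without the multipliers,
    stage 2 recovers the multipliers lam from stationarity Qx + c + A'lam = 0."""
    m, n = A.shape
    # particular solution of Ax = b and an orthonormal basis Z of null(A)
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]
    _, _, Vt = np.linalg.svd(A)
    Z = Vt[m:].T                          # assumes A has full row rank
    # stage 1: reduced system in the primary variables only
    y = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ (Q @ x_p + c))
    x = x_p + Z @ y
    # stage 2: multipliers from A'lam = -(Qx + c)
    lam = np.linalg.lstsq(A.T, -(Q @ x + c), rcond=None)[0]
    return x, lam

# tiny test: min 0.5*||x||^2 + x1 - x3  subject to  x1 + x2 + x3 = 1
Q, c = np.eye(3), np.array([1.0, 0.0, -1.0])
A, b = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])
x, lam = solve_eq_qp_two_stage(Q, c, A, b)
print(x, lam)                            # expected x = (-2/3, 1/3, 4/3), lam = -1/3
print(Q @ x + c + A.T @ lam, A @ x - b)  # KKT residuals ~ 0
```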
Funding: Supported by the National Natural Science Foundation of China (Nos. 12101572, 12371256), the 2023 Shanxi Province Graduate Innovation Project (No. 2023KY614), and the 19th Graduate Science and Technology Project of North University of China (No. 20231943).
Abstract: In this paper, we mainly focus on proving the existence of lump solutions to a generalized (3+1)-dimensional nonlinear differential equation. Hirota's bilinear method and a quadratic function method are employed to derive the lump solutions, localized in the whole plane, for a (3+1)-dimensional nonlinear differential equation. Three examples of such a nonlinear equation are presented to investigate the exact expressions of the lump solutions. Moreover, the 3D plots and corresponding density plots of the solutions are given to show the spatial structures of the lump waves. In addition, the breather-wave solutions and several interaction solutions of the (3+1)-dimensional nonlinear differential equation are obtained and their dynamics are analyzed.
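To make the quadratic-function ansatz concrete, the sketch below builds the standard form f = g^2 + h^2 + a9 with g and h linear in (x, y, z, t), sets u = 2 (ln f)_xx, and produces a density plot of the resulting lump-type profile. The parameter values are purely illustrative, and the code does not verify that u solves any particular (3+1)-dimensional equation; it only illustrates the ansatz and the kind of plots referred to in the abstract.

```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt

# quadratic-function ansatz commonly paired with Hirota's bilinear form
x, y, z, t = sp.symbols('x y z t')
a = sp.symbols('a1:10')                       # a1..a9: free (illustrative) parameters
g = a[0]*x + a[1]*y + a[2]*z + a[3]*t
h = a[4]*x + a[5]*y + a[6]*z + a[7]*t
f = g**2 + h**2 + a[8]                        # positive quadratic function
u = 2*sp.diff(sp.log(f), x, 2)                # lump-type field u = 2*(ln f)_xx

# pick illustrative parameter values and visualize the profile at z = t = 0
vals = {a[0]: 1, a[1]: 1, a[2]: 0.5, a[3]: -1,
        a[4]: -1, a[5]: 2, a[6]: 0.3, a[7]: 1, a[8]: 5}
u_num = sp.lambdify((x, y), u.subs(vals).subs({z: 0, t: 0}), 'numpy')

X, Y = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
plt.contourf(X, Y, u_num(X, Y), 50)
plt.colorbar(); plt.xlabel('x'); plt.ylabel('y')
plt.title('Density plot of the quadratic-function (lump-type) ansatz')
plt.show()
```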
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, making these predictions both timely and accurate remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics by measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is then performed by means of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve the nonlinear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity, and specificity higher by 3%, 3%, 2%, and 3% and time and space lower by 13% and 15% compared with two state-of-the-art methods.
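The overall pipeline (metric screening followed by classification, with Nelder–Mead handling a nonlinear least-squares subproblem) can be sketched on toy data as follows. The Dice-based screening and the sigmoid ("softstep-like") predictor are simplified stand-ins for the paper's Torgerson–Gower scaling and deep network; the data, thresholds, and the number of retained metrics are all synthetic and illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# toy data: 200 modules, 8 candidate software metrics, binary defect labels
X = rng.normal(size=(200, 8))
ylab = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(float)

# step 1: Dice-coefficient screening of metrics (illustrative stand-in for the
# paper's similarity-based feature selection)
def dice(u, v):
    return 2 * np.sum(u * v) / (np.sum(u) + np.sum(v) + 1e-12)

binarized = (X > np.median(X, axis=0)).astype(float)
scores = np.array([dice(binarized[:, j], ylab) for j in range(X.shape[1])])
selected = np.argsort(scores)[-3:]            # keep the 3 most label-similar metrics

# step 2: fit a small sigmoid predictor on the selected metrics by nonlinear
# least squares, minimized with the Nelder-Mead simplex method
def residual_sse(w):
    logits = X[:, selected] @ w[:-1] + w[-1]
    p = 1.0 / (1.0 + np.exp(-logits))          # softstep-like activation
    return np.sum((p - ylab) ** 2)

res = minimize(residual_sse, x0=np.zeros(len(selected) + 1), method='Nelder-Mead')
pred = 1.0 / (1.0 + np.exp(-(X[:, selected] @ res.x[:-1] + res.x[-1]))) > 0.5
print('training accuracy:', np.mean(pred == ylab.astype(bool)))
```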
Funding: This work was partially supported by the Doctoral Foundation of Hebei University (Grant No. Y2006084) and the National Natural Science Foundation of China (Grant No. 10231060).
Abstract: In this paper, we investigate quadratic approximation methods. After studying the basic idea of simplex methods, we construct several new search directions by combining the local information progressively obtained during the iterations of the algorithm to form new subspaces, and the quadratic model is solved in these new subspaces. The motivation is to use the information disclosed by the former steps to construct more promising directions. For most tested problems, the number of function evaluations is reduced noticeably by our algorithms.
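A bare-bones version of the idea of minimizing a quadratic model inside a subspace built from earlier steps is sketched below: sample the objective on a small stencil inside the subspace, fit q(s) = c + g's + 0.5*s'Hs by least squares, and move to the model minimizer. This is only a schematic of the subspace/quadratic-model mechanism, without the safeguards (trust region, direction management) a real derivative-free method needs; the objective, step size h, and direction choices are illustrative.

```python
import numpy as np

def quad_model_step(f, x, directions, h=1e-2):
    """Fit a quadratic model of f restricted to span(directions) by sampling,
    then return the model minimizer mapped back to the full space."""
    V = np.linalg.qr(np.array(directions).T)[0]    # orthonormal basis of the subspace
    k = V.shape[1]
    # small stencil in subspace coordinates: center, +/- h*e_i, and pairwise points
    S = [np.zeros(k)]
    for i in range(k):
        e = np.zeros(k); e[i] = h
        S += [e, -e]
    for i in range(k):
        for j in range(i + 1, k):
            e = np.zeros(k); e[i] = h; e[j] = h
            S.append(e)
    S = np.array(S)
    fvals = np.array([f(x + V @ s) for s in S])
    # least-squares fit of q(s) = c + g.s + 0.5*s.H.s
    def feats(s):
        quad = [s[i] * s[j] * (0.5 if i == j else 1.0)
                for i in range(k) for j in range(i, k)]
        return np.concatenate(([1.0], s, quad))
    coef, *_ = np.linalg.lstsq(np.array([feats(s) for s in S]), fvals, rcond=None)
    g = coef[1:1 + k]
    H = np.zeros((k, k)); idx = 1 + k
    for i in range(k):
        for j in range(i, k):
            H[i, j] = H[j, i] = coef[idx]; idx += 1
    s_star = np.linalg.solve(H + 1e-8 * np.eye(k), -g)   # model minimizer
    return x + V @ s_star

def f(v):                                  # toy objective for the demonstration
    return (v[0] - 1)**2 + 10 * (v[1] - 2)**2 + (v[2] + 1)**2

x0 = np.zeros(3)
x1 = quad_model_step(f, x0, [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])])
print(x1, f(x1))    # subspace model is exact here, so the step lands at (1, 2, 0)
```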
Funding: Supported by the National Natural Science Foundation of China under Project No. 10771026 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry of China.
Abstract: We consider an inverse quadratic programming (IQP) problem in which the parameters in the objective function of a given quadratic programming (QP) problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. This problem can be formulated as a minimization problem with a positive semidefinite cone constraint, and its dual (denoted IQD(A, b)) is a semismoothly differentiable (SC^1) convex programming problem with fewer variables than the original one. In this paper, a smoothing Newton method is used to compute a Karush–Kuhn–Tucker point of IQD(A, b). The proposed method needs to solve only one linear system per iteration and achieves quadratic convergence. Numerical experiments are reported to show that the smoothing Newton method is effective for solving this class of inverse quadratic programming problems.
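The smoothing-Newton mechanism itself is easy to illustrate on a much smaller problem than IQD(A, b). The sketch below applies it to a toy linear complementarity problem using the smoothed Fischer–Burmeister function phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2): one linear system is solved per iteration while mu is driven to zero. This shows only the generic idea; the paper's method works on a semidefinite-cone reformulation and has its own globalization, and the data below are made up.

```python
import numpy as np

def smoothing_newton_lcp(M, q, mu=1e-2, iters=30):
    """Solve the LCP  x >= 0, Mx + q >= 0, x'(Mx + q) = 0  with Newton's method
    on the smoothed Fischer-Burmeister system Phi_mu(x) = 0."""
    n = len(q)
    x = np.ones(n)
    for _ in range(iters):
        Fx = M @ x + q
        r = np.sqrt(x**2 + Fx**2 + 2 * mu**2)
        Phi = x + Fx - r
        if np.linalg.norm(Phi) < 1e-10:
            break
        Da = np.diag(1 - x / r)              # d phi / d x_i
        Db = np.diag(1 - Fx / r)             # d phi / d F_i
        J = Da + Db @ M                      # Jacobian of Phi(x)
        x = x + np.linalg.solve(J, -Phi)     # one linear system per iteration
        mu *= 0.1                            # drive the smoothing parameter to zero
    return x

# toy LCP with a positive definite matrix
M = np.array([[4.0, 1.0], [1.0, 3.0]]); q = np.array([-1.0, -2.0])
x = smoothing_newton_lcp(M, q)
print(x, M @ x + q)    # x ~ (1/11, 7/11); both vectors nonnegative and complementary
```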
Funding: Supported by the Key Project on Science and Technology of the Hubei Provincial Department of Education (D20103001).
Abstract: In this paper, following the method of replacing the lower-level problem with its Kuhn–Tucker optimality conditions, we transform the nonlinear bilevel programming problem into an ordinary nonlinear programming problem with a complementary slackness constraint. Then, we obtain the penalized problem of this nonlinear programming problem by appending the complementary slackness condition to the upper-level objective with a penalty. We prove that this penalty function is exact and that the penalized problem and the nonlinear bilevel programming problem have the same global optimal solution set. Finally, we propose an algorithm for the nonlinear bilevel programming problem. The numerical results show that the algorithm is feasible and efficient.
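The KKT-replacement-plus-penalty construction can be seen on a toy bilevel instance: the lower level is replaced by its stationarity, feasibility, and sign conditions, and the complementary slackness product is moved into the upper-level objective with a penalty weight. The instance, the penalty weight rho, and the use of a general-purpose SLSQP solver below are all illustrative; this is not the algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# toy bilevel problem:
#   upper:  min_{x,y}  (x - 1)^2 + (y - 1)^2
#   lower:  y solves  min_y 0.5*(y - x)^2   s.t.  -y <= 0
# Lower-level KKT: (y - x) - lam = 0, y >= 0, lam >= 0, lam*y = 0.
rho = 10.0                                   # illustrative penalty weight

def penalized_upper(v):
    x, y, lam = v
    return (x - 1)**2 + (y - 1)**2 + rho * lam * y    # complementarity penalized

cons = [
    {'type': 'eq',   'fun': lambda v: (v[1] - v[0]) - v[2]},   # lower-level stationarity
    {'type': 'ineq', 'fun': lambda v: v[1]},                   # y >= 0
    {'type': 'ineq', 'fun': lambda v: v[2]},                   # lam >= 0
]
res = minimize(penalized_upper, x0=np.zeros(3), method='SLSQP', constraints=cons)
print(res.x)   # expected near (1, 1, 0): the lower-level constraint is inactive
```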
Abstract: We present a direct analytical algorithm for solving transportation problems with quadratic cost functions. The algorithm uses the concept of absolute points developed by the authors in earlier works. The versatility of the proposed algorithm is evidenced by the fact that quadratic functions are often used as approximations for other functions, as in, for example, regression analysis. Compared with earlier methods for the quadratic transportation problem (QTP), which are based on the Lagrangian relaxation approach, the proposed algorithm helps in understanding the structure of the QTP better and can guide managerial decisions. We present a numerical example to illustrate the application of the proposed method.
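For readers who want a small quadratic transportation instance to experiment with, the sketch below states a balanced 2x3 QTP and solves it with a general-purpose constrained solver. This is only a reference computation on made-up data, not the direct "absolute points" algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

supply = np.array([20.0, 30.0])
demand = np.array([10.0, 25.0, 15.0])
a = np.array([[2.0, 3.0, 1.0],      # linear cost coefficients
              [5.0, 4.0, 2.0]])
b = np.array([[0.10, 0.05, 0.08],   # quadratic cost coefficients
              [0.04, 0.07, 0.06]])

def cost(xflat):
    x = xflat.reshape(2, 3)
    return np.sum(a * x + b * x**2)   # quadratic shipping cost on each route

# supply (row-sum) and demand (column-sum) equality constraints
cons = [{'type': 'eq', 'fun': lambda v, i=i: v.reshape(2, 3)[i].sum() - supply[i]}
        for i in range(2)]
cons += [{'type': 'eq', 'fun': lambda v, j=j: v.reshape(2, 3)[:, j].sum() - demand[j]}
         for j in range(3)]

res = minimize(cost, x0=np.full(6, 8.0), method='SLSQP',
               bounds=[(0, None)] * 6, constraints=cons)
print(res.x.reshape(2, 3), cost(res.x))
```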
Funding: Supported by the National Basic Research Program (No. 2005CB321702), the National Outstanding Young Scientist Foundation (No. 10525102), the Specialized Research Grant for the Higher Educational Doctoral Program (Nos. 20090211120011 and LZULL200909), Hong Kong RGC grants, and HKBU FRGs.
Abstract: Image restoration is often solved by minimizing an energy function consisting of a data-fidelity term and a regularization term. A convex regularization term can usually preserve the image edges well in the restored image. In this paper, we consider a class of convex and edge-preserving regularization functions, i.e., multiplicative half-quadratic regularizations, and we use the Newton method to solve the corresponding reduced systems of nonlinear equations. At each Newton iteration, the preconditioned conjugate gradient method, incorporated with a constraint preconditioner, is employed to solve the structured Newton equation, which has a symmetric positive definite coefficient matrix. Eigenvalue bounds for the preconditioned matrix are carefully derived and can be used to estimate the convergence speed of the preconditioned conjugate gradient method. Experimental results demonstrate that the new approach is efficient and that the restoration quality is reasonably good.
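The Newton-plus-preconditioned-CG structure can be illustrated on a one-dimensional restoration model with the convex edge-preserving regularizer phi(t) = sqrt(1 + t^2) - 1. The sketch below uses an identity blur and a simple Jacobi preconditioner rather than the constraint preconditioner analyzed in the paper; the signal, noise level, and parameters are illustrative.

```python
import numpy as np
from scipy.sparse import eye, diags
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n, lam = 200, 2.0
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)        # piecewise-constant signal
y = clean + 0.1 * rng.normal(size=n)                     # noisy observation

# forward-difference operator D for the model
#   min_x 0.5*||x - y||^2 + lam * sum_i phi((Dx)_i),  phi(t) = sqrt(1 + t^2) - 1
D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

x = y.copy()
for _ in range(15):
    t = D @ x
    grad = (x - y) + lam * (D.T @ (t / np.sqrt(1 + t**2)))   # gradient of the energy
    if np.linalg.norm(grad) < 1e-8:
        break
    W = diags((1 + t**2) ** (-1.5))                          # phi''(t) > 0, so H is SPD
    H = eye(n) + lam * (D.T @ W @ D)                         # Newton (Hessian) matrix
    dH = H.diagonal()
    M = LinearOperator((n, n), matvec=lambda v: v / dH)      # Jacobi preconditioner
    d, _ = cg(H, -grad, M=M)                                 # preconditioned CG solve
    x = x + d

print('restored-vs-clean RMS error:', np.linalg.norm(x - clean) / np.sqrt(n))
```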
Abstract: Artificial neural network techniques have been introduced into the area of optimization in the past decade. Several neural network models have been suggested for solving linear and quadratic programming problems; the Kennedy–Chua model [5] is one of these networks. In this paper, results on the convergence of the model are obtained. Another related problem is how to choose a parameter value s so that the equilibrium point of the network properly approximates the original solution. Such an estimate of the parameter is given in closed form when the network is used to solve linear programming problems.
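A schematic, penalty-type gradient flow in the spirit of the Kennedy–Chua network is sketched below for a small inequality-constrained QP. The dynamics are integrated numerically and the equilibrium is compared for increasing values of the penalty parameter s, echoing the abstract's point that s controls how closely the equilibrium approximates the true solution. The exact circuit equations of the model in [5] are not reproduced here; the QP data are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# QP:  min 0.5*x'Qx + c'x  s.t.  Ax <= b ;  the exact solution is x* = (0.25, 1.75)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])

def flow(t, x, s):
    # penalty-type gradient flow: dx/dt = -( Qx + c + s * A' * max(Ax - b, 0) )
    viol = np.maximum(A @ x - b, 0.0)
    return -(Q @ x + c + s * (A.T @ viol))

for s in (10.0, 100.0, 1000.0):
    sol = solve_ivp(flow, (0.0, 20.0), y0=np.zeros(2), args=(s,), rtol=1e-8)
    print(f's = {s:7.1f}   equilibrium ~ {sol.y[:, -1]}')   # approaches x* as s grows
```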