Although frequently encountered in many practical applications, singular nonlinear optimization has always been recognized as a difficult problem. Over the last decades, classical numerical techniques have been proposed to deal with the singular problem; however, the issues of numerical instability and high computational complexity have not yet found a satisfactory solution. In this paper, we consider the singular optimization problem with bound constraints on the variables rather than the common unconstrained model. A novel neural network model is proposed for solving singular convex optimization with bounded variables. Under the assumption of a rank-one defect, the original difficult problem is transformed into a nonsingular constrained optimization problem by enforcing a tensor term. Using the augmented Lagrangian method and a projection technique, the proposed continuous model is proven to converge to the solution of the singular optimization problem. Numerical simulations further confirm the effectiveness of the proposed neural network approach.
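As a rough illustration of the kind of continuous-time model the abstract describes, the sketch below implements a standard projection-type neural network (the ODE dx/dt = P_[l,u](x − ∇f(x)) − x) for a box-constrained convex quadratic whose Hessian has a rank-one defect. This is a generic projection network under assumed data (Q, b, and the box are made up for the example); the paper's tensor-term regularization and augmented Lagrangian construction are not reproduced here.

```python
# Minimal sketch of a projection neural network (continuous-time ODE)
# for box-constrained convex optimization: minimize f(x) s.t. l <= x <= u.
# Dynamics: dx/dt = P_[l,u](x - grad f(x)) - x.
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative convex quadratic with a rank-one-deficient (singular) Hessian:
# f(x) = 0.5 * x^T Q x - b^T x, Q positive semidefinite with one zero eigenvalue.
Q = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])   # zero eigenvalue -> singular problem
b = np.array([1.0, 1.0, 0.5])
l, u = -np.ones(3), np.ones(3)   # box constraints l <= x <= u

def grad_f(x):
    return Q @ x - b

def project(x):
    # Projection onto the box [l, u] is a componentwise clip
    return np.clip(x, l, u)

def dynamics(t, x):
    # dx/dt = P(x - grad f(x)) - x; equilibria satisfy x = P(x - grad f(x)),
    # i.e. the KKT conditions of the box-constrained problem
    return project(x - grad_f(x)) - x

sol = solve_ivp(dynamics, (0.0, 50.0), np.zeros(3), rtol=1e-8, atol=1e-10)
print("equilibrium (approx. minimizer):", sol.y[:, -1])
```

For this data the trajectory settles at (0.5, 1, 1): the singular direction is resolved by the bound constraint, which is precisely the situation where the unconstrained model would be ill-posed.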
In the present paper, singular perturbations for the higher-order scalar nonlinear boundary value problem
\[
\varepsilon^{2} y^{(n)} = f\bigl(t,\,\varepsilon y,\, y',\,\ldots,\, y^{(n-2)},\,\varepsilon y^{(n-1)}\bigr), \qquad t \in [0,1],
\]
\[
H_{1}\bigl(y(0,\varepsilon),\,\ldots,\, y^{(n-3)}(0,\varepsilon),\,\varepsilon y^{(n-2)}(0,\varepsilon),\,\varepsilon y^{(n-1)}(0,\varepsilon),\,\varepsilon\bigr) = 0,
\]
\[
H_{2}\bigl(y(0,\varepsilon),\, y^{(n-1)}(0,\varepsilon),\, y(1,\varepsilon),\,\ldots,\, y^{(n-1)}(1,\varepsilon),\,\varepsilon\bigr) = 0
\]
are studied, where \(\varepsilon > 0\) is a small parameter and \(n \ge 2\). Under some mild assumptions, we prove the existence and local uniqueness of the perturbed solution and derive uniformly valid asymptotic expansions up to its nth-order derivative by employing the Banach/Picard fixed-point theorem. These results extend and improve existing ones.
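The boundary-layer behavior driving this analysis can be seen in the simplest instance of the problem class. The hedged sketch below takes the assumed special case n = 2 with f(t, εy, y', εy') = y, i.e. ε²y'' = y with y(0) = 0, y(1) = 1, whose exact solution sinh(t/ε)/sinh(1/ε) has a layer of width O(ε) at t = 1; it is a numerical illustration only, not the paper's fixed-point construction.

```python
# Hedged illustration: the n = 2 model problem eps^2 y'' = y,
# y(0) = 0, y(1) = 1, solved numerically and compared with the
# exact solution and the leading-order boundary-layer term.
import numpy as np
from scipy.integrate import solve_bvp

eps = 0.1  # small parameter

def rhs(t, y):
    # First-order system: y[0]' = y[1], y[1]' = y[0] / eps^2
    return np.vstack([y[1], y[0] / eps**2])

def bc(ya, yb):
    # Boundary conditions y(0) = 0, y(1) = 1
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, 1.0, 201)
y_guess = np.zeros((2, t.size))          # trivial initial guess
sol = solve_bvp(rhs, bc, t, y_guess, tol=1e-8)

t_chk = np.linspace(0.0, 1.0, 5)
exact = np.sinh(t_chk / eps) / np.sinh(1.0 / eps)
layer = np.exp(-(1.0 - t_chk) / eps)     # leading-order layer term at t = 1
print("numeric:", sol.sol(t_chk)[0])
print("exact  :", exact)
print("layer  :", layer)
```

Away from t = 0 the numerical solution, the exact solution, and the single boundary-layer term agree to O(ε), which is exactly the structure the uniformly valid expansions in the abstract capture for general n.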