Funding: the National 973 Project (Grant No. 2002cb312205) and the National Natural Science Foundation of China (Grant No. 60574077).
Abstract: This paper develops the dynamic feedback neural network model for solving convex nonlinear programming problems proposed by Leung et al., and introduces subgradient-based dynamic feedback neural networks for solving non-differentiable convex optimization problems. For the unconstrained non-differentiable convex optimization problem, under the assumption that the objective function is convex and coercive, we prove that, for an arbitrarily given initial value, the trajectory of the feedback neural network constructed from a projection subgradient converges to an asymptotically stable equilibrium point, which is also an optimal solution of the primal unconstrained problem. For the constrained non-differentiable convex optimization problem, under the assumption that the objective function is convex and coercive and that the constraint functions are also convex, a sequence of energy functions and the corresponding dynamic feedback sub-neural-network models based on a projection subgradient are constructed successively; a convergence theorem is then obtained and a stopping condition is given. Furthermore, effective algorithms are designed and simulation experiments are presented.
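The unconstrained case described above can be illustrated with a minimal sketch: an Euler discretization of the subgradient flow dx/dt = -g(x), where g(x) is a subgradient of the objective. The test objective f(x) = |x1| + |x2 - 1| (convex and coercive, non-differentiable at the minimizer) and all step-size choices are illustrative assumptions, not the paper's model or parameters.

```python
import numpy as np

def subgrad_f(x):
    # A subgradient of f(x) = |x[0]| + |x[1] - 1|, a coercive convex
    # non-differentiable test objective (an example choice, not the
    # objective used in the paper). np.sign returns a valid subgradient
    # element at every point, including 0 at the kink.
    return np.array([np.sign(x[0]), np.sign(x[1] - 1.0)])

def subgradient_flow(x0, step=1e-3, iters=20000):
    # Forward-Euler discretization of the subgradient dynamics
    # dx/dt = -g(x); the trajectory drifts toward the minimizer (0, 1).
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x -= step * subgrad_f(x)
    return x

x_star = subgradient_flow([3.0, -2.0])
```

With a fixed small step the discrete trajectory reaches a neighborhood of the minimizer of radius on the order of the step size, mirroring the continuous-time convergence claim for this toy instance.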
Abstract: In recent years, deep learning (DL) has increasingly been applied in communication scenarios, including digital predistortion (DPD) for radio-frequency transmitters. However, because of the inherent nonlinear distortion and memory effects of radio-frequency power amplifiers (PAs), directly applying conventional DL algorithms to DPD yields poor fitting and weak adaptivity. To address this problem, this paper proposes a digital predistorter implemented with a Multi-Agent Feedback Enabled Neural Network (MAFENN-DPD). The network introduces a feedback-agent structure with strong error-correction capability; its key feature is the use of Stackelberg game theory to accelerate network training and convergence, and information-bottleneck theory is further applied to guide hyperparameter design, strengthening MAFENN-DPD's dynamic adaptability to changes in PA memory effects. A series of experiments validates the effectiveness of MAFENN-DPD. Compared with a DPD scheme implemented with a typical feedforward network, the MAFENN-DPD scheme improves the adjacent channel power ratio (ACPR) by about 5 dB. Moreover, without extensive prior knowledge of the communication process, MAFENN-DPD achieves ACPR performance very close to that of a DPD scheme modeled with the memory-polynomial method. The simulation results show that MAFENN-DPD further improves ACPR over conventional neural networks, that it offers better adaptive modeling capability and generality than the memory-polynomial method, and that neural networks with a multi-agent feedback structure have the potential to be extended to other communication scenarios.
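The memory-polynomial baseline mentioned above can be sketched in a few lines: build a basis of delayed signal copies weighted by envelope powers, then identify a postinverse of the PA by least squares (indirect learning) and use it as the predistorter. The real-valued toy PA model, the polynomial orders, and the signal are all illustrative assumptions; they stand in for, and are far simpler than, the paper's complex-baseband setup.

```python
import numpy as np

def mp_basis(x, K=5, M=2):
    # Memory-polynomial basis: columns x[n-m] * |x[n-m]|^(k-1)
    # for delays m = 0..M and nonlinearity orders k = 1..K
    # (real-valued here for brevity; DPD normally uses complex baseband).
    N = len(x)
    cols = []
    for m in range(M + 1):
        xd = np.concatenate([np.zeros(m), x[:N - m]])
        for k in range(1, K + 1):
            cols.append(xd * np.abs(xd) ** (k - 1))
    return np.column_stack(cols)

def pa(x):
    # Toy memoryless PA with odd-order compression (assumption,
    # not the PA model used in the paper).
    return x - 0.15 * x * np.abs(x) ** 2

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 4000)
y = pa(x)

# Indirect learning: fit a postinverse mapping PA output back to the
# input, then reuse its coefficients as the predistorter.
c, *_ = np.linalg.lstsq(mp_basis(y), x, rcond=None)

x_pd = mp_basis(x) @ c  # predistorted input
lin_err_raw = np.linalg.norm(y - x) / np.linalg.norm(x)
lin_err_dpd = np.linalg.norm(pa(x_pd) - x) / np.linalg.norm(x)
```

The linearization error of the predistorted chain should drop well below that of the raw PA, which is the behavior MAFENN-DPD is compared against in the abstract.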