Funding: Supported by the National Basic Research Program of China ("973" Project) (Grant Nos. 2014CB046904 and 2014CB047101) and the National Natural Science Foundation of China (Grant Nos. 51479191 and 51509242).
Abstract: The discontinuous deformation analysis (DDA) method is a relatively new discrete element method that employs an implicit time-integration scheme to solve the governing equations and the open-close iteration (OCI) method to handle contact problems; as a result, its computational efficiency is relatively low. In contrast, spherical-element-based discontinuous deformation analysis (SDDA), which uses only a very simple contact type (point-to-point contact), achieves higher calculation speed. Within the framework of SDDA, this paper presents a very simple contact calculation approach that removes the OCI scheme and instead adopts the maximal displacement increment (MDI). Verification examples show that the proposed method is correct and effective and that it achieves higher computational efficiency.
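As a rough illustration of the idea, the Python sketch below implements a penalty-type point-to-point contact between spheres and caps the explicit time step so that no sphere moves farther than a prescribed maximal displacement increment per step. The function names, the penalty stiffness kn, and the explicit update scheme are illustrative assumptions, not the paper's exact SDDA formulation.

import numpy as np

def contact_forces(pos, radii, kn=1e6):
    """Normal penalty forces for all overlapping sphere pairs (point-to-point contact)."""
    n = len(radii)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0 and dist > 0.0:
                f = kn * overlap * (d / dist)   # push overlapping spheres apart
                forces[i] -= f
                forces[j] += f
    return forces

def step(pos, vel, radii, masses, dt_max=1e-3, mdi=1e-4, gravity=(0.0, 0.0, -9.81)):
    """One explicit step whose size is capped so no sphere moves more than mdi."""
    acc = contact_forces(pos, radii) / masses[:, None] + np.asarray(gravity)
    vmax = np.max(np.linalg.norm(vel, axis=1)) + 1e-12
    dt = min(dt_max, mdi / vmax)                # MDI-limited time step, no open-close iteration
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel, dt

Because the admissible displacement per step is bounded in advance, contacts cannot jump from fully open to deeply penetrated within a single step, which is what makes dropping the open-close iteration plausible in this simplified setting.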
Funding: Supported by the National Natural Science Foundation of China (No. 91848202) and the Special Foundation (Pre-Station) of China Postdoctoral Science (No. 2021TQ0089).
Abstract: Bolt assembly by robots is a vital and difficult task for replacing astronauts in extravehicular activities (EVA), but trajectory efficiency still needs to be improved when the wrench is inserted into the hex hole of the bolt. In this paper, a policy iteration method based on reinforcement learning (RL) is proposed, which formulates trajectory efficiency improvement as an RL-based objective optimization problem. First, the projection relation between raw data and the state-action space is established, and a policy iteration initialization method is then designed based on this projection to provide the initial policy for iteration. Policy iteration based on a protective policy is applied to continuously evaluate and optimize the action-value function of all state-action pairs until convergence is obtained. To verify the feasibility and effectiveness of the proposed method, a non-contact demonstration experiment with human supervision is performed. Experimental results show that the initialization policy and the generated policy can be obtained by the policy iteration method in a limited number of demonstrations. A comparison between experiments with two different assembly tolerances shows that the converged generated policy achieves higher trajectory efficiency than the conservative one. In addition, the method ensures safety during the training process and improves the utilization efficiency of demonstration data.
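The sketch below shows generic tabular policy iteration (alternating policy evaluation and greedy improvement over a finite state-action space) as a baseline illustration of the iteration the abstract builds on. The transition model P, reward table R, and the demonstration-derived initial policy are assumptions, and the paper's protective-policy safeguards are not reproduced here.

import numpy as np

def policy_iteration(P, R, gamma=0.95, init_policy=None, tol=1e-6):
    """P[s][a] -> list of (prob, next_state); R[s, a] -> immediate reward."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int) if init_policy is None else init_policy.copy()
    while True:
        # Policy evaluation: iterate the Bellman expectation equation for the current policy.
        V = np.zeros(n_states)
        while True:
            V_new = np.array([
                R[s, policy[s]] + gamma * sum(p * V[s2] for p, s2 in P[s][policy[s]])
                for s in range(n_states)
            ])
            if np.max(np.abs(V_new - V)) < tol:
                V = V_new
                break
            V = V_new
        # Policy improvement: act greedily with respect to the action values.
        Q = np.array([[R[s, a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                       for a in range(n_actions)] for s in range(n_states)])
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, Q        # converged policy and action-value table
        policy = new_policy

In the paper's setting, the initial policy would come from the projection of demonstration data onto the state-action space rather than from the zero policy used by default above.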
Abstract: This paper discusses a self-modified iterative method, a new method for simultaneously finding all roots of a nonlinear algebraic equation. Convergence and a higher-order convergence rate are established. The results of the efficiency analysis and of a numerical example are satisfactory.
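For orientation, the sketch below shows the classical Weierstrass (Durand-Kerner) iteration, a well-known scheme that approximates all roots of a polynomial simultaneously; it is given only as a representative of this class of methods and is not the paper's self-modified iteration.

import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Approximate all roots of the polynomial with given coefficients (highest degree first)."""
    coeffs = np.asarray(coeffs, dtype=complex) / coeffs[0]   # normalize to a monic polynomial
    n = len(coeffs) - 1
    # Standard starting values: points spread on a spiral in the complex plane.
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(max_iter):
        p = np.polyval(coeffs, z)
        # Each correction divides by the product of differences to all other approximations.
        denom = np.array([np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        z_new = z - p / denom
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z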
Abstract: Parameter optimization of node communication is the foundation of underwater sensor networks. Packet size is an important factor affecting communication performance, so optimal packet size selection is a critical issue in improving that performance. Because underwater sensor networks are characterized by high time delay, high energy consumption, and high bit error rate, this paper builds a model reflecting these communication characteristics and uses it as the optimization target. Finally, simulation experiments and theoretical analysis demonstrate the effectiveness and timeliness of the simultaneous perturbation stochastic approximation (SPSA) algorithm.
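A minimal sketch of the basic SPSA update is given below; the gain sequences, constants, and the toy objective standing in for the delay/energy/error-rate model are illustrative assumptions.

import numpy as np

def spsa_minimize(loss, x0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Simultaneous perturbation stochastic approximation for a vector parameter x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha            # decaying step-size gain sequence
        ck = c / (k + 1) ** gamma            # decaying perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=x.shape)   # Bernoulli +/-1 perturbation
        # Two loss evaluations give a stochastic estimate of the whole gradient at once.
        g_hat = (loss(x + ck * delta) - loss(x - ck * delta)) / (2.0 * ck * delta)
        x = x - ak * g_hat
    return x

# Usage example with a toy quadratic objective standing in for the
# communication-performance model (illustrative only).
if __name__ == "__main__":
    best = spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), x0=[0.0])
    print(best)   # approaches 3.0

SPSA is attractive here because each iteration needs only two noisy evaluations of the performance model regardless of the number of parameters.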
Abstract: To accelerate the convergence of iterative root-finding methods for nonlinear equations, this paper starts from the integral form of the equation and combines the classical Newton iteration with the two-point Gauss quadrature formula, yielding a new predictor-corrector scheme, the Gauss-Legendre Newton method (GN method). The method is proved to converge with at least third order for simple roots and to have a higher efficiency index than the Simpson-Newton method of the same order. Numerical examples show that, compared with three other iterative methods, the proposed method converges faster and has a higher efficiency index. Results for high-temperature spectral temperature inversion show that the method has high accuracy and practical value.
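Under the assumption that the corrector replaces the derivative in Newton's step by the two-point Gauss-Legendre average of f' over the interval between the current iterate and the Newton predictor, a minimal sketch reads as follows; the exact scheme in the paper may differ.

import math

def gauss_newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 starting from x0 with a two-point Gauss-Legendre corrector."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        y = x - fx / fprime(x)                        # predictor: classical Newton step
        mid, half = 0.5 * (x + y), 0.5 * (y - x)
        t1 = mid - half / math.sqrt(3.0)              # two-point Gauss-Legendre nodes on [x, y]
        t2 = mid + half / math.sqrt(3.0)
        x = x - 2.0 * fx / (fprime(t1) + fprime(t2))  # corrector: quadrature average of f'
    return x

# Usage example: cube root of 2 via f(x) = x**3 - 2 (illustrative).
if __name__ == "__main__":
    root = gauss_newton_root(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
    print(root)   # about 1.259921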