Abstract: Background: Rhodococcus equi (R. equi) infection commonly occurs in grazing areas, especially in patients with AIDS or with T-lymphocyte immunodeficiencies. Literature reviews revealed that radiologically and pathologically diagnosed cases of AIDS complicated by R. equi infection are rare. This study aimed to investigate the imaging features and pathological basis of AIDS complicated by pulmonary R. equi infection.
Funding: National Natural Science Foundation of China (NSFC) (61377081, 61675126)
Abstract: In this paper, a detailed theoretical study on the characteristics of cone-shaped inwall capillary-based microsphere resonators is described and demonstrated for sensing applications. The maximum, minimum, slope, contrast, and width of the Fano resonance are analyzed. As the transmission coefficient of the capillary resonator increases, the absolute value of the slope of the Fano resonance increases to reach its maximum, which is useful for sensors with ultra-high sensitivity. Another phenomenon, electromagnetically induced transparency, occurs when the reflectivity at the capillary–environment interface is close to 100%. We also experimentally demonstrated the device's capability for temperature and refractive-index sensing, with sensitivities of 10.9 pm/°C and 431 dB/RIU based on the Fano resonance and the Lorentzian line shape, respectively.
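The extrema and slope behavior analyzed in this abstract can be illustrated numerically with the standard Fano line shape F(ε) = (q + ε)² / (1 + ε²). This is a generic textbook profile, not the paper's specific capillary-resonator model, and the asymmetry parameter q below is an arbitrary illustrative value.

```python
import numpy as np

def fano(eps, q):
    """Standard Fano line shape F = (q + eps)^2 / (1 + eps^2),
    with eps the reduced detuning and q the asymmetry parameter."""
    return (q + eps) ** 2 / (1 + eps ** 2)

q = 3.0                                  # illustrative asymmetry parameter
eps = np.linspace(-10, 10, 20001)        # reduced detuning grid
F = fano(eps, q)

# Analytic extrema of the standard profile:
# maximum 1 + q^2 at eps = 1/q, and an exact zero at eps = -q.
print(F.max())            # close to 1 + q^2
print(eps[F.argmin()])    # close to -q
```

The sharp swing between the zero at ε = −q and the peak at ε = 1/q is what makes the Fano slope attractive for high-sensitivity readout, as the abstract notes.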
Funding: National Natural Science Foundation of China (Nos. 62225304, 92148204 and 62061160371); National Key Research and Development Program of China (Nos. 2021ZD0114503 and 2019YFB1703600); Beijing Top Discipline for Artificial Intelligence Science and Engineering, University of Science and Technology Beijing; and the Beijing Natural Science Foundation (No. JQ20026).
Abstract: In this article, a robot skills learning framework is developed, which considers both motion modeling and execution. To enable the robot to learn skills from demonstrations, a learning method called dynamic movement primitives (DMPs) is introduced to model motion. A staged teaching strategy is integrated into the DMP framework to enhance generality, so that complicated tasks can also be performed by multi-joint manipulators. A DMP connection method is used to make accurate and smooth transitions in position and velocity space when connecting complex motion sequences. In addition, motions are categorized by goal and duration. Notably, an adaptive neural network (NN) control method is proposed to achieve highly accurate trajectory tracking and to ensure the performance of action execution, which improves the reliability of the skills learning system. Experiments on the Baxter robot verify the effectiveness of the proposed method.
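The DMP formulation the framework builds on is not spelled out in the abstract; the following is a minimal one-dimensional sketch of an Ijspeert-style discrete DMP with the learned forcing term left at zero, so the trajectory simply converges to the goal. The gains, time constant, and step size are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rollout_dmp(y0, g, f=lambda x: 0.0, tau=1.0, dt=0.001, T=1.0,
                alpha_y=25.0, beta_y=6.25, alpha_x=8.0):
    """Euler-integrate a 1-D discrete DMP:
        tau * z' = alpha_y * (beta_y * (g - y) - z) + f(x)
        tau * y' = z
        tau * x' = -alpha_x * x        (canonical system)
    With beta_y = alpha_y / 4 the system is critically damped,
    so y converges to the goal g without oscillation."""
    y, z, x = y0, 0.0, 1.0
    ys = []
    for _ in range(int(T / dt)):
        zdot = (alpha_y * (beta_y * (g - y) - z) + f(x)) / tau
        ydot = z / tau
        xdot = -alpha_x * x / tau
        z += zdot * dt
        y += ydot * dt
        x += xdot * dt
        ys.append(y)
    return np.array(ys)

traj = rollout_dmp(y0=0.0, g=1.0)
print(traj[-1])  # converges toward the goal g = 1.0
```

In a full DMP pipeline the forcing term f(x) is fit to a demonstration (e.g. by locally weighted regression), which is what lets the same attractor dynamics reproduce and generalize demonstrated motions.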
Abstract: Vocabulary teaching is one aspect of language teaching that had not been given the attention it deserves until recent years. For a long time, vocabulary was simply taught by asking students to study and memorize each word's meaning and spelling, its part of speech, and its general function in a sentence. Thus, a student with a command of five thousand English words may still find it hard to meet the requirements of demanding reading assignments, in particular the extensive reading task, which is more demanding due to its wide range of materials and large vocabulary load. According to Wilkins (1979: 111), "Without grammar very little can be conveyed; without vocabulary, nothing can be conveyed." Yet without a deeper understanding of how vocabulary is taught in the classroom and which teaching methods are more effective for learners, the teaching of vocabulary may not achieve the desired effects. By examining research on vocabulary learning and instruction, this essay intends to draw the attention of both teachers and learners to the weaknesses of the traditional approach to teaching vocabulary and to some alternative strategies of vocabulary instruction, with the aim of improving students' reading comprehension.
Funding: The Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51521003); the National Natural Science Foundation of China (Grant No. 61803124); and the Postdoctoral Research Startup Foundation of Heilongjiang Province.
Abstract: Autonomous planning is a significant development direction for space manipulators, and learning from demonstrations (LfD) is a promising strategy for complex tasks in this field. However, separating control from planning may cause large torque fluctuations and energy consumption, and even instability or danger in the control of space manipulators, especially when planning is based on human demonstrations. Therefore, we present an autonomous planning and control strategy for space manipulators based on LfD and focus on the dynamics uncertainty problem, a common problem of actual manipulators. The process can be divided into three stages: first, we reproduce the stochastic directed trajectory using Gaussian-process-based LfD; second, we model the stochastic dynamics of the actual manipulator with a Gaussian process; third, we design an optimal controller based on the dynamics model to obtain improved commanded torques and trajectories, and use the separation theorem to handle stochastic characteristics during control. We evaluated the strategy with a pre-screwed bolt locating experiment on the Tiangong-2 manipulator system on the ground. The results showed that, compared with other strategies, the proposed strategy significantly reduces torque fluctuations and energy consumption, and its precision meets the task requirements.
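The Gaussian-process machinery behind the first two stages is standard GP regression. The sketch below fits a toy "demonstrated" trajectory with a squared-exponential kernel in plain NumPy; the kernel hyperparameters and the sine-wave data are illustrative assumptions and are unrelated to the Tiangong-2 setup.

```python
import numpy as np

def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential kernel k(a, b) = sf^2 * exp(-(a-b)^2 / (2 ell^2))
    for 1-D inputs a, b."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and pointwise variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.diag(cov)

# Toy "demonstrated trajectory": samples of sin(2*pi*t) on [0, 1]
t = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * t)
ts = np.linspace(0, 1, 101)
mu, var = gp_posterior(t, y, ts)   # reproduced mean trajectory + uncertainty
```

The posterior variance is what gives the reproduced trajectory its "stochastic" character: both trajectory reproduction and the learned dynamics model carry uncertainty that the downstream optimal controller can account for.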
Abstract: The main idea of reinforcement learning is to evaluate the chosen action based on the current reward. Following this concept, many algorithms have achieved proper performance on classic Atari 2600 games. The main challenge arises when the reward is sparse or missing, as in hard-exploration environments such as the games Montezuma's Revenge, Pitfall, and Private Eye. Approaches built to deal with such challenges have been very demanding. This work introduces a different reward system that enables a simple classical algorithm to learn fast and achieve high performance in hard-exploration environments. Moreover, we added simple enhancements to several hyperparameters, such as the number of actions and the sampling ratio, which helped improve performance. We include the extra reward within the human demonstrations and then use Prioritized Double Deep Q-Networks (Prioritized DDQN) to learn from these demonstrations. Our approach enabled Prioritized DDQN, with a short learning time, to finish the first level of Montezuma's Revenge and to perform well in both Pitfall and Private Eye. We used the same games to compare our results against several baselines, such as Rainbow and the Deep Q-learning from Demonstrations (DQfD) algorithm. The results showed that the new reward system enabled Prioritized DDQN to outperform the baselines in hard-exploration games with a short learning time.
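The two components named in the abstract, the Double DQN update and prioritized sampling, can be sketched in a few lines. The "network outputs" below are made-up toy numbers, and the hyperparameters (gamma, alpha) are common defaults from the literature rather than the paper's settings.

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN target: the next action is chosen by the online network,
    but its value is read from the target network, reducing overestimation."""
    a_star = np.argmax(q_online_next, axis=1)
    next_v = q_target_next[np.arange(len(a_star)), a_star]
    return rewards + gamma * (1.0 - dones) * next_v

def prioritized_probs(td_errors, alpha=0.6, eps=1e-3):
    """Prioritized replay: sample transition i with p_i ∝ (|delta_i| + eps)^alpha."""
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# Toy batch of 3 transitions (2 actions per state)
q_online_next = np.array([[1.0, 2.0], [0.5, 0.1], [3.0, 3.5]])
q_target_next = np.array([[0.8, 1.5], [0.4, 0.2], [2.5, 3.0]])
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([0.0, 1.0, 0.0])   # middle transition is terminal

targets = double_dqn_targets(q_online_next, q_target_next, rewards, dones)
probs = prioritized_probs(targets - np.array([2.0, 0.5, 2.0]))  # toy TD errors
```

In the paper's setup the demonstration transitions, augmented with the extra reward, enter the same prioritized buffer, so high-error demonstration steps get replayed more often early in training.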