Abstract: Understanding how signal properties are optimized for the reliable transmission of information requires accurate description of the signal in time and space. For movement-based signals where movement is restricted to a single plane, measurements from a single viewpoint can be used to consider a range of viewing positions based on simple geometric calculations. However, consideration of signal properties from a range of viewing positions is more problematic for movements extending into three dimensions (3D). We present here a new framework that overcomes this limitation and enables us to quantify the extent to which movement-based signals are view-specific. To illustrate its application, a Jacky lizard tail-flick signal was filmed with synchronized cameras and the position of the tail tip digitized in both recordings. Camera alignment enabled the construction of a 3D display action pattern profile. We analyzed the profile directly and used it to create a detailed 3D animation. In the virtual environment, we were able to film the same signal from multiple viewing positions and use a computational motion analysis algorithm (gradient detector model) to measure local image velocity in order to predict view-dependent differences in signal properties. This approach will enable consideration of a range of questions concerning movement-based signal design and evolution that were previously out of reach [Current Zoology 56 (3): 327-336, 2010].
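The gradient detector model referred to above estimates local image velocity from spatial and temporal intensity gradients. The sketch below illustrates that general gradient-based (normal-flow) computation in Python with NumPy; it shows the principle only and is not the authors' implementation, and the function name local_image_velocity is ours.

```python
import numpy as np

def local_image_velocity(frame_prev, frame_next, eps=1e-6):
    """Estimate per-pixel image velocity from two consecutive grayscale frames
    using the brightness-constancy constraint Ix*vx + Iy*vy + It = 0,
    resolved along the local spatial gradient (normal flow)."""
    # Spatial gradients of the first frame and the temporal difference
    Ix = np.gradient(frame_prev.astype(float), axis=1)
    Iy = np.gradient(frame_prev.astype(float), axis=0)
    It = frame_next.astype(float) - frame_prev.astype(float)

    # Normal-flow solution; eps avoids division by zero in flat regions
    grad_sq = Ix**2 + Iy**2 + eps
    vx = -It * Ix / grad_sq
    vy = -It * Iy / grad_sq
    speed = np.hypot(vx, vy)
    return vx, vy, speed

# Example: a bright bar shifted by one pixel between two synthetic frames
rng = np.random.default_rng(0)
f0 = rng.normal(scale=0.1, size=(64, 64))
f0[30:34, 20:40] += 5.0
f1 = np.roll(f0, shift=1, axis=1)
_, _, speed = local_image_velocity(f0, f1)
print(speed.mean())
```

Summing the per-pixel speed over each frame pair gives a velocity profile of the display as seen from one camera position, which can then be compared across the virtual viewpoints.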
Abstract: To study the behavior patterns of pilots performing different flight tasks while using a head-up display (HUD), a behavior recognition framework incorporating multiple features of pilot eye movement, head movement, and hand movement is proposed. First, a behavior-pattern experiment was conducted in which eye and head movements were captured with an eye tracker and hand movements were obtained by video-based manual tracking. The experimental data were then used to train and test the models. Finally, the recognition performance of the conditional random field and the hidden dynamic conditional random field was compared across different feature combinations. The results show that when eye-movement features are combined with hand-movement features, the hidden dynamic conditional random field model recognizes the different flight tasks best.
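For readers unfamiliar with the models being compared, the sketch below shows how a plain linear-chain conditional random field can be trained on per-frame eye-, head-, and hand-movement features to label flight tasks, using the sklearn-crfsuite package. The hidden dynamic conditional random field favored in the abstract is not provided by this library, and the feature names (gaze_x, head_yaw, hand_speed) and task labels are hypothetical placeholders rather than the study's actual feature set.

```python
import sklearn_crfsuite

def frame_features(gaze_x, gaze_y, head_yaw, hand_speed):
    # One feature dict per recorded frame / sampling window (hypothetical features)
    return {
        "gaze_x": gaze_x,
        "gaze_y": gaze_y,
        "head_yaw": head_yaw,
        "hand_speed": hand_speed,
    }

# Each training sequence is one recorded trial: a list of per-frame feature
# dicts paired with a list of flight-task labels of the same length.
X_train = [
    [frame_features(0.12, 0.40, 1.5, 0.00), frame_features(0.15, 0.42, 1.7, 0.03)],
    [frame_features(0.60, 0.10, -4.2, 0.20), frame_features(0.58, 0.11, -4.0, 0.25)],
]
y_train = [
    ["level_flight", "level_flight"],
    ["approach", "approach"],
]

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",   # L-BFGS training with elastic-net regularization
    c1=0.1, c2=0.1,      # L1 / L2 penalty weights
    max_iterations=100,
)
crf.fit(X_train, y_train)

# Predict the task label for each frame of a new trial
print(crf.predict([[frame_features(0.59, 0.12, -4.1, 0.22)]]))
```

In this kind of setup, per-frame labels over a trial are aggregated (e.g., by majority vote) to decide which flight task the pilot was performing, and different feature subsets can be compared simply by changing which keys appear in the feature dicts.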