Funding: Supported by the National Natural Science Foundation of China (U20A2017), the Guangdong Basic and Applied Basic Research Foundation (2022A1515010134, 2022A1515110598), the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2017120), the Shenzhen-Hong Kong Institute of Brain Science–Shenzhen Fundamental Research Institutions (NYKFKT2019009), and the Shenzhen Technological Research Center for Primate Translational Medicine (F-2021-Z99-504979).
Abstract: Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and sophisticated behavioral tasks, namely the temporal discrimination task (TDT) and the face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to Negative facial expressions of conspecifics, but longer reaction times to Negative facial expressions of humans. Monkey faces also reliably induced divergent pupil contraction across expressions, whereas human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys showed a bias toward emotional expressions only when observing monkey faces. Finally, masking the eye region marginally decreased viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, shedding new light on inter-species communication through facial expressions between NHPs and humans.
Abstract: This paper investigates the eye-voice span (EVS), the distance between eye and voice, in sight translating metaphorical expressions (MEs). Twenty-four MA translation students, with no professional translation or interpreting experience, were asked to perform a sight translation (STR) task, and the processes were recorded by an eye-tracker and an audio recorder. The qualified eye-tracking and audio data were further analysed with Tobii Studio and the Audacity audio processing software. Our findings suggest that the time of the pause preceding an ME was largely, but not entirely, spent processing the ensuing ME. However, because reading-ahead activities are pervasive in STR, the planning step for sight translating an ME takes place before the preceding pause; moreover, owing to the local processing difficulty caused by the ME, the time spent reading ahead into the ME (temporal EVS) is mostly greater than that spent reading ahead beyond the ME. Our findings also reveal that the rate of methodological deviation (caused by the two different calculation approaches) for ME processing time is around 10%, but the two processing times show no statistically significant difference, validating the processing time calculated from audio data in Zheng & Xiang (2013). We conclude with some reservations about eye-tracking translation research: though powerful in providing solid and informative process data, it has limitations in clearly probing intricate human cognitive processes.
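As an illustration of the temporal EVS measure discussed above, the sketch below shows one way per-word spans could be computed once fixation onsets and voice onsets have been aligned on a common clock. The data structure, field names, and example values are hypothetical, not taken from the study's Tobii Studio or Audacity exports.

```python
# A minimal sketch of computing a temporal eye-voice span (EVS) from
# aligned eye-tracking and audio timestamps. All names and values here
# are illustrative assumptions, not the study's actual data format.
from dataclasses import dataclass

@dataclass
class WordEvent:
    word: str
    first_fixation_ms: float  # when the eyes first land on the word
    voice_onset_ms: float     # when the translator starts voicing it

def temporal_evs(events):
    """Per-word temporal EVS: voice onset minus first fixation, in ms."""
    return {e.word: e.voice_onset_ms - e.first_fixation_ms for e in events}

events = [
    WordEvent("metaphor", first_fixation_ms=1200.0, voice_onset_ms=2950.0),
    WordEvent("source",   first_fixation_ms=1650.0, voice_onset_ms=3300.0),
]
print(temporal_evs(events))  # {'metaphor': 1750.0, 'source': 1650.0}
```

A larger span on an ME than on the surrounding text would be consistent with the local processing difficulty the paper reports.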
Abstract: This article proposes a feature extraction method for integrated face tracking and facial expression recognition in real-time video. The method proposed by Viola and Jones [1] is used to detect the face region in the first frame of the video. A rectangular bounding box is fitted over the face region, and the detected face is tracked in successive frames using a cascaded support vector machine (SVM) and a cascaded radial basis function neural network (RBFNN). Haar-like features are extracted from the detected face region and used to build the cascaded SVM and RBFNN classifiers. Each stage of the SVM and RBFNN classifiers rejects non-face regions and passes face regions to the next stage in the cascade, thereby tracking the face efficiently. Tracking performance is evaluated on one hour of video data, and the cascaded SVM is compared with the cascaded RBFNN. The experimental results show that the proposed cascaded SVM classifier outperforms both the RBFNN and the single-SVM methods described in the literature [2]. While the face is being tracked, features are extracted from the mouth region for expression recognition. These features are modelled using a multi-class SVM, which finds an optimal hyperplane to distinguish different facial expressions with an accuracy of 96.0%.
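For readers who want a concrete starting point, here is a minimal sketch of the detection-plus-classification pipeline described above, using OpenCV's stock Viola-Jones cascade and scikit-learn's SVC in place of the authors' custom cascaded SVM/RBFNN tracker. The mouth-region crop, feature size, and training data (X_train, y_train) are illustrative assumptions, not the paper's implementation.

```python
# Sketch: Viola-Jones face detection + multi-class SVM on mouth-region
# features. Assumes opencv-python and scikit-learn are installed; the
# cropping heuristic and feature dimensions are illustrative choices.
import cv2
import numpy as np
from sklearn.svm import SVC

# Stock Viola-Jones frontal-face cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the first detected face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    return faces[0] if len(faces) else None

def mouth_features(frame, box, size=(32, 16)):
    """Crop the lower third of the face as a crude mouth region and
    flatten the resized grayscale patch into a feature vector."""
    x, y, w, h = box
    mouth = frame[y + 2 * h // 3 : y + h, x : x + w]
    gray = cv2.cvtColor(mouth, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32).ravel() / 255.0

# Multi-class SVM over mouth-region features; X_train / y_train are
# assumed to hold labelled feature vectors for the expression classes.
# clf = SVC(kernel="rbf", decision_function_shape="ovr")
# clf.fit(X_train, y_train)
# label = clf.predict([mouth_features(frame, detect_face(frame))])
```

This substitutes a single detector and classifier for the paper's stage-wise cascades, but the data flow (detect, crop the mouth, classify the expression) matches the pipeline the abstract describes.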
Funding: Supported by the Hunter Medical Research Institute (Happy, Healthy Kids), No. G1801008, and an Australian Government Research Training Program Fee Offset and Scholarship.
Abstract: Autism is a neurodevelopmental condition with associated difficulties that present differently across individuals. One such difficulty is recognizing basic and complex facial expressions. Research has previously found that there are many evidence-based support programs available for building non-verbal communication skills. These programs are frequently administered with a therapist or in a group setting, making them inflexible in nature. Programs hosted on e-technology are becoming increasingly popular, and many parents are supportive of them. Applications (apps) hosted on technology such as iPads or mobile phones allow users to build skills in real-time social settings and to take ownership of what they are learning. These technologies are frequently used by autistic children, with apps typically focusing on identifying facial features. Yet at present, reviews of how to design such programs and of their theoretical backing are mixed, with many studies using a combination of observation and psychological assessments as outcome measures. Eye-tracking and electroencephalography are established methodologies that measure neural processing and gaze behavior while viewing faces. To better support the field moving forward, objective measures such as these offer a way to evaluate the outcomes of apps designed to help children on the spectrum build skills in understanding facial expressions.