Background The vertebral artery (VA) and atlantoaxial joint (AAJ), with complicated structures, are located in the depths of the head-neck boundary area, the regional anatomy of which cannot be shown globally and directly. This study aims to evaluate three-dimensional CT angiography (3DCTA) in displaying the AAJ, the atlantoaxial segment of the vertebral artery (ASVA), and the identification of their interrelations. Methods Sixty-eight subjects without pathology of the ASVA and AAJ were selected from head-neck CTA examinations. All the 3D images were formed with volume rendering (VR) together with techniques of separating, fusing, opacifying and false-coloring (SFOF). On the 3D images, the ASVA and AAJ were observed, and their interrelations were measured. Results All the 3DCTA images were of high quality and met our requirements. They could clearly and directly show the ASVA ascending along the AAJ. There were 5 curves in the course of the ASVA, of which 2 curves were away from the atlantoaxial joint: one at the 2nd curve, by 0.0 mm-5.4 mm, the other at the 4th, by 2.6 mm-9.2 mm. There was no significant difference in the measurements between left and right (P > 0.05). The curved parts of the ASVA slightly expanded, with the biggest diameter of 5.6 mm in the 4th curve. Statistical comparison showed that the left ASVA was larger than the right (P < 0.05). Variations of the ASVA were found in 8 cases and of the AAJ in 12. Conclusions 3DCTA can globally and directly demonstrate the structures of the AAJ, ASVA and their interrelations. The 3D imaging data supplement and enrich the content of regional anatomy research and lay the foundation for related studies and applications.
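The left-versus-right comparison reported here (P > 0.05 for distances, P < 0.05 for diameters) is the kind of result a paired test produces. A minimal sketch of the paired t-statistic, using entirely hypothetical diameter values (not the study's data), might look like:

```python
import math

def paired_t_statistic(left, right):
    """Paired t-statistic for left/right measurements taken on the same subjects."""
    diffs = [a - b for a, b in zip(left, right)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the paired differences (n - 1 denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical left/right ASVA diameters (mm) for five subjects
left_mm = [5.1, 4.9, 5.3, 5.0, 5.2]
right_mm = [4.8, 4.7, 5.0, 4.9, 4.9]
t_stat = paired_t_statistic(left_mm, right_mm)
```

The t-statistic is then compared against the t-distribution with n - 1 degrees of freedom to obtain the P value; the study does not specify which statistical test was actually used.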
Cross-modal image-text retrieval is the task of retrieving items in one modality (e.g. images) given a query in another modality (e.g. text). The key problem is how to accurately measure the similarity between the two modalities, which plays a crucial role in reducing the visual-semantic gap between the heterogeneous modalities of vision and language. The traditional retrieval paradigm relies on deep learning to extract feature representations of images and texts and maps them into a common representation space for matching. However, this approach depends largely on surface-level correlations in the data and cannot uncover the true causal relationships behind them, so it faces challenges in representing high-level semantic information and in interpretability. To address this, causal inference and consensus-knowledge embedding are introduced on top of deep learning, and a causal image-text retrieval method with embedded consensus knowledge is proposed. Specifically, causal intervention is introduced into the visual feature extraction module: commonsense causal visual features are learned by replacing correlations with causal relations, and these are concatenated with the original visual features to obtain the final visual representation. To remedy the insufficient text representation of this method, the more powerful text feature extraction model BERT (Bidirectional Encoder Representations from Transformers) is adopted, and consensus knowledge shared between the two modalities is embedded to perform consensus-level representation learning on image and text features. Experiments on the MS-COCO dataset and the cross-dataset transfer from MS-COCO to Flickr30k demonstrate that the proposed method achieves consistent improvements in recall and average recall on bidirectional image-text retrieval.
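The common-space matching and recall evaluation described above can be sketched in miniature. This is a hypothetical illustration only: the embeddings below are toy vectors standing in for learned image/text features, not outputs of the proposed model.

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def rank_gallery(query, gallery):
    """Gallery indices sorted by descending similarity to the query."""
    return sorted(range(len(gallery)),
                  key=lambda i: cosine(query, gallery[i]), reverse=True)

def recall_at_k(queries, gallery, truth, k):
    """Fraction of queries whose ground-truth item appears in the top-k ranking."""
    hits = sum(truth[q] in rank_gallery(queries[q], gallery)[:k]
               for q in range(len(queries)))
    return hits / len(queries)

# Toy embeddings: two text queries, a three-image gallery, known pairings
texts = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2]]
images = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
truth = {0: 0, 1: 1}
r1 = recall_at_k(texts, images, truth, k=1)
```

Text-to-image and image-to-text retrieval use the same machinery with the roles of query and gallery swapped, which is how the bidirectional recall figures are obtained.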
Knowledge of impact conditions is critical to evaluating the terminal impact performance of a projectile. For a small caliber bullet, in-flight velocity has been precisely measured for decades using detection screens, but accurately quantifying the orientation of the bullet on a target has been more challenging. This report introduces the Automated Small-Arms Photogrammetry (ASAP) analysis method used to measure, model, and predict the orientation of a small caliber bullet before reaching an impact surface. ASAP uses advanced hardware developed by Sydor Technologies to record a series of infrared digital photographs. Individual images (four orthogonal pairs) are processed using computer vision algorithms to quantify the orientation of the projectile and re-project its precise position and orientation into a three-dimensional muzzle-fixed coordinate system. An epicyclic motion model is fit to the measured data, and the epicyclic motion is extrapolated to the target location. Analysis results are available almost immediately and may be reviewed during testing. Prove-out demonstrations have shown that the impact-angle prediction error is less than six hundredths of a degree for the 5.56 mm ball round tested. Keywords: Yaw, Terminal ballistics, Exterior ballistics, Test & evaluation, Computer vision, Image processing, Angle of
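The epicyclic motion model mentioned above represents the yaw of a spin-stabilized projectile as the sum of two rotating arms, a fast mode and a slow mode; once the modal parameters are fit to the measured images, evaluating the model at the target range gives the extrapolated orientation. The sketch below evaluates such a model with entirely hypothetical parameter values (the report does not publish its fitted coefficients):

```python
import math

def epicyclic_yaw(s, k_f, phi_f, w_f, k_s, phi_s, w_s):
    """Complex yaw at downrange distance s: sum of fast- and slow-mode arms.

    k_*  -- modal amplitudes (deg), phi_* -- initial phases (rad),
    w_*  -- precession rates (rad/m).
    """
    fast = k_f * complex(math.cos(phi_f + w_f * s), math.sin(phi_f + w_f * s))
    slow = k_s * complex(math.cos(phi_s + w_s * s), math.sin(phi_s + w_s * s))
    return fast + slow

# Hypothetical modal parameters, for illustration only
params = dict(k_f=1.5, phi_f=0.0, w_f=0.8, k_s=0.6, phi_s=1.0, w_s=0.1)
# Extrapolated total yaw magnitude at a 100 m target
yaw_mag = abs(epicyclic_yaw(100.0, **params))
```

Fitting the six parameters to the measured orthogonal-pair data is a nonlinear least-squares problem; the magnitude of the complex yaw is always bounded between |k_f - k_s| and k_f + k_s.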
A computer vision approach using OpenAI's CLIP, a model capable of predicting text-image pairs, is used to create an AI agent for Dixit, a game which requires creative linking between images and text. This paper calculates baseline accuracies both for the ability to match the correct image to a hint and for the ability to match human preferences. A dataset created by previous work on Dixit is used for testing. CLIP is used to compare a hint against multiple images as well as against previous hints, achieving a final accuracy of 0.5011, which surpasses previous results.
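The core matching step, picking which candidate image a hint refers to and scoring accuracy over many rounds, can be sketched as follows. This is a hypothetical illustration: the embeddings are toy stand-ins for CLIP's encoded outputs, and the helper names are not from the paper.

```python
def pick_image(hint, candidates):
    """Index of the candidate embedding with the largest dot product with the hint."""
    sims = [sum(h * c for h, c in zip(hint, cand)) for cand in candidates]
    return max(range(len(sims)), key=sims.__getitem__)

def accuracy(rounds):
    """rounds: (hint_embedding, candidate_embeddings, correct_index) triples."""
    hits = sum(pick_image(h, cands) == correct for h, cands, correct in rounds)
    return hits / len(rounds)

# Toy embeddings standing in for CLIP text/image encodings
rounds = [
    ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], 0),
    ([0.0, 1.0], [[0.8, 0.2], [0.2, 0.8]], 1),
    ([0.6, 0.8], [[1.0, 0.0], [0.0, 1.0]], 0),  # deliberately ambiguous round
]
acc = accuracy(rounds)
```

In practice CLIP normalizes its embeddings, so the dot product is equivalent to cosine similarity; the paper's extension of also comparing against previous hints would add further terms to the score before the argmax.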
The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated through practical experiments.
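The data-parallel principle behind such a multi-processor architecture, split an image into bands, process each band on a separate processing element, then reassemble, can be sketched in ordinary Python. This is an illustration of the idea only, not the TMS320C64x implementation; the thresholding kernel and all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_rows(rows, cutoff):
    """Per-pixel threshold filter: the kind of kernel each processor would run."""
    return [[255 if px > cutoff else 0 for px in row] for row in rows]

def parallel_threshold(image, workers=4, cutoff=128):
    """Split the image into row bands, filter each band concurrently, reassemble."""
    band = max(1, len(image) // workers)
    bands = [image[i:i + band] for i in range(0, len(image), band)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda b: threshold_rows(b, cutoff), bands)
    return [row for part in parts for row in part]

# 4x3 grayscale image as nested lists
img = [[10, 200, 130], [255, 0, 129], [128, 129, 127], [1, 2, 3]]
out = parallel_threshold(img, workers=2)
```

The parallel result must equal the sequential one, which is the basic correctness check for any such decomposition; the speedup on real multi-DSP hardware comes from the bands being processed on physically separate devices with their own I/O channels.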
Funding This project was supported by the National Natural Science Foundation of China (60135020).