In this paper, a new method is presented for 3D motion estimation by image region correspondences using stereo cameras. Under the weak perspectivity assumption, we first employ the moment tensor theory (Cyganski and Orr) to compute the monocular affine transformations relating images taken by the same camera at different time instants and the binocular affine transformations relating images taken by different cameras at the same time instant. We then show that 3D motion can be recovered from these 2D transformations. A space-time fusion strategy is proposed to achieve robust results. No knowledge of point correspondences is required in the above processes, and the computations involved are linear. To find corresponding image regions, new affine invariants, which show stronger invariance, are derived in terms of tensor contraction theory. Experiments on real motion images are conducted to verify the proposed method.
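The affine transformations above are estimated from region moments rather than point correspondences. As a minimal sketch (not the paper's actual algorithm), the following assumes two 2D regions related by an affine map x' = Ax + t and verifies the relation that moment-based methods exploit: the centroids satisfy mu2 = A mu1 + t and the second-order central moments satisfy Sigma2 = A Sigma1 A^T. The matrix A, translation t, and sampled region are illustrative choices.

```python
import numpy as np

# Hypothetical illustration: two image regions related by an affine map
# x' = A x + t.  Their centroids and second-order central moments then
# obey  mu2 = A mu1 + t  and  Sigma2 = A Sigma1 A^T.
rng = np.random.default_rng(0)
pts1 = rng.uniform(-1.0, 1.0, size=(500, 2))   # sample points of region 1

A = np.array([[1.2, 0.3],                      # assumed affine matrix
              [-0.1, 0.8]])
t = np.array([2.0, -1.0])                      # assumed translation
pts2 = pts1 @ A.T + t                          # region 2 = affine image of region 1

def moments(pts):
    """Centroid and second-order central moments (covariance) of a point set."""
    mu = pts.mean(axis=0)
    d = pts - mu
    return mu, d.T @ d / len(pts)

mu1, s1 = moments(pts1)
mu2, s2 = moments(pts2)

print(np.allclose(mu2, A @ mu1 + t))   # centroid relation holds
print(np.allclose(s2, A @ s1 @ A.T))   # second-moment relation holds
```

Because these relations are linear in the region points, A and t can in principle be recovered from moments alone (up to a rotational ambiguity resolved by higher-order moments), which is why no point correspondences are needed.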
Using a GDS VIS 400 HPTAS triaxial rheometer and a PCI-2 acoustic emission (AE) system, AE source location tests were carried out on red sandstone under uniaxial compression. With moment tensor analysis as the principal analysis method, the spatial distribution, evolution, and corresponding dominant-frequency characteristics of shear-type, tensile-type, and mixed-type AE sources during red sandstone failure were studied. A method for estimating the crack-initiation stress and the damage stress based on the nonlinear growth of the cumulative AE source count is proposed, together with a rock failure prediction method based on the dominant-frequency characteristics of sources in the unstable microcrack development stage. The results show that in the micropore compaction stage, AE sources are distributed away from the specimen axis, mainly near the upper and lower end faces. From the elastic stage through the stable microcrack development stage, AE sources spread throughout the specimen and tend to migrate toward its center. In the unstable microcrack development stage, AE sources concentrate in the upper-middle and lower-middle parts of the specimen. The evolution of the different source types depends on the applied stress: when the applied stress exceeds the crack-initiation stress but remains below the damage stress, shear-type sources grow rapidly; once it exceeds the damage stress, tensile-type and mixed-type sources enter a period of relatively rapid growth. The stresses at which shear-type sources and at which tensile- and mixed-type sources begin to grow rapidly correspond well to the crack-initiation stress and the damage stress, respectively. During red sandstone failure, the dominant frequencies of shear-type, tensile-type, and mixed-type sources lie mainly in the ranges 0-50, 100-150, and 250-350 kHz. In the unstable microcrack development stage, only shear-type sources emit 200-250 kHz signals, a feature that can serve as a precursor of red sandstone failure under uniaxial compression. To a certain extent, the results verify that different source types exhibit different dominant-frequency characteristics during rock failure, and they can serve as a reference for AE-based rock failure prediction methods.
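The shear/tensile/mixed classification of AE sources is commonly performed by eigenvalue decomposition of the moment tensor (e.g. Ohtsu's shear-ratio criterion). The abstract does not state the paper's exact procedure, so the following is only an illustrative sketch using the widely cited thresholds: shear ratio X > 60% indicates a shear crack, X < 40% a tensile crack, and values between indicate mixed mode.

```python
import numpy as np

def classify_ae_source(M, shear_hi=0.6, shear_lo=0.4):
    """Classify an AE source from its 3x3 symmetric moment tensor M using
    the eigenvalue-decomposition (shear-ratio) criterion.  Assumes the
    largest eigenvalue is positive.  Thresholds are the commonly cited
    40%/60% values; they are an assumption, not taken from the paper."""
    e = np.sort(np.linalg.eigvalsh(M))[::-1]  # eigenvalues e1 >= e2 >= e3
    e = e / e[0]                              # normalize by the largest
    # Decompose normalized eigenvalues into shear (X), deviatoric-tensile (Y),
    # and isotropic (Z) parts:  e1 = X + Y + Z,  e2 = -Y/2 + Z,
    # e3 = -X - Y/2 + Z,  which gives:
    X = e[1] - e[2]          # shear ratio
    Z = e.sum() / 3.0        # isotropic ratio
    Y = 1.0 - X - Z          # deviatoric ratio
    if X > shear_hi:
        kind = "shear"
    elif X < shear_lo:
        kind = "tensile"
    else:
        kind = "mixed"
    return kind, (X, Y, Z)

# Pure double-couple (slip on a plane) -> shear crack
dc = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
print(classify_ae_source(dc)[0])       # shear

# Purely isotropic (volumetric opening) -> tensile crack
print(classify_ae_source(np.eye(3))[0])  # tensile
```

The same decomposition, applied event by event, would yield the cumulative counts of each source type whose growth onsets are compared against the crack-initiation and damage stresses.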