Abstract: Many recent applications of computer graphics and human-computer interaction have adopted both colour cameras and depth cameras as input devices. Therefore, an effective calibration of both types of hardware, taking different colour and depth inputs, is required. Our approach removes the numerical difficulties of using non-linear optimization in previous methods, which explicitly resolve the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize our optimization. The hybrid parameters provide a transformation from the depth parametric space (depth camera image) to the colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera with a rotation transformation from the depth camera to the colour camera. Both the rotation transformation and the intrinsic parameters can be explicitly recovered from the hybrid parameters with the help of a standard QR factorisation. We test our algorithm on both synthesized data and real-world data, where ground-truth depth information is captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable to state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), owing to the use of hybrid parameters.
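As a rough illustration of the recovery step described above, the Python/NumPy sketch below (not taken from the paper) decomposes a hybrid matrix assumed to take the form H = K_d · R, i.e. the product of an upper-triangular depth intrinsic matrix and a depth-to-colour rotation, using the QR-based RQ factorisation that is standard in camera calibration. The exact parameterisation used in the paper may differ, and all numerical values are illustrative.

```python
import numpy as np

def rq(M):
    """RQ factorisation M = R @ Q with R upper triangular and Q orthogonal,
    built on the ordinary QR factorisation of the flipped, transposed matrix."""
    P = np.flipud(np.eye(M.shape[0]))   # row-reversal permutation
    Qt, Rt = np.linalg.qr((P @ M).T)    # QR of (P M)^T
    R = P @ Rt.T @ P                    # upper-triangular factor
    Q = P @ Qt.T                        # orthogonal factor
    S = np.diag(np.sign(np.diag(R)))    # resolve the sign ambiguity
    return R @ S, S @ Q

# Synthetic example: build a hybrid matrix from known depth intrinsics and a
# depth-to-colour rotation, then recover both factors (values are made up).
K_true = np.array([[580.0,   0.0, 320.0],
                   [  0.0, 580.0, 240.0],
                   [  0.0,   0.0,   1.0]])                  # depth-camera intrinsics
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])    # depth-to-colour rotation
H = K_true @ R_true                                          # assumed hybrid-parameter matrix

K_rec, R_rec = rq(H)
K_rec /= K_rec[2, 2]                                         # scale so K[2, 2] == 1
assert np.allclose(K_rec, K_true) and np.allclose(R_rec, R_true)
```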
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Korea Government (MSIT) (Grant No. NRF-2022R1A2C1004588).
Abstract: Reconstructing a three-dimensional (3D) environment is an indispensable technique for making augmented reality and augmented virtuality feasible. A Kinect device is an efficient tool for reconstructing 3D environments, and using multiple Kinect devices enables higher reconstruction density and larger virtual spaces. To employ multiple devices simultaneously, the Kinect devices need to be calibrated with respect to each other. Several schemes are available for calibrating 3D images generated from multiple Kinect devices, including the marker detection method. In this study, we introduce a markerless calibration technique for Azure Kinect devices that avoids the drawbacks of marker detection, which directly affects calibration accuracy; it offers superior user-friendliness, efficiency, and accuracy. Further, we apply a joint tracking algorithm to approximate the calibration. Traditional methods require the information of multiple joints for calibration; however, Azure Kinect, the latest version of Kinect, requires the information of only one joint. The obtained result is further refined using the iterative closest point (ICP) algorithm. We conducted several experimental tests that confirmed the enhanced efficiency and accuracy of the proposed method for multiple Kinect devices when compared to conventional marker-based calibration.
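As a sketch of how a joint-based initial alignment between two devices could be approximated, the Python/NumPy snippet below (not taken from the paper) estimates the rigid transform from the 3D positions of a single tracked joint recorded by both devices over several frames, using the standard Kabsch/Procrustes solution; in a pipeline like the one described above, such an estimate would then be refined with ICP on the full point clouds. The joint data here are synthetic and the function names are illustrative.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t,
    estimated from corresponding 3D points (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Illustrative use: one joint of a person, tracked by two devices over 30 frames
# (coordinates and the ground-truth transform are made up for the example).
rng = np.random.default_rng(0)
joints_cam_a = rng.uniform(-1.0, 1.0, size=(30, 3))   # joint positions in device A's frame
R_gt = np.linalg.qr(rng.normal(size=(3, 3)))[0]        # some orthogonal matrix
if np.linalg.det(R_gt) < 0:                            # ensure a proper rotation
    R_gt[:, 0] *= -1
t_gt = np.array([0.5, -0.2, 1.0])
joints_cam_b = joints_cam_a @ R_gt.T + t_gt            # same joint seen from device B

R_est, t_est = rigid_align(joints_cam_a, joints_cam_b)
# R_est, t_est recover (R_gt, t_gt) here; with real, noisy joint tracks this
# would only be an approximation, to be refined by ICP.
assert np.allclose(R_est, R_gt) and np.allclose(t_est, t_gt)
```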