Recently, a new research trend in the video salient object detection (VSOD) community has focused on enhancing detection results via model self-fine-tuning on sparsely mined high-quality keyframes from the given sequence. Although such a learning scheme is generally effective, it has a critical limitation: a model learned on sparse frames possesses only weak generalization ability. The situation worsens on long videos, since they tend to have intensive scene variations. Moreover, in such videos, keyframe information from a distant time span is less relevant to earlier content, which can cause learning conflicts and degrade model performance. Thus, the learning scheme is usually incapable of handling complex pattern modeling. To solve this problem, we propose a divide-and-conquer framework, which converts a complex problem domain into multiple simple ones. First, we devise a novel background consistency analysis (BCA), which effectively divides the mined frames into disjoint groups. Then, for each group, we assign an individual deep model to capture its key attribute during the fine-tuning phase. During the testing phase, we design a model-matching strategy, which dynamically selects the best-matched fine-tuned model to handle the given testing frame. Comprehensive experiments show that our method can adapt to severe background appearance variation coupled with object movement and obtains robust saliency detection compared with the previous scheme and state-of-the-art methods.
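The abstract does not specify how the background consistency analysis or the model-matching strategy are computed. The following is a minimal sketch of one plausible realization, assuming a background signature based on color histograms and a chi-square distance for both grouping and test-time matching; the function names, the greedy grouping rule, and the threshold value are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def background_signature(frame, saliency_mask):
    # Histogram over background pixels only (salient object masked out).
    bg = frame[saliency_mask < 0.5]
    hist, _ = np.histogram(bg, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def chi_square(a, b):
    # Symmetric chi-square distance between two normalized histograms.
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-8))

def group_keyframes(signatures, threshold=0.5):
    # Greedy grouping: a frame joins the first group whose centroid is
    # within `threshold`; otherwise it starts a new group.
    groups, centroids = [], []
    for i, sig in enumerate(signatures):
        for g, c in enumerate(centroids):
            if chi_square(sig, c) < threshold:
                groups[g].append(i)
                centroids[g] = np.mean([signatures[j] for j in groups[g]], axis=0)
                break
        else:
            groups.append([i])
            centroids.append(sig)
    return groups, centroids

def match_model(test_signature, centroids):
    # Pick the group (hence the fine-tuned model) closest to the test frame.
    return int(np.argmin([chi_square(test_signature, c) for c in centroids]))
```

Each index returned by `group_keyframes` would correspond to one fine-tuned model; `match_model` then routes a test frame to it.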
Specifying software requirements is an important, complicated, and error-prone task. It involves the collaboration of several people specifying requirements gathered from several stakeholders. During this process, developers working in parallel introduce and modify requirements until reaching a specification that satisfies the stakeholders' requirements. Merge conflicts are inevitable when integrating the modifications made by different developers to a shared specification, so detecting and resolving these conflicts is critical to ensure a consistent resulting specification. This paper proposes a conflict detection approach for merging object-oriented formal specifications. Conflicts are classified, formally defined, and detected based on the results of a proposed differencing algorithm. The approach has been empirically evaluated, and the experimental results are discussed in this paper.
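The paper's differencing algorithm and conflict taxonomy are not reproduced in the abstract. As a rough sketch of the general idea, assume a specification is flattened to a mapping from element names (classes, attributes, operations) to their definitions; a simple three-way diff then flags elements edited differently by two parallel revisions. The representation and the two conflict labels here are illustrative assumptions, not the paper's formal definitions.

```python
def diff(base, revised):
    # Return {name: (kind, value)} edits turning `base` into `revised`.
    edits = {}
    for name in base.keys() | revised.keys():
        if name not in base:
            edits[name] = ("add", revised[name])
        elif name not in revised:
            edits[name] = ("delete", None)
        elif base[name] != revised[name]:
            edits[name] = ("modify", revised[name])
    return edits

def detect_conflicts(base, left, right):
    # A conflict arises when both revisions touch the same element
    # with non-identical edits.
    dl, dr = diff(base, left), diff(base, right)
    conflicts = []
    for name in dl.keys() & dr.keys():
        (kl, vl), (kr, vr) = dl[name], dr[name]
        if kl == kr and vl == vr:
            continue  # identical edits merge cleanly
        label = "delete/update" if "delete" in (kl, kr) else "update/update"
        conflicts.append((name, label))
    return conflicts
```

A real object-oriented specification merger would diff structured elements (inheritance, invariants, operation contracts) rather than flat strings, but the detection principle is the same.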
This paper proposes a deep-learning-based visual odometry method for dynamic scenes. A C3Ghost module is built by combining the lightweight Ghost module with the YOLOv5s object detection network, and a coordinate attention (CA) mechanism is introduced, improving detection speed while preserving detection accuracy. The detector is then combined with a motion consistency algorithm to remove dynamic feature points, so that only static feature points are used for pose estimation. Experimental results show that, compared with the traditional ORB-SLAM3 (oriented FAST and rotated BRIEF simultaneous localization and mapping 3) algorithm, the absolute trajectory error (ATE) and relative pose error (RPE) on the highly dynamic sequences of the Technical University of Munich (TUM) RGB-D (RGB-depth) dataset improve by more than 90% on average. The method also shows gains over state-of-the-art simultaneous localization and mapping (SLAM) algorithms. It therefore effectively improves the stability and robustness of visual SLAM in dynamic environments.
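The core filtering step (discarding feature points that fall inside detected dynamic-object regions before pose estimation) can be sketched as follows. This is a minimal illustration assuming axis-aligned detection boxes and point-in-box rejection; the actual method additionally applies a motion consistency check, which is not modeled here.

```python
def filter_dynamic_points(keypoints, dynamic_boxes):
    # Keep only keypoints lying outside every detected dynamic-object box.
    # keypoints: list of (x, y); dynamic_boxes: list of (x1, y1, x2, y2).
    def in_box(p, b):
        return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]
    return [p for p in keypoints
            if not any(in_box(p, b) for b in dynamic_boxes)]
```

The surviving static keypoints would then be passed to the usual ORB-SLAM-style pose solver; points on detected people, vehicles, and other movers never enter the estimation.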
Funding: supported in part by the CAMS Innovation Fund for Medical Sciences, China (No. 2019-I2M5-016), the National Natural Science Foundation of China (No. 62172246), the Youth Innovation and Technology Support Plan of Colleges and Universities in Shandong Province, China (No. 2021KJ062), and the National Science Foundation of USA (Nos. IIS-1715985 and IIS-1812606).