Journal Articles
8 articles found
1. A scene-adaptive road segmentation algorithm based on deep convolutional neural networks (Cited: 19)
Authors: 王海, 蔡英凤, 贾允毅, 陈龙, 江浩斌. 《电子与信息学报》, EI CSCD, Peking University Core, 2017, Issue 2, pp. 263-269 (7 pages)
Existing machine-learning-based road segmentation methods suffer a significant drop in detection performance when the distribution of the training samples does not match that of the target scene. To address this problem, this paper proposes a scene-adaptive road segmentation algorithm based on deep convolutional networks and autoencoders. First, a classical approach based on Slow Feature Analysis (SFA) and GentleBoost is used to select labeled samples with confidence scores online. Second, exploiting the automatic feature extraction capability of the deep structure of a deep convolutional neural network (DCNN), supplemented by a feature autoencoder that measures feature similarity between the source and target scenes, a scene-adaptive classifier model with a compound deep structure is proposed together with its training method. Test results on the KITTI benchmark show that the proposed algorithm clearly outperforms existing non-scene-adaptive road segmentation algorithms, improving the detection rate by about 4.5% on average.
Keywords: road segmentation, scene adaptation, deep convolutional neural network, compound deep structure, autoencoder
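The online selection of confidence-labeled samples described in this abstract can be sketched as a simple thresholding rule. This is an illustrative assumption only: the thresholds below and the source of the scores stand in for the paper's SFA/GentleBoost pipeline, which is not reproduced here.

```python
def select_confident_samples(scores, low=0.2, high=0.8):
    """Keep only target-scene samples that the source-trained
    classifier scores confidently; their predicted labels serve as
    pseudo ground truth for adapting the road classifier.
    `low` and `high` are assumed confidence thresholds."""
    selected = []
    for i, s in enumerate(scores):
        if s >= high:
            selected.append((i, 1))   # confidently road
        elif s <= low:
            selected.append((i, 0))   # confidently non-road
    return selected                   # ambiguous samples are discarded
```

Samples in the ambiguous middle band contribute nothing to adaptation and are simply dropped, which is the usual trade-off in self-training schemes of this kind.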
2. A scene-adaptive dynamic range compression algorithm for infrared images (Cited: 1)
Authors: 马晓楠, 洪普, 宫文峰. 《光学与光电技术》, 2020, Issue 4, pp. 25-31 (7 pages)
Classical dynamic range compression algorithms for infrared images amplify noise at low dynamic range, suppress background brightness at high dynamic range, and thereby cause video flicker; their scene adaptability is poor. This paper proposes a scene-adaptive dynamic range compression algorithm for infrared images. First, the minimum, mean, and maximum of the original image are computed; then a set of adaptive parameters is derived; finally, the adaptive parameters are used to gray-map the pixels on either side of the mean according to specific rules, producing the output image. Experimental results show that when the dynamic range of the original image changes drastically, retrofitting classical dynamic range compression algorithms with the proposed scheme yields infrared images that are comfortable to view and well layered, a clear improvement over the original algorithms.
Keywords: infrared image, dynamic range compression, scene adaptation, histogram projection, plateau histogram equalization
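A minimal sketch of the min/mean/max-driven mapping this abstract describes, under the assumption that each side of the mean gets a simple linear segment (the paper's actual adaptive parameter set and mapping rules are not given here):

```python
import numpy as np

def adaptive_drc(img16, out_max=255.0):
    """Scene-adaptive dynamic range compression sketch: pixels below
    and above the scene mean are mapped with separate linear segments
    so that the mean always lands at mid-scale of the 8-bit output."""
    lo, mu, hi = float(img16.min()), float(img16.mean()), float(img16.max())
    mid = out_max / 2.0
    out = np.empty(img16.shape, dtype=np.float64)
    below = img16 <= mu
    # Map [lo, mu] -> [0, mid] and (mu, hi] -> (mid, out_max].
    out[below] = (img16[below] - lo) / (mu - lo) * mid if mu > lo else 0.0
    out[~below] = mid + (img16[~below] - mu) / (hi - mu) * (out_max - mid) if hi > mu else mid
    return out.astype(np.uint8)
```

Because the breakpoint follows the scene mean rather than a fixed gray level, the output re-centers itself when the input range shifts, which is the flicker-reduction idea the abstract points at.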
3. An improved scene-adaptive coding algorithm for H.263 video (Cited: 1)
Authors: 李虓江, 陈抗生. 《浙江大学学报(工学版)》, EI CAS CSCD, Peking University Core, 2004, Issue 3, pp. 292-296 (5 pages)
To address the inaccurate detection results of existing scene-adaptive coding schemes, an image similarity detection operator is introduced and a two-stage screening scheme for image similarity detection is proposed. On top of a first-stage histogram-based similarity check, the similarity operator screens the first-stage results, rejecting the false abrupt-change frames that stage introduced and identifying the true shot-cut frames. The true cut frame is then intra-coded to start a new video segment, which effectively suppresses the image distortion caused by scene changes. Simulations on standard video sequences show that the improved scheme detects cut frames more accurately, reduces the output bit count, and improves video compression efficiency.
Keywords: H.263 recommendation, scene-adaptive coding, two-stage screening mechanism, image similarity detection operator
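The two-stage screening could look like the following sketch. The histogram first stage comes from the abstract; the second-stage operator here is an assumption (normalized cross-correlation stands in for the paper's similarity detection operator), and both thresholds are illustrative.

```python
import numpy as np

def histogram_diff(a, b, bins=64):
    """Stage 1: normalized absolute histogram difference of two frames."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum() / a.size

def similarity(a, b):
    """Stage 2 (assumed operator): normalized cross-correlation.
    A low value confirms a genuine shot cut."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 1.0

def detect_cuts(frames, hist_th=0.5, sim_th=0.5):
    """Candidates from stage 1 are re-checked by stage 2, rejecting
    false cuts (e.g. flashes) that the histogram test lets through."""
    cuts = []
    for i in range(1, len(frames)):
        if histogram_diff(frames[i - 1], frames[i]) > hist_th:
            if similarity(frames[i - 1], frames[i]) < sim_th:
                cuts.append(i)
    return cuts
```

In the coder, each confirmed index would trigger an intra-coded frame and open a new video segment, as the abstract describes.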
4. A scene-adaptive method for computing ship draft values
Authors: 付豪, 张渤, 杜京义, 梁大明. 《无线电工程》, 2024, Issue 3, pp. 725-736 (12 pages)
In ship draft measurement, traditional manual draft-mark reading is constrained by the scene: ships arriving at port differ in type; the tilt of the hull draft marks and their degree of scratching, rust, and damage vary; and illumination differs greatly across weather conditions and times of day. All of these factors affect draft-mark image recognition and lead to inaccurate cargo weight measurement. To address these problems, a scene-adaptive method for accurate ship draft detection is proposed. Images captured in different scenes are classified as bright or dark by a brightness measure, and a modified gamma correction with different thresholds maximizes the image information; an improved semantic segmentation algorithm segments the hull, the water, and the draft-mark characters; the segmented water and character information is then used to rectify the image, the large "M" characters are recognized, and the draft depth is computed. Field tests show that the algorithm adapts to draft reading in different scenes, with scale conversion at the pixel level; compared with manual reading, its results are closer to the reference values, providing more accurate numbers for draft surveys.
Keywords: scene adaptation, modified gamma correction, semantic segmentation, image rectification
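The brightness-classified gamma step might be sketched as below. The thresholds and gamma values are assumptions for illustration; the paper's "modified" gamma formula is not reproduced here.

```python
import numpy as np

def modified_gamma(img, dark_th=85, bright_th=170):
    """Pick a gamma from the image's mean brightness: brighten dark
    scenes (gamma < 1), darken overexposed ones (gamma > 1), and pass
    mid-brightness scenes through. Thresholds are assumed values."""
    mean = img.mean()
    if mean < dark_th:
        gamma = 0.5
    elif mean > bright_th:
        gamma = 2.0
    else:
        gamma = 1.0
    norm = img.astype(np.float64) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)
```

The corrected image would then feed the segmentation and character-recognition stages described in the abstract.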
5. Scene-adaptive crowd counting method based on meta learning with dual-input network DMNet
Authors: Haoyu ZHAO, Weidong MIN, Jianqiang XU, Qi WANG, Yi ZOU, Qiyan FU. Frontiers of Computer Science, SCIE EI CSCD, 2023, Issue 1, pp. 91-100 (10 pages)
Crowd counting, which aims to count the number of people in different crowded scenes, has recently become a hot research topic. Existing methods mainly follow a training-testing pattern and rely on large training sets, so they fail to count crowds accurately in real-world scenes because of the limited generalization capability of the models. To alleviate this issue, a scene-adaptive crowd counting method based on meta-learning with a Dual-illumination Merging Network (DMNet) is proposed in this paper. Built on learning-to-learn and few-shot learning, the method can adapt to new scenes from only a few labeled images. To generate high-quality density maps and count crowds in low-lighting scenes, the DMNet contains a Multi-scale Feature Extraction module and an Element-wise Fusion module. The Multi-scale Feature Extraction module extracts image features with multi-scale convolutions, which improves network accuracy. The Element-wise Fusion module fuses the low-lighting feature with the illumination-enhanced feature, supplementing the illumination missing in low-lighting environments. Experimental results on the WorldExpo'10, DISCO, UCSD, and Mall benchmarks show that the proposed method outperforms existing state-of-the-art methods in accuracy and achieves satisfactory results.
Keywords: crowd counting, meta-learning, scene-adaptive, Dual-illumination Merging Network
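The few-shot adaptation idea can be reduced to its core loop: start from meta-learned weights and take a handful of gradient steps on the few labeled samples from the new scene. This is a toy stand-in only; a linear least-squares model replaces the counting network, and the learning rate and step count are assumptions.

```python
import numpy as np

def adapt_to_scene(w, X, y, lr=0.1, steps=5):
    """Few-shot scene adaptation sketch: starting from meta-learned
    weights w, run a few gradient-descent steps on the new scene's
    small labeled set (X, y) under a mean-squared-error loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE
        w = w - lr * grad
    return w
```

In a real meta-learning setup (e.g. MAML-style training), the initial weights themselves are optimized so that exactly this kind of short inner loop works well.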
6. Unsupervised object detection with scene-adaptive concept learning (Cited: 2)
Authors: Shiliang PU, Wei ZHAO, Weijie CHEN, Shicai YANG, Di XIE, Yunhe PAN. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2021, Issue 5, pp. 638-651 (14 pages)
Object detection is one of the hottest research directions in computer vision; it has made impressive progress in academia and has many valuable applications in industry. However, mainstream detection methods still have two shortcomings: (1) even a model well trained on large amounts of data generally cannot be used across different kinds of scenes; (2) once a model is deployed, it cannot evolve autonomously with the accumulated unlabeled scene data. To address these problems, and inspired by visual knowledge theory, we propose a novel scene-adaptive evolution algorithm for unsupervised video object detection that decreases the impact of scene changes through the concept of object groups. We first extract a large number of object proposals from unlabeled data with a pre-trained detection model. Second, we build a visual knowledge dictionary of object concepts by clustering the proposals, in which each cluster center represents an object prototype. Third, we examine the relations between different clusters and the object information of different groups, and propose a graph-based group information propagation strategy to determine the category of an object concept, which effectively distinguishes positive from negative proposals. With these pseudo labels, the pre-trained model can easily be fine-tuned. The effectiveness of the proposed method is verified in a range of experiments, and significant improvements are achieved.
Keywords: visual knowledge, unsupervised video object detection, scene-adaptive learning
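Steps two and three of this pipeline can be sketched with plain k-means and nearest-prototype labeling. Everything here is a simplification: deterministic farthest-point seeding replaces random initialization, and the `positive` prototype set is simply given, whereas the paper derives it via graph-based group information propagation.

```python
import numpy as np

def build_concept_dictionary(feats, k, iters=20):
    """Toy visual-knowledge dictionary: k-means over proposal
    features; each cluster center plays the role of an object
    prototype. Seeding is deterministic farthest-point selection."""
    centers = feats[[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, feats[[d.argmax()]]])
    for _ in range(iters):  # Lloyd iterations
        assign = np.linalg.norm(feats[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.vstack([feats[assign == j].mean(axis=0) if (assign == j).any()
                             else centers[j] for j in range(k)])
    return centers, assign

def pseudo_label(feats, centers, positive):
    """Mark a proposal positive if its nearest prototype belongs to
    the `positive` concept set; these labels drive fine-tuning."""
    nearest = np.linalg.norm(feats[:, None] - centers[None], axis=2).argmin(axis=1)
    return np.isin(nearest, list(positive))
```

The resulting boolean pseudo labels would select positive proposals for fine-tuning the pre-trained detector, as the abstract describes.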
7. Scene-adaptive hierarchical data association and depth-invariant part-based appearance model for indoor multiple objects tracking (Cited: 1)
Authors: Hong Liu, Can Wang, Yuan Gao. CAAI Transactions on Intelligence Technology, 2016, Issue 3, pp. 210-224 (15 pages)
Indoor multi-object tracking is more challenging than outdoor tasks because of frequent occlusion, view truncation, severe scale change, and pose variation, which bring considerable unreliability and ambiguity to target representation and data association. A discriminative and reliable target representation is therefore vital for accurate data association in multi-tracking. Previous works often combine a bundle of features to increase discriminative power, but this is prone to error accumulation and unnecessary computational cost, and may instead increase ambiguity. Moreover, the reliability of the same feature can vary greatly across scenes; for today's widespread network cameras, deployed in varied and complex indoor scenes, fixed feature selection schemes cannot meet general requirements. To handle these problems, first, we propose a scene-adaptive hierarchical data association scheme that adaptively selects the features with higher reliability for target representation in the applied scene and gradually combines features up to the minimum needed to disambiguate targets; second, we propose a novel depth-invariant part-based appearance model using RGB-D data, which makes the appearance model robust to scale change, partial occlusion, and view truncation. The introduction of RGB-D data increases feature diversity, providing more feature types for selection in data association and enhancing final multi-tracking performance. We validate the scene-adaptive feature selection scheme, the hierarchical data association scheme, and the RGB-D appearance modeling scheme in various indoor scenes, demonstrating their effectiveness and efficiency in improving multi-tracking performance.
Keywords: multiple object tracking, scene-adaptive, data association, appearance model, RGB-D data
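The "combine features only up to the minimum needed" idea can be sketched as a cascade over reliability-ordered features. The reliability ordering, the similarity dictionaries, and the margin value are all illustrative assumptions; the paper estimates feature reliability from the scene itself.

```python
def cascade_associate(candidates, feature_sims, margin=0.15):
    """Hierarchical association sketch: `feature_sims` holds one
    {candidate: similarity} dict per feature, ordered from most to
    least reliable; further features are fused only while the best
    candidate remains ambiguous relative to the runner-up."""
    scores = dict.fromkeys(candidates, 0.0)
    used = 0
    for sims in feature_sims:
        for c in candidates:
            scores[c] += sims[c]
        used += 1
        ranked = sorted(scores.values(), reverse=True)
        if len(ranked) < 2 or (ranked[0] - ranked[1]) / used >= margin:
            break  # unambiguous: stop adding features
    best = max(scores, key=scores.get)
    return best, used
```

Easy matches are resolved by the single most reliable feature, while hard, ambiguous ones pull in more features, which is the computational-cost argument the abstract makes.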
8. Multi-object tracking based on trajectory association
Authors: 许正, 朱松豪, 梁志伟, 徐国政. 《南京邮电大学学报(自然科学版)》, Peking University Core, 2017, Issue 2, pp. 38-45 (8 pages)
This paper presents a multi-object tracking algorithm based on trajectory association. Two different association strategies generate the global and local trajectories of the tracked targets, which together realize multi-object tracking. First, local trajectories are generated by a scene-adaptive method that associates detection responses with existing trajectories; then global trajectory association is performed with an appearance model based on incremental linear discriminant analysis; finally, gaps between trajectory fragments are filled with a nonlinear motion model to obtain complete and smooth tracks. Experiments on the PETS 2009/2010 and TUD-Stadtmitte datasets show that the proposed method associates multiple targets correctly under complex conditions such as occlusion, similar appearance between different targets, and abrupt changes in motion direction, yielding stable and continuous trajectories.
Keywords: trajectory association, scene-adaptive association, incremental linear discriminant analysis, discriminative appearance model, nonlinear motion model
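The gap-filling step between trajectory fragments can be illustrated with a low-order polynomial fit per coordinate. This is a simple stand-in for the paper's nonlinear motion model; the degree and the two-tracklet interface are assumptions.

```python
import numpy as np

def fill_gap(t1, xy1, t2, xy2, deg=2):
    """Fill the frame gap between two tracklets by fitting one
    low-order polynomial per coordinate through both tracklets and
    sampling it at the missing frame indices.
    t1, t2: frame indices; xy1, xy2: (N, 2) positions."""
    t = np.concatenate([t1, t2])
    gap = np.arange(t1[-1] + 1, t2[0])  # missing frame indices
    filled = []
    for k in range(2):                   # x and y coordinates
        vals = np.concatenate([xy1[:, k], xy2[:, k]])
        coef = np.polyfit(t, vals, deg)  # least-squares fit
        filled.append(np.polyval(coef, gap))
    return np.stack(filled, axis=1)      # shape (len(gap), 2)
```

A quadratic per coordinate already captures curved, non-constant-velocity motion between fragments, which a straight-line interpolation would miss.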