Abstract: …that are duplicate or near-duplicate to a query image. One of the most popular and practical methods in near-duplicate image retrieval is based on the bag-of-words (BoW) model. However, the fundamental deficiency of current BoW methods is the gap between visual words and an image's semantic meaning. A similar problem also plagues text retrieval, where a prevalent remedy is to eliminate synonymy and polysemy and thereby improve overall performance. Our proposed approach borrows this idea from text retrieval and tries to overcome the deficiencies of the BoW model by treating the semantic-gap problem as visual synonymy and polysemy issues. We use visual synonymy in a very general sense to describe the fact that many different visual words can refer to the same visual meaning; by visual polysemy, we refer to the fact that most visual words have more than one distinct meaning. To eliminate visual synonymy, we present an extended similarity function that implicitly expands query visual words. To eliminate visual polysemy, we use visual patterns and show that the most effective way to use them is to merge the visual-word vector with the visual-pattern vector and compute the similarity score with the cosine function. In addition, we observe that duplicate visual words are highly likely to occur in adjacent areas. We therefore modify the traditional Apriori algorithm to mine quantitative patterns, defined as patterns containing duplicate items. Experiments show that quantitative patterns improve mean average precision (MAP) significantly.
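A minimal sketch of the two mechanisms this abstract names: cosine similarity over the merged visual-word and visual-pattern vectors, and support counting for quantitative patterns (multisets, so duplicate items are allowed). All function names and the toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from collections import Counter

def merged_cosine(word_q, pat_q, word_d, pat_d):
    """Cosine similarity over the concatenation of a visual-word
    histogram and a visual-pattern histogram, the fusion the
    abstract describes for combating visual polysemy."""
    q = np.concatenate([word_q, pat_q])
    d = np.concatenate([word_d, pat_d])
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0

def pattern_support(pattern, transactions):
    """Support of a quantitative pattern (a multiset of visual-word
    IDs) over transactions, each transaction being the multiset of
    words seen in one local neighbourhood. A match requires at least
    the demanded multiplicity of every word, so duplicate items are
    handled naturally."""
    return sum(all(t[w] >= c for w, c in pattern.items())
               for t in transactions)

# Word 7 occurring twice in the same neighbourhood is expressible as a
# quantitative pattern, which a plain set-based Apriori cannot encode.
txns = [Counter([7, 7, 3]), Counter([7, 3]), Counter([7, 7])]
print(pattern_support(Counter([7, 7]), txns))  # -> 2
```

In an Apriori-style miner, `pattern_support` would replace the usual subset test when generating and pruning candidates, which is the modification the abstract attributes to handling duplicate items.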
Abstract: To address the shortcomings of the traditional BOW (Bag of Words) model in scene image classification, the model is improved by introducing Maximum Frequent Itemsets (MFI) from association-rule mining and a Topology model. To emphasize the visual words shared by images of the same class, the MFIs of each class are extracted and the visual words that occur frequently within them are weighted, strengthening the common features of same-class images. Meanwhile, to improve the efficiency of visual-dictionary generation, the Topology model is used to partition the original model for parallel processing. Experiments on the COREL and Caltech-256 image libraries show that the improved model achieves better scene-classification performance and confirm the effectiveness and feasibility of the Topology model.
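A sketch of the MFI-based weighting step, under the assumption that it multiplicatively boosts the histogram bins of visual words appearing in a class's maximal frequent itemsets; the `boost` hyperparameter and the normalization are illustrative choices, not taken from the paper:

```python
import numpy as np

def weight_by_mfi(bow_hist, class_mfis, boost=2.0):
    """Up-weight histogram bins of visual words that occur in the
    class's maximal frequent itemsets (MFIs), so words shared by
    images of the same scene category stand out. `boost` is an
    assumed hyperparameter."""
    weighted = np.asarray(bow_hist, dtype=float).copy()
    frequent = set().union(*class_mfis) if class_mfis else set()
    for w in frequent:
        weighted[w] *= boost
    norm = np.linalg.norm(weighted)
    return weighted / norm if norm else weighted

# Toy usage: visual words 1 and 3 appear in the class's MFIs.
hist = np.array([2, 5, 1, 4, 0])
print(weight_by_mfi(hist, [{1, 3}, {3}]))
```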
Abstract: To compensate for the color and spatial information missing from the traditional BoW (Bag of Words) model, a clothing-retrieval method based on multi-feature indexing and local constraints is proposed. On top of the BoW model, two inverted-file index structures are built, one over color features and one over SIFT features, to retrieve similar clothing images, and a post-verification method based on local constraints is proposed. Experimental results show that the method achieves good retrieval performance on clothing databases collected in different environments.
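A minimal sketch of the dual inverted-file structure this abstract describes. The score fusion here is a simple sum of per-index match counts, and the local-constraint post-verification is omitted; both the class name and the fusion rule are assumptions for illustration:

```python
from collections import defaultdict

class DualInvertedIndex:
    """Two inverted files, one over color words and one over SIFT
    words, fused by summing per-index match counts. Illustrative
    sketch only; the paper's exact scoring may differ."""

    def __init__(self):
        self.color_index = defaultdict(set)  # color word -> image ids
        self.sift_index = defaultdict(set)   # SIFT word  -> image ids

    def add(self, image_id, color_words, sift_words):
        for w in color_words:
            self.color_index[w].add(image_id)
        for w in sift_words:
            self.sift_index[w].add(image_id)

    def query(self, color_words, sift_words, top_k=10):
        scores = defaultdict(int)
        for w in color_words:
            for img in self.color_index[w]:
                scores[img] += 1
        for w in sift_words:
            for img in self.sift_index[w]:
                scores[img] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy usage with hypothetical word IDs.
idx = DualInvertedIndex()
idx.add("img1", color_words=[2, 9], sift_words=[40, 17])
idx.add("img2", color_words=[2], sift_words=[17])
print(idx.query(color_words=[2], sift_words=[40, 17]))
```

A local-constraint re-ranking stage would then re-verify the top candidates, for example by checking spatial consistency of matched features, before returning final results.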