Paper Details

Computer Vision and Image Understanding (CVIU), v.163, 2017, pp. 41–57 | SCI, SCIE, SCOPUS

Vision-language integration using constrained local semantic features

Tamaazousti, Youssef (corresponding author); Le Borgne, Hervé; Popescu, Adrian; Gadeski, Etienne; Ginsca, Alexandru (CEA, LIST, Laboratory of Vision and Content Engineering, F-91191 Gif-sur-Yvette, France); Hudelot, Céline (University of Paris-Saclay, Mathematics in Interaction with Computer Science (MICS), 92295 Châtenay-Malabry, France)
  • Abstract

    This paper tackles two recent promising issues in computer vision, namely the integration of linguistic and visual information and the use of semantic features to represent image content. Semantic features describe an image according to a set of visual concepts detected in it by base classifiers, and recent work achieves competitive performance in image classification and retrieval with such features. We propose to rely on this type of image description to facilitate its integration with linguistic data. More precisely, the contribution of this paper is threefold. First, we automatically determine the most useful dimensions of a semantic representation according to the actual image content, so the level of sparsity of the semantic features is adapted to each image independently. Our model takes into account both the confidence of each base classifier and the global amount of information in the semantic signature, defined in the Shannon sense. This contribution is further extended to better reflect the detection of a visual concept at a local scale. Second, we introduce a new strategy to learn an efficient CNN-based mid-level representation that boosts the performance of semantic signatures. Last, we propose several schemes to integrate a visual representation based on semantic features with linguistic information, leading to the nesting of linguistic information at two levels of the visual features. Experimental validation is conducted on four classification benchmarks (VOC 2007, VOC 2012, NUS-WIDE and MIT Indoor), three of which are also used for retrieval and two for bi-modal classification. The proposed semantic feature achieves state-of-the-art performance on three of the classification benchmarks and on all the retrieval ones, and our vision-language integration method achieves state-of-the-art performance in bi-modal classification.

    Highlights
    • Vision and language integration at two levels, including the semantic level.
    • A semantic signature that adapts its sparsity to the actual visual content of images.
    • CNN-based mid-level features boosting semantic signatures.
    • Top performances on publicly available benchmarks for several tasks.

    Graphical abstract: [figure omitted]
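    The adaptive sparsification can be pictured as follows: keep an image's most confident concept dimensions until the retained, renormalized signature carries a chosen fraction of the full signature's Shannon entropy, so the sparsity level differs per image. The Python sketch below is a minimal illustration of that idea, not the paper's exact criterion; the softmax normalization, the entropy_ratio knob and the function name are all assumptions.

        import numpy as np

        def adaptive_semantic_signature(scores, entropy_ratio=0.9):
            """Sparsify a per-image semantic signature (illustrative sketch)."""
            scores = np.asarray(scores, dtype=float)
            p = np.exp(scores - scores.max())   # map classifier scores to a
            p /= p.sum()                        # probability-like distribution
            full_entropy = -(p * np.log(p + 1e-12)).sum()
            kept = np.zeros_like(p)
            for idx in np.argsort(p)[::-1]:     # most confident concepts first
                kept[idx] = p[idx]
                q = kept[kept > 0] / kept.sum()
                if -(q * np.log(q)).sum() >= entropy_ratio * full_entropy:
                    break                       # enough information retained
            return kept                         # sparse, image-adapted signature

        # Example: only a few concepts survive for a peaked score vector.
        sig = adaptive_semantic_signature([2.0, 0.1, -1.3, 0.9, 2.5])
        print(np.nonzero(sig)[0])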

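    The paper's integration schemes nest linguistic information at two levels of the visual features and involve a common latent space (see the keywords below). Those schemes are not reproduced here; as a generic stand-in, canonical correlation analysis (CCA) is one standard way to align semantic visual signatures with text features in a shared space. A minimal sketch on synthetic data, with hypothetical dimensions:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        V = rng.random((200, 50))   # semantic visual signatures (n x concepts)
        T = rng.random((200, 30))   # text features, e.g. word-embedding averages

        # Project both modalities into a shared latent space; retrieval or
        # bi-modal classification can then operate on the aligned codes.
        cca = CCA(n_components=10).fit(V, T)
        V_latent, T_latent = cca.transform(V, T)
        print(V_latent.shape, T_latent.shape)   # (200, 10) (200, 10)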

  • Keywords

    Image classification; Image retrieval; Bi-modal classification; Semantic features; Concept-based sparsification; Constrained local regions; Vision-language integration; Common latent space; Pure concept space
