
Article Details

Computer vision and image understanding : CVIU v.163, 2017, pp. 126-138   SCI SCIE SCOPUS

Learning explicit video attributes from mid-level representation for video captioning

Nian, Fudong (Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University, Hefei, China); Li, Teng (Corresponding author; Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University, Hefei, China); Wang, Yan (Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University, Hefei, China); Wu, Xinyu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China); Ni, Bingbing (Shanghai Jiaotong University, Shanghai, China); Xu, Changsheng (National Lab of Pattern Recognition, Institute of Automation, CAS, Beijing, China)
  • Abstract

    Recent works on video captioning mainly learn a direct mapping from low-level visual features to language descriptions without explicitly representing high-level semantic video concepts (e.g. the objects and actions in the annotated sentences). To bridge this semantic gap, we propose a novel video attribute representation learning algorithm for video concept understanding and use the learned explicit video attribute representation to improve video captioning performance. First, inspired by the success of the spectrogram in audio processing, a novel mid-level video representation named the “video response map” (VRM) is proposed, by which a frame sequence can be represented as a single image. Video attribute representation learning can therefore be converted into a well-studied multi-label image classification problem. Then, in the caption prediction step, the learned video attribute features are integrated with single-frame features to improve the previous sequence-to-sequence language generation model by adjusting the LSTM (Long Short-Term Memory) input units. The proposed video captioning framework can both handle variable-length frame inputs and exploit high-level semantic video attribute features. Experimental results on video captioning tasks show that the proposed method, using only RGB frames as input without extra video or text training data, achieves performance competitive with state-of-the-art methods. Furthermore, extensive experimental evaluations on the UCF-101 action classification benchmark demonstrate the representation capability of the proposed VRM.

  • Highlights

    A novel and efficient mid-level video representation named VRM is presented, which represents a sequence of frames as a single image.
    A novel video attribute representation learning method is presented.
    We achieve superior performance over the previous sequence-to-sequence model without using extra training data.
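The abstract describes two concrete mechanisms: stacking per-frame responses into a single VRM "image" so that attribute learning reduces to multi-label image classification, and feeding the learned attribute vector into the LSTM input units of a sequence-to-sequence caption decoder. The following is a minimal PyTorch-style sketch of that pipeline; it is not the authors' code, and every module, feature dimension, and the attribute vocabulary size are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' implementation) of the pipeline sketched
# in the abstract. All dimensions and module names are assumptions.
import torch
import torch.nn as nn

class VRMAttributeNet(nn.Module):
    """Treats the 'video response map' (per-frame CNN responses stacked over
    time into one 2-D map) as an image and predicts video attributes as a
    multi-label classification problem."""
    def __init__(self, feat_dim=2048, num_attributes=300):
        super().__init__()
        # The VRM has shape (T, feat_dim); adaptive pooling yields a
        # fixed-size descriptor regardless of the number of frames T.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_attributes)

    def forward(self, frame_feats):            # frame_feats: (B, T, feat_dim)
        vrm = frame_feats.unsqueeze(1)         # (B, 1, T, feat_dim) "image"
        h = self.conv(vrm).flatten(1)
        return self.classifier(h)              # attribute logits

class AttributeLSTMDecoder(nn.Module):
    """Caption decoder whose LSTM input at each step concatenates the word
    embedding, a single-frame visual feature, and the attribute vector, per
    the abstract's description of the adjusted LSTM input units."""
    def __init__(self, vocab_size=10000, embed_dim=256,
                 feat_dim=2048, num_attributes=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + feat_dim + num_attributes,
                            hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, frame_feat, attr_vec):
        # words: (B, L); frame_feat: (B, feat_dim); attr_vec: (B, num_attrs)
        w = self.embed(words)                              # (B, L, E)
        L = w.size(1)
        ctx = torch.cat([frame_feat, attr_vec], dim=1)     # (B, F + A)
        ctx = ctx.unsqueeze(1).expand(-1, L, -1)           # repeat per step
        h, _ = self.lstm(torch.cat([w, ctx], dim=2))
        return self.out(h)                                 # word logits

# Toy usage: 2 videos, 16 frames each, 2048-d frame features.
frames = torch.randn(2, 16, 2048)
attr_net = VRMAttributeNet()
attr_vec = torch.sigmoid(attr_net(frames))     # explicit attribute scores
# Multi-label training would use nn.BCEWithLogitsLoss against 0/1 attribute
# labels mined from the annotated sentences.
decoder = AttributeLSTMDecoder()
captions = torch.randint(0, 10000, (2, 12))
word_logits = decoder(captions, frames.mean(dim=1), attr_vec)
print(word_logits.shape)                       # torch.Size([2, 12, 10000])
```

One design point worth noting from the abstract: because the VRM collapses any number of frames into a single fixed-size map, the attribute branch is indifferent to video length, which is what lets the overall framework handle variable-length frame inputs.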


  • Keywords

    Mid-level video representation; Video attributes learning; Video caption; Sequence-to-sequence learning
