›› 2018, Vol. 35 ›› Issue (4): 544-549.DOI: 10.7523/j.issn.2095-6134.2018.04.018

Spatio-temporal-fused no-reference video quality assessment based on convolutional neural network

WANG Chunfeng1, SU Li1,2, HUANG Qingming1,2   

  1. Key Laboratory of Big Data Mining and Knowledge Management of CAS, University of Chinese Academy of Sciences, Beijing 100049, China;
    2. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2017-03-31 Revised: 2017-04-25 Online: 2018-07-15

Abstract: No-reference video quality assessment (NR-VQA) quantitatively measures the quality of distorted videos without access to the original, distortion-free videos. Most conventional NR-VQA methods are based on statistical analysis, and the majority are designed for specific distortion types or make little use of temporal information, which limits both their application scenarios and their speed. In this paper, we propose a spatio-temporal no-reference video quality assessment method based on a convolutional neural network that is not tailored to specific distortion types. The method consists of a spatial process and a temporal process. In the spatial domain, we design a convolutional neural network to learn distortion features within individual frames. In the temporal domain, a group of SSIM-like features is extracted. Finally, we train a linear regression model on the fused spatio-temporal features to predict video quality. Experiments demonstrate that the proposed method is comparable in performance to other state-of-the-art no-reference VQA methods. Furthermore, it runs much faster than other VQA methods, which gives it better application prospects.
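The fusion stage the abstract describes (per-frame spatial CNN features combined with temporal SSIM-like features, then mapped to a quality score by linear regression) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the average pooling over frames, the toy data, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pool_spatial_features(frame_feats):
    """Average hypothetical per-frame CNN features over time -> one vector per video."""
    return frame_feats.mean(axis=0)

def fuse_features(frame_feats, temporal_feats):
    """Concatenate pooled spatial features with temporal SSIM-like features."""
    return np.concatenate([pool_spatial_features(frame_feats), temporal_feats])

def fit_linear_regressor(X, y):
    """Least-squares fit of quality scores; returns weights including a bias term."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, x):
    """Predict a scalar quality score for one fused feature vector."""
    return float(np.dot(np.append(x, 1.0), w))

# Toy data: 50 videos, 30 frames each, with assumed 16-dim spatial
# and 4-dim temporal features; labels mimic subjective quality scores.
videos = [(rng.normal(size=(30, 16)), rng.normal(size=4)) for _ in range(50)]
X = np.stack([fuse_features(f, t) for f, t in videos])
y = rng.uniform(1.0, 5.0, size=50)

w = fit_linear_regressor(X, y)
score = predict(w, X[0])
```

Average pooling collapses the per-frame features into a fixed-length video descriptor regardless of video length, which is one simple way to make a linear regressor applicable; the paper's actual pooling and feature design may differ.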

Key words: video quality assessment, convolutional neural network, no-reference, spatio-temporal information
