Visual Quality Assessment (VQA)

We design prediction algorithms for the visual quality of images and videos, with respect to technical and perceptual aspects, e.g., quality of experience (QoE). The tools of our trade include crowdsourcing, machine learning (in particular deep networks), and eye-tracking. To this end, we are creating massive multimedia databases that are suitable for training generic and accurate VQA models.


The MMSP VQA Database Collection

The KonIQ-10k IQA Database

The main challenge in applying state-of-the-art deep learning methods to predict image quality in the wild is the relatively small size of existing quality-scored datasets. The reason for the lack of larger datasets is the massive effort required to generate diverse and publishable content. To this end, we have created a large IQA database of natural, real-world images with corresponding mean opinion scores (MOS) gathered through crowdsourcing.

Learn more
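The crowdsourced ratings behind such a database are aggregated into a mean opinion score per image. A minimal sketch of this aggregation, with illustrative function names and an assumed 1-5 to 1-100 rescaling (not necessarily the exact KonIQ-10k pipeline):

```python
# Generic sketch of mean opinion score (MOS) aggregation from crowd ratings.
# The function names and the 1-5 -> 1-100 rescaling are illustrative
# assumptions, not the published KonIQ-10k processing chain.

def mos(ratings):
    """Average the individual opinion scores collected for one image."""
    return sum(ratings) / len(ratings)

def rescale(score, src=(1, 5), dst=(1, 100)):
    """Linearly map a score from the source range to the target range."""
    (a, b), (c, d) = src, dst
    return c + (score - a) * (d - c) / (b - a)

ratings = [4, 5, 3, 4, 4]     # crowd ratings on a 1-5 absolute category scale
print(rescale(mos(ratings)))  # MOS of 4.0 maps to 75.25 on the 1-100 range
```

In practice, such pipelines also filter unreliable raters and report confidence intervals alongside the MOS.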

The KonVid-150k VQA Database

Deep learning approaches have had limited success on existing VQA datasets, whether artificially or authentically distorted. We introduce KonVid-150k, an in-the-wild VQA dataset that is substantially larger and more diverse, allowing the exploration of training DNNs on massive video collections with coarse annotations.

The database consists of two parts:

  1. KonVid-150k-A: a coarsely annotated set of 152,265 videos, each five seconds long, with five quality ratings per video.
  2. KonVid-150k-B: 1,577 videos with a minimum of 89 ratings each. 

KonVid-150k provides a good testing ground for efficient VQA approaches that can learn from large collections of videos and generalize well from coarse annotations. It is also a useful tool for investigating VQA methods under different annotation budget distribution strategies.
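The tradeoff between the two parts (many videos with few ratings vs. few videos with many ratings) can be illustrated by the standard error of a MOS, which shrinks roughly as one over the square root of the number of ratings. The per-rating standard deviation below is an assumed value, not a dataset statistic:

```python
import math

# Illustrative sketch: the standard error of a MOS estimated from n
# independent ratings with standard deviation sigma is sigma / sqrt(n).
# Five ratings per video (KonVid-150k-A) give a coarse but usable estimate;
# a minimum of 89 ratings (KonVid-150k-B) gives a far more precise one.

def mos_standard_error(sigma, n):
    """Standard error of the mean of n i.i.d. ratings with std dev sigma."""
    return sigma / math.sqrt(n)

sigma = 1.0  # assumed per-rating standard deviation on a 1-5 scale
print(mos_standard_error(sigma, 5))   # coarse annotation (part A)
print(mos_standard_error(sigma, 89))  # fine annotation (part B)
```

Under these assumptions, part B's MOS is roughly four times more precise per video, while part A covers roughly a hundred times more content for a comparable total rating budget.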

The KoNViD-1k VQA Database

Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Most existing VQA databases cover only a small number of video sequences with artificial distortions. Newly developed Quality of Experience (QoE) models and metrics are commonly evaluated against subjective data from such databases, i.e., the results of perception experiments. However, since the aim of these QoE models is to accurately predict the quality of natural videos, artificially distorted video databases are an insufficient basis for learning. Additionally, their small size makes them only marginally usable for state-of-the-art learning systems such as deep learning. To provide a better basis for the development and evaluation of objective VQA methods, we have created a larger dataset of natural, real-world video sequences with corresponding subjective mean opinion scores (MOS) gathered through crowdsourcing.

Learn more

The IQA-Experts-300 Database

Experts and naive observers differ greatly in their judgments of aesthetics. Does this apply to image quality assessment as well? If it does, should we care more about expert-like opinions or those of laypeople? In our paper we propose a screening approach to find reliable and effectively expert crowd workers for image quality assessment (IQA). Our method measures users' ability to identify image degradations by using test questions, together with several relaxed reliability checks.

Learn more
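The core screening idea, keeping only workers who correctly identify known degradations in test questions, can be sketched as follows. The accuracy threshold, data, and function names are illustrative assumptions, not the paper's actual values:

```python
# Minimal sketch of expertise screening: keep crowd workers whose answers
# to test questions (images with known degradations) are accurate enough.
# Threshold and example data are illustrative, not the published settings.

def screen_workers(answers, ground_truth, min_accuracy=0.8):
    """Return the IDs of workers who pass the test-question check."""
    passed = []
    for worker, given in answers.items():
        correct = sum(g == ground_truth[q] for q, g in given.items())
        if correct / len(given) >= min_accuracy:
            passed.append(worker)
    return passed

ground_truth = {"q1": "blur", "q2": "noise", "q3": "jpeg",
                "q4": "blur", "q5": "noise"}
answers = {
    "w1": {"q1": "blur", "q2": "noise", "q3": "jpeg",
           "q4": "blur", "q5": "noise"},   # 5/5 correct
    "w2": {"q1": "blur", "q2": "jpeg", "q3": "jpeg",
           "q4": "noise", "q5": "noise"},  # 3/5 correct
}
print(screen_workers(answers, ground_truth))  # only "w1" passes at 0.8
```

The "relaxed reliability checks" mentioned above would add further filters (e.g., on rating consistency) on top of this accuracy gate.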

The KonPatch-30k IQA Database

Image quality assessment (IQA) has been studied almost exclusively as a global image property. It is common practice for IQA databases and metrics to quantify this abstract concept with a single score per image. In an attempt to extend the notion of quality to spatially restricted sub-regions of images, we designed a novel database of individually quality-annotated image patches.

Learn more
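Extending quality from a single global score to sub-regions means every image contributes a grid of patches, each with its own annotation. A small sketch of such a tiling (names and sizes are ours, not the KonPatch-30k parameters):

```python
# Illustrative sketch: patch-level quality replaces one global score per
# image with one score per patch. Patch size here is an assumption.

def split_into_patches(width, height, patch):
    """Return (x, y, w, h) boxes tiling a width x height image."""
    return [(x, y, patch, patch)
            for y in range(0, height - patch + 1, patch)
            for x in range(0, width - patch + 1, patch)]

boxes = split_into_patches(1024, 768, 256)
print(len(boxes))  # a 4 x 3 grid -> 12 individually annotatable patches
```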

The KoSMo-1k VQA Database

The Konstanz interpolated slow-motion video dataset (KoSMo-1k) consists of 1,350 interpolated video sequences from 30 different content sources, along with subjective quality ratings obtained from up to ten pairwise comparisons per video pair.

Learn more
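Pairwise comparison counts like these are typically converted into a one-dimensional quality scale. One common choice is the Bradley-Terry model, sketched below with a simple iterative fit; both the use of Bradley-Terry and the example counts are illustrative assumptions, not necessarily the scaling method used for KoSMo-1k:

```python
# Hedged sketch: fit Bradley-Terry quality scores from pairwise preference
# counts using the standard iterative (minorization-maximization) update.

def bradley_terry(wins, n_items, iters=200):
    """wins[i][j] = number of times item i was preferred over item j."""
    p = [1.0] * n_items
    for _ in range(iters):
        for i in range(n_items):
            num = sum(wins[i][j] for j in range(n_items) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                      for j in range(n_items) if j != i)
            if den:
                p[i] = num / den
        s = sum(p)
        p = [x * n_items / s for x in p]  # normalize for identifiability
    return p

# Three videos, up to ten comparisons per pair (counts are made up):
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins, 3)
print(scores)  # video 0 scores highest, video 2 lowest
```

Videos compared more often (up to ten times per pair here) yield more stable score estimates.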

Publications

2021

  • Su, S., Hosu, V., Lin, H., Zhang, Y., Saupe, D., - KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects, The 32nd British Machine Vision Conference (BMVC), November 2021
  • Lou, J., Lin, H., Marshall, D., Saupe, D., Liu, H., - TranSalNet: Visual saliency prediction using transformers, arXiv:2110.03593 (cs.CV), October 2021.
  • Men, H., Lin, H., Jenadeleh, M., Saupe, D., - Subjective image quality assessment with boosted triplet comparisons, IEEE Access,  October 2021.
  • Lin, H., Chen, G., Siebert, F. W., - Positional Encoding: Improving class-imbalanced motorcycle helmet use classification, IEEE International Conference on Image Processing (ICIP), Anchorage, Alaska, USA, September 2021.
  • Hahn, F., Hosu, V., Lin, H., Saupe, D., - KonVid-150k: A dataset for no-reference video quality assessment of videos in-the-wild, IEEE Access, May 2021.
  • Hahn, F., Hosu, V., Saupe, D., - Critical analysis on the reproducibility of visual quality assessment using deep features, arXiv:2009.05369 (cs.CV), March 2021, revised.
  • Roziere, B., Carraz Rakotonirina, N., Hosu, V., Rasoanaivo, A., Lin, H., Couprie, C., Teytaud, O., - Tarsier: Evolving noise injection in super-resolution GANs, 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, January 2021.
  • Burger, R., - Modelling, analysis and comparison of optimal and empirical pacing strategies in road cycling, University of Konstanz, 2021.

2020

  • Roziere, B., Teytaud, F., Hosu, V., Lin, H., Rapin, J., Zameshina, M., Teytaud, O., - EvolGAN: Evolutionary generative adversarial networks, Proceedings of the Asian Conference on Computer Vision (ACCV), November 2020.
  • Hosu, V., Saupe, D., Goldlücke, B., Lin, W., Cheng, W. H., See, J., Wong, L. K., Guha, T., Kumar, N., Narayanan, S., Somandepalli, K., Martinez, V., Adam, H., McLaughlin, K., - ATQAM/MAST’20: Workshop on aesthetic and technical quality assessment of multimedia and media analytics for societal trends, Proceedings of the 28th ACM International Conference on Multimedia (MM ’20), October 2020.
  • Hosu, V., Saupe, D., Goldlücke, B., Lin, W., Cheng, W. H., See, J., Wong, L. K., - From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential, ATQAM/MAST'20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, October 2020.
  • Men, H., Hosu, V., Lin, H., Bruhn, A., Saupe, D., - Subjective annotation for a frame interpolation benchmark using artefact amplification, Quality and User Experience, September 2020.
  • Zhao, X., Lin, H., Guo, P., Saupe, D., Liu, H., - Deep learning vs. traditional algorithms for saliency prediction of distorted images, IEEE International Conference on Image Processing 2020 (ICIP), September 2020.
  • Siebert, F. W., Lin, H., - Detecting motorcycle helmet use with deep learning, Accident Analysis & Prevention, September 2020.
  • Roziere, B., Carraz Rakotonirina, N., Hosu, V., Lin, H., Rasoanaivo, A., Teytaud, O., Couprie, C., - Evolutionary super-resolution, Genetic and Evolutionary Computation Conference (GECCO), July 2020.
  • Lin, H., Jenadeleh, M., Chen, G., Reips, U., Hamzaoui, R., Saupe, D., - Subjective assessment of global picture-wise just noticeable difference, IEEE International Conference on Multimedia and Expo (ICME), In Proceedings: Workshop Data-driven Just Noticeable Difference for Multimedia Communication, July 2020.
  • Wiedemann, O., Saupe, D., - Gaze data for quality assessment of foveated video, ACM Symposium on Eye Tracking Research and Application (ETRA2020), Workshop on Eye Tracking for Quality of Experience in Multimedia (ET-MM), June 2020.
  • Wiedemann, O., Hosu, V., Lin, H., Saupe, D., - Foveated video coding for real-time streaming applications, International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, May 2020, IEEE Press.
  • Men, H., Hosu, V., Lin, H., Bruhn, A., Saupe, D., - Visual quality assessment for interpolated slow-motion videos based on a novel database, International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, May 2020, IEEE Press.
  • Ha, M. L., Hosu, V., Blanz, V., - Color Composition Similarity and its Application in Fine-Grained Similarity, IEEE Winter Conference on Applications of Computer Vision (WACV), March 2020.
  • Jenadeleh, M., Pedersen, M., Saupe, D., - Blind quality assessment of iris images acquired in visible light for biometric recognition, Sensors, Vol. 20, No. 5, pp. 1308, February 2020.
  • Hosu, V., Lin, H., Sziranyi, T., Saupe, D., - KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment, IEEE Transactions on Image Processing, Vol. 29, pp. 4041-4056, January 2020; available on arXiv:1910.06180 [cs.CV], October 2019.
  • Lin, H., Hosu, V., Saupe, D., - DeepFL-IQA: Weak supervision for deep IQA feature learning, arXiv:2001.08113 [cs.CV], January 2020.
  • Lin, H., Hosu, V., Fan, C., Zhang, Y., Mu, Y., Hamzaoui, R., Saupe, D., - SUR-FeatNet: Predicting the satisfied user ratio curve for image compression with deep feature learning, Quality and User Experience, January 2020.

2019

  • Götz-Hahn, F., Hosu, V., Lin, H., Saupe, D., - No-Reference Video Quality Assessment using Multi-Level Spatially Pooled Features, arXiv:1912.07966 [cs.CV], December 2019.
  • Wagner, M., Lin, H., Li, S., Saupe, D., - Algorithm selection for image quality assessment, arXiv:1908.06911 [cs.CV], August 2019.
  • Fan, C., Lin, H., Hosu, V., Zhang, Y., Jiang, Q., Hamzaoui, R., Saupe, D., - SUR-Net: Predicting the satisfied user ratio curve for image compression with deep learning, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
  • Hosu, V., Goldlücke, B., Saupe, D., - Effective aesthetics prediction with multi-level spatially pooled features, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Press, pp. 9375-9383, L.A., USA, June 2019.
  • Men, H., Lin, H., Hosu, V., Maurer, D., Bruhn, A., Saupe, D., - Visual quality assessment for motion compensated frame interpolation, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
  • Lin, H., Hosu, V., Saupe, D., - KADID-10k: A large-scale artificially distorted IQA database, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
  • Saupe, D., Kaup, A., Ohm, J. (eds.), - 5th ITG/VDE Summer School on Video Compression and Processing (SVCP), Institutional Repository of the University of Konstanz (KOPS), June 2019.
  • Men, H., Lin, H., Hosu, V., Maurer, D., Bruhn, A., Saupe, D. - Technical report on visual quality assessment for frame interpolation, arXiv:1901.05362 [cs.CV], 2019.

2018

  • Spicker, M., Hahn, F., Lindemeier, T., Saupe, D., Deussen, O. - Quantifying visual abstraction quality for computer-generated illustrations, ACM Transactions on Applied Perception (TAP), December 2018, in press.
  • Jenadeleh, M., Pedersen, M., Saupe, D. - Realtime quality assessment of iris biometrics under visible light, IEEE Computer Society Workshop on Biometrics (CVPR), 2018.
  • Varga, D., Sziranyi, T.,  Saupe, D. - DeepRN: A content preserving deep architecture for blind image quality assessment, IEEE International Conference on Multimedia and Expo (ICME), 2018. (method code, reimplemented)
  • Wiedemann, O., Hosu, V., Lin, H., and Saupe D. - Disregarding the big picture: Towards local image quality assessment, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
  • Hosu, V., Lin, H., Saupe, D. - Expertise screening in crowdsourcing image quality, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
  • Men, H., Lin, H., and Saupe D. - Spatiotemporal feature combination model for no-reference video quality assessment, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.

2017

  • Egger-Lampl, S., Redi, J., Hoßfeld, T., Hirth, M., Möller, S., Naderi, B., Keimel, Ch., Saupe, D. - Crowdsourcing quality of experience experiments, Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Springer-Verlag, 2017.
  • Gadiraju, U., Möller, S., Nöllenburg, M., Saupe, D., Egger-Lampl, S., Archambault, D., Fisher, B. - Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd, Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Daniel Archambault, Helen Purchase, Tobias Hossfeld (eds.) , Springer-Verlag, 2017.
  • Spicker, M., Hahn, F., Lindemeier, T., Saupe, D., Deussen, O. - Quantifying visual abstraction quality for stipple drawings, Symposium on Non-Photorealistic Animation and Rendering, Best Paper Award, 2017.
  • Jenadeleh, M., Masaeli, M. M., Moghaddam, M. E. - Blind image quality assessment based on aesthetic and statistical quality-aware features, Journal of Electronic Imaging, 2017.
  • Hosu, V., Hahn, F., Jenadeleh, M., Lin, H., Men, H., Szirányi, T., Li, S., Saupe, D. - The Konstanz natural video database (KoNViD-1k), 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
  • Men, H., Lin, H., Saupe, D. - Empirical evaluation of no-reference VQA methods on a natural video quality database, 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.

2016

  • Hosu, V., Hahn, F., Wiedemann, O., Jung, S.-H., Saupe, D. - Saliency-driven image coding improves overall perceived JPEG quality, IEEE Picture Coding Symposium (PCS), 2016.
  • Hosu, V., Hahn, F., Zingman, I., Saupe, D. - Reported attention as a promising alternative to gaze in IQA tasks, 5th International Workshop on Perceptual Quality of Systems (PQS), 2016.
  • Saupe, D., Hahn, F., Hosu, V., Zingman, I., Rana, R., Li, S. - Crowd workers proven useful: A comparative study of subjective video quality assessment, Eighth International Workshop on Quality of Multimedia Experience (QoMEX), 2016.
  • Zingman, I., Saupe, D., Penatti, O., Lambers, K. - Detection of fragmented rectangular enclosures in very high resolution remote sensing images, IEEE Transactions on Geoscience and Remote Sensing (IEEE), 2016.

2015

  • Zingman, I., Saupe, D., Lambers, K. - Detection of incomplete enclosures of rectangular shape in remotely sensed images, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 

Project Members

Researchers

  • Prof. Dr. Dietmar Saupe
  • Dr. Vlad Hosu
  • Dr. Mohsen Jenadeleh
  • Oliver Wiedemann

Collaborators

  • Prof. Dr. Raouf Hamzaoui
  • Prof. Dr. Shujun Li
  • Dr. Hantao Liu
  • Prof. Dr. Tamás Szirányi

Former Members

  • Prof. Dr. Sung-Hwan Jung
  • M.Sc. Masud Rana
  • Dr. Igor Zingman
  • M.Sc. Franz Hahn
  • Dr. Hanhe Lin
  • M.Sc. Hui Men