Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels

ICML 2024
1Nanyang Technological University 2Shanghai Jiao Tong University 3Sensetime Research
*Equal Contribution. +Project Lead. #Corresponding Author(s).

Abstract

The explosion of visual content available online underscores the need for an accurate machine assessor that can robustly evaluate scores across diverse types of visual content. While recent studies have demonstrated the exceptional potential of large multi-modality models (LMMs) in a wide range of related fields, in this work we explore how to teach them visual rating aligned with human opinions. Observing that human raters learn and judge only discrete text-defined levels in subjective studies, we propose to emulate this subjective process and teach LMMs with text-defined rating levels instead of scores. The proposed Q-Align achieves state-of-the-art performance on image quality assessment (IQA), image aesthetic assessment (IAA), and video quality assessment (VQA) tasks under the original LMM structure. With this syllabus, we further unify the three tasks into one model, termed OneAlign. In our experiments, we demonstrate the advantage of the discrete-level-based syllabus over direct-score-based variants for LMMs.

Technical Report

Overview

Based on the general principle of teaching LMMs with text-defined rating levels, we generate instruction-response pairs by converting the existing score labels in image quality assessment (IQA), image aesthetic assessment (IAA), and video quality assessment (VQA) datasets. During inference, simulating the process of collecting mean opinion scores (MOS) from annotators, we extract the close-set probabilities of the rating levels and take their weighted average to obtain the LMM-predicted score.
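
To make this concrete, below is a minimal Python sketch of the two conversions. The five level names and their 5-to-1 score mapping follow the paper, as does the equidistant binning of dataset scores into levels on the training side; the function names, tensor values, and score range in the example are illustrative assumptions, not the released implementation.

import torch

LEVELS = ["excellent", "good", "fair", "poor", "bad"]
LEVEL_SCORES = torch.tensor([5.0, 4.0, 3.0, 2.0, 1.0])

def score_to_level(mos: float, lo: float, hi: float) -> str:
    """Training side: map a dataset MOS to one of five text-defined
    levels by splitting the score range [lo, hi] into five equal bins."""
    idx = min(int((mos - lo) / (hi - lo) * 5), 4)  # 0..4, low score -> "bad"
    return LEVELS[4 - idx]

def level_probs_to_score(level_logits: torch.Tensor) -> float:
    """Inference side: a softmax restricted to the five level tokens gives
    the close-set probabilities; their weighted average over the level
    scores is the LMM-predicted score."""
    probs = torch.softmax(level_logits, dim=-1)
    return float((probs * LEVEL_SCORES).sum())

print(score_to_level(72.5, lo=0.0, hi=100.0))  # -> "good" (60-80 bin)
print(level_probs_to_score(torch.tensor([1.2, 2.5, 0.3, -1.0, -2.0])))  # ~4.05

Restricting the softmax to the level tokens, rather than the full vocabulary, mirrors how annotators choose among exactly five options.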

Insight: How Do Humans Rate?

A typical subjective study includes three stages: (1) Training human raters with text-defined rating levels; simulating this, we propose the rating-level-based syllabus for LMMs. (2) Collecting human ratings; raters either choose a level directly (Type 1) or toggle level-guided sliders to score (Type 2), and in neither case do they input a score directly. (3) Converting the initial ratings to MOS via a weighted average; following this stage, we propose probability-based inference for LMMs to predict final scores.
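
For stage (3), here is a tiny worked example of the MOS computation. The rating counts (a hypothetical panel of 20 Type-1 annotators) are made up for illustration; the weighted-average formula is the standard one described above.

# Hypothetical Type-1 rating counts from 20 annotators.
ratings = {"excellent": 4, "good": 9, "fair": 5, "poor": 2, "bad": 0}
scores = {"excellent": 5, "good": 4, "fair": 3, "poor": 2, "bad": 1}

total = sum(ratings.values())
mos = sum(scores[level] * n for level, n in ratings.items()) / total
print(mos)  # (4*5 + 9*4 + 5*3 + 2*2) / 20 = 3.75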

Structure

The model structure of Q-Align reduces each image to 64 tokens through the visual abstractor, effectively unifying images and videos (as sequences of images) under one general structure. This allows streamlined and efficient processing of both static images and dynamic video content within the same framework.
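
A shape-level sketch of this unified pathway follows: a fixed random projection stands in for the visual abstractor, and a video is handled as the concatenation of its frames' tokens. The dimensions, frame count, and module names here are assumptions for illustration, not the released implementation.

import torch

NUM_PATCHES, NUM_QUERIES, DIM = 1024, 64, 4096
# Stand-in for the visual abstractor: patch tokens -> 64 query tokens.
proj = torch.randn(NUM_QUERIES, NUM_PATCHES) / NUM_PATCHES ** 0.5

def abstract(frame_feats: torch.Tensor) -> torch.Tensor:
    """(NUM_PATCHES, DIM) patch features -> (64, DIM) visual tokens."""
    return proj @ frame_feats

image = torch.randn(NUM_PATCHES, DIM)
video = [torch.randn(NUM_PATCHES, DIM) for _ in range(8)]  # 8 frames

image_tokens = abstract(image)                          # (64, DIM)
video_tokens = torch.cat([abstract(f) for f in video])  # (512, DIM)
print(image_tokens.shape, video_tokens.shape)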

BibTeX

@article{wu2023qalign,
  title={Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels},
  author={Wu, Haoning and Zhang, Zicheng and Zhang, Weixia and Chen, Chaofeng and Li, Chunyi and Liao, Liang and Wang, Annan and Zhang, Erli and Sun, Wenxiu and Yan, Qiong and Min, Xiongkuo and Zhai, Guangtao and Lin, Weisi},
  journal={arXiv preprint arXiv:2312.17090},
  year={2023}
}