
Learning to Predict Localized Distortions in Rendered Images - Supplementary Materials

This supplementary material shows the stimuli, the mean subjective distortion maps for the LOCCG dataset, the distortion maps predicted by the new metric and by state-of-the-art metrics, and the statistics computed for the analysis and evaluation described in the paper. It also shows and describes the input stimuli of the new CLFM dataset. Please use the zoom feature of your web browser for an overview.


Results of the new metric and comparison to the state of the art on the LOCCG (LOCalized Computer Graphics artefacts) dataset

Stimuli of the new CLFM (Contrast-Luminance-Frequency-Masking) dataset

CLFM dataset human responses - subjective annotations of the CLFM dataset

LOCCG visual saliency dataset - visual attention maps acquired experimentally with an eye tracker