IS&T | Library

Pages 107-1 - 107-5, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

There are currently no standards for the characterization and calibration of the cameras used on unmanned aerial systems (UASs). Without such standards, the color information in the images captured with these devices is not meaningful. By providing standard color calibration targets, code, and procedures, users will be able to obtain color images that provide valuable information for agriculture, infrastructure, water quality, and even cultural heritage applications. The objective of this project is to develop the test targets and methodology for color calibrating unmanned aerial vehicle cameras. We are working to develop application-specific color targets, the necessary code, and a qualitative procedure for conducting UAS camera calibration. To generate the color targets, we will follow the approaches used in the development of ISO 17321-1 (Graphic technology and photography — Colour characterisation of digital still cameras (DSCs) — Part 1: Stimuli, metrology and test procedures), as well as research evaluating application-specific camera targets. This report reviews why a new industry standard is needed and the questions that must be addressed in developing one.
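
As a rough illustration of the calibration step described above, here is a minimal sketch assuming linear RGB values averaged over each captured target patch and reference XYZ measurements of the same patches; the function names and array shapes are hypothetical, not the project's actual code:

```python
import numpy as np

def fit_color_matrix(camera_rgb: np.ndarray, reference_xyz: np.ndarray) -> np.ndarray:
    """Least-squares fit of a 3x3 matrix M such that camera_rgb @ M ~= reference_xyz.

    camera_rgb   : (N, 3) mean linear RGB of each calibration-target patch
    reference_xyz: (N, 3) measured CIE XYZ of the same patches
    """
    M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M

def apply_color_matrix(image_rgb: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the fitted matrix to an (H, W, 3) linear image."""
    h, w, _ = image_rgb.shape
    return (image_rgb.reshape(-1, 3) @ M).reshape(h, w, 3)
```

Application-specific targets would change the patch set (for instance, vegetation colors for agriculture) but not this fitting step.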

Digital Library: EI
Published Online: January 2018

Pages 108-1 - 108-6, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

The framework for this work is the acquisition of infrared (IR) images from unmanned aerial vehicles (UAVs). In this paper we consider the no-reference (NR) prediction of full-reference (FR) quality metrics for IR video sequences that have been compressed, and thus distorted, by an H.264 codec. The proposed method is a bitstream-based (BB) approach and may therefore be applied on the ground. Three types of features are first computed: codec features (based on information extracted from the bitstream), image quality features (based on BRISQUE evaluations), and spatial and temporal perceptual information. These features are then mapped to the scores of FR quality metrics using a machine learning (ML) algorithm, Support Vector Regression (SVR). The novelty of this work is the design of an NR framework for predicting quality metrics by applying an ML algorithm in the IR domain. Five drone energy-leakage image sequences and three ground IR image sequences are used to evaluate the performance of the proposed method. Each sequence is encoded at four different bitrates, and the predictions are compared with the true FR scores of four image metrics (PSNR, NQM, SSIM, and UQI) and one video metric (VQM). Results show that the technique performs well: SROCC and LCC between the actual and estimated quality scores reach up to 0.99, and the RMSE is as low as 0.01 for the H.264-coded IR sequences.
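
A minimal sketch of the mapping stage using scikit-learn's SVR; the feature extraction itself (bitstream parsing, BRISQUE, and spatial/temporal information) is omitted, and the file names are placeholders rather than the paper's materials:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# features: (N, d) codec + BRISQUE + spatial/temporal features, one row per
# encoded sequence; vqm_scores: (N,) true FR metric scores (here, VQM).
features = np.load("train_features.npy")
vqm_scores = np.load("train_vqm.npy")

# Scale the features, then regress the FR scores with an RBF-kernel SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(features, vqm_scores)

# No-reference prediction: estimate the FR metric without the pristine video.
predicted_vqm = model.predict(np.load("test_features.npy"))
```

One such regressor would be trained per target metric (PSNR, NQM, SSIM, UQI, VQM).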

Digital Library: EI
Published Online: January 2018

Pages 169-1 - 169-6, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

The dynamic range (DR, defined as the range of exposure between saturation and 0 dB SNR) of recent High Dynamic Range (HDR) image sensors can be extremely high: 120 dB or more. But the dynamic range of real imaging systems that include lenses is limited by veiling glare (susceptibility to flare light from reflections inside the lens) and hence rarely approaches this level. Standard veiling glare measurements such as ISO 18844, made from charts with black cavities on white fields, yield numbers (expressed as a percentage of the pixel level in nearby light areas) that are much worse than expected from actual camera dynamic range. Camera dynamic range is typically measured from grayscale charts and is strongly affected by veiling glare, which is a function of the lens, the chart design, and the surrounding field. Many HDR systems employ tone mapping, which enables HDR scenes to be rendered on displays with limited dynamic range by compressing (flattening) tones over large areas while attempting to maintain local contrast in small areas. Measurements of tone-mapped images from standard grayscale charts often show low contrast over a wide tonal range and give no indication of local contrast, which is especially important for the automotive and security industries, where lighting is uncontrolled and the visibility of low-contrast features in shadow regions is critical. We discuss the interaction between veiling glare and dynamic range measurements, and we propose a new transmissive test chart and a dynamic range definition that directly indicates the visibility of low-contrast features over a wide range of scene brightness.
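
To make the definition concrete, here is a small sketch under the abstract's DR definition (exposure range between saturation and the darkest patch still reaching 0 dB SNR), with hypothetical per-patch chart measurements:

```python
import numpy as np

def dynamic_range(mean_signal: np.ndarray, noise_sigma: np.ndarray,
                  sat_level: float) -> tuple:
    """Return (DR in dB, DR in f-stops) from per-patch chart measurements.

    mean_signal: per-patch mean pixel level (linear units)
    noise_sigma: temporal noise per patch. Veiling glare lifts the floor of
    the dark patches, which is why lens flare, not the sensor, usually caps
    the measured DR.
    """
    snr_db = 20.0 * np.log10(mean_signal / noise_sigma)
    usable = mean_signal[snr_db >= 0.0]          # patches at or above 0 dB SNR
    dr_db = 20.0 * np.log10(sat_level / usable.min())
    return dr_db, dr_db / 6.02                   # ~6.02 dB per f-stop
```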

Digital Library: EI
Published Online: January 2018

Pages 170-1 - 170-10, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

Today, most advanced mobile phone cameras integrate multi-image technologies such as high dynamic range (HDR) imaging. The objective of HDR imaging is to overcome some of the limitations imposed by sensor physics, which restrict the performance of the small camera sensors used in mobile phones compared to the larger sensors used in digital single-lens reflex (DSLR) cameras. In this context, it becomes increasingly important to establish new image quality measurement protocols and test scenes that can differentiate the image quality performance of these devices. In this work, we describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras built on different technologies and contrasted the results with subjective evaluations.
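
As one illustration of these attributes, here is a sketch of a local contrast measure (local RMS contrast over a sliding window, compared between the camera output and a reference rendering); this is a simplified formulation of my own, not the authors' protocol:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_rms_contrast(luma: np.ndarray, size: int = 15) -> np.ndarray:
    """Local standard deviation divided by local mean, per pixel."""
    mean = uniform_filter(luma, size)
    sq_mean = uniform_filter(luma * luma, size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return std / np.maximum(mean, 1e-6)

def contrast_preservation(output_luma, reference_luma, region_mask):
    """Median ratio of local contrast (output vs. reference) in one region,
    evaluated separately for bright and dark parts of the scene."""
    c_out = local_rms_contrast(output_luma)[region_mask]
    c_ref = local_rms_contrast(reference_luma)[region_mask]
    return float(np.median(c_out / np.maximum(c_ref, 1e-6)))
```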

Digital Library: EI
Published Online: January 2018

Pages 171-1 - 171-5, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

A frequently used method for evaluating camera imaging performance is based on the ISO standard for resolution and spatial frequency response (SFR). This standard, ISO 12233, defines a method based on a straight-edge element in a test chart. While the method works as intended, results can be influenced by lens distortion, which introduces curvature into the captured edge feature. We interpret this as a bias (error) in the measurement and describe a method to reduce or eliminate its effect, using a polynomial edge-fitting method currently being considered for a revised ISO 12233. Evaluation of image distortion itself is addressed in two more recent standards, ISO 17850 and ISO 19084; applying those methods alongside the SFR analysis complements the approach discussed here.
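
A sketch of the polynomial edge-fitting idea (an illustration of the approach, not the draft standard's reference code): fit the per-row edge centroids with a low-order polynomial instead of a straight line, so that curvature introduced by lens distortion no longer biases the SFR.

```python
import numpy as np

def fit_edge_polynomial(rows: np.ndarray, centroids: np.ndarray,
                        order: int = 3) -> np.ndarray:
    """Fit the edge position x(y) with a polynomial rather than a line.

    rows     : row indices within the slanted-edge region of interest
    centroids: estimated edge-crossing x position in each row
    Pixel distances from the returned curve (instead of from a straight line)
    are then binned to build the supersampled edge spread function.
    """
    coeffs = np.polyfit(rows, centroids, order)
    return np.polyval(coeffs, rows)
```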

Digital Library: EI
Published Online: January 2018

Pages 231-1 - 231-6, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

Imaging system performance measures and Image Quality Metrics (IQM) are reviewed from a systems engineering perspective, focusing on the spatial quality of still image capture systems. We classify IQMs broadly as Computational IQMs (CP-IQM), Multivariate Formalism IQMs (MF-IQM), Image Fidelity Metrics (IF-IQM), and Signal Transfer Visual IQMs (STV-IQM). Comparison of each genre finds STV-IQMs well suited to capture system quality evaluation: they incorporate performance measures relevant to optical systems design, such as the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS), and their bottom-up, modular approach enables system components to be optimized separately. We suggest that correlation between STV-IQMs and observer quality scores is limited by three factors: current MTF and NPS measures do not characterize the scene-dependent performance introduced by imaging system non-linearities; the contrast sensitivity models employed do not account for contextual masking effects; and cognitive factors are not considered. We hypothesize that implementing scene- and process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures should mitigate errors originating from scene-dependent system performance. Further, we propose implementing contextual contrast detection and discrimination models to better represent low-level visual performance in image quality analysis. Finally, we discuss image quality optimization functions that may close the gap between contrast detection/discrimination and quality.
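
To give a flavor of the STV-IQM genre, here is a sketch of a CSF-weighted MTF integral in the spirit of acutance/SQF-type metrics, using the classic Mannos and Sakrison CSF approximation; this is a simplification, not the paper's contextual model:

```python
import numpy as np

def csf_mannos_sakrison(f: np.ndarray) -> np.ndarray:
    """Classic contrast sensitivity approximation; f in cycles per degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_acutance(freqs: np.ndarray, mtf: np.ndarray) -> float:
    """Integrate the system MTF weighted by the CSF, normalized so that an
    ideal system (MTF = 1 everywhere) scores 1.0."""
    w = csf_mannos_sakrison(freqs)
    return float(np.trapz(mtf * w, freqs) / np.trapz(w, freqs))
```

Scene- and process-dependent variants would substitute SPD-MTF/SPD-NPS measurements for the conventional MTF here.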

Digital Library: EI
Published Online: January 2018

Pages 232-1 - 232-5, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

Color imaging is such a ubiquitous capability in daily life that a general preference for color over black-and-white images is often simply assumed. However, tactical reconnaissance applications that involve visual detection and identification have historically relied on spatial information alone. In addition, real-time transmission over narrow communication channels often restricts the amount of image data, requiring tradeoffs between spectral and spatial content. For these reasons, an assessment of the discrimination differences between color and monochrome systems is of significant interest for optimizing the visual detection and identification of objects of interest. We demonstrate the visual image "utility" difference provided by color systems through a series of subjective experiments that pair spatially degraded color images with a reference monochrome sample. The quality comparisons show a performance improvement in intelligence value equivalent to a spatial improvement of about a factor of two (approximately 1.0 NIIRS). Observers were also asked to perform specific detection tasks with both types of systems, and their performance and confidence were measured. On average, a 25 percent improvement in accuracy and a corresponding 30 percent improvement in confidence were measured for the color presentation vs. the same image presented in black-and-white (monochrome).
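
For context on the reported equivalence, a back-of-envelope check (my own, assuming the GIQE-style slope of about 3.32 NIIRS per decade of ground sample distance):

```python
import math

# One factor-of-two improvement in ground sample distance (GSD) gives
# 3.32 * log10(2) ~ 1.0 NIIRS, consistent with the "factor of two
# (approximately 1.0 NIIRS)" spatial equivalence reported for color.
delta_niirs = 3.32 * math.log10(2.0)
print(f"NIIRS gain per 2x GSD improvement: {delta_niirs:.2f}")
```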

Digital Library: EI
Published Online: January 2018

Pages 233-1 - 233-7, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

In this work, we present the results of a psychophysical experiment in which a group of volunteers rated the quality of a set of audio-visual sequences. The sequences had up to three types of distortions: video coding, packet loss, and frame freezing. The original content used for the experiment consisted of a set of high-definition audio-visual sequences. Impairments were inserted only into the video component of the sequences, while the audio component remained unimpaired. The objective of this particular experiment was to analyze different types of source degradations and compare the transmission scenarios in which they occur. Given the nature of these degradations, the analysis focuses on the visual component of the sequence. The experiment was conducted following the basic guidelines of the immersive experimental methodology.

Digital Library: EI
Published Online: January 2018

Pages 234-1 - 234-6, © Society for Imaging Science and Technology 2018, Volume 30, Issue 12

In this paper, we report the results of a set of psychophysical experiments that measure the perceptual strengths of videos with different combinations of blockiness, blurriness, and packet-loss artifacts, along with the overall annoyance. Participants were instructed to search each video for impairments and rate the strength of the individual features (artifacts). A repeated-measures ANOVA (RM-ANOVA) performed on the data showed that artifact physical strengths have a significant effect on annoyance judgments. We tested a set of linear models on the experimental data and found that all of these models give a good description of the relation between individual artifact perceptual strengths and overall annoyance. In other words, all models showed a very good correlation with the experimental data, indicating that annoyance can be modeled as a multidimensional function of the individual artifact perceptual strengths. Additionally, the results show that there are interactions among artifact signals.
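
A sketch of the kind of linear model tested, with illustrative terms rather than the paper's fitted coefficients: overall annoyance as a weighted combination of the three artifact perceptual strengths, plus optional pairwise interaction terms to capture the reported interactions among artifact signals.

```python
import numpy as np

def design_matrix(strengths: np.ndarray) -> np.ndarray:
    """strengths: (N, 3) perceptual strengths of blockiness, blur, packet loss."""
    b, z, p = strengths.T
    return np.column_stack([b, z, p, b * z, b * p, z * p, np.ones(len(strengths))])

def fit_annoyance(strengths: np.ndarray, annoyance: np.ndarray) -> np.ndarray:
    """Least-squares weights for annoyance ~ main effects + interactions."""
    X = design_matrix(strengths)
    w, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
    return w
```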

Digital Library: EI
Published Online: January 2018
