JVRB Volume 9 (2012)
- CVMP 2009
-
-
2013-06-28
Constructing And Rendering Vectorised Photographic Images
We address the problem of representing captured images in the continuous mathematical space more usually associated with certain forms of drawn ('vector') images. Such an image is resolution-independent and can therefore be used as a master for varying resolution-specific formats. We briefly describe the main features of a vectorising codec for photographic images, whose significance is that drawing programs can access images and image components as first-class vector objects. This paper focuses on the problem of rendering from the isochromic contour form of a vectorised image and demonstrates a new fill algorithm which could also be used in drawing generally. The fill method is described in terms of level set diffusion equations for clarity. Finally we show that image warping is both simplified and enhanced in the vector form, and that real histogram equalisation, with genuinely rectangular histograms, can be demonstrated straightforwardly.
JVRB, 9(2012), no. 3.
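As a point of reference for the fill method mentioned in the abstract above, the generic level-set evolution equation (standard material, not taken from the paper) propagates a fill front implicitly as the zero level set of a function \(\phi\):

\[ \frac{\partial \phi}{\partial t} + F\,|\nabla \phi| = 0, \]

where \(F\) is the local speed of the advancing front; a diffusion-driven fill would derive \(F\) from local image or contour data.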
-
2013-06-27
A multi-modal approach to perceptual tone mapping
We present an improvement of TSTM, a recently proposed tone mapping operator for High Dynamic Range (HDR) images, based on a multi-modal analysis. One of the key features of TSTM is a suitable implementation of the Naka-Rushton equation that mimics the visual adaptation performed by the human visual system, consistent with the Weber-Fechner law of contrast perception. In the present paper we use a Gaussian Mixture Model (GMM) to detect the modes of the log-scale luminance histogram of a given HDR image, and we then use the information provided by the GMM to devise a Naka-Rushton equation for each mode. Finally, we select the parameters so that these equations merge into a continuous function. Tests and comparisons showing how the new method improves on TSTM are provided and discussed, along with comparisons against state-of-the-art methods.
JVRB, 9(2012), no. 7.
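The Python sketch below illustrates the general pipeline described in the abstract above: fit a GMM to the log-luminance and blend per-mode Naka-Rushton curves. It is not the authors' implementation, and the parameter choices (exponent of 1, semi-saturation constants taken from the GMM means) are assumptions made for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def naka_rushton(L, sigma, n=1.0):
    # Naka-Rushton response: compresses luminance L into [0, 1).
    Ln = np.power(L, n)
    return Ln / (Ln + sigma ** n)

def tone_map(luminance, n_modes=2):
    # Fit a GMM to the log-luminance values to find the histogram modes.
    log_lum = np.log(luminance + 1e-6).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_modes, random_state=0).fit(log_lum)
    weights = gmm.predict_proba(log_lum)        # soft mode membership
    sigmas = np.exp(gmm.means_.ravel())         # one semi-saturation per mode (assumed choice)
    # One Naka-Rushton curve per mode, blended by the soft memberships
    # so that the overall mapping stays continuous.
    curves = np.stack([naka_rushton(luminance.ravel(), s) for s in sigmas], axis=1)
    return (weights * curves).sum(axis=1).reshape(luminance.shape)
```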
-
2013-04-25
Sharpness Matching in Stereo Images
When stereo images are captured under less than ideal conditions, there may be inconsistencies between the two images in brightness, contrast, blurring, etc. When stereo matching is performed between the images, these variations can greatly reduce the quality of the resulting depth map. In this paper we propose a method for correcting sharpness variations in stereo image pairs which is performed as a pre-processing step to stereo matching. Our method is based on scaling the 2D discrete cosine transform (DCT) coefficients of both images so that the two images have the same amount of energy in each of a set of frequency bands. Experiments show that applying the proposed correction method can greatly improve the disparity map quality when one image in a stereo pair is more blurred than the other.
JVRB, 9(2012), no. 4.
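The following Python sketch conveys the band-energy idea from the abstract above; it is not the authors' code. For simplicity it rescales only the blurrier image to match the sharper one, and the radial band partition is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def match_sharpness(blurry, sharp, n_bands=8):
    # 2D DCT of both grayscale images.
    B = dctn(blurry, norm="ortho")
    S = dctn(sharp, norm="ortho")
    h, w = B.shape
    # Normalised radial frequency of each coefficient, binned into bands.
    fy, fx = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing="ij")
    bands = np.minimum((np.hypot(fy, fx) * n_bands).astype(int), n_bands - 1)
    for b in range(n_bands):
        mask = bands == b
        e_blurry = np.sum(B[mask] ** 2)
        e_sharp = np.sum(S[mask] ** 2)
        if e_blurry > 0:
            B[mask] *= np.sqrt(e_sharp / e_blurry)   # equalise band energy
    return idctn(B, norm="ortho")
```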
-
2012-02-22
Cosine Lobe Based Relighting from Gradient Illumination Photographs
We present an image-based method for relighting a scene by analytically fitting cosine lobes to the reflectance function at each pixel, based on gradient illumination photographs. Realistic relighting results for many materials are obtained using a single per-pixel cosine lobe obtained from just two color photographs: one under uniform white illumination and the other under colored gradient illumination. For materials with wavelength-dependent scattering, a better fit can be obtained using independent cosine lobes for the red, green, and blue channels, obtained from three achromatic gradient illumination conditions instead of the colored gradient condition. We explore two cosine lobe reflectance functions, both of which allow an analytic fit to the gradient conditions. One is non-zero over half the sphere of lighting directions, which works well for diffuse and specular materials, but fails for materials with broader scattering such as fur. The other is non-zero everywhere, which works well for broadly scattering materials and still produces visually plausible results for diffuse and specular materials. We also perform an approximate diffuse/specular separation of the reflectance, and estimate scene geometry from the recovered photometric normals to produce hard shadows cast by the geometry, while still reconstructing the input photographs exactly.
JVRB, 9(2012), no. 2.
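For illustration only, two lobe shapes that match the description above could be written as (the paper's exact parameterisations may differ):

\[ R_{\text{half}}(\omega) = \alpha \,\max(0,\, \omega \cdot \mu)^{n}, \qquad R_{\text{full}}(\omega) = \alpha \left( \frac{1 + \omega \cdot \mu}{2} \right)^{n}, \]

where \(\mu\) is the lobe axis: the first form vanishes on half the sphere of lighting directions, while the second is non-zero everywhere.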
-
- GI VR/AR 2009
-
-
2012-02-10
XSAMPL3D: An Action Description Language for the Animation of Virtual Characters
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the objects involved. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
JVRB, 9(2012), no. 1.
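To make the idea of a compact, human-readable action description concrete, the Python snippet below emits a small XML fragment; the element and attribute names are invented for illustration and are not the actual XSAMPL3D schema.

```python
# Hypothetical illustration only: the tags below are NOT the real
# XSAMPL3D schema, just a sketch of a high-level action description.
import xml.etree.ElementTree as ET

action = ET.Element("action", type="grasp", actor="virtualHuman1")
ET.SubElement(action, "object", name="cup")
ET.SubElement(action, "handShape", value="cylindricalGrip")
print(ET.tostring(action, encoding="unicode"))
# e.g. <action type="grasp" actor="virtualHuman1"><object name="cup" />...
```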
-
- GI VR/AR 2010
-
-
2012-11-28
OCTAVIS: Optimization Techniques for Multi-GPU Multi-View Rendering
We present a high-performance yet low-cost system for multi-view rendering in virtual reality (VR) applications. In contrast to complex CAVE installations, which are typically driven by one render client per view, we arrange eight displays in an octagon around the viewer to provide a full 360° projection, and we drive these eight displays by a single PC equipped with multiple graphics processing units (GPUs). In this paper we describe the hardware and software setup, as well as the low-level and high-level optimizations necessary to fully exploit the parallelism of this multi-GPU multi-view VR system.
JVRB, 9(2012), no. 6.
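As a back-of-the-envelope illustration of the octagonal layout described above (not the OCTAVIS code), eight view directions spaced 45° apart cover the full 360° panorama; the Python sketch below lists them and assigns each to a GPU round-robin, with the GPU count being an assumed example value.

```python
# Illustrative sketch only: eight 45-degree-spaced view directions of an
# octagonal display ring, assigned to GPUs round-robin.
import numpy as np

N_VIEWS, N_GPUS = 8, 4          # N_GPUS is an assumed example value
for view in range(N_VIEWS):
    yaw = np.deg2rad(view * 360.0 / N_VIEWS)          # 45-degree steps
    direction = np.array([np.sin(yaw), 0.0, -np.cos(yaw)])
    gpu = view % N_GPUS                               # round-robin assignment
    print(f"view {view}: yaw {np.rad2deg(yaw):5.1f} deg, "
          f"dir {np.round(direction, 3)}, GPU {gpu}")
```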
-
- CVMP 2010
-
-
2013-07-23
Virtual camera synthesis for soccer game replays
In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used for the synthesis of virtual views. The system is suitable both for static scenes, to create bullet time effects, and for video applications, where the virtual camera moves as the game plays.
JVRB, 9(2012), no. 5.
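As general background for the multi-camera setup above (standard multi-view geometry, not specific to this system), each calibrated camera projects a homogeneous 3D point \(X\) to an image point \(x\) via

\[ x \simeq K\,[R \mid t]\,X, \]

and a virtual view is synthesised by rendering the reconstructed player approximation through a new, freely chosen \(K\,[R \mid t]\).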
-
2012-12-28
High Resolution Image Correspondences for Video Post-Production
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, and we further apply geodesic matting to automatically determine plausible values in these regions.
JVRB, 9(2012), no. 8.
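For context, Efficient Belief Propagation as referenced above minimises a Markov random field energy over per-pixel labels \(d\) of the standard form

\[ E(d) = \sum_{p} D_p(d_p) + \sum_{(p,q) \in \mathcal{N}} V(d_p, d_q), \]

where \(D_p\) is the per-pixel matching cost and \(V\) penalises differing labels at neighbouring pixels \(p\) and \(q\); the specific data and smoothness terms used in the paper differ in detail.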
-
- GRAPP 2011
-
-
2012-12-31
Head Tracking Based Avatar Control for Virtual Environment Teamwork Training
Virtual environments (VE) are gaining in popularity and are increasingly used for teamwork training purposes, e.g., for medical teams. One shortcoming of modern VEs is that nonverbal communication channels, essential for teamwork, are not well supported. We address this issue by using an inexpensive webcam to track the user's head. This tracking information is used to control the head movement of the user's avatar, thereby conveying head gestures and adding a nonverbal communication channel. We conducted a user study investigating the influence of head tracking based avatar control on the perceived realism of the VE and on the performance of a surgical teamwork training scenario. Our results show that head tracking positively influences the perceived realism of the VE and the communication, but has no major influence on the training outcome.
JVRB, 9(2012), no. 9.
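The abstract above does not give implementation details; the sketch below merely illustrates how an inexpensive webcam and OpenCV's stock face detector could drive an avatar's yaw and pitch, with the mapping and all parameters assumed for illustration.

```python
# Illustrative sketch only, not the study's implementation: estimate a
# coarse head yaw/pitch from the face position in a webcam frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        cx, cy = x + w / 2.0, y + h / 2.0
        # Map the face offset from the frame centre to avatar angles;
        # the +/-30 degree range is an arbitrary illustrative choice.
        yaw = (cx / frame.shape[1] - 0.5) * 60.0
        pitch = (0.5 - cy / frame.shape[0]) * 60.0
        print(f"avatar yaw {yaw:+5.1f} deg, pitch {pitch:+5.1f} deg")
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) == 27:        # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```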
-