

High Visual Computing 2019

The sixth meeting of the Czech and Slovak computer graphics and computer vision people in the mountains. The event will take place January 30 - February 2, 2019 in Bohemian Forest (Šumava) in a nice hotel:

http://www.zadovhotel.cz/informace-o-hotelu/ (see the map).

The goal of the meeting is to encourage exchange of ideas between researchers and practitioners in the fields of computer graphics and computer vision from across the Czech and Slovak Republics and beyond. There will be many (optional) outdoor activities possible during the day and fruitful talks, discussions and socializing in the afternoons and evenings.

The expected price is around 100 EUR per person (3 nights, 3x half board). Please let us know should you be interested in attending the event (please also provide your talk proposal; deadline: November 12, 2018).


Frequently Asked Questions:

Q: How do I register?
A: Send an e-mail to Martin and Jarda stating that you would like to participate. In your e-mail, include the title and a short abstract of your offered 20-minute talk. We will confirm the registration by replying to your email (yes, it is this personal). Sending your abstract does not necessarily mean you will be selected to present, but the title and abstract are nevertheless a strict requirement for registration, intended to keep the standard of the presented material consistently high.

Q: Do I need to look for accommodation?
A: Accommodation is taken care of for you - everyone is staying in the same hotel, which we have pre-booked. You will pay on the spot after you arrive. Should you have any special requirements concerning accommodation or food, please talk to Radka Hacklova.

Q: What is the conference fee and how do I pay it?
A: There is no conference fee per se. All you need to do is pay for your food and accommodation on the spot at the hotel, which we expect to be around 100 EUR per person for the three nights. We will not be collecting your money.

Q: Do I need to take care of my travel arrangements?
A: Yes, travel is up to you. We will send an info email about this to all the registered participants in due time.

Q: Summing it all up, what’s the timeline between now and the conference?
A: Easy. You send us your abstract before Nov 12th. We confirm your registration. By the end of November, we will assemble the program from the offered talks and post it online. At the beginning of December, we will send an email with practical information to all registered participants. You will have almost two months to arrange your travel. You are expected to arrive at the hotel by the afternoon of January 30th; we'll have dinner at 6 pm and the conference program will start at 7 pm.


Programme:

30.1.2019 Wednesday:

10:00 - 17:00
optional socializing outdoors
18:00 - 19:00
dinner
19:00 - 19:10
welcome
19:10 - 20:00
invited talk 1: Vincent Lepetit, University of Bordeaux, France: 3D Rigid and Articulated Object Registration for Robotics and Augmented Reality
20:15 - 21:30
talks 1 - 3:
21:30 - 02:00
socializing indoors

31.1.2019 Thursday:

7:30 - 9:30
breakfast
10:00 - 17:00
socializing outdoors
17:00 - 17:50
talks 4 - 5:
  • Jiří Vorba: Rendering VFX with Manuka at Weta Digital
  • Filip Škola: Virtual reality environments for motor imagery brain-computer interface training facilitation
18:00 - 19:10
dinner
19:10 - 20:00
invited talk 2: Thorsten Herfet, Saarland University, Germany: Enabling Multiview- and Light Field-Video for Veridical Visual Experiences
20:15 - 21:30
talks 6 - 7:
21:30 - 02:00
socializing indoors

1.2.2019 Friday:

7:30 - 9:30
breakfast
10:00 - 17:00
socializing outdoors
17:00 - 17:50
talks 8 - 9:
18:00 - 19:10
dinner
19:10 - 20:00
invited talk 3: Wenzel Jakob, School of Computer and Communication Sciences, EPFL, Switzerland: Capturing and rendering the world of materials
20:15 - 21:30
talks 10 - 12:
  • Vlastimil Havran: On the Advancement of BTF Measurement on Site
  • Dan Meister: Parallel Locally-Ordered Clustering for Bounding Volume Hierarchy Construction
  • Asen Atanasov: Adaptive Environment Sampling on CPU and GPU
21:30 - 02:00
socializing indoors

Invited Speakers:

Speaker 1: Vincent Lepetit, University of Bordeaux, France

Vincent Lepetit

Dr. Vincent Lepetit is a Full Professor at the LaBRI, University of Bordeaux. He also supervises a research group in Computer Vision for Augmented Reality at the Institute for Computer Graphics and Vision, TU Graz. He received the PhD degree in Computer Vision in 2001 from the University of Nancy, France, after working in the ISA INRIA team. He then joined the Virtual Reality Lab at EPFL as a post-doctoral fellow and became a founding member of the Computer Vision Laboratory. He became a Professor at TU Graz in February 2014, and at the University of Bordeaux in January 2017. His research is at the interface between Machine Learning and 3D Computer Vision, with applications to 3D hand pose estimation, feature point detection and description, and 3D object and camera registration from images. In particular, he introduced with his colleagues methods such as Ferns, BRIEF, LINE-MOD, and DeepPrior for feature point matching and 3D object recognition.

3D Rigid and Articulated Object Registration for Robotics and Augmented Reality

I will present our approach to 3D registration of rigid and articulated objects from monocular color images or depth maps. We first introduce a "holistic" approach that relies on a representation of a 3D pose suitable to Deep Networks and on a feedback loop. We also show how to tackle the domain gap between real images and synthetic images, in order to use synthetic images to train our models. Finally, I will present our recent extension to deal with large partial occlusions.




Speaker 2: Thorsten Herfet, Saarland University, Germany

Thorsten Herfet

Prof. Dr.-Ing. Thorsten Herfet is a full university professor at the Saarland Informatics Campus, Germany. Prior to his appointment (2004), he was VP Advanced Research with Grundig AG and Manager CE Standards and Regulation EMEA with Intel Corp. Thorsten received his diploma in electrical engineering and his Ph.D. in digital image processing and transmission from Technical University Dortmund in 1988 and 1991, respectively. In his industrial and academic career, Thorsten has published more than 150 papers and articles, holds more than 15 patents, and has led large-scale collaborative research projects with a total volume of several tens of millions of euros, funded by the German National Science Foundation, the Ministry of Research and Education, and the European Commission under FP7 and H2020. He served as the Dean for Mathematics and Computer Science 2006-2008, as the University's Vice President Research and Technology Transfer 2014-2017, and as the Director of Research and Operations of the Intel Visual Computing Institute 2009-2017. Thorsten's research is focused on Cyber-Physical Networking, Latency and Resilience-Aware Streaming, Computational Videography, and High Mobility in Multicarrier Systems.

Enabling Multiview- and Light Field-Video for Veridical Visual Experiences

With the advent of UHDTV and the inclusion of High Dynamic Range, High Frame Rate, and Extended Color Gamut, 2D imagery is able to push technical parameters up to the limits of the human visual sense. Consequently, developments in sensor technology can be used to capture information beyond 2D imagery. In this talk we introduce multiview and light field video as an option to capture (at least parts of) the plenoptic function and thereby drive veridical visual experiences. Our contribution is on tools for capturing and encoding so-called 5D light fields. We have built a multi-camera array producing up to 6 GigaRays/s and a real-time hierarchical H.264 MVC encoder that enables encoding the light fields in the form of a legacy-compliant video stream.




Speaker 3: Wenzel Jakob, School of Computer and Communication Sciences, EPFL, Switzerland

Wenzel Jakob

Wenzel Jakob is an assistant professor at EPFL’s School of Computer and Communication Sciences, where he leads the Realistic Graphics Lab. His research interests revolve around material appearance modeling, rendering algorithms, and the high-dimensional geometry of light paths. Wenzel Jakob is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of “Physically Based Rendering: From Theory To Implementation”.

Capturing and rendering the world of materials

One of the key ingredients of any realistic rendering system is a description of the way in which light interacts with objects, typically modeled via the Bidirectional Reflectance Distribution Function (BRDF). Unfortunately, real-world BRDF data remains extremely scarce due to the difficulty of acquiring it: a BRDF measurement requires scanning a four-dimensional domain at high resolution, an infeasibly time-consuming process.

In this talk, I'll showcase our ongoing work on assembling a large library of materials including metals, fabrics, organic substances like wood or plant leaves, etc. The key idea to work around the curse of dimensionality is an adaptive parameterization, which automatically warps the 4D space so that most of the volume maps to “interesting” regions. Starting with a review of BRDF models and microfacet theory, I'll explain the new model, as well as the optical measurement apparatus that we used to conduct the measurements.




Important Dates:

  • Deadline for talk proposals: November 12, 2018
  • Meeting: January 30 - February 2, 2019

Venue:

Hotel Zadov, Czech Republic: http://www.zadovhotel.cz/

Programme and Organization Committee:

Martin Čadík, Jaroslav Křivánek

Duties: scientific program, selection of beer and everything else.

Sponsoring:

FIT VUT Brno