High Performance Graphics 2019

Strasbourg | July 8-10, 2019


Monday, July 8

9:30 to 10:30 Registration and Breakfast
10:30 to 10:45 Opening Remarks
10:45 to 12:00 Paper Session 1 – Rendering
Session chair: Attila Afra
HMLFC: Hierarchical Motion-Compensated Light Field Compression for Interactive Rendering
Srihari Pratapa, Dinesh Manocha
slides (pdf) | slides (pptx)
An Analysis of Region Clustered BVH Volume Rendering on GPU
David Ganter, Michael Manzke
slides (pdf) | slides (pptx)
Real-Time Analytic Antialiased Text for 3-D Environments
Apollo Ellis, Warren Hunt, John Hart
slides (pdf)
12:00 to 13:30 Lunch
13:30 to 14:30 Paper Session 2 (short papers) – Ray Tracing: Hardware and Performance
Session chair: Michael Doggett
Mach-RT: A Many Chip Architecture for High-Performance Ray Tracing
Elena Vasiou, Konstantin Shkurko, Erik Brunvand, Cem Yuksel
slides (pdf) | slides (pptx)
RTX Beyond Ray Tracing: Exploring the Use of Hardware Ray Tracing Cores for Tet-Mesh Point Location
Ingo Wald, Will Usher, Nathan Morrical, Laura Lediaev, Valerio Pascucci
slides (pdf)
Wide BVH Traversal with a Short Stack
Karthik Vaidyanathan, Sven Woop, Carsten Benthin
slides (pdf) | slides (pptx)
14:30 to 15:00 Afternoon Break
15:00 to 16:30 Hot3D
Mobile GPU Power and Performance
Andrew Gruber (Qualcomm)
slides (pdf) | slides (pptx)
Open Image Denoise – Open Source Denoising for Ray Tracing
Attila Afra (Intel)
slides (pdf) | slides (pptx)
NVIDIA’s Turing: More Than Ray Tracing and AI
Yury Uralsky (NVIDIA)
slides (pdf)
16:30 to 17:00 Break
17:00 to 18:00 Paper Session 3 (short papers) – Doing more with each ray
Session chair: Tobias Ritschel
Dynamic Many-Light Sampling for Real-Time Ray Tracing
Pierre Moreau, Matt Pharr, Petrik Clarberg
slides (pdf) | slides (pptx)
Stochastic Lightcuts
Cem Yuksel
video (youtube)
Temporally Dense Ray Tracing
Pontus Andersson, Jim Nilsson, Marco Salvi, Josef Spjut, Tomas Akenine-Möller
slides (pdf) | slides (pptx)
19:00 to 22:00 HPG Banquet (Art Cafe)

Tuesday, July 9

8:30 to 9:30 Breakfast
9:30 to 9:40 Posters fast-forward
9:40 to 10:30 Keynote 1
The Story of NVIDIA RTX
Steven Parker
slides (pdf)
10:30 to 11:00 Morning Break
11:00 to 12:00 Paper Session 4 (short papers) – Rasterization Techniques and Ray Tracing Applications
Session chair: Warren Hunt
Patch Textures: Hardware Implementation of Mesh Colors
Ian Mallett, Larry Seiler, Cem Yuksel
slides (pdf) | slides (pptx)
A Practical and Efficient Approach for Correct Z-Pass Stencil Shadow Volumes
Baran Usta, Leonardo Scandolo, Markus Billeter, Ricardo Marroquim, Elmar Eisemann
slides (pdf) | slides (pptx)
Real-Time Ray Tracing on Head-Mounted-Displays for Advanced Visualization of Sheet Metal Stamping Defects
Andreas Dietrich, Jan Wurster, Eric Kam, Thomas Gierlinger
slides (pdf)
12:00 to 13:30 Lunch
13:30 to 14:45 Paper Session 5 – Simulation and Optimization
Session chair: Jacob Munkberg
An Efficient Solution to Structured Optimization Problems using Recursive Matrices
Darius Rückert, Marc Stamminger
slides (pdf) | slides (pptx)
Position-Based Simulation of Elastic Models on the GPU with Energy Aware Gauss-Seidel Algorithm
Ozan Cetinaslan
slides (pdf) | slides (pptx)
Distortion-Free Displacement Mapping
Tobias Zirr, Tobias Ritschel
slides (pdf) | slides (pptx)
14:45 to 15:15 Afternoon Break / Poster session
15:15 to 15:30 Sponsor talks
15:30 to 16:30 Keynote 2
Managing ultra-high complexity in real-time graphics: some hints and ingredients
Fabrice Neyret
slides (pdf) | slides (pptx)
16:30 to 16:45 Break
16:45 to 18:00 Panel: Machine learning for real-time graphics

Lubor Ladicky (ETH Zurich, Apagom AG), Jacob Munkberg (NVIDIA), Tobias Ritschel (University College London), Renaldas Zioma (Unity), Jaakko Lehtinen (NVIDIA)

Moderator: Jan Novak (NVIDIA)

Wednesday, July 10

8:30 to 9:30 Breakfast
9:30 to 10:30 Keynote 3
Modern movie rendering: how ray tracing changed my industry
Luca Fascione (Weta Digital)
10:30 to 10:45 Best Paper Awards and Wrapup
10:45 to 11:00 Morning Break
11:00 to 12:00 Keynote 4 (shared with EGSR)
Why learn something you already know?
Jaakko Lehtinen (NVIDIA, Aalto University)
slides (pdf)
12:00 to 13:30 Lunch and HPG Townhall
18:00 Boat tour and conference dinner


The Story of NVIDIA RTX (Steven Parker / NVIDIA)

Over a decade ago, NVIDIA began exploring the use of ray tracing in real-time applications. This work culminated in NVIDIA RTX, introduced last year with the Turing architecture. NVIDIA RTX brings two new capabilities to modern real-time computer graphics: real-time ray tracing through new RT Cores, and deep learning through Tensor Cores. Through a powerful combination of hardware and software, Turing delivers a significant advance in real-time ray tracing performance that was previously thought to be several years away.

With this architecture, ray tracing can now be used in real time for accurate reflections, ambient occlusion, area-light shadows, and even global illumination. We will show some beautiful results recently achieved by NVIDIA and its partners in gaming, visual effects, scientific visualization, and even audio processing.

In this talk we will explore the journey and evolution of RTX. We will cover some of the key elements of RTX including highlights of the Turing architecture and the ray tracing APIs. We will discuss how it is used for real-time path tracing, hybrid algorithms and for accelerating traditional rendering applications.
Finally, we speculate on what the future may hold for ray tracing as bottlenecks shift dramatically in rendering algorithms, both real-time and offline.

Managing ultra-high complexity in real-time graphics: Some hints and ingredients (Fabrice Neyret / CNRS / INRIA / Grenoble University)

Flyovers of natural scenes probably illustrate the worst of it: an overload of details in the foreground, content continuing way past the horizon and view frustum, possibly animated at various scales (e.g. billowing clouds or flowing water in an Amazonian landscape), and we want all this looking realistic and artifact-free — and a look controllable by the artist, please.

The numbers involved will always outpace by many orders of magnitude the computation and memory resources of computers. Simply clamping details — so nineties — is no longer an option because in the real world their influence does emerge in the final appearance. So we had better be smart.

I will illustrate various hints and ingredients for tackling this, drawing on my lifelong experience in dealing with all aspects of natural scenes (and more) and in exploring how best to model and represent complexity as minimally required, efficiently manageable information.

Modern movie rendering: How ray tracing changed my industry (Luca Fascione / Weta Digital)

The movie industry is in the last steps of completing a shift in rendering technology from rasterization-based workflows to path tracing-based ones. We will discuss how and why this change has happened, and propose ideas for where this new path may lead.

Bio: Luca Fascione is Senior Head of Technology & Research at Weta Digital, where he oversees Weta’s core R&D efforts, including Simulation and Rendering Research, Software Engineering, and Production Engineering. Luca is the lead architect of Weta Digital’s next-generation proprietary renderer, Manuka. This renderer is the culmination of a three-year research endeavour involving over 40 researchers and continues to allow Weta Digital to produce highly complex images with unprecedented fidelity. Luca joined Weta Digital in 2004 and has also worked for Pixar Animation Studios. Through a partnership with NVIDIA, Luca co-developed the GPU-based PantaRay, which was instrumental in the making of the movie Avatar and (since 2011’s The Adventures of Tintin) also became the foundation of volumetric shadow support within the Weta pipeline. Luca was recently recognized with a Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences for his work on FACETS, Weta’s facial motion capture system.

Why learn something you already know? (Jaakko Lehtinen / NVIDIA, Aalto University)

While computer graphics has many faces, a central one is the fact that it enables creation of photorealistic pictures by simulating light propagation, motion, shape, appearance, and so on. In this talk, I’ll argue that this ability puts graphics research in a unique position to make fundamental contributions to machine learning and AI, while solving its own longstanding problems.

The majority of modern high-performing machine learning models are not particularly interpretable: you cannot, say, interrogate an image-generating Generative Adversarial Network (GAN) to truly tease apart shape, appearance, lighting, and motion, or directly instruct an image classifier to pay attention to shape instead of texture. Yet reasoning in such terms is the bread and butter of graphics algorithms! I argue that tightly combining the power of modern machine learning models with sophisticated graphics simulators will enable us to push learning beyond pixels, into the physically meaningful, interpretable constituents of the world, all tied together by the well-understood physical processes through which they combine to form pictures. Of course, such “simulator-based inference” or “analysis by synthesis” is seeing increasing interest in the research community, but I’ll try to convince you that what we’re seeing at the moment is just a small sample of things to come.

Hot3D Sessions

Mobile GPU Power and Performance (Andrew Gruber / Qualcomm)

Abstract: Mobile GPUs need to live within the power and heat dissipation constraints of a device carried in your pocket – yet they are surprisingly capable. This talk will explore their capabilities relative to desktop devices and discuss their design and implementation differences and similarities.
Bio: Andrew Gruber is VP of GPU Architecture at Qualcomm. He has been designing GPUs for 25 years, starting with the first ATI 3D chip, the 3D Rage. He and his team created the first ‘unified’ shader processor, which appeared in the Xbox 360. For the past 10 years, he has led the GPU architecture team for the Adreno series of mobile GPUs. He holds more than 75 GPU-related patents. He graduated from MIT with a BSEE in 1981.

Open Image Denoise – Open Source Denoising for Ray Tracing (Attila Afra / Intel)

Intel® Open Image Denoise is a recently released open source library of high-performance, high-quality denoising filters for images rendered with ray tracing. At the heart of the library is an efficient deep learning-based denoising filter, trained to be suitable for both interactive previews and final-frame rendering. Open Image Denoise supports Intel® 64 architecture-based CPUs and compatible architectures, and automatically exploits modern instruction sets like SSE4, AVX2, and AVX-512. A simple but flexible C/C++ API ensures that the library can be easily integrated into most existing or new ray tracing-based rendering applications. In the first half of the talk, we will give an overview of the Open Image Denoise library, discussing its main features and the denoising algorithm it uses. In the second half, we will briefly present the API (through a couple of code examples) and show results demonstrating both denoising quality and performance.

NVIDIA’s Turing: More Than Ray Tracing and AI (Yury Uralsky / NVIDIA)

The Turing architecture is packed with new features, but the buzz around the GPU has centered largely on its ray tracing and deep learning capabilities. In this talk we will focus on Turing’s other powerful graphics features, which enable greater efficiency, performance, and scene complexity. We will discuss the use and implementation of this new functionality, covering mesh shading, variable rate shading, texture space shading, and view instancing.