10/07/2018

Highlights from GTC 2018

Author: Stephen McGough, Newcastle University

GTC (GPU Technology Conference) is a cross academic/industry event organised by NVIDIA, with many companies presenting their latest achievements and academics presenting new ideas and approaches. With some 8,500 attendees it was a huge event, with lots of exciting news and updates across a wide range of topics. With multiple parallel presentations it was not possible to see all the great work going on, so below are my personal highlights. Recordings of all the presentations, as well as selected slide decks, can be found at http://gputechconf.com/on-demand, and a collection of all the posters is available at https://www.nvidia.com/en-us/gtc/poster-gallery. You can also watch the keynote for GTC 2018 here.

*sponsored blog*

Highlights:

Real-Time Ray Tracing

This might be an odd topic for a machine learning blog, but real-time ray tracing really does show off the power of the GPU. Immense compute resources are required to render this yourself in real time, but given appropriate hardware the results are fantastic. The rendered movie can be found on YouTube. There is, however, a link to deep learning here: NVIDIA were also demoing a system for ‘filling in’ the missing pixels in ray-traced images. When ray tracing, some pixels take much longer to render than others; with deep learning these pixels can be predicted far more quickly. See here for details.

New Hardware

NVIDIA’s major product announcement at GTC was the new DGX-2. It has a peak performance of 2 petaFLOPS for 16-bit precision computations, spread across 16 fully connected Tesla V100s. Aimed at the deep learning market, this machine is a step up from its predecessor, the DGX-1. Further specs can be found here, but it’s pricey. For those of us on a smaller budget there is good news: the Tesla V100 cards are being upgraded to 32GB of RAM.
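(For context, that headline figure lines up with the commonly quoted peak of roughly 125 teraFLOPS of FP16 tensor-core throughput per V100, a spec not stated in the announcement itself: 16 × 125 TFLOPS ≈ 2,000 TFLOPS = 2 petaFLOPS.)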

Libraries and Tooling

NVIDIA have committed to providing four CUDA releases per year, along with NVIDIA GPU Cloud, a container-based software stack for cloud services and dedicated hardware. For NVIDIA GPU users, the containers remove the need to keep different software and libraries up to date and compatible with each other, leading to quicker time to development. More info here. Containers are available for deep learning, high performance computing (HPC) and visualisation.

NVIDIA continue to optimise their drivers. Current work has focused on using mixed precision for deep learning, which promises to increase speed without loss of accuracy. There have also been significant improvements to the online documentation, as well as a new forum for DGX users.
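To give a flavour of what mixed-precision training looks like in code, below is a minimal sketch using PyTorch's automatic mixed precision API. This is a generic, later API rather than the specific NVIDIA tooling shown at GTC, and the model and data are placeholder choices:

import torch
from torch.cuda.amp import autocast, GradScaler

# Tiny synthetic classification task (placeholder data, purely illustrative)
inputs  = torch.randn(64, 128, device="cuda")
targets = torch.randint(0, 10, (64,), device="cuda")

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = GradScaler()              # scales the loss to avoid FP16 gradient underflow

for step in range(100):
    optimizer.zero_grad()
    with autocast():               # forward pass runs in FP16 where it is safe to do so
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then takes the optimizer step
    scaler.update()                # adjusts the loss scale for the next iteration

The weights stay in FP32 while most of the arithmetic runs in FP16, which is what delivers the speed-up without the loss of accuracy mentioned above.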

Image Learning

New uses of deep learning for image manipulation were highlighted, in which a scene description (defining areas such as road, path and car) can be converted into a photo-realistic image through the use of GANs. The same idea can be used to convert images from daytime to night-time, sunny to rainy, one artist’s style to another, or even cats to dogs! With multimodal approaches, a picture of one dog could be rendered as multiple different pictures of cats. Other approaches included removing blur from photographs, removing unwanted features from photos, and repairing scratches or filling in deleted areas.

New Applications for Deep Learning

The conference highlighted many applications of deep learning which may be less familiar than some of the work that has attracted press attention. These included using recurrent neural networks (RNNs) to detect energy meter fraud, cyber defence, sensor data fusion, dialogue understanding systems, a new approach to recommendation systems, and using LSTMs along with additional data sources (weather, social media) to identify stock market trends. NVIDIA themselves have been using deep learning for resume matching, to better identify who they should be interviewing.

Understanding your Deep Learning System

A better understanding of models and their outputs is seen as key to the wider uptake of deep learning. Although no significant advance has yet been made in this area, a number of presentations considered the issue, using input importance to help understand what is going on. One presented approach to gaining deeper insight into the functioning of a neural network is to remove each feature one at a time and see how this affects the overall result; you can also remove two or more features simultaneously to see what impact they have together.
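As a rough illustration of that idea, here is a minimal feature-ablation sketch in Python. This is generic illustrative code rather than anything shown at GTC, and the model and dataset are placeholder choices:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data and model, purely illustrative
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Ablate one feature at a time: zero it out, retrain, and compare accuracy with the baseline
for i in range(X.shape[1]):
    X_tr, X_te = X_train.copy(), X_test.copy()
    X_tr[:, i] = 0.0
    X_te[:, i] = 0.0
    score = LogisticRegression(max_iter=1000).fit(X_tr, y_train).score(X_te, y_test)
    print(f"feature {i}: accuracy change {score - baseline:+.3f}")

A large drop relative to the baseline suggests the ablated feature carries information the model relies on; extending the loop to pairs of features shows interaction effects.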

Building better Deep Learning Networks

How to construct a ‘good’ deep learning network, both in terms of the network architecture and the hyper-parameters used, is currently not well understood. A number of groups are now looking at automatic techniques for producing networks and hyper-parameters. As the search spaces are far too large to try every possible combination, other approaches are needed; the ones touted here were Bayesian optimisation, adaptive random search (Hyperband) and reinforcement learning.
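To make the search idea concrete, here is a minimal random-search sketch over two hyper-parameters. It is a deliberately simple stand-in for the Bayesian, Hyperband and reinforcement learning methods mentioned above, and the model, dataset and search ranges are placeholder choices:

import random
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)   # small placeholder dataset

best_score, best_params = 0.0, None
for _ in range(20):                    # 20 random trials instead of an exhaustive grid
    params = {
        "hidden_layer_sizes": (random.choice([32, 64, 128, 256]),),
        "learning_rate_init": 10 ** random.uniform(-4, -1),
    }
    clf = MLPClassifier(max_iter=300, **params)
    score = cross_val_score(clf, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params

print("best score:", best_score, "with", best_params)

Bayesian optimisation and Hyperband improve on plain random search by, respectively, modelling which regions of the search space look promising and cutting off poorly performing trials early.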

Cars and Robots

Given the number of vehicles on display, some attending GTC this year may have thought it was a car show rather than a GPU show. These ranged from the NVIDIA Formula E race car and cars from companies such as BMW, Mercedes-Benz and Ford, through self-driving diggers, to self-driving delivery carts, which could be seen driving around the San Jose area while the conference was taking place. NVIDIA also announced their Drive Roadmap to support autonomous vehicles of all types.

Robots are now taking advantage of AI and deep learning. This allows them to perform more generic tasks in which they need to be aware of their environment: avoiding obstacles, interacting with people, or manipulating things when the state of the environment cannot be predicted in advance.

 

NVIDIA is a partner of the Machine Intelligence Garage, a programme delivered by Digital Catapult to support early stage companies in the machine learning field with access to compute power and the expertise to make best use of this resource. Amongst other resources, Machine Intelligence Garage can make the latest NVIDIA GPU technology for deep learning available to startups.