Recent Blog Entries

As thousands of the world’s top artificial intelligence researchers gathered this week in Salt Lake City for the annual Computer Vision and Pattern Recognition conference, NVIDIA unveiled a series of tools and deep learning research to fuel the next big wave of AI discovery.

Topping the agenda: our launch of NVIDIA Apex, an open-source extension that helps developers accelerate their AI research by tapping the multi-precision capabilities of NVIDIA’s Volta Tensor Core GPUs.

Amid rows of research posters in a teeming exhibition hall, we also published more than a dozen research papers, including an AI that creates super slow-motion from ordinary video. And Jensen Huang, our founder and CEO, surprised attendees Wednesday night by giving away limited-edition Titan V GPUs he had specially made for AI researchers.

Hello DALI: New Tools Revving the Engine of Modern AI

The star of our show has been the engine of modern AI, the NVIDIA Volta Tensor Core GPU.

With Volta, we reinvented the GPU. Its revolutionary Tensor Core architecture enables multi-precision computing — cranking through deep learning matrix operations at 125 teraflops at FP16 precision, and using FP64 and FP32 when there’s a need for greater range, precision or numerical stability.
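The range-versus-precision tradeoff behind multi-precision computing is easy to see numerically. A small NumPy sketch (illustrative only, not NVIDIA code) comparing the three formats:

```python
import numpy as np

# Compare the numeric limits of the three precisions Volta supports.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: eps={info.eps}, max={info.max}")
```

FP16 halves storage and multiplies throughput, but it tops out near 65,504 with roughly three decimal digits of precision, which is why FP32 and FP64 remain necessary for range- and stability-sensitive steps.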

Optimizing massively long-running code for both precision and performance is hard. With Apex, we’ve made it easier than ever to harness the multi-precision computing feature built into Tensor Core GPUs. Developers can use the newly available PyTorch extension to split their AI training workloads based on the level of precision required. Apex lets them train AI models with incredible speed by automatically applying higher precision approaches where needed and efficient lower precision methods when possible.
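One technique behind this kind of automatic mixed precision is loss scaling: gradient values small enough to underflow to zero in FP16 are multiplied by a scale factor before the cast and divided back out in FP32 afterward. A minimal NumPy illustration of the idea (not Apex's actual API):

```python
import numpy as np

grad = np.float32(1e-8)                  # a gradient too small for FP16
underflowed = np.float16(grad)           # casting directly loses it: 0.0

scale = np.float32(2 ** 16)              # loss-scale factor
scaled = np.float16(grad * scale)        # 6.5536e-4 fits comfortably in FP16
recovered = np.float32(scaled) / scale   # unscale in FP32: ~1e-8 again

print(underflowed, recovered)
```

The scaled value survives the round trip through FP16 with well under 1 percent error, while the unscaled value is destroyed outright.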

NVIDIA GPUs have made compute nodes so fast that I/O (input/output) can become a bottleneck. But because GPUs can also accelerate data movement, we addressed that this week by unveiling DALI, an open-source library designed as a plug-in that works with all major frameworks.

DALI lets users define configurable data-processing graphs that offload the image decode and augmentation steps typical of AI training to the GPU. It’s an essential technology that recently allowed our own data scientists to feed data fast enough to our new NVIDIA DGX-2, powered by 16 Volta GPUs, to achieve a record-breaking 15,000 images per second in training.

DALI on Volta accelerates data loading to the framework, which is Tensor Core-optimized with Apex. Then, Volta GPUs train the model at lightning speed.
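DALI itself is a GPU library, but the core idea, overlapping data preparation with compute so the processor never starves, can be sketched framework-agnostically. A toy prefetching generator (a hypothetical helper, not DALI's API):

```python
import queue
import threading

def prefetch(load_fn, items, depth=2):
    """Apply load_fn to each item in a background thread, staying up to
    `depth` results ahead so the consumer (e.g. a training loop) rarely waits."""
    q = queue.Queue(maxsize=depth)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in items:
            q.put(load_fn(item))
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while (result := q.get()) is not done:
        yield result

# Example: "decoding" (here, just doubling) overlaps with consumption.
batches = list(prefetch(lambda x: x * 2, range(5)))
```

The bounded queue is the key design choice: it lets loading run ahead of training without buffering the whole dataset in memory.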

We also released the much-anticipated Kubernetes on NVIDIA GPUs and TensorRT 4 — tools that help developers scale up AI across thousands of GPUs. The release of Kubernetes on NVIDIA GPUs allows developers to seamlessly scale up training and inference deployments to multi-cloud GPU clusters. TensorRT 4 is now generally available with accelerated support for such layers as Top-k, LSTMs and batch GEMMs for speeding up neural machine translation, recommenders and speech applications.
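With the device-plugin mechanism underlying Kubernetes on NVIDIA GPUs, a pod requests GPUs declaratively and the scheduler places it on a suitable node. A minimal, illustrative spec (the pod name and container image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job              # hypothetical name
spec:
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:latest   # illustrative image tag
      resources:
        limits:
          nvidia.com/gpu: 1           # schedule onto a node with one free GPU
```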

NVIDIA Research at the Cutting-edge

CVPR is where researchers present cutting-edge computer vision research. So, it was a natural place for NVIDIA Research – which is now 200+ scientists strong – to show papers that answer questions at the intersection of art, science and practical challenges. Among them:

  • Is it possible to imagine the next frame in a video? We built a new AI network called Super SloMo, which can convert 30-frame-per-second video into 240-frame-per-second video by intelligently filling in the missing frames. The technique, described in our paper “Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation,” creates beautiful slow-motion video from standard footage. This is a killer app for the movie industry, which can film with standard equipment and use Super SloMo to deliver incredible slow-motion scenes on a whim.
  • How great would it be to have an AI that automatically labels data? NVIDIA Research, along with France’s Ecole Polytechnique and the University of Montreal built a landmark localization network for accurate and reliable gesture recognition, facial expression recognition, facial identity verification and eye gaze tracking. Our research team’s paper, “Improving Landmark Localization with Semi-Supervised Learning,” presents a framework of sequential multitasking and an unsupervised learning technique for landmark localization — finding the precise location of specific parts of an image, a key step in many complex vision problems — that only needed 5 percent of data to be labeled to outperform previous methods.
  • Wouldn’t it be cool if we could create new virtual worlds from just pictures? Using conditional generative adversarial networks, or GANs, we can take a photograph of a city block and recreate a new virtual world. Our own self-driving car teams use this to design simulation tests for our NVIDIA DRIVE platform using the method we described in “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs.”
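Going from 30 to 240 frames per second means synthesizing seven new frames between each original pair. The naive baseline is a linear cross-fade, which Super SloMo improves on by estimating the motion between frames; a NumPy sketch of that baseline (illustrative only, not the paper's method):

```python
import numpy as np

def crossfade(frame0, frame1, n_mid=7):
    """Linearly blend two frames into n_mid intermediates (30 -> 240 fps).
    A cross-fade ghosts moving objects; Super SloMo instead warps pixels
    along estimated motion, keeping them sharp."""
    ts = [(i + 1) / (n_mid + 1) for i in range(n_mid)]
    return [(1 - t) * frame0 + t * frame1 for t in ts]

f0 = np.zeros((2, 2))       # toy "frames"
f1 = np.full((2, 2), 8.0)
mids = crossfade(f0, f1)
```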

Our work on NVIDIA SPLATNet – a new deep learning-based architecture that processes point clouds without preprocessing – received one of the six coveted CVPR research awards.

Join Us to Tackle the World’s Great Challenges

NVIDIA’s GPU architecture is the leading platform for graphics, accelerated computing and AI. Combining these approaches can spark innovation in incredible ways.

Take NVIDIA RTX ray-tracing technology and combine it with Volta Tensor Cores and AI denoising, and for the first time we can realize real-time, cinematic-quality rendering. This is the holy grail for movie animation, gaming, product design and architecture. Architects can design interactively with their clients; movie animation studios can experiment with light and materials in real time.
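Part of why denoising matters: path-traced pixel noise shrinks only with the square root of the sample count, so brute-forcing a clean image in real time is impractical. A NumPy illustration of that scaling (a toy estimator, not a renderer):

```python
import numpy as np

rng = np.random.default_rng(0)
# Estimate a pixel value (true mean 0.5) from n random samples,
# repeated over many trials to measure the noise (std dev) of the estimate.
noise = {}
for n in (16, 1024):
    estimates = rng.uniform(0.0, 1.0, size=(10_000, n)).mean(axis=1)
    noise[n] = estimates.std()

# 64x more samples only cuts noise ~8x (sqrt(64)); a denoiser sidesteps this.
print(noise[16] / noise[1024])
```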

Or take the challenge of training autonomous vehicles to drive safely, which requires data from billions of miles of driving – something that could take years to collect on public roads. So we invented DRIVE Constellation and DRIVE Sim, a VR autonomous vehicle simulator that generates a wide range of testing scenarios, simulates sensors and applies neural networks to test and react to real-world conditions. Photorealistic simulation enables a safer, more scalable and more cost-effective way to bring self-driving cars to our roads.

Or take the challenge of early disease detection, the inspiration behind Clara, an AI medical imaging supercomputer. Over the past 10 years, medical imaging has become computational, from reconstruction to image processing to rendering, alongside an explosion in AI applications. Clara is a virtual, scalable AI supercomputer that can upgrade the installed base of imaging instruments and usher in a new breed of miniaturized, intelligent devices.

AI has inspired us to dream about a better world, now and for the future of our families and friends. We love coming to CVPR to honor and celebrate the AI researchers who got us here, and to join in the discovery of the future of AI.

The post AI Feast at CVPR: NVIDIA Brings New Tensor Core GPU AI Tools, Super SloMo, Cutting-Edge Research appeared first on The Official NVIDIA Blog.

This is what smart people do for fun.

Detecting gravitational waves millions of light years away, in real time. Powering computationally fast quantum mechanical simulations at high accuracy and low cost. Proving the feasibility of fusion as a source of renewable clean energy.

Of course, it doesn’t take a supercomputer to figure out that high-performance computing is much more than just fun. Thanks to AI, HPC now matters more than ever.

That’s why next week’s International Supercomputing Conference — from June 24 to 28 in Frankfurt — will feature more content than ever, much of it focused on the collision of artificial intelligence and high-performance computing.

Attendees can train up on deep learning in workshops. They’ll crowd into panels focused on the convergence of AI and HPC. They’ll take part in a day dedicated to how machine learning transforms high-performance computing.

At our booth: Visit the NVIDIA booth (#1040), where we’ll be featuring:

  • Talks led by industry leaders, covering the most important topics in HPC and AI. Join us at the NVIDIA booth theatre to see:
      • NVIDIA’s Steve Oberlin speaking on AI and HPC. Monday, June 25, 4 p.m.
      • NVIDIA’s Marc Hamilton discussing DGX-2 and NVSwitch. Monday, June 25, 5 p.m.
      • Prof. Dirk Pleiter, of Jülich Supercomputing Centre, presenting “Bringing Neuroscience to HPC.” Tuesday, June 26, 11:30 a.m.
      • Jack Wells, of Oak Ridge National Laboratory, presenting “Summit Science Stories.” Wednesday, June 27, 1:30 p.m.
  • Thrilling demos and displays including NVIDIA’s new DGX-2, NVIDIA GPU Cloud and more.
  • Networking with several dozen NVIDIA ecosystem partners and experts through the new Volta Experience.
  • A chance to get your hands on the renowned NVIDIA ruler, as well as a VIP discount code for GTC Europe 2018.

At the show: Beyond our booth, you’ll find our story woven into workshops and presentations throughout ISC.

If this sounds like fun to you, keep up-to-date on all things HPC and AI: follow our social handles @NVIDIAEU and @NVIDIADC, and #ISC18.

The post AI Storms Top Supercomputing Show – NVIDIA Brings Talks, Training, Demos, and More to ISC appeared first on The Official NVIDIA Blog.