
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

How is sports strategy like self-driving cars and brain tumor diagnosis?

They’re all the work of world-leading universities that are breaking new ground in artificial intelligence at the NVIDIA AI Labs. And they’ll all be on deck in the next few days at IEEE’s Computer Vision and Pattern Recognition (CVPR) conference, the premier annual computer vision event.

Stanford University has a better way to plan sports strategy. University of Oxford researchers are teaming up with NEC Labs America and others to solve one of the thorniest problems for self-driving cars. And National Taiwan University (NTU) is working on a better way to diagnose brain tumors.

Our NVAIL program helps us keep AI pioneers like these ahead of the curve with support for students, assistance from our researchers and engineers, and access to the industry’s most advanced GPU computing power, the DGX-1 AI supercomputer. NVAIL includes 20 universities from around the world.

Read on for more information about the research.

Treating Brain Tumors

Winston Hsu, a professor at NTU, thinks there should be a better way to diagnose brain cancer.

When doctors diagnose the disease, they’re not just looking for malignant tissue. They need to know where liquids around the tumor could cause brain swelling. They need to find out if the cancer has killed any tissue. And they need to know the size, shape and location of everything they find. All this information helps them determine the best way to treat their patients, Hsu said.

For this complex problem, a simple MRI won’t cut it. To accurately detect each tissue type, Hsu said physicians must process MRIs four different ways and examine all of this data.

Hsu and his team used NVIDIA DGX-1 to train a deep neural network to analyze all four image types at once. The researchers also used the DGX-1 to deploy their deep learning model, a process known as inference.
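The fusion step Hsu describes can be pictured as feeding all four MRI acquisitions to one network as channels of a single input, rather than training four separate models. The sketch below is illustrative only — the modality names and the per-modality normalization are assumptions for the example, not NTU’s actual pipeline:

```python
import numpy as np

def stack_mri_modalities(t1, t1c, t2, flair):
    """Stack four co-registered MRI volumes into one multi-channel
    array so a single network sees all modalities at once.
    Each input is a (depth, height, width) volume."""
    volumes = [t1, t1c, t2, flair]
    shape = volumes[0].shape
    assert all(v.shape == shape for v in volumes), "volumes must be co-registered"
    # Normalize each modality independently, then stack along a new
    # leading channel axis: (4, depth, height, width).
    normalized = [(v - v.mean()) / (v.std() + 1e-8) for v in volumes]
    return np.stack(normalized, axis=0)

# Example: four synthetic 8x64x64 volumes become one 4-channel input
# that a segmentation network could consume in a single forward pass.
vols = [np.random.rand(8, 64, 64) for _ in range(4)]
x = stack_mri_modalities(*vols)
print(x.shape)  # (4, 8, 64, 64)
```

The point of the single stacked input is that the network can learn correlations across modalities (say, a region bright in FLAIR but dark in T1) instead of having them reconciled by hand afterward.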

Hsu is not the first to apply deep learning to examining brain tissue images, but he is believed to be the first to combine all the image types into one algorithm. Hsu and the other researchers will present a paper on their research on July 23 at CVPR.

Increasing Safety in Autonomous Vehicles

The complexity and diversity of driving environments make autonomous driving difficult. At a busy intersection, a car must interpret stationary elements like traffic lights and lanes, and respond to moving objects like pedestrians, cyclists and other cars.

A research team led by NEC Labs America and the University of Oxford aims to make driving safer by training vehicles to predict what will happen in these complicated situations.

Using deep learning, the researchers developed a framework to predict how stationary and moving elements will interact. Unlike many existing solutions, this work goes beyond estimating how an object — like a car or pedestrian — will move from one point to another.

Instead, the framework assumes a moving object could go anywhere, and makes a series of hypotheses about what is most likely to happen. It does this by evaluating both the context of the scene — perhaps a busy traffic intersection or a pedestrian crosswalk — as well as interactions between neighboring objects.

For example, the car could anticipate several different trajectories for a cyclist, or hypothesize that a child playing alongside the road might throw their ball into the street or run out after it.

After establishing several hypotheses, the framework makes a strategic prediction about what will happen. These predictions have proven highly accurate when compared with real-world behaviors.
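The sample-then-score idea described above can be sketched in a few lines: generate several candidate futures, score each one, and keep the best. Everything here is a toy stand-in — the random perturbations and the smoothness-based scorer replace the learned context and interaction models the paper actually uses:

```python
import numpy as np

def predict_best_trajectory(position, velocity, n_hypotheses=8, horizon=5, seed=0):
    """Toy sketch of sample-then-score prediction: generate several
    candidate future trajectories, score each, and return the best.
    The scorer below is a stand-in for the learned context/interaction
    model described in the research."""
    rng = np.random.default_rng(seed)
    position = np.asarray(position, dtype=float)
    velocity = np.asarray(velocity, dtype=float)

    candidates = []
    for _ in range(n_hypotheses):
        # Each hypothesis perturbs the velocity a little at every step,
        # so the candidates fan out over plausible futures.
        pos, vel, path = position.copy(), velocity.copy(), []
        for _ in range(horizon):
            vel = vel + rng.normal(scale=0.2, size=2)
            pos = pos + vel
            path.append(pos.copy())
        candidates.append(np.array(path))

    def score(path):
        # Stand-in scorer: prefer smooth paths (small change between
        # successive steps). A real model would score scene context
        # and interactions with neighboring objects instead.
        steps = np.diff(path, axis=0)
        return -np.sum(np.linalg.norm(np.diff(steps, axis=0), axis=1))

    return max(candidates, key=score)

best = predict_best_trajectory([0.0, 0.0], [1.0, 0.0])
print(best.shape)  # (5, 2): five future (x, y) waypoints
```

The structural takeaway matches the prose: rather than committing to one extrapolation, the system keeps a set of hypotheses and lets a scoring model pick among them.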

Learning curve: The framework’s predictions (shown in red) get closer to the ground truth (shown in blue) through multiple iterations of deep learning. Image courtesy of DESIRE research team.

According to Namhoon Lee, a Ph.D. student from Oxford, “This framework offers a safer way to predict future interactions, because it predicts various future outcomes rather than limiting the possibilities for what might happen, and because the most likely predictions scored by the framework are more accurate than other systems.”

On the road, where almost anything can happen, this combination of flexibility and accuracy could make all the difference.

Lee will present a paper on this research on July 23 at CVPR.

Stanford AI technology identifies where players are, what they’re doing and interprets team behavior. Image courtesy of Stanford University and École Polytechnique Fédérale de Lausanne.

How AI Interprets What Sports Teams Do

Sports teams are always looking for a competitive edge, so it’s no wonder some are turning to AI to improve player performance and craft strategy.

Winning takes more than boosting individual players. Whether it’s on the field or on the court, teamwork is what makes the game. Stanford University Professor Silvio Savarese and his team are tackling this problem by using deep learning to analyze game tapes.

“When more than one person is in a scene, they’re not acting alone — they interact,” said Savarese.

The team’s research focused on volleyball, but it could apply to other sports, as well as to robotics and self-driving cars, according to Alexandre Alahi, a research scientist at Stanford. By understanding group dynamics, robots might be able to behave more like humans. The technology could also be used in self-driving cars to understand what pedestrians are doing — say, crossing the street while distracted by a mobile phone, Alahi said.

Existing efforts to understand social interactions detect which scenes include a specific person, track that person over time and determine what the person is doing, said Timur Bagautdinov, a doctoral student at École Polytechnique Fédérale de Lausanne. That has to be repeated for every player. Finally, researchers stitch it all together to try to make sense of what they have.

The team developed a framework that does everything other approaches do, but in just a single pass through a neural network. For more technical details, see the paper on social scene understanding they’ll discuss at CVPR on July 23.
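The contrast between the stitched per-person pipeline and the single-pass approach can be sketched as batching. The linear map below stands in for the neural network, and all names are illustrative, not the team’s actual code:

```python
import numpy as np

def per_person_pipeline(features, weights):
    """Baseline described above: run the model separately for each
    detected person, then stitch the per-person results together."""
    return np.array([f @ weights for f in features])

def single_pass_pipeline(features, weights):
    """Joint approach: one batched pass handles every person at once,
    which is where a single network pass saves repeated work."""
    return np.asarray(features) @ weights

# Five people, each with a 16-dim feature vector; a linear map stands
# in for the network purely for illustration.
rng = np.random.default_rng(0)
features = rng.standard_normal((5, 16))
weights = rng.standard_normal((16, 8))

# Both routes produce the same per-person outputs, but the second
# makes one pass instead of one pass per person.
assert np.allclose(per_person_pipeline(features, weights),
                   single_pass_pipeline(features, weights))
```

In the real framework the payoff is larger than this toy suggests, since the single pass also lets the network reason about all the people jointly rather than one at a time.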

Feature image: MRI brain image segmentation. Credit: National Taiwan University.

The post How NVIDIA AI Labs Are Driving the Future of Computer Vision appeared first on The Official NVIDIA Blog.

AI is reshaping the world. The researchers gathered at this week’s Computer Vision and Pattern Recognition conference in Honolulu are reshaping AI.

That’s why NVIDIA CEO Jensen Huang chose a meetup of elite deep learning researchers at CVPR to unveil the NVIDIA Tesla V100, our latest GPU based on our Volta architecture, presenting it to 15 participants in our NVIDIA AI Labs program.

The audience of more than 150 top AI researchers — gathered for our NVAIL meetup — grabbed their smartphones to snap pictures of the moment.

“AI is the most powerful technology force that we have ever known,” said Jensen, clad in a short-sleeve dress shirt, white jeans and Vans, or what he called his “aloha uniform.”

“I’ve seen everything. I’ve seen the coming and going of the client-server revolution. I’ve seen the coming and going of the PC revolution. Absolutely nothing compares,” he said.

Tesla V100: Great Gear for Great AI Researchers

NVIDIA CEO Jensen Huang and our new Volta-based Tesla V100 lit up a meetup of elite deep learning researchers at CVPR in Honolulu Saturday evening.

Jensen then presented representatives of each of the 15 attending research institutions with NVIDIA Tesla V100 GPU accelerators, each of which included his signature, along with an inscription on the accelerator’s box that read, “Do great AI!”

GPUs — along with the torrents of data unleashed by the internet — have played a key role in the deep learning boom led by researchers like the ones gathered at CVPR. That boom is remaking every aspect of human endeavor.

One of the researchers, Silvio Savarese, an associate professor of computer science at Stanford University and director of the school’s SAIL-Toyota Center for AI Research, likened the signed V100 box to a bottle of fine wine.

Savarese’s research has broken ground in computer vision, robotic perception and machine learning. In recent years, he has received the Best Student Paper Award at CVPR 2016, the James R. Croes Medal in 2013, a TRW Automotive Endowed Research Award in 2012, an NSF Career Award in 2011 and a Google Research Award in 2010.

It was clear this moment meant something special to him.

“It’s exciting, especially to get Jensen’s signature,” Savarese said. “My students will be even more excited.”

He said the V100 would be used for new research on autonomous driving and virtual reality, among other areas.

“Everything is powered by deep learning,” said Savarese. “We can do things we’ve never done before.”

Breakthroughs made by researchers such as Savarese and others gathered at CVPR are unleashing technologies with superhuman capabilities.

So it’s fitting that the researchers in attendance will be among the first to put our latest technology to work.

Close Ties to AI Researchers

NVIDIA CEO Jensen Huang presented NVIDIA Tesla V100s to 15 participants in our NVIDIA AI Labs program Saturday evening at the CVPR conference in Honolulu.

The surprise presentation is just the latest evidence of NVIDIA’s unique relationship with researchers, Savarese added.

“NVIDIA has a very unusual way of interacting with the community that’s not like any other company,” he said. “It’s a way to sustain the collaboration, and we look forward to more interactions.”

Volta, our seventh-generation GPU architecture, provides a 5x improvement in peak teraflops over its predecessor, Pascal, and 15x over the Maxwell architecture, launched just two years ago. That’s roughly 4x the improvement Moore’s law alone would have predicted.

The Tesla V100 GPU accelerator shatters the 100 teraflops barrier of deep learning performance.

The V100 features over 21 billion transistors, and includes 640 Tensor Cores, delivering 120 teraflops of deep learning performance; the latest NVLink high-speed interconnect technology; and 900 GB/sec HBM2 DRAM to achieve 50 percent more memory bandwidth than previous generation GPUs.
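The quoted 120-teraflops figure can be sanity-checked with back-of-the-envelope arithmetic. The assumptions here — that each Tensor Core performs a 4x4x4 mixed-precision matrix multiply-accumulate per clock (64 fused multiply-adds, i.e. 128 floating-point operations) at a boost clock of roughly 1.46 GHz — are our own reconstruction, not figures from this post:

```python
# Back-of-the-envelope check of the 120 TFLOPS deep learning figure.
tensor_cores = 640
flops_per_core_per_clock = 128   # assumed: 64 FMAs per clock, 2 ops each
clock_hz = 1.46e9                # assumed approximate boost clock

peak_tflops = tensor_cores * flops_per_core_per_clock * clock_hz / 1e12
print(round(peak_tflops))  # ~120
```

Under those assumptions the arithmetic lands right on the headline number, which is the usual way such peak figures are derived: units times operations per clock times clock rate.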

It’s all supported by Volta-optimized software, including CUDA, cuDNN and TensorRT, which frameworks and applications can easily tap into to accelerate AI and research.

Researchers Relish the Moment

Audience members at the gathering busily captured the moment with photos and video as Jensen detailed the V100’s capabilities. While they were soaking in the moment, Jensen returned the favor by paying tribute to them.

“We’ve learned a great deal about the challenges of AI, and we’ve been adapting our GPUs to be better suited for AI,” he said. “I can’t imagine a better place and a better group of people to share the work we’ve been doing.”

The post Big Surprise for Top AI Brainiacs: NVIDIA CEO Gives World’s Top AI Researchers First NVIDIA Tesla V100s appeared first on The Official NVIDIA Blog.