
3D News

We're proud to officially unveil the NVIDIA® GeForce® GTX 680, a next-generation GPU that delivers more than just state-of-the-art features and technology. It gives you truly game-changing performance that taps into the powerful next-generation GeForce architecture to redefine smooth, seamless, more realistic gaming. For 3D Vision users and fans, the benefits of the new graphics architecture can be summed up in three main areas: Faster Innovative new...
And the Winner Is... Our apologies for being a bit tardy in getting the results of the February photo contest posted - but good things come to those who wait... Without further ado, the winner is (cue drumroll please): "Warehouse in Wonderland", submitted by Nick Saglimbeni. We're sure you'll agree that this shot features excellent composition and lighting control - as well as creative use of backlighting for a 3D subject. We also liked the...
3DVisionLive.com is excited to unveil the third in a series of monthly photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. The May Photo Contest is similar to April's and is open to legal residents of the United Kingdom, Germany, France, Norway, Sweden, Finland, Czech Republic, Russia, Australia, and the United States and Canada. Contest Rules The contest is open for submissions right now! So start...
With the launch of our 3D Vision® 2 glasses, it was easy to overlook advancements being made on 3D Vision-Ready monitors. Product Manager Michael McSorley walks you through the latest and greatest developments in the display category.  Other than the obvious advantage of an increase in the maximum size for 3D Vision panels—from 24-inches to 27-inches—there are really two primary game-changing features that you should be aware of: 3D LightBoost and...
Just because you don’t have a 3D camera does not have to mean you can’t participate in 3D photography—thanks to companies such as 3Defy, which make software that you can use to transform your 2D images into stereoscopic 3D shots you can view with your 3D Vision™ hardware. The 3Defy name may already be a bit familiar to you if you frequent the site—we’ve been featuring a lot of their content on 3DVisionLive to give you a taste of what...

Recent Blog Entries

May the force — and 21 billion transistors — be with you.

NVIDIA CEO Jensen Huang on Thursday lit up a gathering of hundreds of elite deep learning researchers at the Conference and Workshop on Neural Information Processing Systems — better known as NIPS — in Long Beach, Calif., by unveiling TITAN V, our latest GPU.

“What NVIDIA is all about is building tools that advance computing, so we can do things that would otherwise be impossible,” Huang, dressed in his trademark black leather jacket, told the crowd as he kicked off the evening. “Our ultimate purpose is to build computing platforms that allow you to do groundbreaking work.”

Twenty of the researchers — selected at random — received one of the first TITANs based on the company’s latest Volta architecture.

The debut of TITAN V was followed by another unveiling: the premiere of an original, Star Wars-inspired piece of music performed live by 15 musicians from the CMG Music Recording Orchestra of Hollywood for the hundreds of researchers gathered for the event.

A Surprise Star Wars Serenade

A live orchestra played an original, Star Wars-inspired composition written by the AI of Luxembourg-based startup AIVA.

After Huang described the music, a huge screen at the front of the room — which was displaying images from Huang’s presentation — slid away to reveal the live orchestra. Hundreds of deep learning researchers craned their necks and stood on their stools to record the performance with their smartphones as they listened in stunned silence, before bursting into raucous applause at the end of the performance, prompting the performers onstage to take a bow.

“It was a nice surprise,” deep learning pioneer Yann LeCun, director of AI Research at Facebook and founding director of the NYU Center for Data Science, said of the performance.

LeCun said he not only uses Volta-based GPUs for his research, he relies on a GeForce GTX 1080 at home for gaming and VR. “Jensen is a great, great showman, and it was a nice touch,” he said of the performance.

LeCun was just one of the legendary names in AI gathered at the event, which included Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, whose work was among those singled out by Huang that night for an NVAIL Pioneering Research Award, and Nicholas Pinto — one of the 20 in the crowd who won a TITAN V — who is the deep learning lead at Apple.

The result: a social media sensation in the deep learning community.

Jensen Huang just announced the new @nvidia #TITANV and pulled out a freaking orchestra playing AI composed classical music! #NIPS2017 pic.twitter.com/8hrh2ogFcE

— Alf (冷在) (@AlfredoCanziani) December 8, 2017

The music highlighted just how far researchers like the ones gathered at NIPS have taken the world over the past five years.

Computers are now able to perform tasks — such as voice recognition, image recognition and even musical composition — once thought impossible for machines.

Pierre Barreau, whose Luxembourg-based startup, AIVA, created the AI that composed the evening’s music, leads a team that uses a collection of GPUs — including an NVIDIA TITAN Xp — to do its work.

He said he’s definitely adding a TITAN V to his arsenal. “I’m really excited about it,” he said.

AIVA CEO Pierre Barreau and NVIDIA CEO Jensen Huang.

More wonders, from startups tackling challenges in finance, energy, medicine, transportation and many other fields, are coming. Tycho Tax, a researcher at startup Corti, is using GPU-powered deep learning to help create a voice-activated AI that will help coach emergency responders through tough situations.

GPUs are key to all these efforts. “If you use deep learning, you need to use GPUs — otherwise you can’t do deep learning,” said Luca Rigazo, a researcher with Totemic Labs, which is doing work in elder care.

And the faster the better. “To realistically do anything at the speeds we need, you have to use GPUs,” said Jason Fries, a postdoc at Stanford University whose work is powered by Volta-based NVIDIA Tesla V100 GPUs.

The TITAN V promises to bring the power of Volta, our seventh-generation GPU architecture, to the desktops of AI researchers like these.

Along with our earlier generation TITAN Xp GPU, TITAN V will be supported by our NVIDIA GPU Cloud — or NGC — giving users instant access to a complete deep learning software stack.

TITAN V’s 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency, Huang told the crowd.

Close Ties to AI Researchers

Twenty attendees at our event Thursday received our new TITAN V GPUs.

Thursday’s surprise presentation is just the latest evidence of NVIDIA’s unique relationship with researchers.

After the event wound down, audience members lingered to schmooze with NVIDIA’s Ian Buck, Chief Scientist Bill Dally and Huang, who eagerly posed for photos late into the evening with knots of delighted researchers from all over the world.

“It is so great to be here to celebrate NIPS with you. The work you are doing is so incredibly important, and you get to enjoy the discovery of it every day,” Huang said, citing breakthroughs in everything from transportation to healthcare. “It is such a privilege for me to be part of this journey with you guys. Thank you.”

The post NVIDIA CEO Brings Orchestra, AI-Generated Star Wars Music — and First TITAN V GPUs — to Top AI Brainiacs appeared first on The Official NVIDIA Blog.

Where’s AI going next? There may be no better place to ask that question than this week’s Conference and Workshop on Neural Information Processing Systems — better known as NIPS — in Long Beach, Calif.

And there may be no one more fun to ask than MILABOT — the AI developed by a team at the University of Montreal that’s one of the two demonstrations we’re hosting at our booth at the conference.

The bot’s specialty: entangling users in open-ended conversations that can veer from cat puns to probing questions about your relationship with your mother.

“Some people wind up talking to it for 20-30 minutes about their personal lives,” Iulian Vlad Serban, one of the researchers who built it, said as he invited AI researchers to step up and put it to the test.

And when asked about the future of AI, MILABOT spat out the answer you’ll hear from many of the more than 7,000 students and researchers engaged in freewheeling conversations spilling out into the hallways of the Long Beach Convention Center this week. “I’m going to have to think about that one for a while,” MILABOT replied.

Like everyone at NIPS — one of the world’s premier AI gatherings — NVIDIA is working to answer this question, too.

One way is by supporting the work of researchers like Serban through our NVIDIA AI Labs (NVAIL) program, which backs research at 20 top universities and institutes with technical assistance from our engineers, support for students and access to our DGX AI supercomputing systems.

Watch and Learn

One answer: deep learning will help machines interact with the physical world — and the humans who inhabit it — much more fluidly.

“I think that the next few years are going to be about autonomous machines,” said NVIDIA CEO Jensen Huang during a visit to our booth, where he stopped to talk with UC Berkeley’s Sergey Levine, Chelsea Finn and Frederik Ebert about their work. “You’re at the intersection of AI and machines that can interact with the physical world.”

NVIDIA CEO Jensen Huang talks with Chelsea Finn and Frederik Ebert about their work.

The team from the Berkeley AI Research Lab — or BAIR — brought a pair of demos to NIPS that show how new deep learning techniques are making this possible.

In the first of two demos from BAIR you’ll place an object — such as your car keys or a pair of glasses — in a robot’s workspace. You’ll then click on a user interface to show the machine where the object you want moved is, and where you want it to go.

The robot will then imagine — through a video prediction users can watch — where the object you specified will move based on the actions the robot will take. The robot will then use this prediction to plan its next moves.
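The predict-then-plan loop the demo describes can be sketched as a toy planner. This is purely illustrative: the `predict_positions` stand-in replaces BAIR's learned video-prediction network with simple position arithmetic, and the random-sampling action search below is just one plausible planning scheme, not the team's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_positions(start, actions):
    """Stand-in for a learned video-prediction model: given a start
    position and a sequence of 2D pushes, predict where the object
    lands after each step. A real model would render predicted video
    frames with a convolutional network instead."""
    return start + np.cumsum(actions, axis=0)

def plan_push(start, goal, horizon=5, candidates=200):
    """Sample random action sequences, score each by how close the
    predicted final object position lands to the goal, keep the best."""
    best_seq, best_dist = None, np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        final = predict_positions(start, actions)[-1]
        dist = np.linalg.norm(final - goal)
        if dist < best_dist:
            best_seq, best_dist = actions, dist
    return best_seq, best_dist

# User clicks: object is at the origin, should end up at (2.0, 1.5)
seq, dist = plan_push(np.array([0.0, 0.0]), np.array([2.0, 1.5]))
print(f"predicted final distance to goal: {dist:.3f}")
```

In a real system the robot would execute only the first action of the chosen sequence, observe the result, and re-plan, which is what lets the video prediction keep correcting the plan as the world changes.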

Thanks to an innovative convolutional neural network design, the robot picked up this skill over the course of a single day of training last month, surprising even some of the students who helped train it.

In the second demo, you’ll demonstrate a task, such as putting something in a container, by guiding a robot arm. Using video of your demonstration, the robot will find the container and put the same item in it.

Talk to Us

MILABOT was another demo that captivated NIPS attendees, many of them researchers eager to find ways to trip up the chatbot.

It can be done, but when not being tortured by a researcher who tricked it into a pronoun-disambiguation error, MILABOT can keep a conversation going, even if it has to resort to a bad pun.

“I don’t own a cat, but if I did, I would like her meowy much,” MILABOT replies when asked if it likes cats. (Cats, of course, are a running joke among AI researchers.)

Created by the Montreal Institute for Learning Algorithms to compete in the Amazon Alexa Prize competition, this chatbot doesn’t rely on one conversational model, but on 22 models.

Making small talk with people via speech or text is a challenge computer scientists have been grappling with at least since MIT’s Joseph Weizenbaum created ELIZA, which spits out frustratingly superficial responses to human questioning, five decades ago.

Unlike ELIZA, MILABOT relies on what its creators describe as an “ensemble” of models. They include template-based models, bag-of-words models, sequence-to-sequence neural network models and latent variable neural network models, as well as a variant of the original ELIZA.

The real trick is deciding which of these models to use at each turn. MILABOT does this with deep reinforcement learning, in which software agents learn to maximize a cumulative reward over a long string of actions, applied to data crowdsourced from real interactions with people.
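A minimal sketch of that ensemble-plus-selector idea follows. The three toy response models, the feature function and the fixed weights are all hypothetical stand-ins: in MILABOT the selection policy is learned with reinforcement learning from crowdsourced ratings, not set by hand.

```python
def template_model(msg):
    # Generic fallback, always applicable
    return "Tell me more about that."

def pun_model(msg):
    # Topic-triggered canned response; returns None when not applicable
    if "cat" in msg.lower():
        return "I don't own a cat, but if I did, I would like her meowy much."
    return None

def eliza_model(msg):
    # ELIZA-style reflection: bounce the user's words back as a question
    return f"Why do you say: '{msg}'?"

MODELS = [template_model, pun_model, eliza_model]

def features(msg, response):
    # Toy features; a real system would embed the dialogue history
    overlap = float(any(w in response.lower() for w in msg.lower().split()))
    return [len(response), overlap]

# Weights a reinforcement-learning policy would tune from crowdsourced
# ratings of real conversations; fixed by hand here for illustration
WEIGHTS = [0.01, 5.0]

def score(msg, response):
    return sum(w * f for w, f in zip(WEIGHTS, features(msg, response)))

def reply(msg):
    # Every applicable model proposes a candidate; the scorer picks one
    candidates = [c for c in (m(msg) for m in MODELS) if c is not None]
    return max(candidates, key=lambda r: score(msg, r))

print(reply("Do you like cats?"))
```

The structure, not the toy scoring, is the point: many specialized generators propose responses in parallel, and a single learned policy arbitrates between them.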

It’s not perfect, but it’s enough to draw a crowd — and keep them entertained as they throw one curveball after another at the software.

Stop By

To see these demos, and many more, stop by our booth at NIPS.

The post Where’s AI Going Next? Ask an AI … or the Researchers Whose Work We’re Demoing at NIPS appeared first on The Official NVIDIA Blog.