
3D News

NVIDIA is pleased to announce the first Photo Champion for 3D Vision Live, Nick Saglimbeni. Regular visitors to the site should be well familiar with Nick's images. His Warehouse Wonderland image won the site's first monthly Photo Contest, and he was also the first repeat winner of the Contest two months later with Kim Kardashian's Wild West - one of the site's first 3D celebrity images. Nick is receiving the 2012 3D Vision Live Photo Champion Award as our formal...
Sorry folks for the delay in announcing the winner for May's Photo Contest - we had an issue with the search function and needed to make sure all entries were considered. Without further ado, on to the results! Alex Savin has been submitting some excellent images from his European adventures for some time now, and his "Fontana di Trevi" is a wonderful example of stereo photography that just plain works. The composition is top notch and the image is sharp throughout, which...
James Cameron continues to pioneer 3D technology. With the first Avatar he showed what 3D could add to the film experience. After criticizing the fast conversions from 2D to 3D that many Hollywood studios have released since Avatar, Cameron oversaw a team that turned Titanic into a 3D blockbuster. That film has been a commercial and critical success, showing what a year of meticulous conversion and $18 million can add to a 15-year-old movie. The director talks about Avatar,...
Marvel Entertainment was one of the first major Hollywood companies to commit to 3D movies. Beginning last summer, every movie based on a Marvel comic property was to be either filmed in 3D or converted to 3D for theatrical and home entertainment releases. When this mandate came down, Ari Arad (Iron Man), producer of Ghost Rider: Spirit of Vengeance, turned to NVIDIA to help with the production of the Sony Pictures sequel, which is now out on Blu-ray 3D, Blu-ray and DVD....
People are flocking to the theater to take in Pixar’s latest animated film, Brave, which we recommend seeing in 3D, of course. After seeing the movie you can relive the adventure by picking up the gorgeous Brave: The Video Game for PC. The third-person action/adventure game lets you play the role of Princess Merida—Pixar’s first female lead character—as you follow her adventures in a family-friendly storyline based on the film. Engage in bow-and-arrow and sword combat and...

Recent Blog Entries

May the force — and 21 billion transistors — be with you.

NVIDIA CEO Jensen Huang on Thursday lit up a gathering of hundreds of elite deep learning researchers at the Conference and Workshop on Neural Information Processing Systems — better known as NIPS — in Long Beach, Calif., by unveiling TITAN V, our latest GPU.

“What NVIDIA is all about is building tools that advance computing, so we can do things that would otherwise be impossible,” Huang, dressed in his trademark black leather jacket, told the crowd as he kicked off the evening. “Our ultimate purpose is to build computing platforms that allow you to do groundbreaking work.”

Twenty of the researchers — selected at random — received one of the first TITANs based on the company’s latest Volta architecture.

The debut of TITAN V was followed by another unveiling — the premiere of an original, Star Wars-inspired piece of music performed live by 15 musicians from the CMG Music Recording Orchestra of Hollywood for the hundreds of researchers gathered for the event.

A Surprise Star Wars Serenade

A live orchestra played an original Star Wars-inspired composition written by Luxembourg-based startup AIVA's AI.

After Huang described the music, a huge screen at the front of the room — which was displaying images from Huang’s presentation — slid away to reveal the live orchestra. Hundreds of deep learning researchers craned their necks and stood on their stools to record the performance with their smartphones as they listened in stunned silence, before bursting into raucous applause at the end of the performance, prompting the performers onstage to take a bow.

“It was a nice surprise,” deep learning pioneer Yann LeCun, director of AI Research at Facebook and founding director of the NYU Center for Data Science, said of the performance.

LeCun said he not only uses Volta-based GPUs for his research, he relies on a GeForce GTX 1080 at home for gaming and VR. “Jensen is a great, great showman, and it was a nice touch,” he said of the performance.

LeCun was just one of the legendary names in AI gathered at the event, which included Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, whose work was among those singled out by Huang that night for an NVAIL Pioneering Research Award, and Nicholas Pinto — one of the 20 in the crowd who won a TITAN V — who is the deep learning lead at Apple.

The result: a social media sensation in the deep learning community.

Jensen Huang just announced the new @nvidia #TITANV and pulled out a freaking orchestra playing AI composed classical music! #NIPS2017 pic.twitter.com/8hrh2ogFcE

— Alf (冷在) (@AlfredoCanziani) December 8, 2017

The music highlighted just how far researchers like the ones gathered at NIPS have come over the past five years.

Computers are now able to perform tasks — such as voice recognition, image recognition and even musical composition — once thought impossible for machines.

Pierre Barreau, whose Luxembourg-based startup, AIVA, created the AI that composed the evening’s music, leads a team that uses a collection of GPUs — including an NVIDIA TITAN Xp — to do its work.

He said he’s definitely adding a TITAN V to his arsenal. “I’m really excited about it,” he said.

AIVA CEO Pierre Barreau and NVIDIA CEO Jensen Huang.

More wonders — from startups tackling challenges in finance, energy, medicine, transportation and many more — are coming. Tycho Tax, a researcher at startup Corti, is using GPU-powered deep learning to help create a voice-activated AI that will help coach emergency responders through tough situations.

GPUs are key to all these efforts. “If you use deep learning, you need to use GPUs — otherwise you can’t do deep learning,” said Luca Rigazo, a researcher with Totemic Labs, which is doing work in elder care.

And the faster the better. “To realistically do anything at the speeds we need, you have to use GPUs,” said Jason Fries, a postdoc at Stanford University whose work is powered by Volta-based NVIDIA Tesla V100 GPUs.

The TITAN V promises to bring the power of Volta, our seventh-generation GPU architecture, to the desktops of AI researchers like these.

Along with our earlier generation TITAN Xp GPU, TITAN V will be supported by our NVIDIA GPU Cloud — or NGC — giving users instant access to a complete deep learning software stack.

TITAN V’s 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency, Huang told the crowd.

Close Ties to AI Researchers

Twenty attendees at our event Thursday received our new TITAN V GPUs.

Thursday’s surprise presentation is just the latest evidence of NVIDIA’s unique relationship with researchers.

After the event wound down, audience members lingered to schmooze with NVIDIA’s Ian Buck, Chief Scientist Bill Dally and Huang, who eagerly posed for photos late into the evening with knots of delighted researchers from all over the world.

“It is so great to be here to celebrate NIPS with you. The work you are doing is so incredibly important, and you get to enjoy the discovery of it every day,” Huang said, citing breakthroughs in everything from transportation to healthcare. “It is such a privilege for me to be part of this journey with you guys. Thank you.”

The post NVIDIA CEO Brings Orchestra, AI-Generated Star Wars Music — and First TITAN V GPUs — to Top AI Brainiacs appeared first on The Official NVIDIA Blog.

Where’s AI going next? There may be no better place to ask that question than this week’s Conference and Workshop on Neural Information Processing Systems — better known as NIPS — in Long Beach, Calif.

And there may be no one more fun to ask than MILABOT — the AI developed by a team at the University of Montreal that’s one of the two demonstrations we’re hosting at our booth at the conference.

The bot’s specialty: entangling users in open-ended conversations that can veer from cat puns to probing questions about your relationship with your mother.

“Some people wind up talking to it for 20-30 minutes about their personal lives,” Iulian Vlad Serban, one of the researchers who built it, said as he invited AI researchers to step up and put it to the test.

And when asked about the future of AI, MILABOT spat out the answer you’ll hear from many of the more than 7,000 students and researchers engaged in freewheeling conversations spilling out into the hallways of the Long Beach Convention Center this week. “I’m going to have to think about that one for a while,” MILABOT replied.

Like everyone at NIPS — one of the world’s premier AI gatherings — NVIDIA is working to answer this question, too.

In part, that means backing the work of researchers like Serban through our NVIDIA AI Labs, or NVAIL, program, which supports research at 20 top universities and institutes with technical assistance from our engineers, support for students and access to our DGX AI supercomputing systems.

Watch and Learn

One answer: deep learning will help machines interact with the physical world — and the humans who inhabit it — much more fluidly.

“I think that the next few years are going to be about autonomous machines,” said NVIDIA CEO Jensen Huang during a visit to our booth, where he stopped to talk with UC Berkeley’s Sergey Levine, Chelsea Finn and Frederik Ebert about their work. “You’re at the intersection of AI and machines that can interact with the physical world.”

NVIDIA CEO Jensen Huang talks with Chelsea Finn and Frederik Ebert about their work.

The team from the Berkeley AI Research Lab — or BAIR — brought a pair of demos to NIPS that show how new deep learning techniques are making this possible.

In the first of two demos from BAIR you’ll place an object — such as your car keys or a pair of glasses — in a robot’s workspace. You’ll then click on a user interface to show the machine where the object you want moved is, and where you want it to go.

The robot will then imagine — through a video prediction users can watch — where the object you specified will move based on the actions the robot will take. The robot will then use this prediction to plan its next moves.
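The planning loop described above can be sketched in a few lines. This is a hedged illustration only: candidate action sequences are sampled, each one is "imagined" with a prediction model, and the sequence whose predicted final object position lands closest to the user's goal is chosen. The linear `predict()` below is a toy stand-in for BAIR's learned video-prediction network, and all names here are hypothetical.

```python
import random

def predict(obj_pos, actions):
    """Toy stand-in dynamics: each action nudges the object's (x, y) position."""
    x, y = obj_pos
    for dx, dy in actions:
        x, y = x + dx, y + dy
    return (x, y)

def plan(obj_pos, goal, horizon=5, n_samples=200, seed=0):
    """Random-shooting planner: keep the best of n_samples sampled sequences."""
    rng = random.Random(seed)
    best_seq, best_cost = None, float("inf")
    for _ in range(n_samples):
        # Sample a candidate sequence of small 2D pushes.
        seq = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(horizon)]
        # "Imagine" the outcome and score it by squared distance to the goal.
        px, py = predict(obj_pos, seq)
        cost = (px - goal[0]) ** 2 + (py - goal[1]) ** 2
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

seq, cost = plan(obj_pos=(0.0, 0.0), goal=(2.0, 1.0))
print(round(cost, 3))
```

In the real demo, the "imagining" step is a video prediction users can watch on screen; the robot replans after executing the first actions of the best sequence.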

Thanks to an innovative convolutional neural network design, the robot's skill, acquired over a single day of training last month, has surprised even some of the students who helped train it.

In the second demo, you’ll demonstrate a task, such as putting something in a container, by guiding a robot arm. Using video of your demonstration, the robot will find the container and put the same item in it.

Talk to Us

MILABOT was another demo that captivated NIPS attendees, many of them researchers eager to find ways to trip up the chatbot.

It can be done. But when it's not being tortured by a researcher trying to trick it into a pronoun disambiguation error, MILABOT can keep a conversation going, even if it has to resort to a bad pun.

“I don’t own a cat, but if I did, I would like her meowy much,” MILABOT replies when asked if it likes cats. (Cats, of course, are a running joke among AI researchers.)

Created by the Montreal Institute for Learning Algorithms to compete in the Amazon Alexa Prize, this chatbot doesn't rely on a single conversational model but on 22 of them.

Making small talk with people via speech or text is a challenge computer scientists have been grappling with at least since MIT’s Joseph Weizenbaum created ELIZA — which spits out frustratingly superficial responses to human questioning — five decades ago.
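The superficiality comes straight from how ELIZA works: the input is matched against a handful of templates and fragments of it are reflected back. A minimal sketch in that spirit, with made-up rules rather than Weizenbaum's original script:

```python
import re

# ELIZA-style pattern matching: each rule pairs a regex with a reply
# template; matched fragments are reflected back into the reply.
RULES = [
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)mother(.*)", re.I), "What is your relationship with your mother like?"),
]
DEFAULT = "Please go on."

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT  # nothing matched: fall back to a stock prompt

print(eliza("I am worried about my thesis"))
```

Because the matched fragment is echoed verbatim, the replies quickly reveal that nothing is understood — exactly the behavior that frustrated ELIZA's early users.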

Unlike ELIZA, MILABOT relies on what its creators describe as an “ensemble” of models. They include template-based models, bag-of-words models, sequence-to-sequence neural network models and latent variable neural network models, as well as a variant of the original ELIZA.

The real trick is picking which of these models to use for each response. To do that, MILABOT uses deep reinforcement learning — where software agents learn by taking a long string of actions to maximize a cumulative reward — applied to data crowdsourced from real interactions with people.
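The ensemble-plus-selection idea can be sketched as follows. Each "model" proposes a candidate reply and a scoring policy picks the one expected to earn the highest reward. In MILABOT the scorer is trained with reinforcement learning on crowdsourced conversations; the three toy models and the hand-rolled `score()` here are illustrative stand-ins, not MILA's actual code.

```python
# Three stand-in response models, echoing the ensemble members the
# MILA team describes (templates, a pun generator, an ELIZA variant).
def template_model(utterance):
    return "Tell me more about that."

def cat_pun_model(utterance):
    return "I would like her meowy much." if "cat" in utterance else ""

def eliza_model(utterance):
    return "Why do you say: %s?" % utterance

MODELS = [template_model, cat_pun_model, eliza_model]

def score(utterance, reply):
    """Stand-in for the learned policy: reward non-empty, on-topic replies."""
    if not reply:
        return float("-inf")
    overlap = len(set(utterance.lower().split()) & set(reply.lower().split()))
    return overlap + 0.1 * len(reply.split())

def respond(utterance):
    # Every model proposes a candidate; the policy picks the best one.
    candidates = [m(utterance) for m in MODELS]
    return max(candidates, key=lambda r: score(utterance, r))

print(respond("do you like cats"))
```

Swapping the hand-rolled `score()` for a reward model learned from real conversations is what turns this sketch into the reinforcement learning setup the team describes.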

It’s not perfect, but it’s enough to draw a crowd — and keep them entertained as they throw one curveball after another at the software.

Stop By

To see these demos, and many more, stop by our booth at NIPS.

The post Where’s AI Going Next? Ask an AI … or the Researchers Whose Work We’re Demoing at NIPS appeared first on The Official NVIDIA Blog.