
3D News

3DVisionLive’s first-ever short-form 3D video contest received 14 entries that showed a great deal of diversity, ranging from video game captures to commercial-style clips to raw captures of pets or people doing cool things (such as bashing each other with swords). During judging we laughed, we cried (okay, maybe not), and we simply scratched our heads…. But seriously: thank you to all who participated, and we hope to see more of your content uploaded to the site for all to...
The submission period for the Fall Photo Contest is now closed, and we are happy to report we’ve received nearly 100 images from our members for consideration. And, once again, we’re opening the judging process to our community as well to help us determine the winners. The full gallery of images may be seen by clicking the link above. Between now and February 10th (11:59 PST), please view all of the images in the gallery and place your votes for the ones you’d like to win by...
With the holidays drawing near NVIDIA would like to say a quick thank-you to all the 3DVisionLive members for sharing so many outstanding 3D images throughout the year and continuing to provide a supportive and fun environment for 3D enthusiasts. We look forward to seeing what 2014 brings to the site, and with that said, here are a few of our favorite holiday-themed images to add an extra dimension to that holiday spirit.   See stereo 3D on photos.3dvisionlive.com See...
Time to break out those 3D video cameras, folks, and show us your inner Peter Jackson – or Cameron or Spielberg or whichever director you admire. 3DVisionLive is pleased to announce its first-ever Short-form 3D Video Contest. And don’t worry, we really don’t expect Orcs or Goblins in your video (but that would be cool). What’s a short-form video, you ask? Well, it’s simple really: short-form videos are generally defined as videos of less than 5 minutes in length – and we’re...
We’re fortunate to be able to host Elysian Fields here on 3DVisionLive for all of you. Winner of a number of accolades, including multiple “Best Animated 3D Short Film” awards, it’s hard to watch Elysian Fields and not be drawn into its world. The short was brought to us through Susan Johnston, Founder/Director of the New Media Film Festival, who was also kind enough to provide us with the following interview with Elysian Fields’ creator, Ina Chavez. Enjoy!

Recent Blog Entries

It’s just not practical to program a car to drive itself in every environment, given the nearly infinite range of possible variables involved.

But, thanks to AI, we can show it how to drive. And, unlike your teenager, you can then see what it’s paying attention to.

With NVIDIA PilotNet, we created a neural-network-based system that learns to steer a car by observing what people do. But we didn’t stop there. We developed a method for the network to tell us what it prioritized when making driving decisions.

So while the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions.

“Think about why you recognize a face in a photo, and then try to break that down into a set of specific rules that you can program — you can’t do it,” says Urs Muller, chief architect for self-driving cars at NVIDIA. “The question thus becomes: ‘Do we want to limit our solutions to only the things we can define with rules?’”

AI Learns to Drive by Watching How We Drive

We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands.

We trained our network to steer the car by having it study human drivers. The network recorded what the driver saw using a camera on the car, and then paired the images with data about the driver’s steering decisions. We logged a lot of driving hours in different environments: on roads with and without lane markings; on country roads and highways; during different times of day with different lighting conditions; in a variety of weather conditions.

The trained network taught itself to drive BB8 without ever receiving a single hand-coded instruction. It learned by observing. And now that we’ve trained the network, it can provide real-time steering commands when it sees new environments. See it in action in the video below.
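In essence, this is supervised regression: each camera frame is paired with the steering angle the human driver chose, and the network is fit to reproduce those decisions. Here is a minimal sketch of that idea with NumPy, using synthetic data and a linear model as a stand-in for the deep convolutional network (all names and data are illustrative, not NVIDIA's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: flattened 64-pixel "frames" paired with the
# steering angles a human driver chose. In PilotNet the inputs are full
# camera images and the model is a deep CNN; a linear model keeps this short.
n_frames, n_pixels = 200, 64
frames = rng.normal(size=(n_frames, n_pixels))
true_w = rng.normal(size=n_pixels)
steering = frames @ true_w + 0.01 * rng.normal(size=n_frames)

# Supervised regression: minimize the mean squared error between predicted
# steering and what the human driver actually did, by gradient descent.
w = np.zeros(n_pixels)
lr = 0.1
for _ in range(1000):
    pred = frames @ w
    grad = frames.T @ (pred - steering) / n_frames
    w -= lr * grad

mse = float(np.mean((frames @ w - steering) ** 2))
```

No steering rule is ever written down: the model's parameters are shaped entirely by examples of human behavior, which is the same principle that lets the real network generalize to environments it has never seen.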

Watching Our AI Think

Once PilotNet was up and running, we wanted to know more about how it makes decisions. So we developed a method for determining what the network thinks is important when it looks at an image.

To understand what PilotNet cares about most when it gets new information from a car camera, we created a visualization map. Below you see an example of the visualization, overlaid on an image of what the car’s camera recorded. Everything in green is a high-priority focus point for the network.
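One generic way to build this kind of "what matters" map is occlusion sensitivity: blank out each region of the input and measure how much the model's output changes. (The whitepaper's actual method differs; this is a hedged, toy sketch with a made-up model, just to show the principle.)

```python
import numpy as np

def occlusion_map(model, image, patch=4):
    """Score each patch by how much zeroing it shifts the model's output.

    High scores mark regions the model relies on, analogous to the
    green high-priority areas in PilotNet's visualization.
    """
    h, w = image.shape
    base = model(image)
    scores = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            scores[i // patch, j // patch] = abs(model(occluded) - base)
    return scores

# Toy "model" that only looks at the top-left corner of the image.
model = lambda img: float(img[:4, :4].sum())

image = np.ones((8, 8))
scores = occlusion_map(model, image, patch=4)
# Only the top-left patch moves the output, so only it scores nonzero.
```

Overlaying such scores on the original frame yields a heat map like the one described above: the regions whose removal would most change the steering command are the ones the network is "looking at."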

An inside look at how our AI car thinks from our latest whitepaper (NVIDIA).

This visualization shows us that PilotNet focuses on the same things a human driver would, including lane markers, road edges and other cars. What’s revolutionary about this is that we never directly told the network to care about these things. It learned what’s important in the driving environment the same way a student in driving school would: observation.

“The benefit of using a deep neural network is that the car figures things out on its own, but we can’t make real progress if we don’t understand how it makes decisions,” Muller says. “The method we developed for peering into the network gives us information we need to improve the system. It also gives us more confidence. I can’t explain everything I need the car to do, but I can show it, and now it can show me what it learned.”

When self-driving cars go into production, many different AI neural networks, as well as more traditional technologies, will operate the vehicle. Besides PilotNet, which controls steering, cars will have networks trained and focused on specific tasks like pedestrian detection, lane detection, sign reading, collision avoidance and many more.
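That division of labor can be pictured, very roughly, as a set of per-task modules that each consume the same sensor frame, with a supervising layer merging their outputs into one driving decision. The module names and outputs below are purely illustrative, not NVIDIA's actual architecture:

```python
# Each specialist network consumes the same camera frame and reports a
# task-specific result; a supervising layer combines them into a plan.
def steering_net(frame):
    # e.g. a PilotNet-style network: frame -> steering angle (radians)
    return {"steering_angle": 0.05}

def pedestrian_net(frame):
    # frame -> list of detected pedestrians (empty in this toy example)
    return {"pedestrians": []}

def sign_net(frame):
    # frame -> recognized traffic signs
    return {"signs": ["speed_limit_50"]}

SPECIALISTS = [steering_net, pedestrian_net, sign_net]

def drive_step(frame):
    """Run every specialist on the frame and merge their outputs."""
    decision = {}
    for net in SPECIALISTS:
        decision.update(net(frame))
    return decision

plan = drive_step(frame=None)
```

Keeping each network focused on one task means each can be trained, validated, and improved independently, which is part of what makes the overall system more reliable than a single monolithic model.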

Take Me to Starbucks

Using a variety of AI networks, each responsible for an area of expertise, will increase safety and reliability in autonomous vehicles. Our work brings this type of sophisticated artificial intelligence to the complex world of driving. So one day soon you can enjoy a trip like the one we take in BB8 below.

For more on NVIDIA’s full stack of automotive solutions, including the DRIVE PX 2 AI car supercomputer and the NVIDIA DriveWorks open platform for developers, visit NVIDIA.com/drive. To read the research about how we peered into our deep neural net to understand what it was prioritizing, view the whitepaper.

The post Reading an AI Car’s Mind: How NVIDIA’s Neural Net Makes Decisions appeared first on The Official NVIDIA Blog.

You’ve seen the headlines. You’ve heard the buzzwords. Everyone is talking about how self-driving cars will change the world. But there’s a much deeper story here — one we’re telling in full at our GPU Technology Conference, May 8-11, in Silicon Valley — about how AI generates software with amazing capabilities.

This is what makes AI so powerful, and what makes self-driving cars possible. At GTC, you’ll learn about the latest developments in this technology revolution — and meet the people putting it on the road. If you work in — or with — the auto industry, you can’t miss GTC and its unique conference sessions, hands-on labs, exhibitions, demos and networking events.

From Autonomous Racecars to Maps that Predict the Future

At GTC, you’ll gain insights about major auto manufacturers, sexy autonomous racecars, emotional artificial intelligence and maps that predict the future. Just to name a few.

The head of connected car, user interaction and telematics for the R&D group at Mercedes-Benz will showcase how the company enables AI in the car with powerful embedded hardware for sensor processing and fusion in the cabin interior.

Mercedes-Benz Concept EQ (Mercedes-Benz).

Self-driving technology also works on the track. Roborace, the first global championship for driverless cars, will cover relevant AI technologies in their Robocars and highlight how software defines the future of the auto industry and motor sport.

Getting Under the Hood

Everybody loves a shiny racecar, but if you want to get into the geeky stuff, we have you covered. Stop by a session about using NVIDIA DRIVE PX 2 and the NVIDIA DriveWorks SDK to enable Level 4 autonomous research vehicles. AutonomouStuff will talk you through sensor choices and mounting locations for highway and urban autonomous driving, including the optimal use of DriveWorks for sensor data gathering and processing.

If you’re getting excited thinking about the options at GTC 2017, Affectiva has an AI that can recognize how you’re feeling. They’ll talk about how they use computer vision and deep learning to measure drivers’ emotions, improving road safety and delivering a more personalized transportation experience.

Mapping the Road Ahead (Literally)

HD maps are a critical component of developing autonomous vehicles, and GTC has plenty of sessions to help you figure out how to use deep learning to create them, and keep them updated. From “self-healing” maps for automated driving, to 3D semantic maps providing cognition to autonomous vehicles, you’ll learn about the many ways mapping technology improves self-driving functionality. Speakers include representatives from Baidu, Civil Maps, HERE, TomTom and Zenrin.

HD mapping image (HERE).

Another set of sessions focuses on simulation for training and validating AI car systems. This technology speeds the pace of development, and increases the amount and diversity of data available to autonomous vehicles as they learn to drive themselves.

Register Now

Reserve your spot and save 20 percent using promo code CMAUTOBG. Then head over to our full list of self-driving speakers to schedule your sessions. See you there.

The post The Code Ahead: At GTC, Learn How Software Writes Software for AI Cars appeared first on The Official NVIDIA Blog.