
3D News

If you’ve a penchant for superhero-themed anything and playing games in 3D, the Batman: Arkham series has been a match made in heaven. Simply put, when it comes to 3D Vision titles it just doesn’t get much better – and it’s hard to see how it could. We’re happy to report that Batman: Arkham Origins, which releases today, continues this tradition. Out of the box, Origins is rated 3D Vision Ready, so you know it’s going to look spectacular. We’ve played it quite a bit...
Contest closed - stay tuned to 3DVisionlive.com for details about upcoming contests. 3DVisionLive.com is excited to unveil the latest in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. Like our most recent Spring Contest, this one will span three months - October, November, and December - and is themed around nature: your image must capture or show the essence of "nature" and what...
With sincere apologies for the delay, NVIDIA is pleased to announce the results of the Spring Photo Contest. We received more than 80 submissions from 3DVisionLive members and, for the first time, invited the membership to select the winner. The only criterion for the contest was that the photos had to represent the meaning of Spring in some fashion and be an original image created by the member who submitted it. All submitted photos were put in a gallery and ample time was...
For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which was shown at the NSA convention, held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images submitted by 50 different makers. Entrants ranged from casual snapshooters to commercial and fine art photographers. The competition was judged by...
VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon! The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 PST, June 30th), please view all of the images in the gallery and place...

Recent Blog Entries

It’s just not practical to program a car to drive itself in every environment, given the nearly infinite range of possible variables involved.

But, thanks to AI, we can show it how to drive. And, unlike your teenager, you can then see what it’s paying attention to.

With NVIDIA PilotNet, we created a neural-network-based system that learns to steer a car by observing what people do. But we didn’t stop there. We developed a method for the network to tell us what it prioritized when making driving decisions.

So while the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions.

“Think about why you recognize a face in a photo, and then try to break that down into a set of specific rules that you can program — you can’t do it,” says Urs Muller, chief architect for self-driving cars at NVIDIA. “The question thus becomes: ‘Do we want to limit our solutions to only the things we can define with rules?’”

AI Learns to Drive by Watching How We Drive

We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands.
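The post doesn’t show the network itself, but the core idea, a small convolutional network that regresses a steering value directly from a camera frame, can be sketched in a few lines. The PyTorch module below is an illustrative assumption whose layer sizes loosely follow NVIDIA’s published end-to-end driving paper; it is not the production PilotNet.

```python
import torch
import torch.nn as nn

class PilotNetSketch(nn.Module):
    """Illustrative CNN that maps one camera frame to one steering value.
    Layer sizes loosely follow NVIDIA's published end-to-end driving paper,
    but this is a sketch, not the production PilotNet."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # 64x1x18 for a 66x200 input
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                        # predicted steering command
        )

    def forward(self, x):                            # x: (N, 3, 66, 200) frames
        return self.regressor(self.features(x))
```

A single forward pass, e.g. `PilotNetSketch()(torch.randn(1, 3, 66, 200))`, yields one steering value per frame.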

We trained our network to steer the car by having it study human drivers. The network recorded what the driver saw using a camera on the car, and then paired the images with data about the driver’s steering decisions. We logged a lot of driving hours in different environments: on roads with and without lane markings; on country roads and highways; during different times of day with different lighting conditions; in a variety of weather conditions.
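Training on such logs amounts to plain supervised regression: each recorded frame is paired with the steering value the human driver chose at that instant, and the network is pushed toward those values. A minimal sketch, reusing the hypothetical PilotNetSketch module above and substituting random tensors for the real drive logs:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for logged drives: each camera frame is paired with the steering
# command the human driver issued at the same instant (random data here).
frames = torch.randn(256, 3, 66, 200)
steering = torch.randn(256, 1)
drive_log = TensorDataset(frames, steering)

model = PilotNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()          # regress toward the human's steering choice

for epoch in range(10):
    for batch_frames, batch_steering in DataLoader(drive_log, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(batch_frames), batch_steering)
        loss.backward()
        optimizer.step()
```

Once trained, the same model is simply evaluated on each incoming frame to produce the real-time steering commands described next.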

The trained network taught itself to drive BB8 without ever receiving a single hand-coded instruction. It learned by observing. And now that we’ve trained the network, it can provide real-time steering commands when it sees new environments. See it in action in the video below.

Watching Our AI Think

Once PilotNet was up and running, we wanted to know more about how it makes decisions. So we developed a method for determining what the network thinks is important when it looks at an image.

To understand what PilotNet cares about most when it gets new information from a car camera, we created a visualization map. Below you see an example of the visualization, overlaid on an image of what the car’s camera recorded. Everything in green is a high-priority focus point for the network.
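The whitepaper describes the exact procedure; a rough, simplified sketch of an activation-based salience map, reusing the hypothetical PilotNetSketch module above, averages each convolutional layer’s feature maps, scales them up layer by layer, and multiplies them together into a mask that can be overlaid on the camera image. It is an approximation of the idea, not a reproduction of the published method.

```python
import torch
import torch.nn.functional as F

def salience_map(model, frame):
    """Simplified activation-based salience: average each conv layer's
    post-ReLU feature maps, upsample layer by layer, and multiply the
    averages together down to input resolution."""
    activations = []
    x = frame
    for layer in model.features:
        x = layer(x)
        if isinstance(layer, torch.nn.ReLU):
            activations.append(x)

    # Deepest layer first: average over channels, then walk back toward the input.
    mask = activations[-1].mean(dim=1, keepdim=True)
    for act in reversed(activations[:-1]):
        avg = act.mean(dim=1, keepdim=True)
        mask = F.interpolate(mask, size=avg.shape[-2:], mode="bilinear", align_corners=False)
        mask = mask * avg
    # Scale up to the camera frame so the mask can be overlaid on the image.
    mask = F.interpolate(mask, size=frame.shape[-2:], mode="bilinear", align_corners=False)
    return (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)

# Example: overlay-ready mask for a single (1, 3, 66, 200) frame.
# heat = salience_map(PilotNetSketch(), torch.randn(1, 3, 66, 200))
```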

An inside look at how our AI car thinks from our latest whitepaper (NVIDIA).

This visualization shows us that PilotNet focuses on the same things a human driver would, including lane markers, road edges and other cars. What’s revolutionary about this is that we never directly told the network to care about these things. It learned what’s important in the driving environment the same way a student in driving school would: observation.

“The benefit of using a deep neural network is that the car figures things out on its own, but we can’t make real progress if we don’t understand how it makes decisions,” Muller says. “The method we developed for peering into the network gives us information we need to improve the system. It also gives us more confidence. I can’t explain everything I need the car to do, but I can show it, and now it can show me what it learned.”

When self-driving cars go into production, many different AI neural networks, as well as more traditional technologies, will operate the vehicle. Besides PilotNet, which controls steering, cars will have networks trained and focused on specific tasks like pedestrian detection, lane detection, sign reading, collision avoidance and many more.

Take Me to Starbucks

Using a variety of AI networks, each responsible for an area of expertise, will increase safety and reliability in autonomous vehicles. Our work brings this type of sophisticated artificial intelligence to the complex world of driving. So one day soon you can enjoy a trip like the one we take in BB8 below.

For more on NVIDIA’s full stack of automotive solutions, including the DRIVE PX 2 AI car supercomputer and the NVIDIA DriveWorks open platform for developers, visit NVIDIA.com/drive. To read the research about how we peered into our deep neural net to understand what it was prioritizing, view the whitepaper.

The post Reading an AI Car’s Mind: How NVIDIA’s Neural Net Makes Decisions appeared first on The Official NVIDIA Blog.

You’ve seen the headlines. You’ve heard the buzzwords. Everyone is talking about how self-driving cars will change the world. But there’s a much deeper story here — one we’re telling in full at our GPU Technology Conference, May 8-11, in Silicon Valley — about how AI generates software with amazing capabilities.

This is what makes AI so powerful, and makes self-driving cars possible. At GTC, you’ll learn about the latest developments in this technology revolution — and meet the people putting it on the road. If you work in — or with — the auto industry, you can’t miss GTC and its unique conference sessions, hands-on labs, exhibitions, demos and networking events.

From Autonomous Racecars to Maps that Predict the Future

At GTC, you’ll gain insights into major auto manufacturers, sexy autonomous racecars, emotional artificial intelligence and maps that predict the future, to name just a few.

The head of connected car, user interaction and telematics for the R&D group at Mercedes-Benz will showcase how the company enables AI in the car with powerful embedded hardware for sensor processing and fusion in the cabin interior.

Mercedes-Benz Concept EQ (Mercedes-Benz).

Self-driving technology also works on the track. Roborace, the first global championship for driverless cars, will cover relevant AI technologies in their Robocars and highlight how software defines the future of the auto industry and motor sport.

Getting Under the Hood

Everybody loves a shiny racecar, but if you want to get into the geeky stuff, we have you covered. Stop by a session about using NVIDIA DRIVE PX 2 and the NVIDIA DriveWorks SDK to enable Level 4 autonomous research vehicles. AutonomouStuff will talk you through sensor choices and mounting locations for highway and urban autonomous driving, including the optimal use of DriveWorks for sensor data gathering and processing.

If you’re getting excited thinking about the options at GTC 2017, Affectiva has an AI that can recognize how you’re feeling. They’ll talk about how they use computer vision and deep learning to measure drivers’ emotions, improving road safety and delivering a more personalized transportation experience.

Mapping the Road Ahead (Literally)

HD maps are a critical component of developing autonomous vehicles, and GTC has plenty of sessions to help you figure out how to use deep learning to create them, and keep them updated. From “self-healing” maps for automated driving, to 3D semantic maps providing cognition to autonomous vehicles, you’ll learn about the many ways mapping technology improves self-driving functionality. Speakers include representatives from Baidu, Civil Maps, HERE, TomTom and Zenrin.

HD mapping image (HERE).

Another set of sessions focuses on simulation for training and validating AI car systems. This technology speeds the pace of development, and increases the amount and diversity of data available to autonomous vehicles as they learn to drive themselves.

Register Now

Reserve your spot and save 20 percent using promo code CMAUTOBG. Then head over to our full list of self-driving speakers to schedule your sessions. See you there.

The post The Code Ahead: At GTC, Learn How Software Writes Software for AI Cars appeared first on The Official NVIDIA Blog.