
3D News

At E3 Namco thrilled hip-hop loving fans of its Tekken series by announcing that none other than the “Doggfather” of hip-hop himself, Snoop Dogg, has recorded the title track for its upcoming Tekken Tag Tournament 2. Better still, for his myriad fans, a special Snoop-themed fighting stage will be featured in the game. And that’s not all… NVIDIA has partnered with Namco to produce an exclusive 3D version of the game’s title track, “Knock ‘Em Down”. The video made its national...
3DVisionLive.com is excited to unveil the fourth in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. This one's a little different in that it spans three months (June, July, and August) and is themed: your image must capture the essence of the season, which can be anything from bathing beauties to sand castles to epic water balloon fights. The Summer Photo Contest is open to legal...
During NGF in Shanghai in April (http://www.geforce.com/whats-new/articles/watch-live-streaming/), NVIDIA CEO Jensen Huang unveiled the Passion Leads Army (光荣使命) benchmark to the public for the first time. PLA is the first China-developed DirectX 11 benchmark officially released by a top Chinese developer—Giant IronHorse Studio. Giant IronHorse is building the multiplayer online first-person shooter on Unreal Engine 3, so you know it's going to look awesome. The...
Max Payne and Bullet Time are back and better than ever. The third chapter of Rockstar Games' critically acclaimed action series has launched worldwide on PC. Taking place in New York and in São Paulo, Brazil, Max Payne 3 follows Max Payne through a dark and gritty story fans of the franchise will love. For the first time in the series, Max Payne 3 features multiplayer that brings the same cinematic feel, fluid gunplay, and sense of movement of the single-player game into...
Our April Photo Contest was another doozy with more than 100 images entered. Subjects spanned a wide gamut from creative angles in in-game screenshots to macro shots of insects to family candids and super-wide panoramic shots. Thanks to all who submitted—and please, keep ‘em coming! And the Winner Is… Without further ado, another tip o’ the cap goes to Nick Saglimbeni, who won our very first contest in February of this year. This time, his shot of Kim Kardashian posed in a...

Recent Blog Entries

It’s just not practical to program a car to drive itself in every environment, given the nearly infinite range of possible variables involved.

But, thanks to AI, we can show it how to drive. And, unlike your teenager, you can then see what it’s paying attention to.

With NVIDIA PilotNet, we created a neural-network-based system that learns to steer a car by observing what people do. But we didn’t stop there. We developed a method for the network to tell us what it prioritized when making driving decisions.

So while the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions.

“Think about why you recognize a face in a photo, and then try to break that down into a set of specific rules that you can program — you can’t do it,” says Urs Muller, chief architect for self-driving cars at NVIDIA. “The question thus becomes: ‘Do we want to limit our solutions to only the things we can define with rules?’”

AI Learns to Drive by Watching How We Drive

We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands.

We trained our network to steer the car by having it study human drivers. The network recorded what the driver saw using a camera on the car, and then paired the images with data about the driver’s steering decisions. We logged a lot of driving hours in different environments: on roads with and without lane markings; on country roads and highways; during different times of day with different lighting conditions; in a variety of weather conditions.

The trained network taught itself to drive BB8 without ever receiving a single hand-coded instruction. It learned by observing. And now that we’ve trained the network, it can provide real-time steering commands when it sees new environments. See it in action in the video below.
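The setup described above is supervised regression: each camera frame is paired with the steering command the human driver applied at that instant, and the network is fit to minimize the error between its predicted and the recorded steering. As a minimal sketch of that objective (using NumPy and a linear model as a stand-in for PilotNet's convolutional network; all data and names here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: flattened "camera frames" paired with the
# steering value the human driver applied at the same instant.
n_frames, n_pixels = 500, 64
frames = rng.normal(size=(n_frames, n_pixels))
true_w = rng.normal(size=n_pixels)               # hidden "driving policy"
steering = frames @ true_w + rng.normal(scale=0.01, size=n_frames)

# Fit the model by gradient descent on mean-squared error, the same
# objective a steering-regression network would minimize.
w = np.zeros(n_pixels)
lr = 0.01
for _ in range(2000):
    pred = frames @ w
    grad = frames.T @ (pred - steering) / n_frames
    w -= lr * grad

mse = np.mean((frames @ w - steering) ** 2)
print(f"training MSE: {mse:.4f}")
```

The point of the sketch is the data pairing, not the model: PilotNet swaps the linear map for a deep convolutional network, but the loss being minimized is the same kind of prediction error against recorded human steering.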

Watching Our AI Think

Once PilotNet was up and running, we wanted to know more about how it makes decisions. So we developed a method for determining what the network thinks is important when it looks at an image.

To understand what PilotNet cares about most when it gets new information from a car camera, we created a visualization map. Below you see an example of the visualization, overlaid on an image of what the car’s camera recorded. Everything in green is a high-priority focus point for the network.

An inside look at how our AI car thinks from our latest whitepaper (NVIDIA).

This visualization shows us that PilotNet focuses on the same things a human driver would, including lane markers, road edges and other cars. What’s revolutionary about this is that we never directly told the network to care about these things. It learned what’s important in the driving environment the same way a student in driving school would: observation.
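One way to build a visualization like this (a simplified sketch in the spirit of the method in the whitepaper, not NVIDIA's exact implementation) is to average each convolutional layer's feature maps, then walk from the deepest layer back toward the input, upsampling the running mask and multiplying it pointwise with each shallower layer's averaged activations. Regions that stay bright through every layer are the ones the network weighted heavily. The `salience_map` helper below is a hypothetical name, and it assumes a stride-2 stack where each layer halves the spatial resolution:

```python
import numpy as np

def salience_map(layer_activations):
    """Simplified salience visualization sketch.

    layer_activations: list of arrays shaped (channels, h, w), ordered
    from the shallowest layer to the deepest. Each deeper layer is
    assumed to be exactly half the spatial resolution of the one
    before it, as in a stride-2 convolutional stack.
    """
    # Average each layer's feature maps over the channel axis.
    means = [a.mean(axis=0) for a in layer_activations]

    # Walk from the deepest layer back toward the input: upsample the
    # running mask to the next-shallower resolution, then multiply it
    # pointwise with that layer's averaged activations.
    mask = means[-1]
    for mean in reversed(means[:-1]):
        mask = np.kron(mask, np.ones((2, 2)))  # nearest-neighbor 2x upsample
        mask = mask * mean

    # Normalize to [0, 1] so the mask can be overlaid on the camera image.
    mask -= mask.min()
    if mask.max() > 0:
        mask /= mask.max()
    return mask

# Toy activations for a 3-layer stack at 16x16, 8x8, and 4x4 resolution.
rng = np.random.default_rng(1)
acts = [np.abs(rng.normal(size=(c, s, s)))
        for c, s in [(8, 16), (16, 8), (32, 4)]]
heat = salience_map(acts)
print(heat.shape)  # (16, 16)
```

The resulting mask has the input camera frame's resolution, so it can be rendered as a green overlay like the one shown above.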

“The benefit of using a deep neural network is that the car figures things out on its own, but we can’t make real progress if we don’t understand how it makes decisions,” Muller says. “The method we developed for peering into the network gives us information we need to improve the system. It also gives us more confidence. I can’t explain everything I need the car to do, but I can show it, and now it can show me what it learned.”

When self-driving cars go into production, many different AI neural networks, as well as more traditional technologies, will operate the vehicle. Besides PilotNet, which controls steering, cars will have networks trained and focused on specific tasks like pedestrian detection, lane detection, sign reading, collision avoidance and many more.

Take Me to Starbucks

Using a variety of AI networks, each responsible for an area of expertise, will increase safety and reliability in autonomous vehicles. Our work brings this type of sophisticated artificial intelligence to the complex world of driving. So one day soon you can enjoy a trip like the one we take in BB8 below.

For more on NVIDIA’s full stack of automotive solutions, including the DRIVE PX 2 AI car supercomputer and the NVIDIA DriveWorks open platform for developers, visit NVIDIA.com/drive. To read the research about how we peered into our deep neural net to understand what it was prioritizing, view the whitepaper.

The post Reading an AI Car’s Mind: How NVIDIA’s Neural Net Makes Decisions appeared first on The Official NVIDIA Blog.

You’ve seen the headlines. You’ve heard the buzzwords. Everyone is talking about how self-driving cars will change the world. But there’s a much deeper story here — one we’re telling in full at our GPU Technology Conference, May 8-11, in Silicon Valley — about how AI generates software with amazing capabilities.

This is what makes AI so powerful, and makes self-driving cars possible. At GTC, you’ll learn about the latest developments in this technology revolution — and meet the people putting it on the road. If you work in — or with — the auto industry, you can’t miss GTC and its unique conference sessions, hands-on labs, exhibitions, demos and networking events.

From Autonomous Racecars to Maps that Predict the Future

At GTC, you’ll gain insights about major auto manufacturers, sexy autonomous racecars, emotional artificial intelligence and maps that predict the future. Just to name a few.

The head of connected car, user interaction and telematics for the R&D group at Mercedes-Benz will showcase how the company enables AI in the car with powerful embedded hardware for sensor processing and fusion in the cabin interior.

Mercedes-Benz Concept EQ (Mercedes-Benz).

Self-driving technology also works on the track. Roborace, the first global championship for driverless cars, will cover relevant AI technologies in their Robocars and highlight how software defines the future of the auto industry and motor sport.

Getting Under the Hood

Everybody loves a shiny racecar, but if you want to get into the geeky stuff, we have you covered. Stop by a session about using NVIDIA DRIVE PX 2 and the NVIDIA DriveWorks SDK to enable Level 4 autonomous research vehicles. AutonomouStuff will talk you through sensor choices and mounting locations for highway and urban autonomous driving, including the optimal use of DriveWorks for sensor data gathering and processing.

If you’re getting excited thinking about the options at GTC 2017, Affectiva has an AI that can recognize how you’re feeling. They’ll talk about how they use computer vision and deep learning to measure drivers’ emotions, improving road safety and delivering a more personalized transportation experience.

Mapping the Road Ahead (Literally)

HD maps are a critical component of developing autonomous vehicles, and GTC has plenty of sessions to help you figure out how to use deep learning to create them, and keep them updated. From “self-healing” maps for automated driving, to 3D semantic maps providing cognition to autonomous vehicles, you’ll learn about the many ways mapping technology improves self-driving functionality. Speakers include representatives from Baidu, Civil Maps, HERE, TomTom and Zenrin.

HD mapping image (HERE).

Another set of sessions focuses on simulation for training and validating AI car systems. This technology speeds the pace of development, and increases the amount and diversity of data available to autonomous vehicles as they learn to drive themselves.

Register Now

Reserve your spot and save 20 percent using promo code CMAUTOBG. Then head over to our full list of self-driving speakers to schedule your sessions. See you there.

The post The Code Ahead: At GTC, Learn How Software Writes Software for AI Cars appeared first on The Official NVIDIA Blog.