
3D News

If you’ve a penchant for superhero-themed anything and playing games in 3D, the Batman: Arkham series has been a match made in heaven. Simply put, when it comes to 3D Vision titles it just doesn’t get much better – and it’s hard to see how it could. We’re happy to report that Batman: Arkham Origins, which releases today, continues this tradition. Out of the box, Origins is rated 3D Vision Ready, so you know it’s going to look spectacular. We’ve played it quite a bit...
Contest closed - stay tuned for details about upcoming contests. We’re excited to unveil the latest in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. Like our most recent Spring Contest, this one will span three months - October, November, and December - and is themed: your image must capture or show the essence of "nature" and what...
With sincere apologies for the delay, NVIDIA is pleased to announce the results of the Spring Photo Contest. We received more than 80 submissions from 3DVisionLive members and, for the first time, invited the membership to select the winner. The only criterion for the contest was that the photos had to represent the meaning of Spring in some fashion and be an original image created by the member who submitted it. All submitted photos were put in a gallery and ample time was...
For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which is shown at the NSA convention - held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images, submitted by 50 different makers. Entrants spanned the range from casual snapshooters to both commercial and fine art photographers. The competition was judged by...
VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon! The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 p.m. PST, June 30th), please view all of the images in the gallery and place...

Recent Blog Entries

With droopy electronic eyes, two gangly arms and four spindly legs, MANTIS looks like it could be a sidekick in a science-fiction blockbuster.

This robot isn’t from a universe far, far away, but from northern Germany. And it was on display this week in Amsterdam among a half-dozen walking, flying and crawling machines powered by NVIDIA technologies playing a starring role at GTC Europe.

GTC attendees clustered around the MANTIS on the floor of the city’s Passenger Terminal to snap photos and talk to members of the team from the German Research Center for Artificial Intelligence and the University of Bremen that put it together.

MANTIS was among the embedded devices on display at GTC Europe in Amsterdam this week.

It’s both an ingenious design, and a classic machine learning problem. The robot can grasp and manipulate objects with its two arms while scurrying around on four legs. Or it can crouch down to use all six of its appendages to scuttle across particularly tricky terrain.

The challenge: the robot’s operator has to manipulate between 20 and 50 parameters as the machine moves. Its unique design means it has tremendous potential. To make the most of it, the robot’s creators are working to use our Jetson embedded platform — and the power of deep learning — to make MANTIS truly autonomous.

The team’s engineers plan to use Jetson to help MANTIS evaluate how it moves through its environment. It can then create a knowledge base that Jetson can use in conjunction with the TensorRT high-performance neural network inference engine to put these lessons to work, in real time, as it moves.
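The sense-infer-act loop described above can be sketched in miniature. This is an illustrative toy only: the function names (`read_sensors`, `infer_gait`) and the threshold rule standing in for a trained network are hypothetical, not the DFKI team's actual code or the TensorRT API.

```python
# Toy sketch of a robot control loop: sense the environment, run inference
# (here a stub in place of a trained network served by an inference engine),
# and pick a movement mode. All names and values are illustrative.

def read_sensors():
    # Stand-in for MANTIS's sensor state (joint angles, terrain data, etc.)
    return {"terrain_roughness": 0.8}

def infer_gait(sensors):
    # Stand-in for the learned policy: map sensor state to a movement mode.
    if sensors["terrain_roughness"] > 0.5:
        return "six_legged_crawl"   # crouch and use all six appendages
    return "four_legged_walk"       # walk on four legs, arms free to grasp

def control_step():
    # One iteration of the real-time loop: sense, infer, act.
    sensors = read_sensors()
    return infer_gait(sensors)

print(control_step())
```

In a real deployment this loop would run many times per second, with the stub replaced by an optimized network so inference fits the robot's control deadline.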

MANTIS was a highlight of GTC Europe’s embedded track, which packed in 18 sessions by leading researchers, companies and startups talking about how they’re using deep learning and GPUs to push the boundaries of embedded technology.

Thanks to our Jetson embedded platform and the power of deep learning — the AI revolution kickstarted by GPUs — devices of all kinds are becoming increasingly intelligent. A few highlights:

  • Aerialtronics released the Altura Zenith, one of the first commercial drones to use AI technology to visually inspect buildings, cell towers, wind turbines and more. All real-time processing is done onboard the drone.
  • is computer vision software for precision agriculture and inspection, enabling drones to count cattle, crops and more. customers can get a bird’s eye view of all of their assets through aerial imagery.
  • IIT and R1, also known as “your personal Humanoids,” are robots created to help people in their daily tasks in homes and offices. They cost about the price of a new TV.
  • Neurala has deep learning software, Brains for Bots, that runs in real time on a drone. With Neurala software, a drone can learn objects, scenes, people or obstacles. It can also recognize them when viewed by the camera, locate them within the video stream and track them as they move.
  • Parrot is showing its latest S.L.A.M. Dunk open development kit for the design of advanced applications for autonomous navigation, obstacle avoidance, indoor navigation and 3D mapping for drones and other robotic platforms in environments with multiple barriers and where GPS signals are not available.
  • Squadrone Systems is demonstrating real-time data collection and data analytics for logistics, site exploration and surveillance. These autonomous flying drones can scan items in bins and recognize misplaced items.


The post Six-Legged MANTIS Among Machines Swarming Out of Sci-Fi Into GTC Europe appeared first on The Official NVIDIA Blog.

It was my wobbly willpower vs. a double-chocolate brownie, and the brownie was winning.

My newest defense: the Lose It diet app’s deep learning calorie counter. Using a photo of the brownie, it warned me of the harrowing diet damage the brownie would inflict. I ate half.

I am not a dieter. The idea of keeping track of everything I eat – a winning strategy, the experts say – makes me lose my appetite. Which, I suppose, is another diet strategy.

But last week, a beta version of Lose It’s automatic calorie-counter lured me into the ranks of the estimated 45 million American dieters.

The Lose It diet app has a new automatic calorie counter that uses deep learning to tally calories from a photo.

Called Snap It, the feature uses the app’s GPU-accelerated deep learning to keep tabs on what you eat, based largely on photos you take of what’s on your plate. It returns a list of foods it thinks are in the photo. You choose one and select a portion size, and it tallies the calorie toll.
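The photo-to-calories flow just described can be sketched as a tiny pipeline. Everything here is a hypothetical stand-in: the classifier stub, food names, and calorie table are illustrative, not Lose It's actual model or data.

```python
# Toy version of the Snap It flow: photo -> candidate foods -> user picks
# one and a portion size -> calorie tally. Values are made up.

CALORIES_PER_SERVING = {   # hypothetical lookup table (kcal per serving)
    "banana": 105,
    "salad": 150,
    "pasta": 220,
}

def classify_photo(photo_bytes):
    """Stand-in for the deep learning model: return candidate foods,
    most confident first."""
    return ["banana", "salad", "pasta"]

def tally_calories(photo_bytes, chosen_index, portions):
    """User picks one candidate and a portion size; tally the calories."""
    candidates = classify_photo(photo_bytes)
    food = candidates[chosen_index]
    return food, CALORIES_PER_SERVING[food] * portions

food, kcal = tally_calories(b"...", chosen_index=0, portions=1.5)
print(food, kcal)  # banana 157.5
```

The user's choice in the middle step is what gives the model its training feedback: every confirmed selection is a labeled example.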

Like any beta, it has some glitches. The app easily identifies photos of some foods like salad, pasta or a banana. But the list it generated from a picture of a glass of white wine was off the mark, and increasingly desperate sounding: water, cake, milkshake, smoothie, applesauce, fried rice, cheesecake, edamame, sushi, dumpling. When it saw a picture of cereal in a bowl, it offered up pasta, granola, almonds, fried chicken, cake, risotto, pretzels, steak, sauce or an egg.

But one of the beauties of deep learning is that the AI gets better with more data and more training feedback. And millions of Lose It customers – the app currently averages 2 million users a month – gained access to the Snap It deep learning feature this week.

“The more people use this, the more it improves,” said Edward W. Lowe, data scientist at Lose It. “The goal is to get the accuracy high enough in six months so it won’t even need to ask you for validation.”

Tough Training for Neural Network

Although Google and others have created automated calorie counters, Lowe said Lose It’s accuracy rate is about 87 percent for foods commonly entered by its users. That surpassed other systems tested against the standard Food-101 benchmark dataset.
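The accuracy figure being quoted is, at its simplest, the fraction of test photos whose predicted label matches the true label. A minimal sketch, with made-up labels (Food-101 itself has 101 classes with 1,000 images each):

```python
# Top-1 accuracy: share of predictions that exactly match the ground truth.

def top1_accuracy(predictions, truths):
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

print(top1_accuracy(["salad", "pasta", "banana", "wine"],
                    ["salad", "pasta", "banana", "water"]))  # 0.75
```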

He credits that to the rigorous neural network training – he trained the network 10 times – using a vast database of 230,000 food images and more than 4 billion foods logged by Lose It users since 2008.

Lowe trained the network using the NVIDIA DIGITS deep learning training system on four NVIDIA TITAN X GPUs. DIGITS uses the latest cuDNN 5.1 deep learning library for accelerated training on NVIDIA GPUs.

“Without the GPUs, we never would have initiated this project,” Lowe said.

A Little Help Losing Weight

Even before the automatic calorie counting, Lose It has helped lots of people lose weight. Since the company launched in 2008, its members have reported losing a total of more than 50 million pounds.

It’s a good thing something works. More than two-thirds of American adults are considered to be overweight or obese, according to the National Institutes of Health. Globally, 39 percent of adults are overweight or obese, the World Health Organization said.

As for my fight with brownies, let’s just say the Snap It feature continues to get regular feedback on what a small portion of chocolatey deliciousness looks like.


The post How AI Helped Me (Almost) Give up Brownies appeared first on The Official NVIDIA Blog.