
3D News

In our opinion, far too few people are taking 3D images, and one major reason is the perceived difficulty barrier: shooting two images and merging them for a stereo effect with special software, or building a custom twin digital SLR rig, is simply too complex or too expensive for most of us mere mortals. Enter the 3D-capable point-and-shoot, the latest of which is Panasonic’s upcoming Lumix DMC-3D1. Similar to Fujifilm...

Recent Blog Entries

With droopy electronic eyes, two gangly arms and four spindly legs, MANTIS looks like it could be a sidekick in a science-fiction blockbuster.

This robot isn’t from a universe far, far away, but from northern Germany. And it was on display this week in Amsterdam among a half-dozen walking, flying and crawling machines powered by NVIDIA technologies playing a starring role at GTC Europe.

GTC attendees clustered around MANTIS on the floor of the city’s Passenger Terminal to snap photos and talk to members of the team from the German Research Center for Artificial Intelligence and the University of Bremen that put it together.

MANTIS was among the embedded devices on display at GTC Europe in Amsterdam this week.

It’s both an ingenious design, and a classic machine learning problem. The robot can grasp and manipulate objects with its two arms while scurrying around on four legs. Or it can crouch down to use all six of its appendages to scuttle across particularly tricky terrain.

The challenge: the robot’s operator has to manipulate between 20 and 50 parameters as the machine moves. Its unique design means it has tremendous potential. To make the most of it, the robot’s creators are working to use our Jetson embedded platform, and the power of deep learning, to make MANTIS truly autonomous.

The team’s engineers plan to use Jetson to help MANTIS evaluate how it moves through its environment. The robot can then build a knowledge base that Jetson, working with the TensorRT high-performance neural network inference engine, can use to put those lessons to work in real time as it moves.
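
To make that loop concrete, here is a minimal, purely illustrative Python sketch of the sense-infer-act cycle described above. Every name in it (read_sensors, infer_gait, apply_parameters) is a hypothetical stand-in, not code from the MANTIS project or the TensorRT API:

```python
# Hypothetical sketch of the sense -> infer -> act loop described above.
# infer_gait() stands in for a trained network served on the Jetson that
# maps sensor readings to the 20-50 locomotion parameters an operator
# would otherwise tune by hand.

import time

def read_sensors():
    # Stand-in for MANTIS's joint encoders, IMU and cameras.
    return {"imu": (0.0, 0.0, 9.8), "joint_angles": [0.0] * 24}

def infer_gait(observation):
    # Stand-in for a real-time inference call; a trained network would
    # return a full parameter set (stride, body height, leg phase, ...).
    return {"stride_m": 0.12, "body_height_m": 0.30, "six_legged_mode": False}

def apply_parameters(params):
    # Stand-in for the motor controllers that execute the chosen gait.
    print("commanding gait:", params)

def control_loop(hz=50, steps=3):
    period = 1.0 / hz
    for _ in range(steps):
        obs = read_sensors()
        apply_parameters(infer_gait(obs))  # inference replaces manual tuning
        time.sleep(period)

if __name__ == "__main__":
    control_loop()
```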

MANTIS was a highlight of GTC Europe’s embedded track, which packed in 18 sessions by leading researchers, companies and startups talking about how they’re using deep learning and GPUs to push the boundaries of embedded technology.

Thanks to our Jetson embedded platform and the power of deep learning — the AI revolution kickstarted by GPUs — devices of all kinds are becoming increasingly intelligent. A few highlights:

  • Aerialtronics released the Altura Zenith, one of the first commercial drones to use AI to visually inspect buildings, cell towers, wind turbines and more. All real-time processing is done onboard the drone.
  • Birds.ai makes computer vision software for precision agriculture and inspection, enabling drones to count cattle, crops and more. Birds.ai customers get a bird’s-eye view of all of their assets through aerial imagery.
  • IIT’s R1, also known as “your personal humanoid,” is a robot created to help people with daily tasks in homes and offices. It costs about the price of a new TV.
  • Neurala has deep learning software, Brains for Bots, that runs in real time on a drone. With Neurala software, a drone can learn objects, scenes, people or obstacles. It can also recognize them when viewed by the camera, locate them within the video stream and track them as they move.
  • Parrot is showing its latest S.L.A.M. Dunk, an open development kit for building advanced autonomous navigation, obstacle avoidance, indoor navigation and 3D mapping applications for drones and other robotic platforms operating in cluttered environments where GPS signals are unavailable.
  • Squadrone Systems is demonstrating real-time data collection and data analytics for logistics, site exploration and surveillance. These autonomous flying drones can scan items in bins and recognize misplaced items.


It was my wobbly willpower vs. a double-chocolate brownie, and the brownie was winning.

My newest defense: the Lose It diet app’s deep learning calorie counter. Using a photo of the brownie, it warned me of the harrowing diet damage the brownie would inflict. I ate half.

I am not a dieter. The idea of keeping track of everything I eat – a winning strategy, the experts say – makes me lose my appetite. Which, I suppose, is another diet strategy.

But last week, a beta version of Lose It’s automatic calorie-counter lured me into the ranks of the estimated 45 million American dieters.

The Lose It diet app has a new automatic calorie counter that uses deep learning to tally calories from a photo.

Called Snap It, the app’s GPU-accelerated deep learning keeps tabs on what you eat based largely on photos you take of what’s on your plate. It returns a list of foods it thinks are in the photo. You choose one and select a portion size, and it tallies the calorie toll.
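
As a rough illustration of that flow, here’s a minimal Python sketch, with a hypothetical classify_food() standing in for the GPU-accelerated model and a toy calorie table standing in for Lose It’s food database:

```python
# Illustrative sketch of the Snap It flow: photo in, ranked guesses out,
# then a calorie tally from the user's chosen food and portion size.
# classify_food() and CALORIES_PER_100G are hypothetical stand-ins.

CALORIES_PER_100G = {"salad": 20, "pasta": 160, "banana": 89, "brownie": 466}

def classify_food(photo_path, top_k=5):
    # Stand-in for the deep learning classifier; a real model would return
    # its top-k food labels with confidence scores for this photo.
    return [("brownie", 0.81), ("cake", 0.09), ("pasta", 0.04)][:top_k]

def tally_calories(photo_path, chosen_label, portion_grams):
    guesses = [label for label, _ in classify_food(photo_path)]
    if chosen_label not in guesses:
        raise ValueError("pick a food from the suggested list")
    return CALORIES_PER_100G[chosen_label] * portion_grams / 100.0

print(classify_food("plate.jpg"))                  # the user picks from this list
print(tally_calories("plate.jpg", "brownie", 60))  # ~280 kcal for a 60 g piece
```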

Like any beta, it has some glitches. The app easily identifies photos of some foods like salad, pasta or a banana. But the list it generated from a picture of a glass of white wine was off the mark, and increasingly desperate sounding: water, cake, milkshake, smoothie, applesauce, fried rice, cheesecake, edamame, sushi, dumpling. When it saw a picture of cereal in a bowl, it offered up pasta, granola, almonds, fried chicken, cake, risotto, pretzels, steak, sauce or an egg.

But one of the beauties of deep learning is that the AI gets better with more data and more training feedback. And millions of Lose It customers – the app currently averages 2 million users a month – gained access to the Snap It deep learning feature this week.

“The more people use this, the more it improves,” said Edward W. Lowe, data scientist at Lose It. “The goal is to get the accuracy high enough in six months so it won’t even need to ask you for validation.”
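
That validation step is what drives the improvement: each confirmed or corrected guess becomes a new labeled example for the next training run. A small illustrative sketch of the idea (the file format and names here are assumptions, not Lose It’s actual pipeline):

```python
# Illustrative only: log each user-validated prediction as a labeled
# example, so the next training run can learn from real-world mistakes.

import json

def record_feedback(photo_path, predicted, confirmed, log="feedback.jsonl"):
    with open(log, "a") as f:
        f.write(json.dumps({
            "photo": photo_path,
            "predicted": predicted,
            "label": confirmed,              # ground truth for retraining
            "model_was_right": predicted == confirmed,
        }) + "\n")

record_feedback("glass.jpg", predicted="milkshake", confirmed="white wine")
```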

Tough Training for Neural Network

Although Google and others have created automated calorie counters, Lowe said Lose It’s accuracy rate is about 87 percent for foods commonly entered by its users. That surpassed other systems he tested using the standard Food-101 dataset benchmark.

He credits that to the rigorous neural network training – he trained the network 10 times – using a vast database of 230,000 food images and more than 4 billion foods logged by Lose It users since 2008.

Lowe trained the network using the NVIDIA DIGITS deep learning training system on four NVIDIA TITAN X GPUs. DIGITS uses the latest cuDNN 5.1 deep learning library for accelerated training on NVIDIA GPUs.
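
The post doesn’t share Lose It’s training code (DIGITS drives frameworks such as Caffe under the hood), but as a rough modern analogue, here is a hedged PyTorch sketch of fine-tuning a classifier on the public Food-101 dataset across multiple GPUs; the model choice and hyperparameters are illustrative, not Lose It’s:

```python
# Rough analogue (PyTorch, not the Caffe/DIGITS stack the post describes)
# of fine-tuning an image classifier on Food-101 across several GPUs.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.Food101("data", split="train", transform=tfm, download=True)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True,
                                     num_workers=4)

model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 101)  # 101 food classes
model = nn.DataParallel(model).cuda()            # spread across all visible GPUs

opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                    # one training pass shown
    opt.zero_grad()
    loss = loss_fn(model(images.cuda()), labels.cuda())
    loss.backward()
    opt.step()
```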

“Without the GPUs, we never would have initiated this project,” Lowe said.

A Little Help Losing Weight

Even before automatic calorie counting, Lose It was helping lots of people lose weight. Since the company launched in 2008, its members have reported losing a total of more than 50 million pounds.

It’s a good thing something works. More than two-thirds of American adults are considered to be overweight or obese, according to the National Institutes of Health. Globally, 39 percent of adults are overweight or obese, the World Health Organization said.

As for my fight with brownies, let’s just say the Snap It feature continues to get regular feedback on what a small portion of chocolatey deliciousness looks like.
