

Recent Blog Entries

Three pioneering research teams supported by NVIDIA AI Labs are presenting key findings this week at the International Conference on Machine Learning, a major AI show taking place in Stockholm.

Known as NVAIL, our program provides these research partners with access to powerful GPU computing resources.

Researchers from Tsinghua University and Georgia Tech are exploring ways to detect vulnerabilities in neural networks that use graph structured data. An Oxford University team is training multiple AI agents to efficiently operate together in the same environment. And Carnegie Mellon University researchers are determining how a neural network can more quickly learn the optimal path around a space.

Strengthening Neural Networks Against Attack

If shown an image of a watermelon with some scrambled pixels laid over it, a human would still easily identify it. But those scrambled pixels can be enough to fool a neural network into misclassifying the pictured object as an elephant instead, a kind of adversarial attack hackers can use to manipulate the algorithm.

While existing research on adversarial attacks has focused on images, a joint paper from Georgia Tech, Ant Financial and Tsinghua University shows for the first time that this vulnerability extends to neural networks for graph data as well.

Graph structured data consists of nodes, which store data, and edges, which connect nodes to one another. The researchers experimented by adding and deleting edges to see at what point these modifications cause the neural network’s performance to break down.
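As a rough illustration of that kind of probe (not the authors’ method), the sketch below flips single edges in a made-up toy graph and checks when a frozen graph classifier changes its prediction for a target node; the graph, features and classifier weights are all invented for the example.

```python
# Toy sketch: probe a fixed graph classifier by flipping single edges and
# recording which flips change a target node's predicted label.
# Everything here (graph, features, weights) is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_feats, n_classes = 8, 4, 2
A = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops
X = rng.normal(size=(n_nodes, n_feats))      # node features
W = rng.normal(size=(n_feats, n_classes))    # frozen "trained" weights

def gcn_predict(A, X, W):
    """One round of mean-neighbor aggregation followed by a linear layer."""
    A_hat = A + np.eye(len(A))               # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat / deg) @ X                    # average each node's neighborhood
    return (H @ W).argmax(axis=1)            # predicted class per node

target = 0
original = gcn_predict(A, X, W)[target]

# Try every single-edge flip and record which ones change the prediction.
successful_flips = []
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        A_mod = A.copy()
        A_mod[i, j] = A_mod[j, i] = 1.0 - A_mod[i, j]   # add or delete the edge
        if gcn_predict(A_mod, X, W)[target] != original:
            successful_flips.append((i, j))

print(f"edge flips that change node {target}'s label:", successful_flips)
```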

Social network data, like the graph of how a single user is connected to a web of Facebook friends, is one example of graph structured data. Another is data on money transactions between individuals — such as records of who has sent money to whom.

Fooling a graph neural network that looks at financial data could result in a fraudulent transaction being labeled as legitimate. “If such models are not robust, if that’s easy to be attacked, that raises more concerns about using these models,” said Hanjun Dai, Ph.D. student at Georgia Tech and lead author on the paper.

The team used the cuDNN software library and ran their experiments on Tesla and GeForce GTX 1080 Ti GPUs. While the paper focuses on investigating the problem of adversarial attacks on graph structured data, the goal is for future research to propose solutions to strengthen graph neural networks so they provide reliable results despite attempted attacks.

Teamwork Makes the Neural Net Work

Driving is a multiplayer activity. Though each driver only has control over a single vehicle, the driver’s actions affect everyone else on the road. The person behind the wheel must also consider the actions of fellow motorists when deciding what to do.

Translating this kind of multilayered understanding into AI is a challenge.

An AI agent takes in information and feedback from its environment to learn and make decisions. But when there are multiple agents operating in the same space, researchers are tasked with teaching each AI to understand how the other agents affect the final outcome.

If an agent can’t reason about the behavior of others, it won’t be able to properly reconcile its observations.

“For instance, it could find itself in exactly the same situation as earlier, take the same action and something different could happen,” said Oxford University doctoral student Tabish Rashid, a co-author on a paper that will be presented at ICML. “That causes conflicting learning to happen. It makes it difficult to learn what to do.”

This problem can be avoided during training, where researchers can allow multiple agents to communicate with one another and know other agents’ actions. But in the real world, one AI agent will not always have communication with or insight into the plans of others — so it must be able to act independently.

The Oxford researchers proposed a novel method that takes advantage of the training setting. Using the strategy game StarCraft II, they trained several agents together in an environment where agents could share information freely. After this centralized training, the agents were tested on how well they could perform independently.
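The centralized-training, decentralized-execution setup can be illustrated with a toy value-decomposition sketch. This is not the Oxford team’s algorithm, just one simple way to realize the idea: two agents are trained on a shared team reward by treating the joint value as the sum of their individual Q-values, then each acts greedily on its own values alone.

```python
# Minimal value-decomposition sketch of centralized training with
# decentralized execution (illustrative, not the paper's method).
# Two agents play a one-step coordination game: the team reward is highest
# only when both pick action 1.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
team_reward = np.array([[0.0, 0.0],
                        [0.0, 1.0]])         # reward[a1][a2]

q = np.zeros((2, n_actions))                 # one row of Q-values per agent
alpha, epsilon = 0.1, 0.2

for step in range(5000):
    # epsilon-greedy exploration on each agent's own Q-values
    actions = [a if rng.random() > epsilon else rng.integers(n_actions)
               for a in q.argmax(axis=1)]
    r = team_reward[actions[0], actions[1]]

    # centralized update: both agents see the shared team reward, with the
    # joint value decomposed as Q_total = Q_1(a1) + Q_2(a2)
    q_total = q[0, actions[0]] + q[1, actions[1]]
    td_error = r - q_total
    q[0, actions[0]] += alpha * td_error
    q[1, actions[1]] += alpha * td_error

# decentralized execution: each agent acts on its own Q-values, no messages
print("agent 0 picks", q[0].argmax(), "| agent 1 picks", q[1].argmax())
```

At test time the agents need no communication channel; the coordination was baked in during the centralized training phase.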

Co-author Mikayel Samvelyan, former master’s student at Oxford, said this approach transfers well beyond the research setting: “You can train agents in a simulator and then use the strategies they learned in the real world.”

The team used an NVIDIA DGX-1 AI supercomputer and several GeForce GTX 1080 Ti GPUs for their work.

Planning the Perfect Path

Watching a cleaning robot wend its way around a swimming pool can be a mildly entertaining pastime on a lazy summer day. But is it taking the most efficient path around the pool to save time and energy?

Neural networks can help robots learn the optimal path around an environment faster and with less input information. A research group at Carnegie Mellon authored a paper outlining a path-finding model that’s simpler to train and more generic than current algorithms.

This makes it easier for developers to take the same base model, quickly apply it to different solutions, and optimize it. Applications for path-finding are diverse, ranging from household robots to factory robots, drones and autonomous vehicles.

The team trained the neural network on 2D and 3D mazes, using the NVIDIA DGX-1 to accelerate the deep learning research. Out in the world, an AI may not always have a map or know the structure of an environment beforehand, so the model was developed to learn just from images of the environment.
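For intuition, the computation such a learned planner approximates resembles classical value iteration over a map. The sketch below is illustrative only (not the CMU model) and uses a made-up maze: it runs value iteration on a small occupancy grid, then reads off an optimal path by greedily climbing the value function.

```python
# Illustrative only: classical value iteration on a small 2D occupancy grid.
# The maze layout and goal position are made up for the example.
import numpy as np

maze = np.array([            # 0 = free cell, 1 = wall
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])
goal = (0, 4)
H, W = maze.shape

# value[cell] converges to -(number of steps needed to reach the goal)
value = np.full((H, W), -np.inf)
value[goal] = 0.0
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for _ in range(H * W):                       # enough sweeps to converge
    for r in range(H):
        for c in range(W):
            if maze[r, c] == 1 or (r, c) == goal:
                continue
            best = -np.inf
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W and maze[nr, nc] == 0:
                    best = max(best, value[nr, nc] - 1.0)   # one step of cost
            value[r, c] = best

# Greedy ascent on the value function recovers an optimal path.
pos, path = (4, 0), [(4, 0)]
while pos != goal:
    neighbors = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves]
    pos = max((p for p in neighbors
               if 0 <= p[0] < H and 0 <= p[1] < W and maze[p] == 0),
              key=lambda p: value[p])
    path.append(pos)
print("optimal path:", path)
```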

“Navigation is one of the core components for pretty much any intelligent system,” said Ruslan Salakhutdinov, computer science professor at Carnegie Mellon. Path planning networks like this one could become a building block that developers plug into larger robotic systems, he said.

Attendees of ICML, which runs July 10-15, can hear about each of these projects at the conference. Come by the NVIDIA booth (B02:12, Hall B) to connect with our AI experts, take a look at the new DGX-2 supercomputer and check out the latest demos.

The post Smorgasbord of AI Research Gets Set Out in Stockholm by NVAIL Partners appeared first on The Official NVIDIA Blog.

A pasture of startups promises to beef up the cattle industry.

These pioneers in what’s become known as the Internet of Cows are using image recognition algorithms to help detect cattle health issues and serve up analytics for improvements to farm management in the multibillion-dollar dairy and beef markets.

Companies dotting the globe, from Israel and Canada to Ireland, the Netherlands, India and the U.S., have been focused on boosting cattle production as AI has come to the fore of agriculture in recent years.

“We have an Internet of Things approach,” said Joy Parr Drach, CEO of Advanced Animal Diagnostics. “Point of care for us is point of cow.”

Morrisville, N.C.-based AAD has a commercial portable dairy testing device on the market and is running trials of a version targeted at the beef cattle industry. AAD uses AI to analyze images of fluorescing white blood cells in a drop of milk or blood.

Using deep learning algorithms, the device determines quantities of each type of white blood cell, translating the results into animal health status. AAD processes animal tests on its servers running NVIDIA GPUs, tapping machine learning to differentiate infection-fighting cells.
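AAD hasn’t published its model, so the following is only a generic sketch of how a convolutional classifier might be used to tally cell types from fluorescence images; the architecture, image size and class list are assumptions made up for illustration.

```python
# Generic sketch of an image classifier that tallies cell types per sample.
# The architecture, 64x64 input size and class names are assumptions.
import torch
import torch.nn as nn

CELL_TYPES = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "other"]

class CellClassifier(nn.Module):
    def __init__(self, n_classes=len(CELL_TYPES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                    # x: (batch, 1, 64, 64) grayscale
        return self.head(self.features(x).flatten(start_dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CellClassifier().to(device)

# One fluorescence image per cell; counting the predicted types across a
# sample is what would then be translated into a health readout.
batch = torch.rand(8, 1, 64, 64, device=device)     # stand-in for real images
counts = torch.bincount(model(batch).argmax(dim=1), minlength=len(CELL_TYPES))
print(dict(zip(CELL_TYPES, counts.tolist())))
```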

AAD’s QScout farm testing device aims to predict animal performance and detect infections in beef cattle before symptoms appear. Its internet-connected portable lab can shuttle results to the cloud and alert farmers to red flags in the health of their cows.

The testing unit can also keep tabs on the health of dairy cows. It can monitor for elevated levels of certain white blood cells, an indicator of mastitis, an inflammation of the udder’s mammary gland that can threaten milk production. Early detection lets farms head off the condition in some cases and avoid the use of antibiotics.

The startup’s technology promotes reduced use of antibiotics in cows while ensuring the health and welfare of those animals that do need treatment, said Parr Drach.

Parr Drach is no stranger to cattle farming. “My weekend job is as a livestock producer. It’s a family business,” she said.

Deep Learning Drives IoC

Like AAD, SomaDetect is trying to improve milk production with the use of AI. The startup, based in Fredericton, New Brunswick, Canada, and Buffalo, New York, uses optical sensors to produce images from light-scatter patterns in milk. SomaDetect applies its deep learning models to analyze the images.

SomaDetect has trained its convolutional neural network on lab data, feeding previously collected cow health records into its model. The database it has developed pairs images captured by its sensors with the corresponding lab results, and new data is continually added to further develop the algorithm.
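As a rough sketch of what pairing sensor images with lab results for supervised training could look like, here is a minimal dataset class; the file layout, column names and measurement targets are assumptions for illustration, not SomaDetect’s actual pipeline.

```python
# Hedged sketch of pairing sensor images with lab measurements for training.
# The directory layout, CSV columns and targets are assumed, not real.
import csv
from pathlib import Path

import torch
from torch.utils.data import Dataset

class MilkScatterDataset(Dataset):
    """Pairs a light-scatter image from the sensor with its lab measurements."""

    def __init__(self, image_dir, labels_csv):
        self.image_dir = Path(image_dir)
        with open(labels_csv, newline="") as f:
            # assumed columns: image_file, fat_pct, protein_pct, somatic_cell_count
            self.rows = list(csv.DictReader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        # images assumed pre-saved as tensors alongside the lab results
        image = torch.load(self.image_dir / row["image_file"])
        target = torch.tensor([float(row["fat_pct"]),
                               float(row["protein_pct"]),
                               float(row["somatic_cell_count"])])
        return image, target
```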

The startup measures fats and proteins, as well as the onset of mastitis (indicated by an elevated white blood cell count), reproductive status and antibiotic residues.

“Deep learning is really what unlocks this,” said Bethany Deshpande, co-founder and CEO at SomaDetect. “We’re using deep learning to provide critical data to the farmer so they can improve and optimize their operation.”

SomaDetect’s technology is in use at Cornell University and the company has 20 farms identified as early adopters for installations at the end of 2018. Customers purchase its sensors and license the use of its algorithm, providing revenue for the startup.

Fitbit for Cows

Amsterdam-based Connecterra is taking a different tack with cows: The startup has a Fitbit-for-cows business. The company’s sensors capture data that can help detect eating disorders, heat stress and fertility issues. The information can be uploaded to the cloud for analysis and predictions of cow behavioral patterns.

Similarly, Israel-based Afimilk offers a smart collar for cow tracking. Its device, called Silent Herdsman, monitors heat and other health signs in cows. Afimilk provides health-tracking software and alerts farmers to potential indicators of problems.

India-based Stellapps, which offers a cloud-based farm and herd management system, uses its tracking devices to monitor cow metrics including fertility and activity levels, promising internet-based health alerts and increased productivity for farms.

The emergence of the Internet of Cows sector has given farmers more options for managing the health of their cattle. For example, most cows are milked twice a day, but if a farmer knows one is developing mastitis, the cow can be milked four times a day to help flush out the infection and avoid antibiotics.

“Deep learning has allowed us to exploit this technology to a level that years ago never would have been possible,” said Deshpande.

The post Herd of AI Startups Milking the Internet of Cows appeared first on The Official NVIDIA Blog.