


For centuries, scientists have assembled and maintained extensive information on plants and stored it in what are known as herbaria — vast numbers of cabinets and drawers — at natural history museums and research institutions across the globe.

They’ve used them to discover and confirm the identity of organisms and catalog their characteristics. Over the past two decades, much of this data has been digitized, and this treasure of text, imagery and samples has become easier to share around the world.

Now, complementary projects at the Smithsonian Institution in the U.S. and the Costa Rica Institute of Technology (ITCR) are tapping the combination of big data analytics, computer vision and GPUs to deepen science’s access to — and understanding of — botanical information.

Their use of GPU-accelerated deep learning promises to hasten the work of researchers, who discover and describe about 2,000 species of plants each year, and need to compare them against the nearly 400,000 known species.

Making Plant Identification Picture Perfect

A team at the ITCR published a paper last year detailing its work on a deep learning algorithm that enables image-based identification of organisms recorded on museum herbaria sheets. This work was conducted jointly with experts from CIRAD and Inria, in France.

A few months later, Smithsonian researchers published a separate paper describing the use of convolutional neural networks to digitize natural history collections, especially herbarium specimens.

Both sets of researchers expect their work to fuel a revolution in the field of biodiversity informatics.

“Instead of having to look at millions of images and search through metadata, we’re approaching a time when we’ll be able to do that through machine learning,” said Eric Schuettpelz, a research botanist at the Smithsonian. “The ability to identify something from an image may, in a matter of years, be a rather trivial endeavor.”

And that, in turn, is good news for efforts to preserve natural habitats.

“Plant species identification is particularly important for biodiversity conservation,” said Jose Mario Carranza-Rojas, a Ph.D. candidate on the ITCR team.

From Ecotourism to Informatics

The associate professor overseeing the Costa Rica research, Erick Mata-Montero, was on the ground floor of biodiversity informatics’ beginnings. After studying at the University of Oregon, Mata-Montero returned to his native country in 1990 to find Costa Rica amidst an ecotourism boom and an associated effort to create and consolidate protected wildlife areas to conserve the nation’s biodiversity.

To aid the effort’s scientific understanding, Mata-Montero joined Costa Rica’s National Biodiversity Institute. By 1995, he was heading up the organization’s biodiversity informatics program, which quickly became a pioneer in the field.

Mata-Montero’s work feeds directly into his research with Carranza-Rojas, whose master’s thesis focused on algorithmic approaches to improving the identification of plants based on characteristics of their leaves, such as contours, veins and texture. During a four-month internship at CIRAD in France last year, Carranza-Rojas discovered the work of Pl@ntNet, a consortium that has created a mobile app for image-based plant recognition, and the two groups collaborated on the recently published paper.

Keeping the Foot on the Accelerator

For the lab work supporting the plant-identification research, the Costa Rican team trained a convolutional neural network on about 260,000 images using two NVIDIA GeForce GPUs, the Caffe deep learning framework and cuDNN.
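The post doesn’t share the team’s actual training configuration, but a Caffe run of this kind is typically driven by a solver definition along the following lines. All values and file names here are illustrative, not the team’s settings; the `solver_mode: GPU` line is what routes training onto the CUDA/cuDNN backend:

```
net: "plant_cnn_train_val.prototxt"   # network definition (illustrative name)
base_lr: 0.01                         # starting learning rate
lr_policy: "step"                     # drop the rate by gamma every stepsize iters
stepsize: 20000
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
max_iter: 100000
snapshot: 10000                       # checkpoint interval
snapshot_prefix: "snapshots/plant_cnn"
solver_mode: GPU                      # use the GPU (cuDNN-accelerated) backend
```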

“Without this technology, it would’ve been impossible to run the network with such a big dataset,” said Carranza-Rojas. “On common CPUs, it would take forever to train and our experiments would have never finished.”

Since publishing their paper, the team has continued with new experiments focused on identifying plants from images taken in the wild. It has upgraded to NVIDIA Tesla GPUs for this work, which have delivered a 25x performance gain over the GeForce GTX 1070 GPU it tested earlier this year, and has adopted the Theano computation library for Python.

“We can test many ideas in a fraction of the time of previous experiments, which means we can do more science,” said Carranza-Rojas.

Significantly, the team’s approach hasn’t relied on domain-specific knowledge. As a result, Carranza-Rojas expects to be able to apply the work to identification of a variety of organisms such as insects, birds and fish.

On the plant front, while the work has focused on identification of species, the team would like to move to the genus and family level. It’s currently too computationally demanding to deal with all plant species because of the sheer numbers involved. But they hope to take a top-down approach to gathering knowledge at these higher taxonomic levels.
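The post doesn’t detail how the top-down approach would work, but the core idea can be sketched in a few lines: classify at the family level first, then restrict the genus search to the predicted family. The taxonomy and scores below are invented for illustration; a real system would use CNN outputs rather than this toy scoring.

```python
# Hypothetical sketch of top-down taxonomic classification: pick the best
# family first, then only consider genera belonging to that family.
# Taxonomy and scores are invented for illustration.

TAXONOMY = {
    "Fabaceae": ["Acacia", "Trifolium"],
    "Rosaceae": ["Rosa", "Prunus"],
}

def predict_top_down(family_scores, genus_scores):
    """Pick the best family, then the best genus restricted to that family."""
    family = max(family_scores, key=family_scores.get)
    candidates = TAXONOMY[family]
    genus = max(candidates, key=lambda g: genus_scores.get(g, 0.0))
    return family, genus

family_scores = {"Fabaceae": 0.7, "Rosaceae": 0.3}
genus_scores = {"Acacia": 0.4, "Trifolium": 0.5, "Rosa": 0.9}

# Rosa scores highest overall, but it lies outside the predicted family,
# so the search is restricted to Fabaceae's genera.
print(predict_top_down(family_scores, genus_scores))  # → ('Fabaceae', 'Trifolium')
```

The payoff of this structure is computational: each classifier only ever discriminates among the children of one node, rather than among all 400,000 species at once.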

Tackling Mercury Staining

At the Smithsonian, Schuettpelz said his team became aware of the Costa Rican effort while working on their own project. Although the two teams didn’t collaborate, he believes the studies in combination may have a bigger impact.

“Coming at a problem from a couple different angles is ultimately a good thing,” he said.

The Smithsonian team has focused on identifying mercury staining, the result of early botanists treating specimens with the toxic substance to protect them from insects. A goal of their research was to determine where mercury staining is prevalent in their collection.

“We can scan a million images and easily see where the plants treated with mercury are,” said Schuettpelz. Those samples with mercury staining can be isolated in special folders.

The Smithsonian team started by building a training set of images of stained and unstained specimens. They evaluated about 1,000 neural networks and found one that could identify stained specimens with 90 percent accuracy.
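The selection step described above, training many candidate networks and keeping the one that scores best on held-out data, reduces to a simple loop. In this toy sketch the “networks” are stand-in (name, accuracy) pairs; in the Smithsonian’s setup each candidate would be a trained CNN evaluated on a validation split of stained and unstained images.

```python
# Toy sketch of model selection: among many candidates, keep the one with
# the highest held-out accuracy. Names and scores are simulated stand-ins.
import random

random.seed(0)  # reproducible stand-in scores
candidates = [("net_%03d" % i, random.uniform(0.5, 0.9)) for i in range(1000)]

best_name, best_acc = max(candidates, key=lambda c: c[1])
print(best_name, round(best_acc, 3))
```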

A Step Further

Emboldened by their success, the team decided to see how their network would do at distinguishing between plants that look similar to a trained eye. They built another dataset with 10,000 images of two hard-to-distinguish plant families, and achieved 96 percent accuracy in distinguishing between them.

Like their peers in Costa Rica, the Smithsonian team credits GPUs with making their research possible. Rebecca Dikow, a research data scientist at the Smithsonian, said that training their network — which ran on Wolfram Mathematica with CUDA and cuDNN integrated into the mix — would have taken hundreds of times longer on a CPU than it did with the two NVIDIA Tesla GPU accelerators in the Smithsonian computing cluster.

“A lot of this work involves iterating over lots of different parameters, tweaking things and then running them through another network,” said Dikow in describing the computing demands.

Similar to the ITCR’s work with Pl@ntNet, the Smithsonian team is pursuing a collaboration with a larger-scale effort — in this case with iDigBio, a National Science Foundation-funded digital repository for biological data. Dikow suggested that such joint efforts will bring out the best in deep learning projects.

“Everyone who’s undertaking these lines of research has the same feeling,” said Dikow. “We really want to make our networks as robust as possible, and so collaboration is definitely the way to go.”

The post AI Offering Fertile Ground for Biodiversity Informatics appeared first on The Official NVIDIA Blog.

When it comes to advancing science, Marianne Sinka has some skin in the game. Some itchy skin.

The Oxford University entomologist has regularly sacrificed her flesh (and blood) as mosquito bait to further her research. Now she’s using AI to track the irksome insects and battle the deadly diseases they carry.

“Today, the best way to detect what species are in a place is to sit down, roll up your trousers, and see what mosquitoes bite you,” Sinka said. “There are obviously some issues with that.”

Instead, Sinka and a group of other Oxford researchers are using cheap mobile phones and GPU-accelerated deep learning to detect mosquitoes. They also want to determine whether the bugs belong to a species that transmits malaria or other life-threatening illnesses.

The goal is to help cash-strapped governments in the regions where malaria is rampant know where and when to deploy insecticides, vaccinations and other actions to prevent disease.

Killer Bugs

Few creatures are as hated as mosquitoes, and with good reason: They’re the world’s deadliest animals, killing more people than tigers, snakes, sharks and wolves put together. The blood-sucking insects carry many life-threatening illnesses, including malaria, the Zika virus, dengue and yellow fever.

A female (top of picture) and male (bottom of picture) Anopheles gambiae mosquito, the principal carrier of malaria in Africa. Image courtesy of the Centers for Disease Control.

In 2016, malaria alone infected more than 200 million people — 90 percent of them in Africa — and killed some 445,000, according to the World Health Organization. UNICEF reports that most of these deaths occurred in children under five years old.

Among some 3,500 species of mosquitoes, only 75 can infect people with malaria, and of these, about 40 are considered the primary carriers of the parasite that causes the disease. To identify mosquito species today, researchers capture the insects (either with human lures or costly light traps) and examine them under the microscope.

For some important species, they must then use molecular methods, such as examining the mosquito’s DNA to ensure an accurate identification. These methods can be costly and time-consuming, Sinka said.

Catching a Buzz

Instead of getting up close with the vexatious vermin, the researchers put a smartphone with a sound-sensing app within biting range. Like people, animals and machines, the bugs have a unique sound signature.

“It’s those distinctive buzzing tones we all hate from mosquitoes,” said Ivan Kiskin, an Oxford doctoral student with expertise in signal processing who is working on the mosquito project. The project, dubbed Humbug, is a partnership between Oxford University and London’s Kew Gardens.

Researchers are using recordings of captured mosquitoes and NVIDIA GPUs to train a neural network to recognize wing noise. So far, the deep learning-based software reports the likelihood that the buzzing comes from furiously flapping mosquito wings, which beat up to 1,000 times a second. In numerous tests, the algorithms have outperformed human experts.
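The post doesn’t describe Humbug’s signal processing, but any wingbeat detector starts from a frequency estimate of the recorded audio. A minimal, self-contained sketch — a naive DFT applied to a synthetic 600 Hz “buzz,” well within the roughly 1,000-beats-per-second range of mosquito wings — not the project’s actual pipeline:

```python
# Illustrative only: find the dominant tone in a synthetic buzz with a
# naive O(n^2) DFT. A real pipeline would use an FFT on phone audio.
import math

SAMPLE_RATE = 8000   # Hz, plausible for low-cost phone audio
DURATION = 0.1       # seconds
WINGBEAT = 600.0     # Hz, synthetic test tone

n = int(SAMPLE_RATE * DURATION)
signal = [math.sin(2 * math.pi * WINGBEAT * t / SAMPLE_RATE) for t in range(n)]

def dominant_frequency(x, rate):
    """Return the frequency of the DFT bin with the largest magnitude."""
    best_k, best_mag = 0, 0.0
    for k in range(1, len(x) // 2):  # skip DC; ignore mirrored half
        re = sum(v * math.cos(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * rate / len(x)

print(dominant_frequency(signal, SAMPLE_RATE))  # → 600.0
```

With 800 samples at 8 kHz the frequency resolution is 10 Hz, so the 600 Hz tone lands exactly on a bin; real recordings would need windowing and noise handling on top of this.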

Humbug researchers are beginning to distinguish species as well, Kiskin said. But further progress is stymied by the need for additional training data, he added.

Beating Malaria

To collect more sound recordings, the team is distributing mobile phones to research groups around the world. In addition, researchers developed an Android app called MozzWear to enlist help from ordinary people. MozzWear will record mosquito buzzing, along with the time and location — data that users can send to the citizen science web portal, Zooniverse.
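The essence of such a submission is a recording tagged with when and where it was captured. A sketch of the kind of record an app like this might bundle up — all field names and coordinates here are invented, as the actual MozzWear/Zooniverse schema isn’t described in the post:

```python
# Hypothetical submission record: an audio clip plus the time and place it
# was captured. Field names are invented for illustration.
import json
import time

def make_submission(audio_path, lat, lon, recorded_at=None):
    """Bundle a recording with the time and location it was captured."""
    return {
        "audio": audio_path,
        "latitude": lat,
        "longitude": lon,
        "recorded_at": recorded_at
        or time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

record = make_submission("buzz_0001.wav", lat=0.3476, lon=32.5825,
                         recorded_at="2018-03-01T21:15:00Z")
print(json.dumps(record, indent=2))
```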

“Malaria is a disease of the poor,” said Sinka, the bug expert. Although the disease is present in developed countries, it’s more common in regions where people live near their livestock and are often too poor to afford air conditioning, window screens or even protective netting to drape over beds.

“Ultimately, we could use our best algorithm and the phones to map malaria prevalence over a region or country,” Kiskin said. “Then we could tackle malaria by targeting aid to places in need.”

The post Beating the Bloodsuckers: AI Takes a Swat at Mosquitoes and Malaria appeared first on The Official NVIDIA Blog.