
3D News

We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with Edoardo Ballanti, one of the video’s co-directors, offering insights into how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...
3DVisionLive’s first-ever short-form 3D video contest received 14 entries that showed a great deal of diversity, ranging from video game captures to commercial-style clips to raw captures of pets or people doing cool things (such as bashing each other with swords). During judging we laughed, we cried (okay, maybe not), and we simply scratched our heads…. But seriously: thank you to all who participated, and we hope to see more of your content uploaded to the site for all to...
The submission period for the Fall Photo Contest is now closed, and we are happy to report we’ve received nearly 100 images from our members for consideration. And, once again, we’re opening the judging process to our community as well to help us determine the winners. The full gallery of images may be seen by clicking the link above. Between now and February 10th (11:59 PST), please view all of the images in the gallery and place your votes for the ones you’d like to win by...

Recent Blog Entries

For centuries, scientists have assembled and maintained extensive information on plants and stored it in what are known as herbaria (vast numbers of cabinets and drawers) at natural history museums and research institutions across the globe.

They’ve used them to discover and confirm the identity of organisms and catalog their characteristics. Over the past two decades, much of this data has been digitized, and this treasure of text, imagery and samples has become easier to share around the world.

Now, complementary projects at the Smithsonian Institution in the U.S. and the Costa Rica Institute of Technology (ITCR) are tapping the combination of big data analytics, computer vision and GPUs to deepen science’s access to, and understanding of, botanical information.

Their use of GPU-accelerated deep learning promises to hasten the work of researchers, who discover and describe about 2,000 species of plants each year, and need to compare them against the nearly 400,000 known species.

Making Plant Identification Picture Perfect

A team at the ITCR published a paper last year detailing its work on a deep learning algorithm that enables image-based identification of organisms recorded on museum herbaria sheets. This work was conducted jointly with experts from CIRAD and Inria, in France.

A few months later, Smithsonian researchers published a separate paper describing the use of convolutional neural networks to digitize natural history collections, especially herbarium specimens.

Both sets of researchers expect their work to fuel a revolution in the field of biodiversity informatics.

“Instead of having to look at millions of images and search through metadata, we’re approaching a time when we’ll be able to do that through machine learning,” said Eric Schuettpelz, a research botanist at the Smithsonian. “The ability to identify something from an image may, in a matter of years, be a rather trivial endeavor.”

And that, in turn, is good news for efforts to preserve natural habitats.

“Plant species identification is particularly important for biodiversity conservation,” said Jose Mario Carranza-Rojas, a Ph.D. candidate on the ITCR team.

From Ecotourism to Informatics

The associate professor overseeing the Costa Rica research, Erick Mata-Montero, was present at the very beginnings of biodiversity informatics. After studying at the University of Oregon, Mata-Montero returned to his native country in 1990 to find Costa Rica amidst an ecotourism boom and an associated effort to create and consolidate protected wildlife areas to conserve the nation’s biodiversity.

To aid the effort’s scientific understanding, Mata-Montero joined Costa Rica’s National Biodiversity Institute. By 1995, he was heading up the organization’s biodiversity informatics program, which quickly became a pioneer in the field.

Mata-Montero’s work feeds directly into his research with Carranza-Rojas, whose master’s thesis focused on algorithmic approaches to improving the identification of plants based on characteristics of their leaves, such as contours, veins and texture. During a four-month internship at CIRAD in France last year, Carranza-Rojas discovered the work of Pl@ntNet, a consortium that has created a mobile app for image-based plant recognition, and the two groups collaborated on the recently published paper.

Keeping the Foot on the Accelerator

For the lab work supporting the plant-identification research, the Costa Rican team trained a convolutional neural network on about 260,000 images using two NVIDIA GeForce GPUs, the Caffe deep learning framework and cuDNN.
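The post doesn’t include the team’s training code, but the pieces it names (Caffe, cuDNN, GPU-mode training) suggest a workflow along these lines. A minimal sketch using Caffe’s Python interface, assuming a hypothetical `solver.prototxt` that defines the network and points at the herbarium image database:

```python
# Minimal pycaffe training sketch (illustrative; file names are placeholders,
# not the ITCR team's actual configuration).
import caffe

caffe.set_mode_gpu()
caffe.set_device(0)  # one of the two GeForce GPUs

# solver.prototxt references a net definition whose data layer reads the
# ~260,000 herbarium images; cuDNN accelerates the convolution layers.
solver = caffe.SGDSolver('solver.prototxt')
solver.solve()  # iterate until the solver's max_iter is reached
```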

“Without this technology, it would’ve been impossible to run the network with such a big dataset,” said Carranza-Rojas. “On common CPUs, it would take forever to train and our experiments would have never finished.”

Since publishing their paper, the team has continued with new experiments focused on identifying plants from images taken in the wild. For this work, they’ve upgraded to NVIDIA Tesla GPUs, which delivered a 25x performance gain over the GeForce GTX 1070 GPU they tested earlier this year, and they’ve accelerated their work with the Theano computation library for Python.
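Theano works by compiling symbolic expression graphs into fast native code, generating CUDA kernels when a GPU device is configured. A tiny illustrative example (not the team’s code) of defining and compiling such a graph:

```python
# Minimal Theano sketch: compile a softmax classifier's prediction function.
# Illustrative only; dimensions and weights are arbitrary placeholders.
import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')  # a batch of feature vectors
W = theano.shared(np.zeros((4096, 10), dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(10, dtype=theano.config.floatX), name='b')

probs = T.nnet.softmax(T.dot(x, W) + b)          # class probabilities
predict = theano.function([x], T.argmax(probs, axis=1))

# Running with THEANO_FLAGS=device=cuda moves the compiled graph to the GPU.
```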

“We can test many ideas in a fraction of the time of previous experiments, which means we can do more science,” said Carranza-Rojas.

Significantly, the team’s approach hasn’t relied on domain-specific knowledge. As a result, Carranza-Rojas expects to be able to apply the work to identification of a variety of organisms such as insects, birds and fish.

On the plant front, while the work has focused on identification of species, the team would like to move to the genus and family level. It’s currently too computationally demanding to deal with all plant species because of the sheer numbers involved. But they hope to take a top-down approach to gathering knowledge at these higher taxonomic levels.
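The post doesn’t say how that top-down approach would be structured; one plausible reading is a staged classifier that predicts the family first and then restricts the species search to members of that family. A hypothetical sketch, where `family_net`, `species_net` and `species_in_family` are all placeholders:

```python
# Hypothetical top-down taxonomic identification: classify family first,
# then consider only species belonging to the predicted family.
import numpy as np

def identify(image, family_net, species_net, species_in_family):
    family_probs = family_net.predict(image)    # scores over plant families
    family = int(np.argmax(family_probs))

    species_probs = species_net.predict(image)  # scores over all known species
    candidates = species_in_family[family]      # species indices in that family
    best_species = max(candidates, key=lambda s: species_probs[s])
    return family, best_species
```

Restricting the final decision to one family would keep the fine-grained comparison tractable even as the number of known species approaches 400,000.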

Tackling Mercury Staining

At the Smithsonian, Schuettpelz said his team became aware of the Costa Rican effort while working on their own project. Although the two teams didn’t collaborate, he believes the studies in combination may have a bigger impact.

“Coming at a problem from a couple different angles is ultimately a good thing,” he said.

The Smithsonian team has focused on identifying mercury staining, the result of early botanists treating specimens with the toxic substance to protect them from insects. A goal of their research was to determine where mercury staining was prevalent in their collection.

“We can scan a million images and easily see where the plants treated with mercury are,” said Schuettpelz. Those samples with mercury staining can be isolated in special folders.

The Smithsonian team started by building a training set of images of stained and unstained specimens. They evaluated about 1,000 neural networks and found one that could identify stained specimens with 90 percent accuracy.
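The post doesn’t describe how those 1,000 candidates were generated or trained (the team worked in Wolfram Mathematica), but the evaluate-many-networks pattern itself is straightforward. A minimal sketch in Python with Keras, using random dummy data in place of the stained/unstained specimen images:

```python
# Sketch of evaluating many candidate networks and keeping the one with the
# best validation accuracy. Hypothetical Keras stand-in for the team's
# Mathematica workflow; dummy random data replaces the real specimen images.
import itertools
import numpy as np
from tensorflow import keras

x_train = np.random.rand(32, 64, 64, 3).astype('float32')
y_train = np.random.randint(0, 2, 32)              # 1 = mercury-stained
x_val = np.random.rand(8, 64, 64, 3).astype('float32')
y_val = np.random.randint(0, 2, 8)

def build(filters, dense_units, lr):
    model = keras.Sequential([
        keras.layers.Conv2D(filters, 3, activation='relu',
                            input_shape=(64, 64, 3)),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(dense_units, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),  # stained vs. unstained
    ])
    model.compile(optimizer=keras.optimizers.Adam(lr),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

best_acc, best_model = 0.0, None
for filters, units, lr in itertools.product([16, 32], [64, 128], [1e-3, 1e-4]):
    model = build(filters, units, lr)
    model.fit(x_train, y_train, epochs=3, verbose=0)
    _, acc = model.evaluate(x_val, y_val, verbose=0)
    if acc > best_acc:
        best_acc, best_model = acc, model
```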

A Step Further

Emboldened by their success, the team decided to see how their network would do at distinguishing between plants that look similar to a trained eye. They built another dataset with 10,000 images of two hard-to-distinguish plant families, and achieved 96 percent accuracy in distinguishing between them.

Like their peers in Costa Rica, the Smithsonian team credits GPUs with making their research possible. Rebecca Dikow, a research data scientist at the Smithsonian, said that training their network — which ran on Wolfram Mathematica with CUDA and cuDNN integrated into the mix — would’ve taken hundreds of times longer on a CPU than it did with the two NVIDIA Tesla GPU accelerators in the Smithsonian computing cluster.

“A lot of this work involves iterating over lots of different parameters, tweaking things and then running them through another network,” said Dikow in describing the computing demands.

Similar to the ITCR’s work with Pl@ntNet, the Smithsonian team is pursuing a collaboration with a larger-scale effort — in this case with iDigBio, a National Science Foundation-funded digital repository for biological data. Dikow suggested that such joint efforts will bring out the best in deep learning projects.

“Everyone who’s undertaking these lines of research has the same feeling,” said Dikow. “We really want to make our networks as robust as possible, and so collaboration is definitely the way to go.”

The post AI Offering Fertile Ground for Biodiversity Informatics appeared first on The Official NVIDIA Blog.

When it comes to advancing science, Marianne Sinka has some skin in the game. Some itchy skin.

The Oxford University entomologist has regularly sacrificed her flesh (and blood) as mosquito bait to further her research. Now she’s using AI to track the irksome insects and battle the deadly diseases they carry.

“Today, the best way to detect what species are in a place is to sit down, roll up your trousers, and see what mosquitoes bite you,” Sinka said. “There are obviously some issues with that.”

Instead, Sinka and a group of other Oxford researchers are using cheap mobile phones and GPU-accelerated deep learning to detect mosquitoes. They also want to determine whether the bugs belong to a species that transmits malaria or other life-threatening illnesses.

The goal is to help cash-strapped governments in the regions where malaria is rampant know where and when to deploy insecticides, vaccinations and other preventive measures.

Killer Bugs

Few creatures are as hated as mosquitoes, and with good reason: They’re the world’s deadliest animals, killing more people than tigers, snakes, sharks and wolves put together. The blood-sucking insects carry many life-threatening illnesses, including malaria, the Zika virus, dengue and yellow fever.

[Image: Female (top) and male (bottom) Anopheles gambiae mosquitoes, the principal carrier of malaria in Africa. Courtesy of the Centers for Disease Control.]

In 2016, malaria alone infected more than 200 million people — 90 percent of them in Africa — and killed some 445,000, according to the World Health Organization. UNICEF reports that most of these deaths occurred in children less than five years old.

Among some 3,500 species of mosquitoes, only 75 can infect people with malaria, and of these, about 40 are considered the primary carriers of the parasite that causes the disease. To identify mosquito species today, researchers capture the insects (either with human lures or costly light traps) and examine them under the microscope.

For some important species, they must then use molecular methods, such as examining the mosquito’s DNA to ensure an accurate identification. These methods can be costly and time-consuming, Sinka said.

Catching a Buzz

Instead of getting up close with the vexatious vermin, the researchers put a smartphone with a sound-sensing app within biting range. Like people, animals and machines, the bugs have a unique sound signature.

“It’s those distinctive buzzing tones we all hate from mosquitoes,” said Ivan Kiskin, an Oxford doctoral student with expertise in signal processing who is working on the mosquito project. The project, dubbed Humbug, is a partnership between Oxford University and London’s Kew Gardens.

Researchers are using recordings of captured mosquitoes and NVIDIA GPUs to train a neural network to recognize wing noise. So far, the deep learning-based software reports the likelihood that the buzzing comes from furiously flapping mosquito wings, which beat up to 1,000 times a second. In numerous tests, the algorithms have outperformed human experts.
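The Humbug model itself isn’t published in the post; a common approach to this kind of task is to convert each clip to a log-mel spectrogram and score it with a trained classifier. A minimal sketch, where `mosquito_net` stands in for a hypothetical trained network:

```python
# Sketch: estimate the probability that a short recording contains mosquito
# wingbeat sound. Wingbeats run up to roughly 1,000 Hz, so a low-frequency
# mel spectrogram is a natural input representation.
# `mosquito_net` is a hypothetical trained classifier, not Humbug's model.
import librosa
import numpy as np

def buzz_probability(wav_path, mosquito_net):
    audio, sr = librosa.load(wav_path, sr=8000)   # 8 kHz suffices for wingbeats
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64, fmax=2000)
    log_mel = librosa.power_to_db(mel)            # compress the dynamic range
    batch = log_mel[np.newaxis, ..., np.newaxis]  # add batch and channel axes
    return float(mosquito_net.predict(batch)[0])  # P(mosquito wingbeat)
```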

Humbug researchers are beginning to distinguish species as well, Kiskin said. But further progress is stymied by the need for additional training data, he added.

Beating Malaria

To collect more sound data, the team is deploying mobile phones to research groups around the world. In addition, researchers developed an Android app called MozzWear to enlist help from ordinary people. MozzWear records mosquito buzzing, along with the time and location — data that users can send to the citizen science web portal Zooniverse.
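The post doesn’t specify MozzWear’s upload format, but each submission presumably bundles the recording with its capture time and location. A hypothetical sketch of such a record (all field names invented, not taken from the actual app or the Zooniverse API):

```python
# Hypothetical MozzWear-style submission record; field names are invented
# for illustration only.
import base64
import json
from datetime import datetime, timezone

def make_submission(wav_bytes, latitude, longitude):
    return json.dumps({
        "audio_wav_b64": base64.b64encode(wav_bytes).decode('ascii'),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": latitude, "lon": longitude},
    })
```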

“Malaria is a disease of the poor,” said Sinka, the bug expert. Although the disease is present in developed countries, it’s more common in regions where people live near their livestock and are often too poor to afford air conditioning, window screens or even protective netting to drape over beds.

“Ultimately, we could use our best algorithm and the phones to map malaria prevalence over a region or country,” Kiskin said. “Then we could tackle malaria by targeting aid to places in need.”

The post Beating the Bloodsuckers: How AI Takes a Swat at Mosquitoes and Malaria appeared first on The Official NVIDIA Blog.