
3D News

With the launch of our 3D Vision® 2 glasses, it was easy to overlook advancements being made on 3D Vision-Ready monitors. Product Manager Michael McSorley walks you through the latest and greatest developments in the display category. Other than the obvious advantage of an increase in the maximum size for 3D Vision panels—from 24 inches to 27 inches—there are really two primary game-changing features that you should be aware of: 3D LightBoost and...
Just because you don’t have a 3D camera does not mean you can’t participate in 3D photography—thanks to companies such as 3Defy, which make software that you can use to transform your 2D images into stereoscopic 3D shots you can view with your 3D Vision™ hardware. The 3Defy name may already be a bit familiar to you if you frequent the site—we’ve been featuring a lot of their content on 3DVisionLive to give you a taste of what...
For the second year, NVIDIA is teaming up with Premiers Plans on a 3D short film competition as part of Premiers’ 24th Angers Festival, taking place now in France. We have created a channel on 3D Vision Live where you can view two of the six films selected for the official competition in 3D right now. "Miss Daisy Cutter" is an animated short film by Laen Sanches featuring "Nux Vomica" by The Veils. If Walt Disney took some...
The NVIDIA booth at CES last week in Las Vegas was overflowing with visitors for most of the show. People could get into the Tesla Model S for the first time to play with the incredible 17-inch display in the center of the dash (powered by NVIDIA). And we had two triple-screen 3D racing simulators from VRX running iRacing that drew big crowds. We took along a FujiFilm W3 and snapped some 3D candids throughout the week from the booth, and a few from around the...
It’s been a thrilling first year, and we’d like to thank all of our users for helping to make this site one of the most compelling places to view 3D content. During the past year more than 20,000 3D images have been uploaded by 3D enthusiasts for all of us to view, share, and enjoy! As you can imagine, then, selecting a gallery of just 12 images from all of this material for our Best 3D Photos of 2011 gallery was no small feat....

Recent Blog Entries

For centuries, scientists have assembled and maintained extensive information on plants and stored it in what are known as herbaria — vast numbers of cabinets and drawers — at natural history museums and research institutions across the globe.

They’ve used them to discover and confirm the identity of organisms and catalog their characteristics. Over the past two decades, much of this data has been digitized, and this treasure of text, imagery and samples has become easier to share around the world.

Now, complementary projects at the Smithsonian Institution in the U.S. and the Costa Rica Institute of Technology (ITCR) are tapping the combination of big data analytics, computer vision and GPUs to deepen science’s access — and understanding — of botanical information.

Their use of GPU-accelerated deep learning promises to hasten the work of researchers, who discover and describe about 2,000 species of plants each year, and need to compare them against the nearly 400,000 known species.

Making Plant Identification Picture Perfect

A team at the ITCR published a paper last year detailing its work on a deep learning algorithm that enables image-based identification of organisms recorded on museum herbaria sheets. This work was conducted jointly with experts from CIRAD and Inria, in France.

A few months later, Smithsonian researchers published a separate paper describing the use of convolutional neural networks to digitize natural history collections, especially herbarium specimens.

Both sets of researchers expect their work to fuel a revolution in the field of biodiversity informatics.

“Instead of having to look at millions of images and search through metadata, we’re approaching a time when we’ll be able to do that through machine learning,” said Eric Schuettpelz, a research botanist at the Smithsonian. “The ability to identify something from an image may, in a matter of years, be a rather trivial endeavor.”

And that, in turn, is good news for efforts to preserve natural habitats.

“Plant species identification is particularly important for biodiversity conservation,” said Jose Mario Carranza-Rojas, a Ph.D. candidate on the ITCR team.

From Ecotourism to Informatics

The associate professor overseeing the Costa Rica research, Erick Mata-Montero, was on the ground floor of biodiversity informatics’ beginnings. After studying at the University of Oregon, Mata-Montero returned to his native country in 1990 to find Costa Rica amidst an ecotourism boom and an associated effort to create and consolidate protected wildlife areas to conserve the nation’s biodiversity.

To aid the effort’s scientific understanding, Mata-Montero joined Costa Rica’s National Biodiversity Institute. By 1995, he was heading up the organization’s biodiversity informatics program, which quickly became a pioneer in the field.

Mata-Montero’s work feeds directly into his research with Carranza-Rojas, whose master’s thesis focused on algorithmic approaches to improving the identification of plants based on characteristics of their leaves, such as contours, veins and texture. During a four-month internship at CIRAD in France last year, Carranza-Rojas discovered work by Pl@ntNet, a consortium that’s created a mobile app for enabling image-based plant recognition, and the two groups collaborated on the recently published paper.
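The leaf characteristics Carranza-Rojas studied can be reduced to simple numeric descriptors. As a rough illustration only (not the thesis's actual method), here is a Python sketch that computes perimeter, area and circularity for a leaf outline; the sample square contour is made up for testing:

```python
import numpy as np

def contour_features(points):
    """Simple shape descriptors for a closed contour.

    points: (N, 2) array of (x, y) vertices in order around the outline.
    Returns perimeter, area (shoelace formula), and circularity
    (1.0 for a circle, smaller for elongated or jagged leaves).
    """
    pts = np.asarray(points, dtype=float)
    nxt = np.roll(pts, -1, axis=0)  # each vertex's successor on the outline
    perimeter = np.sum(np.linalg.norm(nxt - pts, axis=1))
    area = 0.5 * abs(np.sum(pts[:, 0] * nxt[:, 1] - nxt[:, 0] * pts[:, 1]))
    circularity = 4 * np.pi * area / perimeter ** 2
    return perimeter, area, circularity

# A unit square: perimeter 4, area 1, circularity pi/4 (about 0.785)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(contour_features(square))
```

Descriptors like these were the hand-crafted approach that deep learning has largely replaced, which is part of the story of the work that followed.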

Keeping the Foot on the Accelerator

For the lab work supporting the plant-identification research, the Costa Rican team trained a convolutional neural network on about 260,000 images using two NVIDIA GeForce GPUs, the Caffe deep learning framework and cuDNN.

“Without this technology, it would’ve been impossible to run the network with such a big dataset,” said Carranza-Rojas. “On common CPUs, it would take forever to train and our experiments would have never finished.”
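To see why GPUs matter here, consider the core operation of a convolutional network. The sketch below is a naive NumPy version of a single "valid" convolution pass, not the team's Caffe code; every output pixel is computed independently, which is exactly the parallelism GPUs exploit across hundreds of thousands of herbarium images:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D cross-correlation, the core operation of a
    convolutional layer. Each output pixel depends only on one small
    image patch, so all of them can be computed in parallel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter responding to the boundary in a toy "image"
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(image, edge))  # peaks where the 0->1 edge sits
```

A trained network stacks many such filters, learning kernels that respond to leaf edges, vein patterns and textures rather than hand-coding them.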

Since publishing their paper, the team has continued with new experiments focused on identifying plant images taken in the wild. For this work it has upgraded to NVIDIA Tesla GPUs, which have delivered a 25x performance gain over the GeForce GTX 1070 GPU it tested earlier this year, and it has accelerated its work with the Theano computation library for Python.

“We can test many ideas in a fraction of the time of previous experiments, which means we can do more science,” said Carranza-Rojas.

Significantly, the team’s approach hasn’t relied on domain-specific knowledge. As a result, Carranza-Rojas expects to be able to apply the work to identification of a variety of organisms such as insects, birds and fish.

On the plant front, while the work has focused on identification of species, the team would like to move to the genus and family level. It’s currently too computationally demanding to deal with all plant species because of the sheer numbers involved. But they hope to take a top-down approach to gathering knowledge at these higher taxonomic levels.
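One way to picture that top-down approach: species-level predictions can be rolled up to the genus level by summing probabilities across each genus's species. The mapping and numbers below are invented for illustration:

```python
# Hypothetical sketch: rolling species-level classifier scores up to the
# genus level by summing probabilities over each genus's species.
species_to_genus = {            # made-up mapping for illustration
    "Quercus alba": "Quercus",
    "Quercus rubra": "Quercus",
    "Acer rubrum": "Acer",
}

def genus_scores(species_probs):
    scores = {}
    for species, p in species_probs.items():
        genus = species_to_genus[species]
        scores[genus] = scores.get(genus, 0.0) + p
    return scores

probs = {"Quercus alba": 0.35, "Quercus rubra": 0.40, "Acer rubrum": 0.25}
print(genus_scores(probs))  # Quercus dominates even though no single species does
```

Working at the genus or family level this way keeps the label space tractable even when the full species list is too large to handle at once.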

Tackling Mercury Staining

At the Smithsonian, Schuettpelz said his team became aware of the Costa Rican effort while working on their own project. Although the two teams didn’t collaborate, he believes the studies in combination may have a bigger impact.

“Coming at a problem from a couple different angles is ultimately a good thing,” he said.

The Smithsonian team has focused on identifying mercury staining, the result of early botanists treating specimens with the toxic substance to protect them from insects. A goal of their research was to know where mercury staining was prevalent in their collection.

“We can scan a million images and easily see where the plants treated with mercury are,” said Schuettpelz. Those samples with mercury staining can be isolated in special folders.

The Smithsonian team started by building a training set of images of stained and unstained specimens. They evaluated about 1,000 neural networks and found one that could identify stained specimens with 90 percent accuracy.
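The selection procedure isn't spelled out here, but the idea of picking the best of many candidate networks can be sketched with a held-out validation set. The toy "models" below are simple thresholds on a made-up stain-intensity score, standing in for real trained networks:

```python
# Sketch (not the Smithsonian's actual code): choosing the best of many
# candidate classifiers by accuracy on a held-out validation set.
def accuracy(model, samples):
    correct = sum(1 for x, label in samples if model(x) == label)
    return correct / len(samples)

# Toy validation set: (stain-intensity score, true label) pairs
validation = [(0.9, "stained"), (0.8, "stained"), (0.2, "clean"),
              (0.4, "clean"), (0.7, "stained"), (0.1, "clean")]

# Candidate "models": threshold classifiers at different cutoffs
candidates = [
    (t, lambda x, t=t: "stained" if x >= t else "clean")
    for t in (0.3, 0.5, 0.6, 0.85)
]

best_t, best_model = max(candidates, key=lambda c: accuracy(c[1], validation))
print(best_t, accuracy(best_model, validation))
```

With real networks the loop is the same, only each candidate is a full training run, which is where the GPU acceleration described below becomes essential.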

A Step Further

Emboldened by their success, the team decided to see how their network would do at distinguishing between plants that look similar to a trained eye. They built another dataset with 10,000 images of two hard-to-distinguish plant families, and achieved 96 percent accuracy in distinguishing between them.

Like their peers in Costa Rica, the Smithsonian team credits GPUs with making their research possible. Rebecca Dikow, a research data scientist at the Smithsonian, said that training of their network — which ran on Wolfram Mathematica with CUDA and cuDNN integrated into the mix — would’ve taken hundreds of times longer on a CPU than it did with the two NVIDIA Tesla GPU accelerators in the Smithsonian computing cluster.

“A lot of this work involves iterating over lots of different parameters, tweaking things and then running them through another network,” said Dikow in describing the computing demands.

Similar to the ITCR’s work with Pl@ntNet, the Smithsonian team is pursuing a collaboration with a larger-scale effort — in this case with iDigBio, a National Science Foundation-funded digital repository for biological data. Dikow suggested that such joint efforts will bring out the best in deep learning projects.

“Everyone who’s undertaking these lines of research has the same feeling,” said Dikow. “We really want to make our networks as robust as possible, and so collaboration is definitely the way to go.”

The post AI Offering Fertile Ground for Biodiversity Informatics appeared first on The Official NVIDIA Blog.

When it comes to advancing science, Marianne Sinka has some skin in the game. Some itchy skin.

The Oxford University entomologist has regularly sacrificed her flesh (and blood) as mosquito bait to further her research. Now she’s using AI to track the irksome insects and battle the deadly diseases they carry.

“Today, the best way to detect what species are in a place is to sit down, roll up your trousers, and see what mosquitoes bite you,” Sinka said. “There are obviously some issues with that.”

Instead, Sinka and a group of other Oxford researchers are using cheap mobile phones and GPU-accelerated deep learning to detect mosquitoes. They also want to determine whether the bugs belong to a species that transmits malaria or other life-threatening illnesses.

The goal is to help cash-strapped governments in the regions where malaria is rampant know where and when to deploy insecticides, vaccinations and other actions to prevent disease.

Killer Bugs

Few creatures are as hated as mosquitoes, and with good reason: They’re the world’s deadliest animals, killing more people than tigers, snakes, sharks and wolves put together. The blood-sucking insects carry many life-threatening illnesses, including malaria, the Zika virus, dengue and yellow fever.

A female (top of picture) and male (bottom of picture) Anopheles gambiae mosquito, the principal carrier of malaria in Africa. Image courtesy of the Centers for Disease Control.

In 2016, malaria alone infected more than 200 million people — 90 percent of them in Africa — and killed some 445,000, according to the World Health Organization. UNICEF reports that most of these deaths occurred in children less than five years old.

Among some 3,500 species of mosquitoes, only 75 can infect people with malaria, and of these, about 40 are considered the primary carriers of the parasite that causes the disease. To identify mosquito species today, researchers capture the insects (either with human lures or costly light traps) and examine them under the microscope.

For some important species, they must then use molecular methods, such as examining the mosquito’s DNA to ensure an accurate identification. These methods can be costly and time-consuming, Sinka said.

Catching a Buzz

Instead of getting up close with the vexatious vermin, the researchers put a smartphone with a sound-sensing app within biting range. Like people, animals and machines, the bugs have a unique sound signature.

“It’s those distinctive buzzing tones we all hate from mosquitoes,” said Ivan Kiskin, an Oxford doctoral student with expertise in signal processing who is working on the mosquito project. The project, dubbed Humbug, is a partnership between Oxford University and London’s Kew Gardens.

Researchers are using recordings of captured mosquitoes and NVIDIA GPUs to train a neural network to recognize wing noise. So far, the deep learning-based software reports the likelihood that the buzzing comes from furiously flapping mosquito wings, which beat up to 1000 times a second. In numerous tests, the algorithms have outperformed human experts.
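The signal those networks learn from is the wingbeat fundamental. As a rough illustration (the real Humbug system uses a trained neural network, not a raw FFT), here is how the dominant frequency of a synthetic 600 Hz buzz can be recovered with NumPy:

```python
import numpy as np

# Sketch: recover the wingbeat fundamental of a synthetic buzz with a
# plain FFT. A mosquito's wings beat up to 1,000 times a second, so the
# fundamental sits well within audible range and phone microphones.
rate = 8000                        # samples per second
t = np.arange(rate) / rate         # one second of audio
buzz = np.sin(2 * np.pi * 600 * t)  # synthetic 600 Hz "wingbeat" tone

spectrum = np.abs(np.fft.rfft(buzz))
freqs = np.fft.rfftfreq(len(buzz), d=1 / rate)
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # the 600 Hz tone we synthesized
```

Real recordings are far noisier, which is why a learned classifier outperforms simple peak-picking, and why the experts themselves are sometimes outperformed by the algorithms.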

Humbug researchers are beginning to distinguish species as well, Kiskin said. But further progress is stymied by the need for additional training data, he added.

Beating Malaria

To collect more sound, the team is deploying mobile phones to research groups around the world. In addition, researchers developed an Android app called MozzWear to enlist help from ordinary people. MozzWear will record mosquito buzzing, along with the time and location — data that users can send to the citizen science web portal, Zooniverse.
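A record like the one MozzWear uploads might look something like the following sketch; the field names and helper function are our assumption, not the app's actual schema:

```python
import json
import time

# Hypothetical sketch of a buzz-recording record: audio plus the time
# and location metadata described above. Field names are illustrative.
def make_record(audio_path, lat, lon, timestamp=None):
    return {
        "audio_file": audio_path,
        "latitude": lat,
        "longitude": lon,
        "recorded_at": timestamp or time.strftime(
            "%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

record = make_record("buzz_0042.wav", lat=0.3476, lon=32.5825)
print(json.dumps(record, indent=2))
```

Bundling time and place with each recording is what turns scattered citizen-science clips into the kind of map Kiskin describes below.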

“Malaria is a disease of the poor,” said Sinka, the bug expert. Although the disease is present in developed countries, it’s more common in regions where people live near their livestock and are often too poor to afford air conditioning, window screens or even protective netting to drape over beds.

“Ultimately, we could use our best algorithm and the phones to map malaria prevalence over a region or country,” Kiskin said. “Then we could tackle malaria by targeting aid to places in need.”

The post Beating the Bloodsuckers: AI Takes a Swat at Mosquitoes and Malaria appeared first on The Official NVIDIA Blog.