
3D News

We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...
3DVisionLive’s first-ever short-form 3D video contest received 14 entries that showed a great deal of diversity, ranging from video game captures to commercial-style clips to raw captures of pets or people doing cool things (such as bashing each other with swords). During judging we laughed, we cried (okay, maybe not), and we simply scratched our heads…. But seriously: thank you to all who participated, and we hope to see more of your content uploaded to the site for all to...
The submission period for the Fall Photo Contest is now closed, and we are happy to report we’ve received nearly 100 images from our members for consideration. And, once again, we’re opening the judging process to our community as well to help us determine the winners. The full gallery of images may be seen by clicking the link above. Between now and February 10th (11:59 PST), please view all of the images in the gallery and place your votes for the ones you’d like to win by...

Recent Blog Entries

Alexa, play @%!^&!!

Voice assistants still have a long way to go, but that’s not slowing the ambitions of an AI-based speech recognition startup that aims to become the de facto meeting notes assistant, capturing voice-to-text interactions.

Silicon Valley-based AISense has launched Otter, a GPU-powered app that records speech and quickly returns voice files and transcriptions noted from multiple people. Otter is available now for free on iOS, Android and the web.

Founded in 2016, the startup has focused on speech recognition technologies for long-form conversations among multiple speakers, as well as on a language processing area known as speaker diarization, which enables machines to differentiate voices.
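At its core, speaker diarization can be framed as a clustering problem: the audio is split into segments, each segment is mapped to a numeric “voice embedding,” and segments whose embeddings land close together are attributed to the same speaker. The sketch below is a hypothetical toy, not AISense’s actual pipeline (whose details aren’t public): it clusters hand-made 2-D embeddings with a minimal k-means to show the idea.

```python
# Toy illustration of speaker diarization (hypothetical, not AISense's method):
# cluster per-segment "voice embeddings" so segments from the same speaker
# share a label. Real systems extract embeddings with a neural network;
# here we fake 2-D embeddings for three segments from two speakers.

def kmeans(points, k, iters=20):
    """Minimal k-means over 2-D points; returns one cluster label per point."""
    centroids = points[:k]  # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                            + (p[1] - centroids[c][1]) ** 2,
            )
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return labels

# Fake embeddings: segments 0 and 2 sound alike (speaker A), segment 1 differs.
embeddings = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.3)]
labels = kmeans(embeddings, k=2)
print(labels[0] == labels[2] and labels[0] != labels[1])  # True: A, B, A
```

The hard parts in production, which this toy skips entirely, are extracting robust embeddings from raw audio and deciding how many speakers are present.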

Two years in the making, AISense’s proprietary Ambient Voice Intelligence technology allows people to store, search, share and analyze voice conversations.

Otter lets you scroll through the transcribed text, with each speaker’s contributions clearly labeled, and gives you the option to listen to the audio as well. The app provides better than 90 percent accuracy in text dictation, according to the company.

Human-to-human interactions are much more difficult to capture than human-to-machine interactions such as simple commands between people and Amazon’s Alexa, Apple’s Siri or Google Assistant, according to AISense co-founder and CEO Sam Liang. That’s what makes Otter different from traditional voice products, which only handle short queries or commands from a single speaker.

AISense technology had to be enhanced to handle all of the complicated interactions of people and nuances of conversations, and it can get tripped up by accents in people’s speech, said Liang.

“This is a pretty deep technology. It’s extremely difficult,” he said. “We had to do pretty sophisticated supervised learning, and we had to get a lot of labeled data, with hundreds of thousands of hours of recordings.”

Liang is a well-known Silicon Valley tech figure. At Google Maps, he was responsible for the blue dot as the tech lead of location services. In 2013, he sold his startup Alohar Mobile to Alibaba.

His latest startup is also building a semi-supervised learning system to do self-learning from large quantities of meeting data without requiring human transcription.

Luckily, there’s a ton of such training data available online. The 15-person team at AISense was able to get freely available data from archives of NPR radio programs and Supreme Court proceedings available at the Library of Congress.

Then it used terabytes of audio data and transcripts to train its algorithms for Otter, relying on 50 NVIDIA Tesla GPUs. Said Liang, “It’s a startup and we have to spend money very frugally. But we have to spend some resources on GPUs — it’s just a must.”

The company is targeting Otter at enterprise customers who might use it in meetings. AISense plans to release a premium version that will require a subscription, and it already licenses some of its technology to enterprise customers.

AISense recently partnered with Zoom Video Communications to handle gobs of voice data, which is being robo-transcribed by AISense’s technology.

AISense has raised more than $13 million in funding to date. Investors include Horizon Ventures, Draper Associates, Slow Ventures, SV Tech Ventures, Bridgewater Associates, 500 Startups and billionaire Stanford professor David Cheriton.

The post Otter App Aims to Use Power of AI to Set Gold Standard for Note Taking appeared first on The Official NVIDIA Blog.

Teenager Kavya Kopparapu has accomplished more than most of us ordinary mortals.

Before entering her senior year of high school, she’d invented an AI-powered tool to prevent blindness in diabetics. She’d co-created a mobile app to let EMTs securely pull medical information from unconscious patients’ smartphones. And she’d founded a nonprofit to support girls in technology.

But that was just kids’ stuff for the 17-year-old. Now the Washington, D.C.-area teen is using GPU-accelerated deep learning to fight the deadliest form of brain cancer, glioblastoma.

The fast-growing cancer is best known for the people it has afflicted — U.S. Senator John McCain is battling it now — and the lives it has claimed, including U.S. Senator Edward (Ted) Kennedy and Beau Biden, son of former U.S. Vice President Joe Biden.

A Faster Way to Diagnose Brain Cancer

Glioblastoma is a grim diagnosis. Only 15 percent of patients survive beyond five years, and most die within 18 months of diagnosis, according to the U.S. National Cancer Institute.

“People aren’t diagnosed until it’s too late, and once they’re diagnosed they have little time to live,” Kopparapu said. “I thought there has to be a better way to diagnose and treat this illness.”

The Harvard-bound teen, who was a finalist in the just-completed Regeneron Science Talent Search, is one of many researchers who believe that the best treatments for the disease are the most personalized.

Most brain tumors, including glioblastoma, originate in the star-shaped astrocyte cell.

Today, personalizing treatments requires analyzing DNA from tissue removed during a biopsy, but that can be a weeks-long process, Kopparapu said.

“It’s expensive and it takes a lot of time,” she said. “For patients with very aggressive cancer, it’s time they don’t have.”

Genes + GPUs

Kopparapu hopes to save brain cancer patients precious time with a deep learning-powered tool she calls GlioVision. The software is designed to instantly detect and interpret genetic information from a biopsy slide, skipping the long analysis. Doctors use this information to predict how fast a tumor will grow, and if it will respond to specific drugs and other treatments.
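In machine-learning terms, this is an image-to-label prediction problem: features extracted from a biopsy slide image are mapped to genetic markers of the tumor. The sketch below is purely illustrative and assumes nothing about the real GlioVision model (which, per this article, is a deep network trained with Caffe on GPUs): it fits a tiny logistic-regression classifier on made-up three-number “patch features” to predict a hypothetical binary marker.

```python
# Hypothetical sketch of the general idea behind slide-to-genetics prediction
# (NOT the actual GlioVision model): learn a mapping from image-derived
# features of a biopsy patch to a binary genetic marker. Real systems use
# deep CNNs; this toy uses logistic regression on fake 3-number features.
import math

def predict(w, b, x):
    """Probability that the marker is present, via the logistic function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Fake training data: (patch features, marker present? 1/0).
data = [([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1),
        ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0)]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(200):                      # gradient descent on log-loss
    for x, y in data:
        p = predict(w, b, x)
        for i in range(len(w)):
            w[i] -= lr * (p - y) * x[i]
        b -= lr * (p - y)

print(predict(w, b, [0.85, 0.85, 0.15]) > 0.5)  # True: marker-positive-like
```

The real difficulty lies in the feature extraction this toy skips: a deep network must learn, from labeled slides, which visual patterns of tissue correlate with each genetic marker.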

She trained her deep learning software using the cuDNN-accelerated deep learning framework Caffe and NVIDIA Tesla GPUs in a university computing cluster.

“If you’re doing deep learning, you need GPU hardware to speed up training,” she said.

The teen is working with a pathologist at the Georgetown University Medical Center to test her system’s performance on patient data.

Science Interest Goes Way Back

Kopparapu’s interest in science began in elementary school, where she was astonished to see how mixing substances like baking soda and vinegar produced an eruption of foam.

“I wanted to figure out how things work and learn the world around me,” Kopparapu said. She recalls how she and her kid brother Neeyanth would watch MythBusters and Cosmos, and read Scientific American together over breakfast. Now Neeyanth, 16, is presenting a poster at the GPU Technology Conference, March 26-29, in Silicon Valley.

Encouraged by her parents, she took classes in computer science, computer vision and AI at Thomas Jefferson High School for Science and Technology. She was troubled to see that she was one of only a few girls in the classes, so she founded the Girls Computing League. It offers workshops on topics like robotics, Java and Python programming, and mobile application development to underprivileged girls and others.

Kopparapu joined representatives from the United Nations, the American Association for the Advancement of Science, and other organizations as one of two teen speakers at last year’s March for Science in Washington, D.C.

AI for Eyes

Eager to put her new computer science skills to work, she teamed up with her brother and a classmate, Justin Zhang, to create an AI tool to help people like her grandfather, who has diabetic retinopathy, a condition that can lead to blindness.

The tool, called Eyeagnosis, pairs a smartphone app with a 3D-printed lens to diagnose the condition quickly and easily for people in regions where access to an ophthalmologist is limited. She’s now testing it at a hospital in India.

Adding to Kopparapu’s long list of honors, the health website WebMD in January named her one of three young Health Heroes. She’s also organized an AI research symposium for high school students, spoken at last spring’s National March for Science, and is set to share the stage with Peter Norvig, director of research at Google, during a talk at the O’Reilly Artificial Intelligence Conference in May.

Her ultimate goal, she says, is “making the world a better place.” We’d argue she already has.

The post Brainiac vs. Brain Cancer: Teen Tackles Deadly Disease appeared first on The Official NVIDIA Blog.