
3D News

NVIDIA is pleased to announce the first Photo Champion for 3D Vision Live, Nick Saglimbeni. Regular visitors to the site will be familiar with Nick's images. His Warehouse Wonderland image won the site's first monthly Photo Contest, and he was also the first repeat winner of the Contest two months later with Kim Kardashian's Wild West - one of the site's first 3D celebrity images. Nick is receiving the 2012 3D Vision Live Photo Champion Award as our formal...
Sorry folks for the delay in announcing the winner for May's Photo Contest - we had an issue with the search function and needed to make sure all entries were considered. Without further ado, on to the results! Alex Savin has been submitting some excellent images from his European adventures for some time now, and his "Fontana di Trevi" is a wonderful example of stereo photography that just plain works. The composition is top notch and the image is sharp throughout, which...
James Cameron continues to pioneer 3D technology. With the first Avatar he showed what 3D could add to the film experience. After criticizing the fast conversions from 2D to 3D that many Hollywood studios have released since Avatar, Cameron oversaw a team that turned Titanic into a 3D blockbuster. That film has been a commercial and critical success, showing what a year of meticulous conversion and $18 million can add to a 15-year-old movie. The director talks about Avatar,...
Marvel Entertainment was one of the first major Hollywood companies to commit to 3D movies. Beginning last summer, every movie based on a Marvel comic property was to be either filmed in 3D or converted to 3D for theatrical and home entertainment releases. When this mandate came down, Ari Arad (Iron Man), producer of Ghost Rider: Spirit of Vengeance, turned to NVIDIA to help with the production of the Sony Pictures sequel, which is now out on Blu-ray 3D, Blu-ray and DVD....
People are flocking to the theater to take in Pixar’s latest animated film, Brave, which we recommend seeing in 3D, of course. After seeing the movie you can relive the adventure by picking up the gorgeous Brave: The Video Game for PC. The third-person action/adventure game lets you play the role of Princess Merida—Pixar’s first female lead character—as you follow her adventures in a family-friendly storyline based on the film. Engage in bow-and-arrow and sword combat and...

Recent Blog Entries

More people have probably heard of the annual NIPS conference in the past week than over its previous 30 years as the premier event focused on neural networks.

The once obscure gathering — held last week in Long Beach, Calif. — drew coverage from the New York Times, Bloomberg, The Economist and other major outlets, focused on its astounding growth as AI has become a hot field and on the mad dash companies are making to recruit gifted developers.

But recall what put NIPS on the map in the first place: the sharing of world-class research that advances artificial intelligence.

Two of our NVIDIA AI Labs (NVAIL) partners were among those presenting groundbreaking work. Researchers at New York University have advanced how computers can classify objects within complex images, and taken a step toward what might loosely be thought of as peripheral vision for machines.

And researchers at the University of California, Berkeley, are using the parallel processing power of GPUs to give machines more “curiosity” to explore their environments when trying to complete a task.

Giving Image Recognition a Second Look

At NIPS, lead NYU researcher Sean Welleck described how his team is using reinforcement learning to help computers better classify the objects within images.

Instead of looking at an image in a rote pattern — say from left to right, starting at the top and working its way to the bottom — the team’s multi-object recognition model takes a high-level look at all the objects in the image. If it identifies something it can likely classify correctly, then it takes a closer look and gets a reward if it’s right. It then proceeds to another object it has a good bead on.
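The glimpse-and-reward loop described above can be sketched in a few lines. Everything here is an illustrative assumption, not the NYU model: the toy "scene", the nearest-prototype stand-in classifier, and the +1 reward are placeholders for learned components.

```python
# Toy sketch of confidence-ordered classification with rewards.
# All data and the "classifier" are illustrative, not the NYU model.

# Hypothetical scene: each object has a feature vector and a true label.
SCENE = [
    {"features": (0.9, 0.1), "label": "cat"},
    {"features": (0.2, 0.8), "label": "dog"},
    {"features": (0.5, 0.5), "label": "cat"},
]

# Stand-in "classifier": one prototype per class in the same feature space.
PROTOTYPES = {"cat": (1.0, 0.0), "dog": (0.0, 1.0)}

def predict(features):
    """Return (best_label, confidence) by nearest-prototype distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scored = {label: 1.0 / (1.0 + dist(features, p))
              for label, p in PROTOTYPES.items()}
    label = max(scored, key=scored.get)
    return label, scored[label]

def classify_scene(scene):
    """Attend to objects most-confident-first; reward correct labels."""
    order = sorted(scene, key=lambda o: predict(o["features"])[1],
                   reverse=True)
    reward = 0
    for obj in order:
        label, _ = predict(obj["features"])
        reward += 1 if label == obj["label"] else 0
    return reward

print(classify_scene(SCENE))  # total reward: 3 (all three labeled correctly)
```

The key idea the sketch preserves is the ordering: the model commits to the objects it is most sure about first, rather than scanning the image in a fixed raster pattern.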

This ability to classify objects in any order is a major advancement, and could lead to faster and more accurate image classification. It also minimizes the need for annotating objects — the drudgework often required to get good, labeled datasets to work with. Welleck’s work makes the most of the labels already present.

The research is also a step toward giving computers peripheral vision, where two levels of attention — one scanning the image for objects and the other deciding to take a closer look at potentially interesting items — are in play.

Rewarding Complex Tasks

At Berkeley, Justin Fu is lead researcher on work to overcome the problem in reinforcement learning of how to incentivize a machine to complete complex tasks. A classic reward test is the game Pong, where an AI, or even a sophisticated toddler, can learn to manipulate a paddle to successfully keep a ricocheting ball in play.

But games with many more choices, like Doom, pose a much greater challenge for an AI, and plenty of high-IQ humans, because the reward only comes after a much longer sequence of successful steps. If the task is complex enough, the chance of randomly completing it — and ever getting the reward — is slim.
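A back-of-the-envelope calculation shows why that chance is slim. The per-step probability and sequence length below are assumed for illustration, not taken from the research:

```python
# Illustrative arithmetic: if a task requires 20 specific actions in a row
# and a random policy picks the right action with probability 0.25 at each
# step, the chance of stumbling into the reward on one try is negligible.
p_correct, steps = 0.25, 20
print(p_correct ** steps)  # ~9.1e-13
```

With odds like these, an agent that explores purely at random will almost never see the reward signal it needs to learn from, which is what motivates the exploration bonus described next.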

The research team’s proposed solution uses what’s known as an exemplar model. In it, the model is incentivized to take actions that result in unexpected outcomes — so your robot isn’t trying the same left turn every time it sees a T in a maze. Instead, it’s incentivized to explore the options in its environment.

It does this by determining the differences between new images and all the previous ones it’s seen. Instead of comparing raw pixels between images, the model trains a classifier to distinguish what’s new in an image compared to earlier ones it’s examined.

Thanks to GPU computing, the model can crank through tons of images quickly, classifying everything as it goes. Noting these subtle changes to the environment helps the model try new options and better figure out how to successfully complete its task.
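A minimal sketch of such a novelty bonus follows. A crude nearest-neighbor density estimate stands in for the learned exemplar classifier described above; the memory structure, radius, and bonus formula are all illustrative assumptions, not the Berkeley model.

```python
import math

# Sketch of a novelty bonus: states unlike anything in memory earn a
# larger exploration reward. The density proxy below is an assumption
# standing in for the paper's learned exemplar discriminator.

MEMORY = []  # previously seen states (feature tuples)

def density(state, radius=0.5):
    """Crude density estimate: fraction of remembered states nearby."""
    if not MEMORY:
        return 0.0
    near = sum(1 for m in MEMORY
               if sum((a - b) ** 2 for a, b in zip(state, m)) ** 0.5 < radius)
    return near / len(MEMORY)

def novelty_bonus(state):
    """Reward bonus that shrinks as a state becomes familiar."""
    bonus = -math.log(density(state) + 1e-3)
    MEMORY.append(state)
    return bonus

first = novelty_bonus((0.0, 0.0))   # never seen before: large bonus
repeat = novelty_bonus((0.0, 0.0))  # seen once already: smaller bonus
print(first > repeat)  # prints True
```

The same shrinking-bonus behavior is what discourages the robot from repeating that identical left turn: a familiar outcome stops paying, so exploring elsewhere becomes the rewarding choice.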

NYU and Berkeley are two of the 20 universities across the globe our NVAIL program supports. NVAIL helps researchers from schools like these advance their work through assistance from NVIDIA researchers and engineers, support for the universities’ students, and access to the industry’s most advanced GPU computing power, like the DGX-1 AI supercomputer.


The post NVAIL Partners Show Groundbreaking AI Research at NIPS Event appeared first on The Official NVIDIA Blog.

Artificial intelligence is years, even decades, from replicating functions of the human mind, but it’s still getting serious work done today. And its influence will only expand. The irony of all that promise: Human minds are way behind. Relatively few have a baseline understanding about how AI and deep learning truly work.

Techniques like machine learning, which underpin many of today’s AI tools, aren’t easy to grasp. They feed computers massive volumes of information to “teach” them to recognize our words or halt for a stop sign. This isn’t just dissimilar to how human minds work: it also involves techniques that can’t be understood without an effective teacher.

Best Courses in AI, Deep Learning, and Machine Learning

Luckily, AI’s recent popularity has yielded hundreds of articles, videos, webinars, courses and books catering to beginners and experts who aspire to expand their minds. Below, we’ve curated a selection of the best available.

AI Courses for Beginners
  • Artificial Intelligence: A Free Online Course From MIT
    The Massachusetts Institute of Technology is one of the most demanding technical universities in the world, and it routinely produces some of the best minds in the field. This introductory course, made up of 30 video lectures, starts from basic knowledge representation and includes interactive demonstrations to help students understand how different AI methods work under different circumstances.
  • Artificial Intelligence A-Z: Learn How To Build An AI
    A course that covers key AI concepts, teaching you to code from scratch and discussing the real-world applications of AI. This course is useful as a comprehensive yet simple approach to learning the basics of creating practical AI.
  • Deep Learning for Business
    A solid, non-technical approach to the most talked about AI technique (computer vision runs a close second). The focus is on the AI stars of the business world, from IBM’s Jeopardy-winning Watson to LettuceBot, a deep learning system that assists in planting and growing everyone’s favorite leaf vegetable. Some hands-on work using tools like Google’s TensorFlow is included, but the focus remains squarely on what business leaders need to know.
Intermediate Courses to Improve Your AI Knowledge
  • Deep Learning by Google
    A more advanced, three-month course that teaches students how to train and optimize different types of neural networks, and how to design systems that learn from massive datasets. This course is a good follow-up or alternative for those too advanced for Andrew Ng’s introductory deep learning courses.
  • Neural Networks and Deep Learning
    Andrew Ng, a star both in AI and teaching, runs students through a more technical introduction to the fundamentals of deep learning and neural networks. The course is targeted to people with some technical proficiency, but also demonstrates how deep learning is relevant to business. Later courses in the series follow up with more in-depth material, such as Structuring Machine Learning Projects.
  • Salesforce Einstein Discovery – Easy AI and Machine Learning
    The Salesforce Einstein AI engine offers an interesting example of AI targeted to a particular business problem: supporting customers. While this course is too narrowly focused to serve as a general introduction to the field, it offers a worthwhile tradeoff: no coding is required to get some hands-on experience creating an AI-enabled app.
Advanced AI Courses
  • Introduction to Computer Vision
    Computer vision is a distinct subspecialty within AI, important in everything from driverless cars, to augmented reality, to advanced manufacturing. This four-month course isn’t for beginners, but it does effectively teach the fundamentals and core concepts behind computer vision.
  • NVIDIA Deep Learning Institute
    Learn how to speed up your AI, deep learning, and accelerated computing applications with more than a dozen project-based, hands-on training courses. You’ll work through DLI training online from anywhere, using a fully configured GPU-accelerated workstation in the cloud. All you need is a web browser and an Internet connection. Examples include Deep Learning for Image Classification, which teaches how to train neural networks to recognize images, and Linear Classification with TensorFlow, which uses Google’s extensive machine learning framework.
  • Reinforcement Learning
    If the previous courses in deep learning look like child’s play to you, this course is a good step up: it adopts a theoretical approach to machine learning, working from classic papers on the topic through more recent work. This course will allow students to understand, engage with, and contribute to the reinforcement learning research community.

The post How (and Where) to Get a Great Crash Course in AI appeared first on The Official NVIDIA Blog.