
3D News

If you have a penchant for superhero-themed anything and for playing games in 3D, the Batman: Arkham series has been a match made in heaven. Simply put, when it comes to 3D Vision titles it just doesn’t get much better, and it’s hard to see how it could. We’re happy to report that Batman: Arkham Origins, which releases today, continues this tradition. Out of the box, Origins is rated 3D Vision Ready, so you know it’s going to look spectacular. We’ve played it quite a bit...
Contest closed - stay tuned for details about upcoming contests. 3DVisionLive is excited to unveil the latest in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. Like our most recent Spring Contest, this one will span three months - October, November, and December - and is themed: your image must capture or show the essence of "nature" and what...
With sincere apologies for the delay, NVIDIA is pleased to announce the results of the Spring Photo Contest. We received more than 80 submissions from 3DVisionLive members and, for the first time, invited the membership to select the winner. The only criterion for the contest was that the photos had to represent the meaning of Spring in some fashion and be an original image created by the member who submitted it. All submitted photos were put in a gallery and ample time was...
For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which is shown at the NSA convention - held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images submitted by 50 different makers. Entrants ranged from casual snapshooters to commercial and fine art photographers. The competition was judged by...
  VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon!   The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 PST, June 30th), please view all of the images in the gallery and place...

Recent Blog Entries

More people have probably heard of the annual NIPS conference in the past week than over its previous 30 years as the premier event focused on neural networks.

The once obscure gathering — held last week in Long Beach, Calif. — drew coverage from the New York Times, Bloomberg, The Economist and other major outlets, focused on its astounding growth as AI has become a hot field and on the mad dash companies are making to recruit gifted developers.

But recall what put NIPS on the map in the first place: the sharing of world-class research that advances artificial intelligence.

Two of our NVIDIA AI Labs (NVAIL) partners were among those presenting groundbreaking work. Researchers at New York University have advanced how computers can classify objects within complex images, and taken a step toward what might loosely be thought of as peripheral vision for machines.

And researchers at the University of California, Berkeley, are using the parallel processing power of GPUs to give machines more “curiosity” to explore their environments when trying to complete a task.

Giving Image Recognition a Second Look

At NIPS, lead NYU researcher Sean Welleck described how his team is using reinforcement learning to help computers better classify the objects within images.

Instead of looking at an image in a rote pattern — say from left to right, starting at the top and working its way to the bottom — the team’s multi-object recognition model takes a high-level look at all the objects in the image. If it identifies something it can likely classify correctly, then it takes a closer look and gets a reward if it’s right. It then proceeds to another object it has a good bead on.
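Loosely, that reward loop might be sketched like this. This is a toy illustration, not the NYU model: the `classify` stub, the per-object confidence scores, and the +1-per-correct-prediction reward are all stand-ins for the learned components the paper describes.

```python
import random

def classify(obj):
    # Stand-in classifier: guesses the true label with probability
    # equal to the model's confidence in that object.
    return obj["label"] if random.random() < obj["confidence"] else None

def recognize(objects):
    """Classify objects in confidence order, collecting a reward
    of +1 for each correct prediction (the RL training signal)."""
    reward = 0
    # Attend to the most promising object first, not left-to-right.
    for obj in sorted(objects, key=lambda o: o["confidence"], reverse=True):
        if classify(obj) == obj["label"]:
            reward += 1
    return reward

image = [
    {"label": "dog", "confidence": 0.9},
    {"label": "ball", "confidence": 0.6},
    {"label": "tree", "confidence": 0.3},
]
print(recognize(image))  # a reward between 0 and 3
```

The key point is the `sorted` call: the model chooses its own visitation order based on confidence, rather than scanning the image in a fixed pattern.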

This ability to classify objects in any order is a major advance, and could lead to faster and more accurate image classification. It also minimizes the need for annotating objects, the drudgework often required to build good labeled datasets. Welleck’s work makes the most of the labels already present.

The research is also a step toward giving computers peripheral vision, where two levels of attention — one scanning the image for objects and the other deciding to take a closer look at potentially interesting items — are in play.

Rewarding Complex Tasks

At Berkeley, Justin Fu is lead researcher on work to overcome the problem in reinforcement learning of how to incentivize a machine to complete complex tasks. A classic reward test is the game Pong, where an AI, or even a sophisticated toddler, can learn to manipulate a paddle to successfully keep a ricocheting ball in play.

But games with many more choices, like Doom, pose a much greater challenge for an AI, and plenty of high-IQ humans, because the reward only comes after a much longer sequence of successful steps. If the task is complex enough, the chance of randomly completing it — and ever getting the reward — is slim.

The research team’s proposed solution uses what’s known as an exemplar model. In it, the model is incentivized to take actions that result in unexpected outcomes — so your robot isn’t trying the same left turn every time it sees a T in a maze. Instead, it’s incentivized to explore the options in its environment.

It does this by determining the differences between new images and all the previous ones it’s seen. Instead of comparing raw pixels between images, the model trains a classifier to distinguish what’s new in an image compared to earlier ones it’s examined.
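As a rough sketch of the idea (not the Berkeley exemplar model itself): the real method trains a classifier to tell a new state apart from previously seen ones, and rewards states that are easy to distinguish. In this toy version, the distance to the nearest past state stands in for that learned separability, so familiar states earn a bonus near zero.

```python
import math

def novelty_bonus(state, buffer, scale=1.0):
    """Toy exploration bonus: large when `state` is far from everything
    in `buffer` (easy to distinguish, hence novel), near zero when it
    closely resembles a previously seen state."""
    if not buffer:
        return scale  # nothing seen yet: everything is novel
    nearest = min(math.dist(state, past) for past in buffer)
    # Squash into [0, scale): far from all seen states -> large bonus.
    return scale * (1 - math.exp(-nearest))

buffer = [(0.0, 0.0), (1.0, 0.0)]
print(novelty_bonus((0.0, 0.1), buffer))  # small: near a seen state
print(novelty_bonus((5.0, 5.0), buffer))  # large: unexplored region
```

Adding this bonus to the task reward is what nudges the agent to try the unexplored turn at the T-junction instead of repeating the familiar one.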

Thanks to GPU computing, the model can crank through tons of images quickly, classifying everything as it goes. Noting these subtle changes to the environment helps the model try new options and better figure out how to successfully complete its task.

NYU and Berkeley are two of the 20 universities across the globe our NVAIL program supports. NVAIL helps researchers from schools like these advance their work through assistance from NVIDIA researchers and engineers, support for the universities’ students, and access to the industry’s most advanced GPU computing power, such as the DGX-1 AI supercomputer.


The post NVAIL Partners Show Groundbreaking AI Research at NIPS Event appeared first on The Official NVIDIA Blog.

Artificial intelligence is years, even decades, from replicating functions of the human mind, but it’s still getting serious work done today. And its influence will only expand. The irony of all that promise: Human minds are way behind. Relatively few have a baseline understanding about how AI and deep learning truly work.

Techniques like machine learning, which underpin many of today’s AI tools, aren’t easy to grasp. They feed computers massive volumes of information to “teach” them to recognize our words or halt for a stop sign. This isn’t just dissimilar to how human minds work: it also involves techniques that can’t be understood without an effective teacher.
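As a toy illustration of that “teaching by example” idea: a one-feature perceptron (nothing like a production stop-sign detector; the “redness” feature and the samples are invented) learns to separate two classes purely from labeled data, with no hand-written rules.

```python
def train(samples, epochs=20, lr=0.1):
    """Learn a weight and bias from labeled examples alone."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:          # label: 1 = stop sign, 0 = not
            pred = 1 if w * x + b > 0 else 0
            w += lr * (label - pred) * x  # nudge weights toward the label
            b += lr * (label - pred)
    return w, b

# "Redness" of an image patch as a single toy feature.
samples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
w, b = train(samples)

def predict(x):
    return 1 if w * x + b > 0 else 0

print(predict(0.85), predict(0.15))  # 1 0
```

The program never contains a rule like “red means stop”; the boundary between the classes is inferred from the examples, which is the sense in which the data “teaches” the machine.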

Best Courses in AI, Deep Learning, and Machine Learning

Luckily, AI’s recent popularity has yielded hundreds of articles, videos, webinars, courses and books catering to beginners and experts who aspire to expand their minds. Below, we’ve curated a selection of the best available.

AI Courses for Beginners
  • Artificial Intelligence: A Free Online Course From MIT
    The Massachusetts Institute of Technology is one of the toughest technical universities and routinely produces some of the best minds in the field. This introductory course, made up of 30 video lectures, starts from basic knowledge representation and includes interactive demonstrations to help students understand how different AI methods work under different circumstances.
  • Artificial Intelligence A-Z: Learn How To Build An AI
    A course that covers key AI concepts, teaching you to code from scratch and discussing the real-world applications of AI. This course is useful as a comprehensive yet simple approach to learning the basics of creating practical AI.
  • Deep Learning for Business
    A solid, non-technical approach to the most talked about AI technique (computer vision runs a close second). The focus is on the AI stars of the business world, from IBM’s Jeopardy-winning Watson to LettuceBot, a deep learning system that assists in planting and growing everyone’s favorite leaf vegetable. Some hands-on work using tools like Google’s TensorFlow is included, but the focus remains squarely on what business leaders need to know.
Intermediate Courses to Improve Your AI Knowledge
  • Deep Learning by Google
    A more advanced, three-month course that teaches students how to train and optimize different types of neural networks, and how to design systems that learn from massive datasets. This course is a good follow-up or alternative for those too advanced for Ng’s deep learning courses.
  • Neural Networks and Deep Learning
    Andrew Ng, a star both in AI and teaching, runs students through a more technical introduction to the fundamentals of deep learning and neural networks. The course is targeted to people with some technical proficiency, but also demonstrates how deep learning is relevant to business. Later courses in the series follow up with more in-depth material, such as Structuring Machine Learning Projects.
  • Salesforce Einstein Discovery – Easy AI and Machine Learning
    The Salesforce Einstein AI engine offers an interesting example of AI targeted at a particular business problem: supporting customers. While this course is too focused to serve as a general introduction to the field, it offers a tradeoff: no coding is required to get some hands-on experience creating an AI-enabled app.
Advanced AI Courses
  • Introduction to Computer Vision
    Computer vision is a distinct subspecialty within AI, important in everything from driverless cars, to augmented reality, to advanced manufacturing. This four-month course isn’t for beginners, but it does effectively teach the fundamentals and core concepts behind computer vision.
  • NVIDIA Deep Learning Institute
    Learn how to speed up your AI, deep learning, and accelerated computing applications with more than a dozen project-based, hands-on training courses. You’ll work through DLI training online from anywhere, using a fully configured GPU-accelerated workstation in the cloud. All you need is a web browser and an Internet connection. Examples include Deep Learning for Image Classification, which teaches how to train neural networks to recognize images, and Linear Classification with TensorFlow, which uses Google’s extensive machine learning framework.
  • Reinforcement Learning
    If the previous deep learning courses look like child’s play to you, this course is a good step up: it takes a theoretical approach to machine learning, from classic papers on the topic to more recent work. It will allow students to understand, engage with, and contribute to the reinforcement learning research community.

The post How (and Where) to Get a Great Crash Course in AI appeared first on The Official NVIDIA Blog.