
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

The rise of modern business intelligence (BI) has seen the emergence of a number of components, each designed to support one of the analytical functions enterprises require.

Perhaps the most fundamental component of the BI movement is the traditional frontend or visualization application. Companies like Tableau, Qlik, Birst, Domo and Periscope provide these. There are dozens more — all with essentially equivalent capabilities: the ability to make spreadsheets look beautiful. Some of these companies have been tremendously successful, primarily differentiating themselves on the axis of usability.

Another, equally critical component of the BI equation is the database. Here, too, there are the usual suspects: Redshift, Impala, Vertica, Netezza and others. Some of these databases are fully featured, system-of-record worthy solutions, while others focus on a particular performance axis, streaming, for instance, and do it well.

Finally, there is the emergence, across BI and database players, of more advanced analytics tools, driven by the explosion of interest in and development of machine learning, deep learning and artificial intelligence. This market has its stars, starting with the big boys — Google, Facebook, Amazon, Microsoft, IBM, Baidu, Tesla — as well as a host of highly credentialed startups, like Sentient, Ayasdi, Bonsai and H2O.ai.

A successful, fully functional BI system has each of these operating optimally. The problem is that none of these systems are operating optimally. The reason is the extraordinary growth in data.

These systems are laboring because they are all based on an antiquated, CPU-centric view of the world that is computationally incapable of querying, rendering or learning from data at the scale demanded by the petabyte economy.

There is a solution, however. One that the deep learning folks have already adopted: GPUs.

With GPUs, you get order-of-magnitude performance gains. There’s a reason why so many supercomputers on the Top500 list use NVIDIA GPUs: GPUs are far more adept at massively parallel mathematical work than traditional CPUs are.

But, it’s more than just deep learning. Databases and visualization also benefit significantly from GPUs. A system based on GPUs can deliver the speed and scale required to handle these massive working sets and deliver the functionality required.

MapD uses GPU computing to deliver SQL and immersive visual analytics on billion+ record datasets in milliseconds.

What You Need to Know About GPUs and Integrated Analytics

First, GPUs offer exceptionally high memory bandwidth — we’re talking terabytes per second across multiple GPUs. This matters because database queries are typically memory-bandwidth or I/O bound. With that bandwidth, GPUs can scan more data in less time, returning results faster.
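To make that concrete, here is a minimal sketch (not MapD code) of a bandwidth-bound, scan-style aggregation on the GPU, assuming a CUDA-capable card and the CuPy library; the column name and filter values are hypothetical.

```python
import cupy as cp  # GPU array library; assumes a CUDA-capable GPU and CuPy installed

n = 100_000_000  # a single 100M-row float32 column is roughly 400 MB
fares = cp.random.rand(n, dtype=cp.float32) * 100.0  # hypothetical "fare" column

# Equivalent of: SELECT AVG(fare) FROM trips WHERE fare BETWEEN 10 AND 20
mask = (fares >= 10.0) & (fares <= 20.0)
avg_fare = fares[mask].mean()

cp.cuda.Device().synchronize()  # wait for the GPU kernels to finish
print(float(avg_fare))
```

The scan is limited mainly by how fast the column can be streamed out of GPU memory, which is exactly where the terabytes-per-second figure pays off.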

To put this in context, noted database benchmarker Mark Litwintschik found that a single GPU server was 74x faster than a larger Redshift cluster on queries over 1.1 billion rows. Not 74 percent faster, 74 times faster. Against Postgres, that figure was 3,500x faster — milliseconds vs. tens of minutes.
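For reference, a query like the ones in that benchmark can be issued from Python. Below is a minimal sketch assuming the pymapd client and a locally running MapD server with its default credentials; the trips table and its columns are hypothetical.

```python
from pymapd import connect  # assumes the pymapd client and a reachable MapD server

# Default local connection settings; adjust for a real deployment.
con = connect(user="mapd", password="HyperInteractive",
              host="localhost", port=6274, dbname="mapd")

# Hypothetical taxi-trip table and columns, in the spirit of the 1.1B-row benchmark.
cur = con.cursor()
cur.execute("""
    SELECT passenger_count, AVG(total_amount)
    FROM trips
    GROUP BY passenger_count
""")
for row in cur.fetchall():
    print(row)
```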

That’s significant because working sets have grown commensurately with data. A few-million-row dataset used to be big; now it is tiny. The new normal starts at several hundred million rows and runs into the billions.

Second, GPUs are not simply about straight-line speed; other systems can be optimized for specific tasks and queries. GPUs are also extraordinarily good at graphics. In fact, their native rendering pipeline makes them the ideal engine for data visualization.

This manifests itself not only in better-looking dashboards, but also in more responsive, faster ones. If you can run the query on the same chip that does the rendering, you don’t have to move the data around. That may not matter when dealing with only a few million rows, but it’s a big problem when you cross a billion, let alone several billion.
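As an illustration of that co-location, here is a minimal sketch (not MapD’s actual pipeline) that filters points and bins them into a render-ready density grid without the rows ever leaving the GPU; only the small grid needs to be copied out for display. The column names, threshold and grid size are made up.

```python
import cupy as cp  # assumes a CUDA-capable GPU and CuPy

n, W, H = 50_000_000, 1024, 768          # 50M points, hypothetical canvas size
x = cp.random.rand(n, dtype=cp.float32)  # hypothetical normalized coordinates
y = cp.random.rand(n, dtype=cp.float32)
value = cp.random.rand(n, dtype=cp.float32)

# "Query" step: keep only high-value points, entirely on the GPU.
keep = value > 0.9
xk, yk = x[keep], y[keep]

# "Render prep" step: bin the surviving points into a W x H density grid.
ix = cp.clip((xk * W).astype(cp.int32), 0, W - 1)
iy = cp.clip((yk * H).astype(cp.int32), 0, H - 1)
grid = cp.bincount(iy * W + ix, minlength=W * H).reshape(H, W)

# Only this small grid (a few MB) leaves the GPU, not the 50M rows.
density = cp.asnumpy(grid)
```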

Finally, GPUs deliver supercomputing-class computational throughput. GPUs dominate the machine learning and deep learning ranks because they excel at matrix multiplication. Again, co-locating querying and machine learning on the same chip makes feeding the learning algorithms the data they need for training and inference exceptionally efficient.
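The matrix-multiplication point can be made concrete with a few lines of CuPy: a sketch of a gradient-descent loop for linear regression, where the feature matrix (think of it as the output of a GPU query) never leaves the device. The data here is random and purely illustrative.

```python
import cupy as cp  # assumes a CUDA-capable GPU and CuPy

# Hypothetical feature matrix and target, already resident in GPU memory
# (in an integrated stack this would be the output of a GPU-side query).
X = cp.random.rand(1_000_000, 32, dtype=cp.float32)
y = cp.random.rand(1_000_000, dtype=cp.float32)

w = cp.zeros(32, dtype=cp.float32)
for _ in range(100):
    pred = X @ w                          # large matrix-vector products:
    grad = X.T @ (pred - y) / X.shape[0]  # exactly the work GPUs excel at
    w -= 0.1 * grad
```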

MapD crushes a billion+ row taxi data benchmark.

How to Put GPUs into Play in Your Organization

True performance will come from an integrated system, one that combines GPU hardware with a GPU-tuned database, a GPU-tuned frontend/visualization layer and a GPU-tuned machine learning layer.

Upgrading just one component, however, creates a weakest-link problem. A GPU database feeding a CPU visualization frontend will be faster, but it won’t be as fast as a GPU database feeding a GPU visualization frontend.

Any partial combination creates the same challenge: it introduces the same weak CPU link.

The optimal system benefits from GPU hardware and GPU-tuned software at every turn.

Speed, visualization, advanced analytics — they’re all GPU-oriented. To use hardware or software designed for legacy compute platforms is to choose to wait, to downsample, or to overpay on scale-out — even in the elastic world in which we live.

Integrated systems exist. And they have a head start on incorporating other key tasks or subtasks that benefit disproportionately from GPUs. The integrated GPU stack has major implications for BI, IT, data science and other areas of the enterprise. This is precisely why MapD thinks this is the Age of the GPU, and why we’re so pleased to be part of the revolution.

The post The Argument for Accelerated and Integrated Analytics appeared first on The Official NVIDIA Blog.

House or horse, bird or barn? Deep learning and GPU computing have quickly advanced the abilities of image recognition technology to superhuman levels.

Now, PicsArt, maker of the social photo editor by the same name, is applying this breakthrough in artificial intelligence to the creation of images.

Hitting the market today, “Magic Effects” is a new feature in the latest version of the PicsArt app, which has been downloaded more than 300 million times and boasts 80 million active monthly users. Magic Effects uses GPU-powered AI to analyze the quality and context of photos, and enables users to transform their pics in seconds with an array of filtering effects that are customized based on the AI analysis.

If, for example, a user applies the “Neo Pop” effect to a photo, the result won’t be standardized. Instead, it will be customized based on the qualities of the photo in question. Users can further customize the filters using a variety of touch-interface tools.

Here’s an example of a Magic Effects filter in action, turning a photo into a colorful painting.

Making Everyone an Artist

The idea is to make art and photography more accessible to a larger number of people, says PicsArt co-founder Artavazd Mehrabyan.

“With AI, we can build much more sophisticated tools that understand the visual context of a photo and apply learned techniques to make the image more of what the photographer wants,” said Mehrabyan. “Artificial intelligence is opening a whole universe of new creative techniques in photography.”

The updated PicsArt app will also let users search for similar images and search by image type, and it generally allows amateur photographers to do more with their photos than ever before.

GPUs Meeting Demands of AI

PicsArt used a combination of GeForce GTX GPUs and Amazon Web Services GPU instances to train its AI and similar-image search algorithms, as well as for scaling and production. Mehrabyan said the company could not have used CPUs to achieve the kind of speed to market and photo effects capabilities that GPUs enabled.

GeForce GTX 1080 gets the PicsArt treatment.

“Building the AI models behind our new Magic Effects requires long, multi-day training sessions for each filter,” he said. “This is a process we have to repeat over and over again until we get optimal results.”
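To give a sense of what those training sessions involve, here is a generic, hypothetical sketch in PyTorch (PicsArt has not published its Magic Effects architecture) of one training step for a per-filter image-to-image model:

```python
import torch
from torch import nn, optim

# Hypothetical stand-in for a per-filter image-to-image network; PicsArt's
# actual Magic Effects models are not public.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
opt = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# One step on a synthetic batch; a real run repeats this over huge numbers of
# (photo, stylized target) pairs, which is where the multi-day cost comes from.
photos = torch.rand(8, 3, 256, 256, device=device)
targets = torch.rand(8, 3, 256, 256, device=device)

opt.zero_grad()
loss = loss_fn(model(photos), targets)
loss.backward()
opt.step()
```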

The capabilities PicsArt is introducing are only the beginning. GPU-powered AI will continue to expand the possibilities in photography, enabling anyone with a smartphone to turn their photos into works of art, says Mehrabyan. (And to share them via social media, which PicsArt makes easy.)

“We see much more automation on the photo tools side, and that’s going to make it possible for the mass consumer to have easy access to the most advanced creativity tools,” said Mehrabyan. “AI will not replace creativity, but will enable people to explore and express their creativity.”

The post Picture Perfect: AI-Powered Photo Enhancement Coming to a Smartphone Near You appeared first on The Official NVIDIA Blog.