
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

Imagine your cat walking through this scene.

Thanks to Instagram and Snapchat, adding filters to images and videos is pretty straightforward. But what if you could repaint your smartphone videos in the style of van Gogh’s “Starry Night” or Munch’s “The Scream”?

A team of researchers from Germany’s University of Freiburg has made significant strides toward this goal using an approach to artificial intelligence called deep learning.

The team developed a method that uses a deep neural network to extract a specific artistic style from a source painting, and then synthesizes this information with the content of a separate video. NVIDIA GPUs make it possible to crunch through this computationally intensive work with striking results.

An Algorithm with Long-Term Memory

Prior work has successfully used deep learning to transfer artistic styles from one image to another. Earlier research found that when a deep neural network processes an image, its activations encode the image’s style: brushstrokes, color and other abstract details. The network can then be used to apply this style onto what it understands as the content of a second image.
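
In that earlier image-to-image work, style is captured as correlations (Gram matrices) between the feature channels of a network such as VGG-19. Here is a minimal PyTorch sketch of that idea, an illustration rather than the Freiburg team’s exact code:

    # Sketch of the Gram-matrix style representation from still-image
    # style transfer; illustrative only, not the team's implementation.
    import torch
    import torchvision.models as models

    vgg = models.vgg19(pretrained=True).features.eval()

    def gram_matrix(feats):
        # feats: activations of shape (batch, channels, height, width)
        b, c, h, w = feats.shape
        f = feats.view(b, c, h * w)
        # Channel-to-channel correlations capture brushstrokes, color and
        # texture statistics while discarding spatial layout.
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_features(image, layers=(0, 5, 10, 19, 28)):
        # Run the image through VGG-19, collecting a Gram matrix at a few
        # layers; together they describe the painting's "style".
        grams, x = [], image
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                grams.append(gram_matrix(x))
        return grams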

But videos have lots of moving parts. It’s not as simple as taking the technique of style transfer for still images and applying it to each frame of a video.

“If you just apply the algorithm frame by frame, you don’t get a coherent video — you get flickering in the sequence,” says University of Freiburg postdoc Alexey Dosovitskiy. “What we do is introduce additional constraints, which make the video consistent.”

Dosovitskiy and his fellow researchers enforce this consistency by controlling how much each frame can vary from the ones before it, an approach that has to account for three major challenges:

  • A character onscreen should look the same as it moves across a scene,
  • Static components, such as backdrop, should remain visually consistent from frame to frame, and
  • After a character passes across the field of view, the background should look the way it did before the character moved.

The team’s algorithm incorporates constraints that address these issues, penalizing successive frames that look too different from one another. It also uses long-term constraints to aid continuity: when an area of the scene reappears, its composition from several frames earlier is replicated (see the sketch below).
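
In the team’s paper these terms are built on optical flow: an earlier stylized frame is warped along the flow into the current frame and compared against it, with unreliable regions masked out. A rough PyTorch sketch of such a penalty, assuming the warped frames and reliability masks come from a separate optical-flow step:

    import torch

    def temporal_loss(stylized_t, warped_prev, mask):
        # stylized_t:  current stylized frame, shape (3, H, W)
        # warped_prev: an earlier stylized frame warped along the optical
        #              flow into the current frame's coordinates
        # mask:        1.0 where the flow is reliable, 0.0 in occluded or
        #              disoccluded regions; masking lets the background
        #              settle back to its earlier look after a character
        #              has passed in front of it
        return (mask * (stylized_t - warped_prev) ** 2).mean()

    def long_term_loss(stylized_t, warped_history, masks):
        # Sum the penalty over several earlier frames (e.g. t-1, t-2,
        # t-4, ...) so that areas reappearing after an occlusion are
        # pulled back toward how they looked before.
        return sum(temporal_loss(stylized_t, w, m)
                   for w, m in zip(warped_history, masks))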

Smartly constraining a deep learning algorithm produced better consistency in stylizing an animated video.

To make this complex process a reality, the researchers use NVIDIA GPUs. Powered by a GeForce GTX TITAN X GPU, artistic style transfer takes eight to 10 minutes a frame for a high-resolution video. That’s 20x faster than with a multi-core CPU.

“GPUs are crucial because this process is quite time consuming,” Dosovitskiy says.

The team also uses our cuDNN deep learning software, whose smaller memory requirements allow them to perform style transfer on high-resolution videos. Multi-GPU systems could speed up the process further, but even so, real-time artistic style transfer for video is still some way off.
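
As general context rather than the team’s setup: deep learning frameworks route convolutions through cuDNN automatically when it is present. In PyTorch, for example, the relevant switches look like this:

    import torch

    # cuDNN handles convolutions on NVIDIA GPUs whenever it is available;
    # this flag just makes the default explicit.
    torch.backends.cudnn.enabled = True

    # Let cuDNN benchmark its convolution algorithms and cache the fastest
    # one per input shape, a good fit for video, where every frame has the
    # same resolution.
    torch.backends.cudnn.benchmark = True

    print(torch.backends.cudnn.is_available(), torch.backends.cudnn.version())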

So far, the team has tried its algorithm on both live-action and animated videos. Both render equally well, but Dosovitskiy thinks viewers hold live-action footage to a higher standard.

“It turns out people are sensitive to this flickering, so even if it’s quite small you can still see it very well when you watch a video,” he says.

Read more about the team’s work in their paper.

The post How Deep Learning Can Paint Videos in the Style of Art’s Great Masters appeared first on The Official NVIDIA Blog.

In a crowded market, you’ve got to stand out from the competition. High-tech Taiwanese carmaker LUXGEN Motor Co. is making its mark by becoming the first to bring a premium infotainment system to Taiwan’s mainstream automobile market.

LUXGEN’s S3 model comes equipped with the THINK+ 4.0 advanced infotainment system powered by the NVIDIA Tegra mobile processor. Tegra integrates a range of processors – including a multi-core ARM CPU, a powerful GPU and dedicated audio, video and image processors – all while just sipping power.

LUXGEN’s S3 model features a Tegra-based infotainment system.

The LUXGEN S3 is a five-seat subcompact sedan that hits the streets of Taiwan this month. Built around a 1.6-liter four-cylinder engine, the S3 is equipped with a wide range of advanced driver assistance system (ADAS) features, including the 3D X-View+ vision assist system, the Active Eagle View+ 360-degree camera system and the Side View+ camera system, all managed by the NVIDIA-powered THINK+ 4.0 infotainment system.

The ADAS and infotainment features have impressed the mainstream Taiwanese market, and LUXGEN has its eye on China next.

In addition to Tegra, LUXGEN leverages NVIDIA technologies in other areas of its operations:

  • NVIDIA Quadro professional graphics power the HP workstations LUXGEN uses to design and manufacture its vehicles. In fact, the carmaker operates the most Quadro Design Centers in Taiwan.
  • The NVIDIA Iray-based Lumiscaphe app lets designers interact with 3D virtual models in real time, enabling state-of-the-art design workflows.
  • LUXGEN uses Quadro GPUs and Iray for Maya to render its ProVR showroom app, and is the first in the Greater China market to use Unreal Engine 4 to deploy NVIDIA VRWorks in designing it.
  • VR Ready NVIDIA GeForce GTX 980 GPUs and Oculus headsets provide the ultimate virtual experience for potential car buyers at LUXGEN’s VR showrooms across Taiwan.

To learn more about LUXGEN’s S3 model, check out www.luxgen-motor.com.

The post LUXGEN Motors Bringing NVIDIA-Powered Infotainment to Taiwan Market appeared first on The Official NVIDIA Blog.