
3D News

If you have a penchant for superhero-themed anything and playing games in 3D, the Batman: Arkham series has been a match made in heaven. Simply put, when it comes to 3D Vision titles it just doesn’t get much better – and it’s hard to see how it could. We’re happy to report that Batman: Arkham Origins, which releases today, continues this tradition. Out of the box, Origins is rated 3D Vision Ready, so you know it’s going to look spectacular. We’ve played it quite a bit...
Contest closed - stay tuned to 3DVisionLive.com for details about upcoming contests. 3DVisionLive.com is excited to unveil the latest in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. Like our most recent Spring Contest, this one will span three months - October, November, and December - and is themed: your image must be something that captures or shows the essence of "nature" and what...
With sincere apologies for the delay, NVIDIA is pleased to announce the results of the Spring Photo Contest. We received more than 80 submissions from 3DVisionLive members and, for the first time, invited the membership to select the winner. The only criterion for the contest was that the photos had to represent the meaning of Spring in some fashion and be an original image created by the member who submitted it. All submitted photos were put in a gallery and ample time was...
For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which was shown at the NSA convention held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images, submitted by 50 different makers. Entrants ranged from casual snapshooters to both commercial and fine art photographers. The competition was judged by...
VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon! The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 PST, June 30th), please view all of the images in the gallery and place...

Recent Blog Entries

Imagine your cat walking through this scene.

Thanks to Instagram and Snapchat, adding filters to images and videos is pretty straightforward. But what if you could repaint your smartphone videos in the style of van Gogh’s “Starry Night” or Munch’s “The Scream”?

A team of researchers from Germany’s University of Freiburg has made significant strides toward this goal using an approach to artificial intelligence called deep learning.

The team developed a method that uses a deep neural network to extract a specific artistic style from a source painting, and then synthesizes this information with the content of a separate video. NVIDIA GPUs make it possible to crunch through this computationally intensive work with striking results.

An Algorithm with Long-Term Memory

Prior work has successfully used deep learning to transfer artistic styles from one image to another. Earlier research found that when a deep neural network processes an image, its activations encode the image’s style — brushstrokes, color and other abstract details. The network can then be used to apply this style onto what it understands as the content of a second image.
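To make the idea concrete: in this earlier image-to-image work, style is commonly captured as correlations between a network’s feature channels, expressed as Gram matrices. The following is a minimal PyTorch sketch of that idea, not the team’s exact implementation; the pretrained VGG-19 feature extractor and layer indices are a common choice assumed here for illustration.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# A pretrained VGG-19 is a common choice of feature extractor for style transfer.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def gram_matrix(features):
    # Style lives in the correlations between feature channels,
    # largely independent of where things sit in the image.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_grams(image, layers=(0, 5, 10, 19, 28)):
    # Collect Gram matrices at several depths: shallow layers capture
    # brushstroke-like texture, deeper layers more abstract structure.
    grams, x = [], image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            grams.append(gram_matrix(x))
    return grams

def style_loss(stylized, painting):
    # How far the stylized image's feature correlations are from the painting's.
    return sum(F.mse_loss(a, b)
               for a, b in zip(style_grams(stylized), style_grams(painting)))
```

Optimizing an image to reduce this loss, while a separate content loss keeps it close to the original image’s deeper features, is the essence of single-image style transfer.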

But videos have lots of moving parts. It’s not as simple as taking the technique of style transfer for still images and applying it to each frame of a video.

“If you just apply the algorithm frame by frame, you don’t get a coherent video — you get flickering in the sequence,” says University of Freiburg postdoc Alexey Dosovitskiy. “What we do is introduce additional constraints, which make the video consistent.”

Dosovitskiy and his fellow researchers enforce this consistency by controlling the variation between one frame and the next, which needs to account for three major challenges:

  • A character onscreen should look the same as it moves across a scene,
  • Static components, such as backdrop, should remain visually consistent from frame to frame, and
  • After a character passes across the field of view, the background should look the way it did before the character moved.

The team’s algorithm incorporates restrictions to solve these issues, penalizing successive frames that look too different from one another. It also uses long-term constraints to aid continuity — the image composition of an area of a scene from several frames earlier is replicated when that area reappears.
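As a rough sketch of what such a penalty can look like, assume the previous stylized frame has already been warped into the current frame’s coordinates by optical flow; the warp and the occlusion mask are hypothetical inputs here, not the team’s exact formulation.

```python
import torch

def temporal_loss(current, warped_prev, mask):
    # Penalize stylized frame t wherever it differs from stylized frame
    # t-1 warped forward by optical flow. `mask` is 1 where the flow is
    # reliable and 0 in occluded/disoccluded regions, where the style
    # must be re-synthesized from scratch.
    diff = (current - warped_prev) ** 2
    return (mask * diff).sum() / mask.sum().clamp(min=1.0)

def long_term_loss(current, warped_history, masks):
    # The long-term variant applies the same penalty against frames much
    # further back (e.g. t-10, t-20, t-40), so a background that
    # reappears after a character passes in front of it is pulled back
    # toward the way it looked before.
    return sum(temporal_loss(current, w, m)
               for w, m in zip(warped_history, masks))
```

Weighting these temporal terms against the style and content losses is what trades flicker-free motion against faithfulness to the painting’s style.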

Smartly constraining a deep learning algorithm produced better consistency in stylizing an animated video.

To make this complex process a reality, the researchers use NVIDIA GPUs. Powered by a GeForce GTX TITAN X GPU, artistic style transfer takes eight to 10 minutes per frame for a high-resolution video. That’s 20x faster than with a multi-core CPU, which would need roughly three hours for the same frame.

“GPUs are crucial because this process is quite time consuming,” Dosovitskiy says.

The team also uses our cuDNN deep learning software, whose smaller memory requirements allow them to perform style transfer on high-resolution videos. Multi-GPU systems could speed up the process further — but even so, real-time artistic style transfer for videos is still some way off.

So far, the team has tried its algorithm on both live-action and animated videos. Both render equally well, but Dosovitskiy thinks viewers hold live-action video to a higher standard.

“It turns out people are sensitive to this flickering, so even if it’s quite small you can still see it very well when you watch a video,” he says.

Read more about the team’s work in their paper.

The post How Deep Learning Can Paint Videos in the Style of Art’s Great Masters appeared first on The Official NVIDIA Blog.

In a crowded market, you’ve got to stand out from the competition. High-tech Taiwanese carmaker LUXGEN Motor Co. is making its mark by becoming the first to bring a premium infotainment system to Taiwan’s mainstream automobile market.

LUXGEN’s S3 model comes equipped with the THINK+ 4.0 advanced infotainment system powered by the NVIDIA Tegra mobile processor. Tegra integrates a range of processors – including a multi-core ARM CPU, a powerful GPU and dedicated audio, video and image processors – all while just sipping power.

LUXGEN’s S3 model features a Tegra-based infotainment system.

The LUXGEN S3 is a five-seat sub-compact sedan that hits the streets of Taiwan this month. Built with a 1.6-liter four-cylinder engine, the S3 is equipped with a wide range of advanced driver assistance system (ADAS) features, including the 3D X-View+ vision assist system, the Active Eagle View+ 360-degree camera system and the Side View+ camera system, all managed by the THINK+ 4.0 infotainment system powered by NVIDIA.

The ADAS and infotainment features have impressed the mainstream Taiwanese market, and LUXGEN has its eye on China next.

In addition to Tegra, LUXGEN leverages NVIDIA technologies in other areas of its operations:

  • NVIDIA Quadro professional graphics power the HP workstations that LUXGEN uses to design and manufacture its vehicles. In fact, the carmaker has more Quadro Design Centers than any other company in Taiwan.
  • Lumiscaphe, an NVIDIA Iray-based app that enables state-of-the-art workflows, lets designers interact in real time with 3D virtual models.
  • LUXGEN uses Quadro GPUs and Iray for Maya to render the LUXGEN ProVR showroom app, and is the first in the Greater China market to use Unreal Engine 4 to deploy NVIDIA VRWorks in designing the ProVR app.
  • VR Ready NVIDIA GeForce GTX 980 GPUs and Oculus headsets provide the ultimate virtual experience for potential car buyers at LUXGEN’s VR showrooms across Taiwan.

To learn more about LUXGEN’s S3 model, check out www.luxgen-motor.com.

The post LUXGEN Motors Bringing NVIDIA-Powered Infotainment to Taiwan Market appeared first on The Official NVIDIA Blog.