3D News

Okay, we've gone over all the submissions for our first Winter Photo Contest and debated at length over our favorites. And we've finally come to a consensus, which introduces our second two-time contest winner: ZZ3D.   First Prize: Snow Fight   ZZ3D is a long-time contributor to 3DVisionLive and has shared some amazing work with us. Snow Fight is certainly no exception! We felt this image captured the essence of the contest's Winter theme very well, and...
The votes have all been cast and we can now, finally, bring you the results of our First Annual Summer Photo Contest. Dozens of excellent images were submitted and it was a challenge to whittle all the entries down and select the prize winners. Without further ado, we get to the results - drumroll please!   First Prize: "Soap Bubble" Zoran Zelic (ZZ3D)'s "Soap Bubble 1" takes the top prize. We like the spontaneity the image implies along with the overall composition...
Sometimes it’s just great “when a plan comes together.” An avid warbird photographer, I’d been familiar with Christian Kieffer’s outstanding pinup photography for years – his company produces some truly amazing nostalgic calendars featuring vintage WWII aircraft and models done up to mimic the pinups from the same era that helped to keep many an airman’s spirits high. Thinking the subject matter would lend itself well to 3D, I approached Christian a few months ago about...
The 2004 release of id Software’s Doom 3 spurred many PC gamers to upgrade their rigs – with many building completely new machines with the sole intent of driving this game at its ultimate eye-candy settings. And many gamers still came up a bit short, which is just one reason why they are looking forward to jumping into the corridor-crawling fray again with the release of Doom 3 BFG Edition...
We’ve rolled out a new look for the Photo page that updates the page to have a similar look and feel to the home and video pages. We’ve added a pane of larger thumbnails across the top that is user-navigable. Just click the right or left arrows to cycle. (We will be adding an auto-scroll mechanism to this soon.) And these are viewable in 3D - just click the 2D/3D toggle button at the top right of the page. Make sure to upgrade to the most recent drivers for best performance...

Recent Blog Entries

From diagnosing certain mental disorders to optimizing the placement of images in textbooks, eye tracking is useful across a variety of fields: psychology, medicine, advertising, marketing and more.

Scientists and researchers can learn a lot from understanding where people look and why. But making eye tracking easy and ubiquitous has been hard. Deep learning and NVIDIA GPUs are changing that.

Tapping Mobile’s Reach

Given its potential, it’s nagged researchers that getting one’s eyes tracked wasn’t easier. “It was quite shocking to me that we all don’t have eye-trackers,” says Aditya Khosla, a graduate student in the computer science and artificial intelligence laboratory of MIT’s electrical engineering and computer science department.

Khosla and a team of six other researchers from the University of Georgia and the Max Planck Institute of Informatics in Saarbruecken, Germany, set out to achieve a straightforward goal: create eye-tracking software that could run on any mobile phone with a camera.

The combination of powerful mobile technology and the ability to reach a huge number of users was irresistible to the team.

“If you need a big piece of equipment in a lab to do eye-tracking, then you can only reach a small audience,” says Kyle Krafka, a software engineer at Google who was finishing up his graduate computer science degree from the University of Georgia when the project began.

GPUs figured prominently in the team’s work. The researchers relied on the NVIDIA GeForce GTX TITAN X, in combination with the Caffe deep learning framework, for both training and inference of the neural network, which the team dubbed iTracker.

Krafka said the TITAN X, which NVIDIA’s GPU Grant Program donated to the project, enabled him and Khosla to use parallel processing to run through hundreds of models, something that wouldn’t have been possible on CPUs.

“It allowed us to experiment rapidly, try new ideas and find out what worked and what didn’t,” Krafka says.
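
To give a sense of what a GPU-backed Caffe training run looks like, here is a minimal sketch in Python using pycaffe. The solver filename is a hypothetical placeholder, not a file from the iTracker project, and the real training setup was surely more involved.

    import caffe

    # Select GPU 0 (e.g. a TITAN X) and switch Caffe into GPU mode.
    caffe.set_device(0)
    caffe.set_mode_gpu()

    # 'solver.prototxt' is a hypothetical solver definition that points at
    # a network definition and training data; it is not the iTracker config.
    solver = caffe.SGDSolver('solver.prototxt')

    # Run the optimization loop to completion, as specified by the solver.
    solver.solve()

Because Caffe models and solvers live in prototxt files, iterating on architectures mostly means editing those files and re-running a loop like this, which is where fast GPU turnaround pays off.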

Data, and Then Some

But to train iTracker the team needed data. They took a novel approach to getting it: using Amazon Mechanical Turk, a crowdsourcing marketplace where workers complete small online tasks. It allowed them to collect a much larger dataset than they could have through a traditional lab approach.

“Finding a way to make participation easy helped fuel the dataset, which fueled findings,” says Khosla. Using Amazon Mechanical Turk, the team was able to accumulate an eye-tracking dataset from nearly 1,500 participants — 30 times as many as in any previous study.
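
As an illustration of the mechanics, here is a minimal sketch, using the boto3 Mechanical Turk client in Python, of posting a task that sends workers to an external data-collection page. The URL, reward, and other parameters are hypothetical placeholders; this is not the team’s actual collection pipeline.

    import boto3

    # Connect to the Mechanical Turk requester API (a sandbox endpoint also
    # exists for testing; credentials come from the usual AWS config).
    mturk = boto3.client('mturk', region_name='us-east-1')

    # An ExternalQuestion points workers at a page we host, e.g. one that
    # shows calibration dots and records camera frames with consent.
    # The URL below is a hypothetical placeholder.
    question = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/gaze-task</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    hit = mturk.create_hit(
        Title='Look at dots on your phone screen',
        Description='A short gaze-data collection task',
        Keywords='eye tracking, research',
        Reward='0.50',                    # US dollars, passed as a string
        MaxAssignments=1500,              # one assignment per participant
        AssignmentDurationInSeconds=600,
        LifetimeInSeconds=7 * 24 * 3600,
        Question=question,
    )
    print(hit['HIT']['HITId'])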

An overview of iTracker, the team’s eye-tracking convolutional neural network.

That ground-breaking dataset was then used to train iTracker. With training powered by the TITAN X, the team demonstrated that iTracker can run in real time on a mobile device, and that it improves accuracy by a large margin over previous approaches.

The team is working on an app but, Khosla says, the group hasn’t decided whether to commercialize the technology. In the meantime, he says they’re planning to open source the work to the developer community and see what results.

For more details, check out the project website.

The post Eye Fidelity: How Deep Learning Will Help Your Smartphone Track Your Gaze appeared first on The Official NVIDIA Blog.

Ask people what they think of when it comes to virtual reality, and most will describe some version of 3D gaming. But businesses are getting into VR big time, too. From engineering and product design to sales training and retail experiences, VR promises to change the way we work and live.

Delivering VR is a complex challenge, since immersive VR requires seven times the graphics processing power of traditional 3D applications and games. Delivering VR virtually — streaming it from a data center to any device — is an even bigger challenge.
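
To see roughly where a multiplier like that comes from, compare pixel throughput. The resolutions and frame rates below are illustrative assumptions (1080p gaming at 30 fps versus a headset rendering about 3024x1680 across both eyes at 90 fps), not figures from this post:

    # Rough pixel-throughput comparison; all numbers are illustrative
    # assumptions, not figures from this post.
    traditional = 1920 * 1080 * 30     # ~62 million pixels per second
    vr = 3024 * 1680 * 90              # ~457 million pixels per second
    print(round(vr / traditional, 1))  # -> 7.3

On top of the raw pixel count, VR also demands consistently low latency, which is what makes streaming it from a data center so hard.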

NVIDIA and VMware are working to make it a reality.

At VMworld 2016 we’ll show, for the first time, photo-realistic, immersive VR in a VMware virtual environment. See it for yourself at the VMvillage by taking a spin through the following technologies:

  • Iray VR — Iray physically based rendering produces dynamic panoramas and fully immersive light fields with stunningly realistic virtual environments. Strap on a headset and explore photorealistic virtual environments in VR.
  • Point Cloud — Check out a time-lapse rendering, created from point cloud data, showing the construction of NVIDIA’s spectacular new building in Silicon Valley.
  • VRED VR — Participate in a collaborative VR design review of a Formula 1 race car created with Autodesk VRED 3D visualization software.

Virtualizing VR: How It Works

The server powering these demos has four high-end NVIDIA Quadro GPUs and VMware ESXi running on top. This set-up allows us to run four simultaneous virtual machines with VR content. We’re passing through an entire GPU and delivering a native NVIDIA driver to each VM. This provides amazing performance to multiple VR instances running on the same server.
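
For a sense of what that passthrough configuration can look like in code, here is a minimal sketch using pyVmomi, VMware’s Python SDK for the vSphere API. Every identifier below is a hypothetical placeholder, passthrough must already be enabled for the device on the host, and this is not necessarily how the demo servers were configured.

    from pyVmomi import vim

    def add_gpu_passthrough(vm, pci_id, device_id, vendor_id, system_id):
        # Describe the host PCI device to hand to the VM. All IDs are
        # placeholders; in practice you would query them from the host's
        # PCI passthrough info rather than hard-coding them.
        backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
            id=pci_id,                  # e.g. '0000:05:00.0'
            deviceId=device_id,         # PCI device ID, as a hex string
            vendorId=vendor_id,         # e.g. 0x10de for NVIDIA
            systemId=system_id,         # host system ID for the device
            deviceName='NVIDIA Quadro GPU (placeholder)',
        )
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=vim.vm.device.VirtualPCIPassthrough(backing=backing),
        )
        # Reconfigure the VM; this returns an asynchronous vSphere task.
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

With one full GPU attached per VM this way, the guest sees a real device and loads the standard NVIDIA driver, which is what keeps performance native.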

After your VR experience, head over to NVIDIA booth 955 to learn more about the power of NVIDIA GRID. Or check out one of our VMworld talks. Follow all the events of the show at #vmworld.

The post NVIDIA and VMware Virtualize VR at VMworld 2016 appeared first on The Official NVIDIA Blog.