
3D News

In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...
3DVisionLive’s first-ever short-form 3D video contest received 14 entries that showed a great deal of diversity, ranging from video game captures to commercial-style clips to raw captures of pets or people doing cool things (such as bashing each other with swords). During judging we laughed, we cried (okay, maybe not), and we simply scratched our heads…. But seriously: thank you to all who participated and we hope to see more of your content uploaded to the site for all to...

Recent Blog Entries

Google and Baidu dropped some big ideas about deep learning at our GPU Technology Conference last month.

But keynote addresses from the two search giants weren’t the only show in town. Five startups took the stage in the “Show & Tell” event at GTC’s Emerging Companies Summit to demonstrate how they’re using GPUs in bold new ways.

They’re hoping to join a now-grand tradition of ECS participants that have gone on to glory. For example, Oculus was acquired by Facebook for $2 billion and NaturalMotion was bought by Zynga for $527 million.

Check out the five below for an early look at technology that could help change the world:

Herta Security — Barcelona-based Herta may be a small operation, but it’s big in the world of facial recognition. It has developed the world’s fastest facial recognition system, delivering results in crowded environments to customers in the security and marketing industries.

At Show & Tell, CEO Javier Rodriguez Saeta revealed his company’s technology was used at the Golden Globe Awards to nab party crashers and stalkers. For advertisers trying to reach a specific audience, Herta’s system can identify attributes such as gender, approximate age, use of glasses, and facial expressions.

Watch Herta’s system scan faces from recorded video at 12X real-time speed. It looks right through makeup and beards to identify the actors underneath:

Watch the replay of Saeta’s presentation.

Paracosm — Paracosm CEO Amir Rubin doesn’t want you to have to imagine playing Quidditch with Harry Potter. He wants you to host the game in your living room.

Paracosm’s rover ran circles and other pathways in a lunar demo at GTC.

Based in Massachusetts, Paracosm uses depth sensors in advanced phones and tablets, like Google’s Tegra-powered Project Tango tablet, to capture the dimensions of interior spaces. It stitches these images into a 3D map that corresponds in one-to-one fashion with the real world. With maps like these, machines can navigate our world as well as people can.

And developers can create novel immersive experiences. Imagine guided tours of museums that react to the movement of patrons. Or robots mapping caverns or other planets that are too dangerous for humans to explore. And one day perhaps an augmented reality game that blends your living room with virtual versions of Potter and crew.
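
For a rough sense of the map-building step Paracosm describes, here is a minimal sketch of registering two depth captures into one point-cloud map with the open-source Open3D library and point-to-point ICP. It is purely illustrative, not Paracosm’s actual pipeline, and the scan file names are assumptions.

```python
# Illustrative only: align two hypothetical depth captures and merge them
# into a single point-cloud map with ICP (not Paracosm's actual pipeline).
import open3d as o3d

source = o3d.io.read_point_cloud("scan_a.ply")  # assumed capture files
target = o3d.io.read_point_cloud("scan_b.ply")

# Estimate the rigid transform that aligns the new scan with the existing map.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # meters
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

source.transform(result.transformation)  # apply the alignment in place
merged = target + source                 # grow the map with the aligned scan
o3d.io.write_point_cloud("room_map.ply", merged)
```

A real system would repeat this for every incoming frame and refine the map globally, but register-then-merge is the core loop behind that one-to-one map of a room.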

See the bot’s-eye view of Paracosm’s lunar rover demo at GTC. Watch the replay of Rubin’s presentation.

Jibo — A multi-tasker extraordinaire for the home, the cuddly Jibo family robot can take pictures or video; alert you to calendar items, voicemails and incoming texts; read books and play games with the kids; manage home automation; video conference; and more.

Jibo is packed with tech, as you’d imagine, including Wi-Fi, stereo vision, a microphone array and tactile sensors on its body. But it’s also a deep learning demon, with natural language understanding and machine learning so it can perceive the world, make decisions and learn from its experience.

As a development platform, Jibo awaits applications that stretch the imagination. Learn more about the Cambridge, Mass.-based startup’s mascot in this video:

And watch the replay of the presentation by Jibo Founder and Chief Scientist Cynthia Breazeal.

Clarifai — In the future, you won’t need to tag and sort images. Artificial intelligence will do it for you, near instantly. Using the power of deep learning, Clarifai’s image recognition technology sorts through millions of images at lightning-fast speed to change the nature of visual search.

The New York startup’s latest trick is real-time video analysis. CEO Matthew Zeiler dropped a URL to a 3.5-minute-long video of outdoor scenery into the Clarifai engine. Ten seconds later all the scenes were scanned, identified and associated with predicted tags.

Clarifai’s technology uses deep learning to automatically tag images.

The entire clip, and every other one added to a database, was now sortable frame by frame. Looking for the scene from the forest or the mountains? Or the mountains with snow or without? It’s all at your fingertips.

Clarifai’s tech also understands the shades of meaning in human language. The tag “jaguar” will pull up the car and various kinds of the cat, so you can explore the world visually. Try it yourself with Clarifai’s online demo.
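
To make the idea of automatic tagging concrete, here is a minimal sketch using a generic pretrained classifier from torchvision. It is a stand-in for illustration only, not Clarifai’s model or API, and the image file name is an assumption.

```python
# Illustrative only: tag an image with a generic pretrained classifier
# (a stand-in for the idea, not Clarifai's model or API).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()        # matching resize/normalize pipeline
labels = weights.meta["categories"]      # human-readable class names

def tag_image(path, top_k=5):
    """Return the top-k predicted tags and their confidences for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    conf, idx = probs.topk(top_k)
    return [(labels[i], round(float(c), 3)) for i, c in zip(idx, conf)]

print(tag_image("mountain_frame.jpg"))   # hypothetical frame from a scenery clip
```

Run over every frame of a clip, tags like these are what make footage searchable by content rather than by filename.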

Watch the replay of Zeiler’s presentation.

Mirriad — In an age when skipping commercials has never been easier, London-based Mirriad aims to make paid sponsorship attractive again.

Its computer vision technology relies on 21 algorithms running in parallel to tailor ad placement in video. The tech turns 2D video into 3D data, figuring out factors such as how the camera is moving and what’s in the foreground versus the background. It then inserts 3D ad placements into the video that sit and react within the scene as if they had been there at the time of filming.
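
One ingredient of that 2D-to-3D step can be sketched with off-the-shelf tools: estimating how the camera moved between two frames so an inserted object can be transformed to match. The sketch below uses OpenCV feature matching under assumed frame files and camera intrinsics; it is an illustration of the general technique, not Mirriad’s pipeline.

```python
# Illustrative only: recover relative camera motion between two video frames
# with feature matching and the essential matrix (not Mirriad's pipeline).
import cv2
import numpy as np

frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # assumed frames
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match sparse features across the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics for a 1280x720 clip.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Estimate the essential matrix, then the rotation R and translation direction t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("camera rotation:\n", R, "\ntranslation direction:", t.ravel())
```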

Mirriad’s technology can place ads, and adjust them. So, for example, when a series goes into syndication across dozens of countries, ads can be changed to the local brands and languages.

Check out Mirriad’s work here:

And watch the replay of the presentation by Mirriad CEO Mark Popkiewicz.

The post 5 Wild Ways Startups Are Using GPUs appeared first on The Official NVIDIA Blog.

A day after NVIDIA helped stun the crowd at Microsoft’s BUILD conference, we’re behind a second jaw-dropping DirectX 12 demo that’s grabbing headlines.

King of Wushu, earmarked to be the first DX12 title in China, is also the first CryEngine-based game to take advantage of the next-generation graphics API.

It took two engineers just six weeks to port King of Wushu from DirectX 11 to DX12, and the performance improvements are impressive.

DirectX 12 gives games like King of Wushu a stunning new level of realism.

NVIDIA GameWorks Effects Studio is working with game developer Snail Games to enhance King of Wushu with NVIDIA GameWorks technologies such as NVIDIA HairWorks, PhysX Clothing, and more that are yet to be announced.

It’s the latest sign that DirectX 12 deployment is coming fast. Square Enix’s moving demo during Microsoft’s BUILD keynote stunned gamers with its use of advanced graphics to portray the human emotion of crying.

There’s more coming. At BUILD, Microsoft said DirectX 12 is seeing the fastest adoption by titles in development since Direct3D 9. Work on the new DX12 API is now complete. Working drivers have been released. And around 50 percent of gamers already have DX12-ready hardware installed.

In fact, NVIDIA’s Maxwell and Kepler GPU architectures already support DX12, with support for Fermi coming later. DX12 has experienced rapid adoption by a broad range of game engines.

NVIDIA is working on DX12 on many different platforms — providing drivers, working with game engine providers, co-developing with Microsoft and helping game developers deploy their DX12 titles.

And demos like the ones from this week show it’s an effort that’s paying off.

The post CryEngine Gets Its First DirectX 12 Game at BUILD 2015 appeared first on The Official NVIDIA Blog.