
3D News

In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...
3DVisionLive’s first-ever short-form 3D video contest received 14 entries that showed a great deal of diversity, ranging from video game captures to commercial-style clips to raw captures of pets or people doing cool things (such as bashing each other with swords). During judging we laughed, we cried (okay, maybe not), and we simply scratched our heads…. But seriously: thank you to all who participated, and we hope to see more of your content uploaded to the site for all to...

Recent Blog Entries

GPU technology toppled letters of the iconic “Hollywood” sign and lashed the Golden Gate Bridge with a tsunami in this summer’s blockbuster San Andreas. But that’s the movies.

Hollywood’s iconic sign in Los Angeles falls following an earthquake in Universal Pictures’ movie “San Andreas.”

In real life, researchers at the Southern California Earthquake Center are using GPU-powered high performance computing to develop CyberShake, a complex model that calculates how earthquake waves move through a 3D model of the Earth.

This helps develop earthquake forecasts and more accurate hazard assessments.

SCEC’s initial target is the real Los Angeles region, where the Pacific and North American tectonic plates grind against each other to create the famed San Andreas Fault, which runs the length of California.

This groundbreaking work earlier this year helped SCEC and its collaborators win NVIDIA’s inaugural Global Impact Award and its $150,000 prize.

This spring, the team used National Science Foundation and Department of Energy supercomputers — Blue Waters and Titan — to produce the most sophisticated seismic hazard analysis forecast yet for the Southern California region.

Seismic Waves

They calculated results for 336 separate locations in the region, and doubled the maximum simulated frequency from 0.5 Hertz to 1 Hertz. This is important because seismic waves of 1 Hertz and greater can cause serious damage to buildings, bridges and other structures.

But the required scientific calculation poses a huge computational challenge. At 1 Hertz, the CyberShake calculation for each specific location required 33X as much computational work as at 0.5 Hertz. Thanks to the parallel processing efficiency of GPUs, however, they needed only 7X as many node hours.
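A quick back-of-the-envelope sketch of what those two figures imply (illustrative arithmetic only, using the 33X and 7X numbers reported above):

```python
# Doubling the simulated frequency from 0.5 Hz to 1 Hz multiplied the
# computational work per site by 33X, but the GPU runs consumed only 7X
# the node-hours -- so each node-hour effectively did ~33/7 more work.
work_factor = 33.0      # increase in computational work at 1 Hz vs. 0.5 Hz
node_hour_factor = 7.0  # increase in node-hours actually consumed
effective_speedup = work_factor / node_hour_factor
print(f"Effective per-node-hour speedup: {effective_speedup:.1f}x")  # ~4.7x
```

In other words, the GPU port delivered roughly a 4.7X improvement in work done per node-hour compared with scaling the old calculation naively.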

SCEC, located at the University of Southern California, is led by Director Thomas H. Jordan. Working with him is Yifeng Cui, director of the High Performance Geocomputing Laboratory at the San Diego Supercomputer Center, at the University of California, San Diego.

“With more people moving to cities in seismically active regions, economic risks from a devastating earthquake are high and getting higher,” said Cui. “GPU capabilities, combined with high-level GPU programming language CUDA, provide the computing power required for acceleration of numerically intensive 3D simulations.”

CyberShake Study 15.4 hazard map of the Los Angeles basin, with 336 sites marked by white triangles. The map displays the shaking intensity expected with a 2% probability in 50 years. Warm colors represent areas of high hazard.

Hazard Information Maps

The goal is to build more accurate hazard information maps from earthquake hazard simulations — the kind supplied by the U.S. Geological Survey, which supports SCEC’s work. The maps would aid seismologists and utility companies, as well as engineers responsible for building codes.

“The general public wants immediate (short-term) forecasts, but there’s no good scientific technique to make predictions yet — it’s not like a weather forecaster saying it’s going to rain, so you know to take a coat,” said Philip Maechling, an associate director at SCEC who collaborated with Cui on the study.

With GPU-powered supercomputing architecture, more complex quake simulations can be run efficiently and quickly. Structures respond in different ways to seismic waves of different frequencies. Skyscrapers and highway overpasses are at most risk during long-period shaking, while smaller buildings are more vulnerable to high-frequency shaking.

“We want our information to be applicable to a wider range of buildings,” Maechling said. Engineers will be able to apply these models to other parts of California, and the globe, so one day no one will have to face the devastation in San Andreas outside of the movie theater.

NVIDIA invites submissions for the 2016 Global Impact Award through the end of October.

The post Fault Finding: SoCal Researchers Use GPUs to Detect Earthquake Hazards Coming Our Way appeared first on The Official NVIDIA Blog.

For VMworld this week in San Francisco, we designed and built an enormous “Tower of Power” that serves as a centerpiece for our presence at the show.

Supported by our DesignWorks developer suite and state-of-the-art demo engine technology, our 16-foot tall, four-sided creation showcases how our GRID 2.0 technology can put any application on any device.

The “Tower of Power” – 336 micro-tiles grouped into 56 desk-sized displays – does more than just display a host of advanced, remotely-hosted apps. It makes them dance.

The tower is an imposing presence. Its four walls are each 14 tiles tall and six wide. Lined up end to end, all 336 of the tower’s 16-inch-by-10-inch rear-projection tiles would stretch nearly 450 feet. That’s the length of one and a half football fields.
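The tile arithmetic above checks out, as a quick sketch shows (the 300-foot football field refers to the 100-yard playing field):

```python
# Sanity-checking the tower's tile math as described above.
walls, tiles_tall, tiles_wide = 4, 14, 6
total_tiles = walls * tiles_tall * tiles_wide      # 4 walls of 14x6 = 336 tiles
tile_width_in = 16                                 # long edge of each tile
line_length_ft = total_tiles * tile_width_in / 12  # 5,376 inches = 448 feet
football_field_ft = 300                            # 100-yard playing field
print(total_tiles, line_length_ft, line_length_ft / football_field_ft)
# 336 tiles, 448.0 ft, ~1.5 football fields
```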

Erecting our “Tower of Power” took nerves, GPUs, and a really tall ladder.

But despite its size, the images on the tower’s screens seem to flit effortlessly across it.  Our engineers have figured out ways to send a wave of ripples across these virtual desktops. Or twist these apps into a whirling storm of pixels for a virtual 3D tornado. All across the surface of a display generating more than 7.4 billion pixels a second.

Step One: Putting GRID 2.0 to Work

Our goal: to show how NVIDIA GRID 2.0 can accelerate visual computing apps and serve them up to any display. NVIDIA GRID accelerates virtual desktops and applications, giving enterprises the power to deliver rich graphics to any user, on any device. Even one the size of a building.

As a result, our “Tower of Power” is a machine that connects to real apps, with real capabilities. Even as it was being assembled during a quick two-week sprint, our “Tower of Power” was getting work done.

In fact, Michael Thompson, part of the small team of engineers who helped assemble and test the tower in a loading dock at our Silicon Valley campus, would use the huge display to beam into his desktop PC on the other side of our campus to update the demo software running the display.

“No way we could get this kind of resolution in our cubes,” the tall, t-shirt clad engineer said last week as he updated the code powering it all while sitting at a folding table just in front of the half-finished display.

The story behind the story: NVIDIA GRID 2.0. All the apps on the display are running on VMware Horizon virtual machines powered by four HP blades. Each of the four blade servers runs four of our new Tesla M6 GPUs. And each blade server can run 16 virtual machines. That gives us the power to support a total of 64 different virtual desktops.
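Tallying up the configuration described above (a simple sketch of the stated counts, not a GRID sizing tool):

```python
# Capacity math for the tower's GRID 2.0 back end as described above.
blades = 4
gpus_per_blade = 4    # Tesla M6 GPUs in each HP blade server
vms_per_blade = 16    # virtual machines each blade can run
total_gpus = blades * gpus_per_blade  # 16 GPUs in total
total_vms = blades * vms_per_blade    # 64 virtual desktops in total
print(total_gpus, total_vms)  # 16 64
```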

Step Two: Using Quadro to Put GRID 2.0 on Display

While GRID 2.0 is the engine that makes these apps scream, our Quadro GPUs pour all this content into a remarkable custom display. The four walls of displays are powered by four NVIDIA Quadro GPUs. Another four help take the virtual desktops generated by NVIDIA GRID 2.0 and — using our new DesignWorks developer suite — turn them into pixels we can pick up and play with.

The message behind the monolith: if our technology can drive 7.4 billion pixels a second of virtualized desktops, just imagine what it can do for your enterprise.


The post At VMworld, GRID 2.0 Powered “Tower of Power” Drives Billions of Pixels appeared first on The Official NVIDIA Blog.