
3D News

For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which was shown at the NSA convention, held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images submitted by 50 different makers. Entrants ranged from casual snapshooters to commercial and fine art photographers. The competition was judged by...
  VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon!   The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 PST, June 30th), please view all of the images in the gallery and place...
Okay, we've gone over all the submissions for our first Winter Photo Contest and debated at length over our favorites. And we've finally come to a consensus, which introduces a second-time contest winner: ZZ3D.   First Prize: Snow Fight   ZZ3D is a long-time contributor to 3DVisionLive and has shared some amazing work with us. Snow Fight is certainly no exception! We felt this image captured the essence of the contest's Winter theme very well, and...
The votes have all been cast and we can now, finally, bring you the results of our First Annual Summer Photo Contest. Dozens of excellent images were submitted and it was a challenge to whittle all the entries down and select the prize winners. Without further ado, we get to the results. Drumroll, please!   First Prize: "Soap Bubble" Zoran Zelic (ZZ3D)'s "Soap Bubble 1" takes the top prize. We like the spontaneity the image implies along with the overall composition...
Sometimes it’s just great “when a plan comes together.” An avid warbird photographer, I’d been familiar with Christian Kieffer’s outstanding pinup photography for years – his company produces some truly amazing nostalgic calendars featuring vintage WWII aircraft and models done up to mimic the pinups from the same era that helped to keep many an airman’s spirits high. Thinking the subject matter would lend itself well to 3D, I approached Christian a few months ago about...

Recent Blog Entries

Editor’s note: This is one of four profiles of finalists for NVIDIA’s 2018 Global Impact Award, which provides $200,000 to researchers using NVIDIA technology for groundbreaking work that addresses social, humanitarian and environmental problems.

The noisy, old MRI machine may get a new lease on life.

In recent years, magnetic resonance imaging machines have faced a newer model on the scene: the magnetic resonance angiography machine. MRA units have become the heavy lifters for producing more detailed images of blood vessels, enabling detection of aneurysms and other life-threatening conditions.

Now, researchers have unleashed deep learning on MRI data to mimic the results of MRA equipment. The discovery holds the potential to lower the costs of obtaining the latest and greatest in medical imaging, which would be a boon to hospitals, especially those in rural areas and emerging markets unable to afford an MRA machine.

Aaron Lee, an assistant professor of ophthalmology at the University of Washington, and his team trained deep learning models to make inferences from single-shot structural images from both MRI and OCT (optical coherence tomography) machines, creating angiography-like imaging from each.

The pioneering work is the first of its kind in medical imaging synthesis with artificial intelligence, making assessment of a host of vascular diseases more widely available, Lee said.

“The idea that you can take a single snapshot and extrapolate what’s happening is kind of mind-boggling,” Lee said of the advances poised to boost older machines.

AI for Imaging

The researchers used algorithms to map between OCT and OCTA images as well as between MRI and MRA images, work that was made possible by GPU-accelerated deep learning.
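Conceptually, this is an image-to-image translation task: train a network on paired scans so it learns to predict the angiography view from the structural view. Below is a minimal, hypothetical PyTorch sketch of that general idea; the small encoder-decoder architecture, layer sizes, names and L1 loss are illustrative assumptions, not the team's published model.

```python
# Hypothetical sketch of image-to-image translation for medical scans:
# a tiny convolutional encoder-decoder that maps a structural slice
# (e.g., MRI) to an angiography-like slice (e.g., MRA). All sizes and
# names are illustrative, not the University of Washington team's model.
import torch
import torch.nn as nn

class StructuralToAngio(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                          # downsample 4x
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                          # upsample back
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU does the heavy lifting
model = StructuralToAngio().to(device)
loss_fn = nn.L1Loss()  # pixel-wise loss against the real angiography slice

# One toy training step on random stand-in data:
mri = torch.randn(8, 1, 64, 64, device=device)  # batch of structural slices
mra = torch.randn(8, 1, 64, 64, device=device)  # paired angiography slices
loss = loss_fn(model(mri), mra)
loss.backward()
```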

The University of Washington team’s methods could potentially be applied to libraries of medical images to make screening easier for a variety of diseases.

This achievement has placed Lee and his team among four finalists for NVIDIA’s 2018 Global Impact Award. The award provides an annual grant of $200,000 for groundbreaking work that addresses the world’s most important social and humanitarian problems. The 2018 awards will go to researchers or institutions using NVIDIA technology to achieve breakthrough results with broad impact.

GPUs for Imaging

The University of Washington team had nearly 2 terabytes of data but lacked the computing heft to run its algorithms on it. Using CPUs would have taken years to handle the task of processing the dataset, Lee said.

Thanks to advances in deep learning for computer vision and the application of GPUs across a wide array of fields, the researchers were able to go to work on their dataset. The team used NVIDIA TITAN X GPUs based on the Pascal architecture to speed up the training of large deep convolutional neural networks.

“The graphics cards allowed us to do the algebra necessary for deep learning very quickly,” Lee said.

University of Washington’s researchers also run servers outfitted with NVIDIA Tesla P100 GPU accelerators.

We’ll announce the winner of the 2018 Global Impact Award at the GPU Technology Conference, March 26-29, in Silicon Valley. To register for the conference, visit our GTC registration page.

Other Global Impact Award 2018 finalists include researchers from Princeton University, Massachusetts General Hospital and the University of Málaga.

Check out the work of last year’s Global Impact Award winners.

The post University of Washington Researchers Give MRIs an AI Facelift appeared first on The Official NVIDIA Blog.

You’ve seen this movie before. Literally.

There may not be many people outside of computer graphics who know what ray tracing is, but there aren’t many people on the planet who haven’t seen it.

Just go to your nearest multiplex, plunk down a twenty and pick up some popcorn.

Ray tracing is the technique modern movies rely on to generate or enhance special effects. Think realistic reflections, refractions and shadows. Getting these right makes starfighters in sci-fi epics scream. It makes fast cars look furious. It makes the fire, smoke and explosions of war films look real.

Ray tracing produces images that can be indistinguishable from those captured by a camera. Live-action movies blend computer-generated effects and images captured in the real world seamlessly, while animated feature films cloak digitally generated scenes in light and shadow as expressive as anything shot by a cameraman.

The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

If you’ve been to the movies lately, you’ve seen ray tracing in action.

Historically, though, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second. As a result, most real-time graphics rely on another technique, rasterization.

What Is Rasterization?

Real-time computer graphics have long used a technique called “rasterization” to display three-dimensional objects on a two-dimensional screen. It’s fast. And the results have gotten very good, even if they’re still not always as good as what ray tracing can do.

With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. In this virtual mesh, the corners of each triangle — known as vertices — intersect with the vertices of other triangles of different sizes and shapes. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

Further pixel processing, or “shading,” such as changing a pixel’s color based on how lights in the scene hit it and applying one or more textures to it, generates the final color applied to the pixel.

This is computationally intensive. There can be millions of polygons across all the object models in a scene, and roughly 8 million pixels in a 4K display. And the display typically refreshes the image on screen 30 to 90 times each second.

Additionally, memory buffers, a bit of temporary space set aside to speed things along, are used to render upcoming frames in advance before they’re displayed on screen. A depth or “z-buffer” is also used to store pixel depth information to ensure front-most objects at a pixel’s x-y screen location are displayed on-screen, and objects behind the front-most object remain hidden.
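Here’s a toy, dependency-free Python sketch of that core loop: an edge-function coverage test decides which pixels a screen-space triangle touches, barycentric weights interpolate a depth value, and a z-buffer keeps only the front-most surface. The tiny 8x8 “display” and all names are made up for illustration; real GPUs do this in massively parallel, fixed-function hardware.

```python
# Minimal rasterization sketch: one triangle, already projected to screen
# space, drawn with a coverage test and a z-buffer depth test.
W, H = 8, 8
zbuf = [[float("inf")] * W for _ in range(H)]   # depth buffer (smaller z = closer)
frame = [["." for _ in range(W)] for _ in range(H)]

def edge(ax, ay, bx, by, px, py):
    # Signed area of triangle (a, b, p): its sign says which side of a->b p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_triangle(v0, v1, v2, color):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return                                   # degenerate triangle
    for py in range(H):
        for px in range(W):
            # Edge functions, evaluated at the pixel center.
            w0 = edge(x1, y1, x2, y2, px + 0.5, py + 0.5)
            w1 = edge(x2, y2, x0, y0, px + 0.5, py + 0.5)
            w2 = edge(x0, y0, x1, y1, px + 0.5, py + 0.5)
            if ((w0 >= 0) == (area > 0) and (w1 >= 0) == (area > 0)
                    and (w2 >= 0) == (area > 0)):
                # Barycentric interpolation of depth at this pixel.
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area
                if z < zbuf[py][px]:             # depth test: keep front-most
                    zbuf[py][px] = z
                    frame[py][px] = color

raster_triangle((0, 0, 1.0), (7, 1, 1.0), (2, 7, 1.0), "#")
print("\n".join("".join(row) for row in frame))
```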

This is why modern, graphically rich computer games rely on powerful GPUs.

What Is Ray Tracing?

Ray tracing is different. In the real world, the 3D objects we see are illuminated by light sources, and photons can bounce from one object to another before reaching the viewer’s eyes.

Light may be blocked by some objects, creating shadows. Or light may reflect from one object to another, such as when we see the images of one object reflected in the surface of another. And then there are refractions — when light changes as it passes through transparent or semi-transparent objects, like glass or water.

Ray tracing captures those effects by working back from our eye (or view camera), a technique first described by IBM’s Arthur Appel in 1968, in “Some Techniques for Shading Machine Renderings of Solids.” It traces the path of a light ray through each pixel on a 2D viewing surface out into a 3D model of the scene.

The next major breakthrough came a decade later. In a 1979 paper, “An Improved Illumination Model for Shaded Display,” Turner Whitted, now with NVIDIA Research, showed how to capture reflection, shadows and refraction.

Turner Whitted’s 1979 paper jump-started a ray-tracing renaissance that has remade movies.

With Whitted’s technique, when a ray encounters an object in the scene, the color and lighting information at the point of impact on the object’s surface contributes to the pixel color and illumination level. If the ray bounces off or travels through the surfaces of different objects before reaching the light source, the color and lighting information from all those objects can contribute to the final pixel color.
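To make that recursion concrete, here is a deliberately tiny Whitted-style ray tracer in plain Python: one primary ray per character cell, a shadow ray to test whether the light is visible from the hit point, and a single reflection bounce. The two-sphere scene and every constant are invented for illustration; production renderers are vastly more elaborate.

```python
# Toy Whitted-style ray tracer: primary ray -> shade -> shadow ray -> reflection.
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def norm(a):
    l = math.sqrt(dot(a, a)); return scale(a, 1.0/l)

# Scene: (center, radius, brightness) per sphere, plus one point light.
SPHERES = [((0.0, 0.0, -3.0), 1.0, 0.9), ((1.5, 0.5, -4.0), 0.7, 0.6)]
LIGHT = (5.0, 5.0, 0.0)

def hit(origin, d):
    """Return (t, center, albedo) of the nearest sphere hit, or None."""
    best = None
    for center, r, albedo in SPHERES:
        oc = sub(origin, center)
        b = dot(oc, d); c = dot(oc, oc) - r*r
        disc = b*b - c
        if disc > 0:
            t = -b - math.sqrt(disc)
            if t > 1e-4 and (best is None or t < best[0]):
                best = (t, center, albedo)
    return best

def trace(origin, d, depth=0):
    h = hit(origin, d)
    if h is None:
        return 0.1                          # background brightness
    t, center, albedo = h
    p = add(origin, scale(d, t))            # hit point
    n = norm(sub(p, center))                # surface normal
    to_light = norm(sub(LIGHT, p))
    shade = 0.0
    if hit(p, to_light) is None:            # shadow ray: is the light visible?
        shade = albedo * max(0.0, dot(n, to_light))
    if depth < 2:                           # one reflection bounce
        refl = sub(d, scale(n, 2 * dot(d, n)))
        shade += 0.3 * trace(p, refl, depth + 1)
    return shade

# Render a small ASCII image: one primary ray per character cell.
for y in range(12):
    row = ""
    for x in range(32):
        d = norm(((x - 16) / 16.0, (6 - y) / 8.0, -1.0))
        row += " .:-=+*#%@"[min(9, int(trace((0.0, 0.0, 0.0), d) * 10))]
    print(row)
```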

Another pair of papers in the 1980s laid the rest of the intellectual foundation for the computer graphics revolution that upended the way movies are made.

In 1984, Lucasfilm’s Robert Cook, Thomas Porter and Loren Carpenter detailed how ray tracing could incorporate a number of common filmmaking techniques — including motion blur, depth of field, penumbras, translucency and fuzzy reflections — that could, until then, only be created with cameras.

Two years later, Caltech professor Jim Kajiya’s paper, “The Rendering Equation,” finished the job of mapping the way computer graphics were generated to physics to better represent the way light scatters throughout a scene.
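For reference, the equation itself says that the light leaving a surface point x in direction ω_o equals what the surface emits plus all incoming light reflected toward ω_o:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, d\omega_i
```

Here f_r is the surface’s reflectance function (the BRDF) and the integral runs over the hemisphere Ω of incoming directions; ray tracing, in effect, estimates this integral by sampling paths of light through the scene.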

Combine this research with modern GPUs, and the results are computer-generated images that capture shadows, reflections and refractions in ways that can be indistinguishable from photographs or video of the real world. That realism is why ray tracing has gone on to conquer modern moviemaking.

It’s also very computationally intensive. That’s why moviemakers rely on vast numbers of servers, or render farms. And it can take days, even weeks, to render complex special effects.

To be sure, many factors contribute to the overall graphics quality and performance of ray tracing. In fact, because ray tracing is so computationally intensive, it’s often used for rendering those areas or objects in a scene that benefit the most in visual quality and realism from the technique, while the rest of the scene is rendered using rasterization. Rasterization can still deliver excellent graphics quality.

What’s Next?

As GPUs continue to grow more powerful, putting ray tracing to work for ever more people is the next logical step. For example, armed with ray-tracing tools such as Arnold from Autodesk, V-Ray from Chaos Group or Pixar’s RenderMan — and powerful GPUs — product designers and architects use ray tracing to generate photorealistic mockups of their products in seconds, letting them collaborate better and skip expensive prototyping.

Ray tracing has proven itself to architects and lighting designers, who are using its capabilities to model how light interacts with their designs.

As GPUs offer ever more computing power, video games are the next frontier for this technology. On Monday, NVIDIA announced NVIDIA RTX, a ray-tracing technology that brings real-time, movie-quality rendering to game developers. It’s the result of a decade of work in computer graphics algorithms and GPU architectures.

It consists of a ray-tracing engine running on NVIDIA Volta architecture GPUs. It’s designed to support ray tracing through a variety of interfaces. NVIDIA partnered with Microsoft to enable full RTX support via Microsoft’s new DirectX Raytracing (DXR) API.

And to help game developers take advantage of these capabilities, NVIDIA also announced the GameWorks SDK will add a ray tracing denoiser module. The updated GameWorks SDK, coming soon, includes ray-traced area shadows and ray-traced glossy reflections.

All of this will give game developers, and others, the ability to bring ray-tracing techniques to their work to create more realistic reflections, shadows and refractions. As a result, the games you enjoy at home will get more of the cinematic qualities of a Hollywood blockbuster.

The downside: You’ll have to make your own popcorn.

Check out “Physically Based Rendering: From Theory to Implementation,” by Matt Pharr, Wenzel Jakob and Greg Humphreys. It offers both mathematical theory and practical techniques for putting modern photorealistic rendering to work.

Want to know what this means for gamers? See “NVIDIA RTX Technology: Making Real-Time Ray Tracing A Reality for Games.” Developer? See “Introduction to NVIDIA RTX for Ray Tracing.”

The post What’s the Difference Between Ray Tracing and Rasterization? appeared first on The Official NVIDIA Blog.