
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

“I dream my painting, then I paint my dream.”

—Vincent van Gogh

Through the ages, artists of all types have been creating beautiful, richly detailed oil paintings on canvas that have inspired us all.  But these artists likely never dreamed that one day they would be able to choose any brush they like, pick from a limitless array of paint colors, and use the same natural twists and turns of the brush to create the colorful texture of oil, all on a digital canvas.

That’s exactly what Project Wetbrush from Adobe Research does, along with the help of the heavy-duty computational power of NVIDIA GPUs and CUDA.

Technically speaking, Project Wetbrush is the world’s first real-time simulation-based 3D painting system with bristle-level interactions. The painting and drawing tools most of us have used are 2D.  They’re simple and they’re fun.  But Project Wetbrush is completely different. This is a full 3D simulation, complete with multiple levels of thickness, depth and texture.  It feels real and it’s immersive.

Oil painting on an actual canvas is full of complex interactions within the paint, between the brush and the paint, and among the bristles themselves. Project Wetbrush simulates all of this in real time, including the complexity of maintaining paint viscosity, variable brush speeds, color mixing and even the drying of paint. The bottom line is that it’s not easy to build a digital oil painting tool that lets artists paint so fluidly and naturally that they can ignore the technology and simply immerse themselves in their art.
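To make “bristle-level interactions” a bit more concrete, here is a minimal CUDA sketch of the general idea, our own illustration rather than Adobe’s actual code: each bristle tip is modeled as a damped spring chasing the moving brush handle, with a viscosity-style drag term standing in for the paint, and thousands of bristles are stepped in parallel, one GPU thread per bristle.

```cuda
// Toy sketch of bristle-level brush dynamics on the GPU.
// NOT Adobe's Wetbrush code -- just an illustration of why per-bristle
// simulation maps well onto GPU parallelism.
#include <cstdio>
#include <cuda_runtime.h>

struct Bristle {
    float2 pos;   // bristle tip position on the canvas plane
    float2 vel;   // bristle tip velocity
    float2 rest;  // rest offset of this bristle from the brush handle
};

__global__ void stepBristles(Bristle* b, int n, float2 handle,
                             float stiffness, float viscosity, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Spring force pulls the tip toward its rest position under the handle.
    float2 target = make_float2(handle.x + b[i].rest.x,
                                handle.y + b[i].rest.y);
    float2 f = make_float2(stiffness * (target.x - b[i].pos.x),
                           stiffness * (target.y - b[i].pos.y));

    // Viscosity-style drag stands in for the paint resisting the bristle.
    f.x -= viscosity * b[i].vel.x;
    f.y -= viscosity * b[i].vel.y;

    // Explicit Euler integration (unit mass).
    b[i].vel.x += f.x * dt;        b[i].vel.y += f.y * dt;
    b[i].pos.x += b[i].vel.x * dt; b[i].pos.y += b[i].vel.y * dt;
}

int main()
{
    const int n = 1 << 14;  // ~16k bristles
    Bristle* d;
    cudaMalloc(&d, n * sizeof(Bristle));
    // State zero-initialized for brevity; a real brush would spread
    // rest offsets across the brush footprint.
    cudaMemset(d, 0, n * sizeof(Bristle));

    float2 handle = make_float2(0.f, 0.f);
    for (int step = 0; step < 600; ++step) {  // ~10 s at 60 Hz
        handle.x += 0.01f;                    // drag the brush to the right
        stepBristles<<<(n + 255) / 256, 256>>>(d, n, handle,
                                               40.f, 6.f, 1.f / 60.f);
    }
    cudaDeviceSynchronize();
    cudaFree(d);
    printf("simulated %d bristles\n", n);
    return 0;
}
```

Each time step is independent across bristles, which is why this kind of workload is such a natural fit for CUDA; Wetbrush goes much further, coupling bristle dynamics with a full simulation of the paint itself.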

So what’s new here?  Digital painting tools certainly aren’t new, but realistic digital oil painting that dynamically simulates the motions and interactions of each bristle is absolutely a breakthrough development.  Adobe Research first developed the core algorithms for this in 2015.  It’s an ambitious project that demanded a huge computing resource.  That’s why Adobe targeted NVIDIA GPUs for their supercomputer-class parallel processing power.  With collaboration from NVIDIA software experts, the entire system was highly tuned with key GPU optimizations to take Project Wetbrush even further.

Like most research, this is just the beginning. The future holds promise for more optimizations, better rendering and even deep learning, where NVIDIA GPUs play a major role. Using GPU-accelerated deep learning, some of the most computationally demanding physical simulations could potentially be approximated, creating more responsive and realistic brush dynamics. Wetbrush could even learn from itself in the future: a database of realistic, high-quality paintings and brush strokes could be used to train a deep learning system to synthesize oil-painting effects.

To see Project Wetbrush in action, visit the NVIDIA booth #509 at SIGGRAPH for a live demo. If you want to dig into the nuts and bolts of how this technology came to life, read the Project Wetbrush technical paper or hear Adobe’s Zhili Chen and NVIDIA’s Chris Hebert speak at the NVIDIA Theater on Wednesday, July 27. See the complete list of NVIDIA talks at SIGGRAPH here. Follow the latest happenings at #SIGGRAPH2016.

Note

Thanks to Adobe Research principals Zhili Chen and Byungmoon Kim for their imagination and skill in developing Wetbrush. And special thanks to Chris Hebert of NVIDIA for his tireless efforts to guide the GPU optimization effort. All images created with Wetbrush by artist Daniela Flamm Jackson.

The post Adobe Research, NVIDIA Collaborate on World’s First Real-Time 3D Oil Painting Simulator appeared first on The Official NVIDIA Blog.

The team at Hack Rod aims to create the world’s first car engineered with artificial intelligence and designed in a virtual environment — and may well reinvent the manufacturing supply chain in the process.

And because the Los Angeles-based startup needs a ton of computational horsepower, it turned to design software leader Autodesk and NVIDIA GPU technology.

Computers that use AI to creatively come up with design ideas on their own are at the heart of what’s known as generative design. Driving the trend: advancements in AI and powerful GPUs that can quickly simulate complex phenomena. These have enabled software to play an active, participatory role in the invention of form.

Generative Design with Autodesk Dreamcatcher

Autodesk has adopted this concept with Dreamcatcher. Tell the software what you want to achieve and the constraints involved, and it generates ideas that expand the creative possibilities for a designer.

The Hack Rod team fabricated a chassis with proven geometries, then set out to capture the physical forces affecting the car and driver. During several test drives, both were rigged with hundreds of sensors to measure stresses and other loads.

Then they fed that data into Autodesk Dreamcatcher, which uses NVIDIA GPUs to deliver the computational horsepower needed to quickly analyze it and generate new chassis design recommendations.
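To give a flavor of how that analysis parallelizes, here is a hedged CUDA sketch of the core generative-design loop. Everything in it is hypothetical, the surrogate strength model, the parameter ranges and the measured peak load alike, and it bears no relation to Dreamcatcher’s actual solvers: it simply scores a grid of candidate chassis parameters in parallel against a sensor-measured load, discards infeasible designs, and keeps the lightest survivor.

```cuda
// Toy sketch of generative design: score thousands of candidate chassis
// variants in parallel against a load measured by the car's sensors.
// All parameters and the "strength model" are invented for illustration.
#include <cstdio>
#include <cfloat>
#include <cuda_runtime.h>

__global__ void scoreCandidates(const float* tubeDiam,  // mm, per candidate
                                const float* wallThick, // mm, per candidate
                                float* score, int n, float peakLoadN)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Crude surrogate model: strength and weight both grow with the
    // tube's material cross-section (fake units throughout).
    float strength = tubeDiam[i] * wallThick[i] * 120.f;
    float weight   = tubeDiam[i] * wallThick[i] * 0.8f;

    // Constraint: survive the measured peak load with a 2x safety margin.
    // Infeasible candidates get a sentinel "infinitely bad" score.
    score[i] = (strength >= 2.f * peakLoadN) ? weight : FLT_MAX;
}

int main()
{
    const int n = 100000;  // candidate designs
    float *hDiam = new float[n], *hWall = new float[n];
    for (int i = 0; i < n; ++i) {
        hDiam[i] = 20.f + (i % 100) * 0.5f;         // 20..69.5 mm
        hWall[i] = 1.f + (i / 100 % 100) * 0.05f;   // 1..5.95 mm
    }

    float *dDiam, *dWall, *dScore;
    cudaMalloc(&dDiam, n * sizeof(float));
    cudaMalloc(&dWall, n * sizeof(float));
    cudaMalloc(&dScore, n * sizeof(float));
    cudaMemcpy(dDiam, hDiam, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dWall, hWall, n * sizeof(float), cudaMemcpyHostToDevice);

    scoreCandidates<<<(n + 255) / 256, 256>>>(dDiam, dWall, dScore,
                                              n, 9000.f /* peak load, N */);

    float* hScore = new float[n];
    cudaMemcpy(hScore, dScore, n * sizeof(float), cudaMemcpyDeviceToHost);

    int best = 0;  // lightest feasible candidate
    for (int i = 1; i < n; ++i)
        if (hScore[i] < hScore[best]) best = i;
    printf("best feasible: diameter %.1f mm, wall %.2f mm\n",
           hDiam[best], hWall[best]);

    cudaFree(dDiam); cudaFree(dWall); cudaFree(dScore);
    delete[] hDiam; delete[] hWall; delete[] hScore;
    return 0;
}
```

In practice, Dreamcatcher explores free-form geometry rather than a handful of scalar parameters, which is exactly why GPU horsepower matters.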

Using Autodesk VRED virtual prototyping tools, the resulting recommendations will be shown as part of a collaborative VR design review, powered by Silverdraft Demon VR, taking place in NVIDIA booth 509 at SIGGRAPH, running July 24-28 in Anaheim, Calif.

“The Democratization of Manufacturing”

“The way your world opens up from a design perspective in photoreal VR is just magic,” says Mouse McCoy, Hack Rod founder, creative director and former professional race car driver. “The speed at which you can make decisions about your final product is unrivaled and when you start to add in AI/machine learning, it’s like you have 1,000 engineers working for you solving problems in a fraction of the time that it used to take. It’s the democratization of manufacturing.”

Once a final design is selected, it’s handed off to Autodesk Design Graph, a machine learning search application that makes parts recommendations (think actual nuts and bolts) to match the criteria.
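As a rough illustration of how such a search might rank candidates, here is a toy CUDA sketch, not Design Graph’s actual learned descriptors or API: each catalog part is reduced to a small feature vector (length, diameter, thread pitch, strength grade, all made up here), and the catalog is ranked in parallel by distance to the design’s requirements.

```cuda
// Toy sketch of feature-vector parts search. Illustrative only; the
// fields and catalog below are invented, and Design Graph's learned
// shape descriptors are its own.
#include <cstdio>
#include <cuda_runtime.h>

#define DIMS 4  // hypothetical features: length, diameter, pitch, grade

__global__ void partDistances(const float* parts,  // n x DIMS, row-major
                              const float* query,  // DIMS
                              float* dist, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Squared Euclidean distance between catalog part i and the query.
    float d2 = 0.f;
    for (int k = 0; k < DIMS; ++k) {
        float diff = parts[i * DIMS + k] - query[k];
        d2 += diff * diff;
    }
    dist[i] = d2;  // smaller = better match
}

int main()
{
    const int n = 3;
    // Tiny made-up catalog: {length mm, diameter mm, pitch mm, grade}.
    float hParts[n * DIMS] = {
        30.f,  8.f, 1.25f,  8.8f,   // M8 bolt
        40.f, 10.f, 1.5f,  10.9f,   // M10 bolt
        25.f,  6.f, 1.0f,   8.8f,   // M6 bolt
    };
    float hQuery[DIMS] = {38.f, 10.f, 1.5f, 10.9f};  // design requirements

    float *dParts, *dQuery, *dDist;
    cudaMalloc(&dParts, sizeof(hParts));
    cudaMalloc(&dQuery, sizeof(hQuery));
    cudaMalloc(&dDist, n * sizeof(float));
    cudaMemcpy(dParts, hParts, sizeof(hParts), cudaMemcpyHostToDevice);
    cudaMemcpy(dQuery, hQuery, sizeof(hQuery), cudaMemcpyHostToDevice);

    partDistances<<<1, 32>>>(dParts, dQuery, dDist, n);

    float hDist[n];
    cudaMemcpy(hDist, dDist, sizeof(hDist), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("part %d: distance %.2f\n", i, hDist[i]);

    cudaFree(dParts); cudaFree(dQuery); cudaFree(dDist);
    return 0;
}
```

A real catalog would hold millions of parts and far richer descriptors, but the ranking step keeps this same embarrassingly parallel shape.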

Generative design is producing complex engineering results that previously couldn’t be manufactured, and rapid progress in advanced manufacturing now makes it possible to actually build that complexity. Hack Rod, Autodesk and NVIDIA will showcase some large metal 3D-printed examples of this at Autodesk University in November, in Las Vegas, and we’ll be showing a full car going from Game to Garage at SXSW in March, in Austin, Texas.

“The future of making things looks really cool,” says McCoy. “When you combine collaborative photo-real 3D VR design with AI-based generative design, machine learning and advanced manufacturing, it creates a supply chain of the future that puts the power of large organizations in the hands of the Everyman.”

Learn more by checking out the Autodesk Design Graph and VRED talks at SIGGRAPH. See the complete listing of GPU computing talks here. And follow the latest at the show at #SIGGRAPH2016.

The post Where VR Meets the Road: How GPUs Power ‘Hack Rod’, World’s First AI-Generated Car appeared first on The Official NVIDIA Blog.