
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots; the only other requirement was that submissions be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

Artificial intelligence and deep learning. VR and augmented reality. Autonomous vehicles and intelligent machines.

At the center of these technologies is GPU computing. And the GPU Technology Conference is where the people behind these technologies and dozens more connect to shape the future.

Share your innovation: Become a featured GTC speaker.

GTC 2017 provides developers and thought leaders with the opportunity to share their work with thousands of the world’s brightest minds. Past speakers have included developers from Audi, ESPN, Facebook, Google, IBM, Toyota and many more companies, along with top researchers from universities worldwide.

Enter your submission now for talks, research posters and instructor-led labs at the May 8-11 event in Silicon Valley.

Startups are also invited to apply to join the daylong Emerging Companies Summit. One highlight: the Early Stage Challenge, where CEOs get four minutes to pitch their company’s GPU innovation to a panel of expert judges for a shot at a $100,000 cash prize, awarded on the spot.

Showcase for GPU Developers

GTC is the world’s most important GPU developer conference. The 2016 event had more than 5,500 attendees, and 600+ sessions on GPU breakthroughs in science, technology and industry.

Accepted submissions showing how GPUs are transforming accelerated computing and graphics will earn an All-Access conference pass. Plus you’ll get the opportunity to connect with experts from NVIDIA, a who’s who of top business and academic organizations, and 250+ global press and analysts.

Lead hands-on learning at one of GTC’s many labs.

Attendees can also participate in developer labs and social events and gain hands-on training on the innovative ways that GPU technologies are changing the world.

Call for Submissions

Submit your ideas for GTC 2017.

To get in on the action, submit your creative and groundbreaking work using GPUs.

See highlights from GTC 2016 here.

The post GTC 2017: Call for Submissions Now Open for World’s Top GPU Developer Event appeared first on The Official NVIDIA Blog.

NVIDIA today took the cloak off Parker, our newest mobile processor that will power the next generation of autonomous vehicles.

Speaking at the Hot Chips conference in Cupertino, California, we revealed the architecture and underlying technology of this highly advanced processor, which is ideally suited for automotive applications like self-driving cars and digital cockpits.

You may recall we mentioned Parker at CES earlier this year, when we introduced the NVIDIA DRIVE PX 2 platform (shown above). That platform uses two Parker processors and two Pascal architecture-based GPUs to power deep learning applications. More than 80 carmakers, tier 1 suppliers and university research centers around the world are now using our DRIVE PX 2 system to develop autonomous vehicles. This includes Volvo, which plans to road test DRIVE PX 2 systems in XC90 SUVs next year.

Forging a Future for Automotive

Parker delivers class-leading performance and energy efficiency, while supporting features important to the automotive market such as deep learning, hardware-level virtualization for tighter design integration, a hardware-based safety engine for reliable fault detection and error processing, and feature-rich IO ports for automotive integration.

Built around NVIDIA’s highest performing and most power-efficient Pascal GPU architecture and the next generation of NVIDIA’s revolutionary Denver CPU architecture, Parker delivers up to 1.5 teraflops(1) of performance for deep learning-based self-driving AI cockpit systems.
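As a back-of-the-envelope check, the 1.5 teraflops figure is consistent with a 256-core Pascal GPU issuing fused multiply-adds with two-wide FP16 packing. The clock speed below is an assumption for illustration; the post states only the throughput and core count.

```python
# Rough peak FP16 throughput estimate for a 256-core Pascal GPU.
cuda_cores = 256
flops_per_fma = 2     # one fused multiply-add counts as two FLOPs
fp16_per_lane = 2     # Pascal packs two FP16 operations per FP32 lane
clock_ghz = 1.465     # assumed GPU clock in GHz -- NOT stated in the post

peak_tflops = cuda_cores * flops_per_fma * fp16_per_lane * clock_ghz / 1000
print(f"peak FP16: {peak_tflops:.2f} TFLOPS")  # ≈ 1.5 TFLOPS
```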

Need for Speed

Parker delivers 50 to 100 percent higher multi-core CPU performance than other mobile processors(2). This is thanks to its CPU architecture consisting of two next-generation 64-bit Denver CPU cores (Denver 2.0) paired with four 64-bit ARM Cortex A57 CPUs. These all work together in a fully coherent heterogeneous multi-processor configuration.

The Denver 2.0 CPU is a seven-way superscalar processor that supports the ARM v8 instruction set and implements an improved dynamic code optimization algorithm, along with additional low-power retention states for better energy efficiency. The two Denver cores and the Cortex A57 CPU complex are interconnected through a proprietary coherent interconnect fabric.

A new 256-core Pascal GPU in Parker delivers the performance needed to run advanced deep learning inference algorithms for self-driving capabilities. And it offers the raw graphics performance and features to power multiple high-resolution displays, such as cockpit instrument displays and in-vehicle infotainment panels.

Scalable Architecture

Working in concert with Pascal-based supercomputers in the cloud, Parker-based self-driving cars can be continually updated with newer algorithms and information to improve self-driving accuracy and safety.

Parker includes hardware-enabled virtualization that supports up to eight virtual machines. Virtualization enables carmakers to use a single Parker-based DRIVE PX 2 system to concurrently host multiple systems, such as in-vehicle infotainment systems, digital instrument clusters and driver assistance systems.

Parker is also a scalable architecture. Automakers can use a single unit for highly efficient systems. Or they can integrate it into more complex designs, such as NVIDIA DRIVE PX 2, which employs two Parker chips along with two discrete Pascal GPUs.

In fact, DRIVE PX 2 delivers an unprecedented 24 trillion deep learning operations per second to run the most complex deep learning-based inference algorithms. Such systems deliver the supercomputer level of performance that self-driving cars need to safely navigate through all kinds of driving environments.

Parker Specifications

To address the needs of the automotive market, Parker includes features such as a dual-CAN (controller area network) interface to connect to the numerous electronic control units in the modern car, and Gigabit Ethernet to transport audio and video streams. Compliance with ISO 26262 is achieved through a number of safety features implemented in hardware, such as a safety engine that includes a dedicated dual-lockstep processor for reliable fault detection and processing.
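For a sense of what travels over those CAN interfaces, here is a minimal sketch of packing a classic CAN 2.0 frame in the Linux SocketCAN wire layout (a 16-byte record: 32-bit identifier, length byte, padding, and up to 8 data bytes). The identifier and payload are made up for illustration; this is not Parker-specific code.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame using the Linux SocketCAN struct layout:
    u32 can_id, u8 length, 3 bytes of padding, 8 data bytes (zero-filled)."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical ECU message: ID 0x123 with a 4-byte payload.
frame = pack_can_frame(0x123, b"\x01\x02\x03\x04")
print(len(frame))  # 16-byte frame
```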

Parker is architected to support both decode and encode of video streams up to 4K resolution at 60 frames per second. This will enable automakers to use higher resolution in-vehicle cameras for accurate object detection, and 4K display panels to enhance in-vehicle entertainment experiences.
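Some quick arithmetic shows why dedicated codec hardware matters at this resolution: even with 4:2:0 chroma subsampling, an uncompressed 4K stream at 60 frames per second approaches 6 Gbit/s. The pixel format below is an assumption for illustration; the post does not specify one.

```python
# Raw (uncompressed) data rate of a 4K60 video stream,
# assuming 8-bit YUV 4:2:0 (12 bits per pixel on average).
width, height, fps = 3840, 2160, 60
bits_per_pixel = 12  # assumed 4:2:0 subsampling, 8-bit samples

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"raw 4K60 4:2:0 stream: {gbps:.2f} Gbit/s")  # ≈ 5.97 Gbit/s
```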

Expect to see more details on Parker’s architecture and capabilities as we accelerate toward making the self-driving car a reality.

  1. References the native FP16 (16-bit floating-point) processing capability of Parker.
  2. Based on SpecINT2K-Rate performance measured on Parker development platform and devices based on competing mobile processors.

The post Get Under the Hood of Parker, Our Newest SOC for Autonomous Vehicles appeared first on The Official NVIDIA Blog.