
3D News

For the last few years we’ve worked with the National Stereoscopic Association to support the 3D Digital Showcase photo competition featured at the NSA’s annual conventions. The images from this past year’s showcase are now live for everyone to view. We really enjoy the diversity of images submitted by 3D artists and enthusiasts to this event, and this gallery is certainly no different. You’ll see everything from close-ups of insects to people juggling fire. Simply put,...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in driver 344.11, increasing game support and adding some new interface elements. You can get the new driver at www.geforce.com/drivers or via the update option in GeForce Experience. With the release of 344.11, new 3D...
We’re fortunate enough to have another fine 3D video from New Media Film Festival to share with you here on 3DVisionLive—a pop music video from Italy called “The Way,” which you can view here. Even better, New Media Film Festival has provided an interview with one of the co-directors of the video, Edoardo Ballanti, which provides insights on how the video was created and the vision behind it. Enjoy! (Alice Corsi also co-directed the video.) What was the Inspiration behind “...
The Fall Photo Contest received nearly 100 images – thanks to all who entered! The contest called for your best “nature” shots, with the only other requirement being that they had to be true stereo images. Submissions ranged from shots of spiders in gardens to artistic approaches to tasteful nudes. As before, members were invited to vote for the winner by tagging images in the contest gallery as favorites. Without further ado, the winner is: Autumn Goodbye to Summer This...
In driver 334.89 NVIDIA introduced a new proprietary rendering mode for 3D Vision that enables us to improve the 3D experience for many key DirectX 10 and 11 games. This mode is now called “3D Compatibility Mode”. We have continued to iterate on this feature in beta driver 337, increasing game support and adding a toggle key to enable/disable the mode. Games with 3D Compatibility Mode will launch in this mode by default. To change the render mode back to standard 3D Vision...

Recent Blog Entries

You’ve got $500 riding on the big game. And you’re just one field goal from cashing in when a big play threatens to sink your carefully considered bet. Do you scream in frustration? Or do you make a contrarian play and double down?

Thanks to Swish Analytics — and GPUs — you can see the odds of winning change just as fast as the players on the field drive the action.

“We want to be the ultimate second-screen experience for bettors and fantasy players to make smarter in-game wagers, track their teams and follow games,” says Corey Beaumont, Swish’s chief operating officer.

That kind of responsiveness opens up huge possibilities. Swish started by bringing the kinds of analytical tools used in the credit card industry to the $1 trillion sports betting market. Then, they accelerated their predictions with GPUs. The result: insights that move fast enough to appeal to hard-core bettors and casual fans alike.

It’s an opportunity that’s taken the 27-year-old entrepreneur and his co-founders, Joe Hagen and Bobby Skoff, out of the world of finance — where they were part of a team that built and sold a startup, ChargeSmart — and into a global sports culture that’s being upended by the proliferation of data.

“We all dabble in sports betting, fantasy sports and the analytics surrounding games,” says Beaumont, who’s a huge fan of the Golden State Warriors. “So we set out to build Swish for people like us.”

Give These Guys Some Credit

Swish takes the same kind of mathematical models lenders use to assess whether a borrower is a good risk and applies them to sports. While sports geeks love statistics, those stats have always consisted of backward-looking information — how a player has performed in his last few games or seasons — in the form of batting averages or quarterback ratings.

Swish’s pitch: as sports betting and fantasy sports flourish, and sports coverage becomes more focused on real time, there’s a growing need for accurate, reliable predictive data. Every year, fans wager $400 billion on sports in the U.S. and upwards of $1 trillion globally.

Swish Analytics gives subscribers a suite of visually appealing, web-based dashboards and tools.

Swish gives subscribers predictions about the outcome of every game, a win confidence percentage, player and referee analysis, plus hundreds of other statistics through visually appealing dashboards and tools. The company charges $99 a month per sport for betting analysis. Daily fantasy tools that help fans build winning teams cost $20 a month per sport.

While Swish doesn’t promise to deliver a winning pick every time, its track record is impressive. For example, Swish says it delivered a 30 percent return on investment to bettors who followed its recommendations for every game of the latest NFL season.

Moving Faster with GPUs

But Swish wanted to move faster. Last November, at the L.A. Dodgers’ Accelerator Demo Day, Swish introduced moment-to-moment predictions that move as fast as the on-field action.

The Swish Analytics team with Earvin “Magic” Johnson during the L.A. Dodgers’ Accelerator Demo Day.

To create its magic, Swish sucks in data from more than 30 different sources. It then feeds that data to NVIDIA GPUs to help project the win probability for each team, expected points for the current drive, and predictions for three primary bet types — money line, point spread and point totals — along with real-time fantasy updates.
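Swish hasn’t published its models, but the shape of the computation is familiar: after every play, encode the game state as a feature vector and push a batch of them through a trained model. Here’s a minimal sketch of that idea in Python with CuPy, assuming a toy logistic win-probability model; the features and weights below are invented for illustration, not Swish’s:

```python
import cupy as cp  # GPU-backed, NumPy-compatible arrays

# Hypothetical game-state features: [score differential, yards to the
# end zone, seconds remaining, down]. Weights are placeholders.
weights = cp.asarray([0.11, -0.02, 0.0004, -0.15], dtype=cp.float32)
bias = 0.03

def win_probability(states):
    """Logistic model: P(home team wins) for a batch of game states."""
    z = states @ weights + bias
    return 1.0 / (1.0 + cp.exp(-z))

# Score two game states at once; in practice the batch could be
# thousands of simulated scenarios, one per GPU lane.
states = cp.asarray([
    [3.0, 25.0, 420.0, 2.0],   # up 3, 25 yards out, 7:00 left, 2nd down
    [-7.0, 60.0, 95.0, 3.0],   # down 7, 60 yards out, 1:35 left, 3rd down
], dtype=cp.float32)
print(win_probability(states))
```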

To build that service, Swish turned to NVIDIA GPUs hosted by Amazon Web Services (AWS) to hustle through the numbers after every play. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs work on thousands of computing tasks at once (see “What’s the Difference Between a CPU and a GPU?”).

Second screen: Swish Analytics lets you see the numbers driving the game from your smartphone.

Swish’s developers used NVIDIA’s CUDA parallel computing platform to make calls to Amazon’s GPUs from the algorithms they’d built into traditional Python and R code. “It made a real impact in the number of iterations we can achieve when developing our live NFL analytics tool,” Beaumont says.
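The post doesn’t show what those calls look like, but Numba is one common way to launch CUDA kernels straight from otherwise ordinary Python. A hedged sketch, with a placeholder per-scenario update rule standing in for Swish’s proprietary algorithm:

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def update_probs(priors, evidence, out):
    """Each GPU thread recomputes the probability for one scenario."""
    i = cuda.grid(1)
    if i < priors.size:
        # Toy logistic update, not Swish's actual model.
        out[i] = 1.0 / (1.0 + math.exp(-(priors[i] + evidence[i])))

n = 100_000  # e.g. simulated game scenarios to rescore after a play
priors = cuda.to_device(np.random.randn(n).astype(np.float32))
evidence = cuda.to_device(np.random.randn(n).astype(np.float32))
out = cuda.device_array(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
update_probs[blocks, threads_per_block](priors, evidence, out)
print(out.copy_to_host()[:5])
```

Because each scenario is independent, the whole batch maps cleanly onto the GPU, which is what makes play-by-play rescoring feasible between snaps.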

That kind of speed promises to put sophisticated analytics within reach of average bettors, even as the biggest sports books and the biggest bettors use real-time data to wager on ever smaller slices of the action.

The Next Big Play

More’s coming. Soon, Beaumont says, Swish will be providing predictions on potential play types before they happen. So, for example, Swish could let users see what the outcome could be if a football team chooses to run the ball or pass it, and how those outcomes could change the game.
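The post doesn’t say how those what-if numbers will be produced, but one natural approach is Monte Carlo simulation: sample many outcomes for each candidate play type and compare the distributions. A toy sketch, with yardage distributions invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_yards(play_type, n=100_000):
    """Sample yards gained for a candidate play type (made-up distributions)."""
    if play_type == "run":
        return rng.normal(4.2, 3.0, n)   # modest gains, low variance
    return rng.normal(6.5, 9.0, n)       # passes: bigger upside, bigger risk

for play in ("run", "pass"):
    yards = simulate_yards(play)
    converted = (yards >= 5).mean()      # say it's 3rd-and-5
    print(f"{play}: mean {yards.mean():.1f} yd, "
          f"converts 3rd-and-5 {converted:.0%} of the time")
```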

Those kinds of insights could give Swish appeal to even casual sports fans. Imagine real-time predictions in sportscasts that give viewers the ability to know the significance of a change in field position in an instant. Or apps that let fans — like Beaumont’s mother — check on the outlook for tonight’s game on their smartphones.

“It will be like checking the weather forecast,” Beaumont says. “Only more accurate.”

Sounds like a smart bet to us.

Featured photo: Daniel X. O’Neill

The post Take Your Fantasy Football Pals to the Cleaners with GPU Computing appeared first on The Official NVIDIA Blog.

A system of automated electric vehicles, known as WEpods, just made history by becoming the first self-driving shuttles to take to public roads. They’re the first vehicles in the world without a steering wheel to be given license plates.

Unlike other forms of automated transport, these cheery little six-passenger vehicles don’t travel in special lanes, and they’re not guided by rails, magnets or wires.

Instead, they’re steered through traffic by a complex set of systems, including several NVIDIA-powered brains, between the towns of Wageningen and Ede in the central Dutch province of Gelderland.

To summon a WEpod, passengers just tap on an app on their smartphone.

Hitting the Road with Deep Learning

The story behind this first: a new kind of technology, called deep learning, that lets computers teach themselves about the world through a training process. The approach is now widely adopted for vision-based systems.

The WEpods are steered with the help of several NVIDIA-powered brains through the Dutch province of Gelderland.

Deep learning has already given computers the ability to surpass human capabilities on a number of tasks. And it’s critical for autonomous vehicles, where it’s just not possible to hand-code for every situation a self-driving car might encounter, especially when it comes to interpreting the objects surrounding the vehicle.

No wonder, then, that the WEpod team at the Delft University of Technology, along with auto manufacturers such as Audi, BMW, Ford and Mercedes, has turned to deep learning on NVIDIA GPUs.

Data Driven

The result is a vehicle that’s able to build a complete picture of the environment around it as it travels through traffic.

Each WEpod continuously assesses its environment and options at high rates, resulting in a dynamic system able to deal with real-world situations of mixed traffic quickly, reliably and safely.

“This is a massive computing challenge,” said Dimitrios Kotiadis, senior researcher from TU Delft.

A GPU-Powered Supercomputer on Wheels

GPUs have been key in meeting this challenge. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs are built to work on thousands of computing tasks at once.


This parallel architecture, coupled with our software tools, makes GPUs ideal for many kinds of deep learning tasks (see “Accelerating AI with GPUs: A New Computing Model”). And it was key to accelerating the training and deployment of the WEpods’ autonomous driving systems.

“NVIDIA technology plays a crucial role in enabling us to meet our computational requirements,” Kotiadis said. “Each WEpod is in many ways a supercomputer on wheels.”
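Neither the post nor the team names the software stack behind those on-board brains. For a sense of what one GPU-accelerated training step looks like, here’s a minimal PyTorch sketch; the tiny network, the four object classes and the random “camera frames” are stand-ins, not WEpod’s actual perception system:

```python
import torch
import torch.nn as nn

# Tiny stand-in for a perception network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 4),   # four hypothetical object classes
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)  # one line moves the model's math onto the GPU
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A dummy batch of 64x64 "camera frames" and labels to exercise the loop.
images = torch.randn(32, 3, 64, 64, device=device)
labels = torch.randint(0, 4, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()   # gradients for every weight, computed in parallel
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```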

Summoned by a Smartphone

The result is a new kind of public transport concept that offers the convenience of a personal vehicle, without the hassles of car ownership.

Although the vehicles are running on a fixed route for now, the WEpod team hopes other cities will adopt WEpod technology once the trials are complete. The system will start operations in May.

“Autonomous, on-demand transit systems like WEpod have the potential to revolutionize our cities,” said WEpod Project Manager Jan Willem van der Wiel.

We’re glad to be along for the ride.

The post WEpod Becomes First Driverless Car to Play in Traffic appeared first on The Official NVIDIA Blog.