
3D News

NVIDIA is pleased to announce the first Photo Champion for 3D Vision Live, Nick Saglimbeni. Regular visitors to the site should be well familiar with Nick's images. His Warehouse Wonderland image won the site's first monthly Photo Contest, and he was also the first repeat winner of the Contest two months later with Kim Kardashian's Wild West - one of the site's first 3D celebrity images. Nick is receiving the 2012 3D Vision Live Photo Champion Award as our formal...
Sorry folks for the delay in announcing the winner for May's Photo Contest - we had an issue with the search function and needed to make sure all entries were considered. Without further ado, on to the results! Alex Savin has been submitting some excellent images from his European adventures for some time now, and his "Fontana di Trevi" is a wonderful example of stereo photography that just plain works. The composition is top notch and the image is sharp throughout, which...
James Cameron continues to pioneer 3D technology. With the first Avatar he showed what 3D could add to the film experience. After criticizing the fast conversions from 2D to 3D that many Hollywood studios have released since Avatar, Cameron oversaw a team that turned Titanic into a 3D blockbuster. That film has been a commercial and critical success, showing what a year of meticulous conversion and $18 million can add to a 15-year-old movie. The director talks about Avatar,...
Marvel Entertainment was one of the first major Hollywood companies to commit to 3D movies. Beginning last summer, every movie based on a Marvel comic property was to be either filmed in 3D or converted to 3D for theatrical and home entertainment releases. When this mandate came down, Ari Arad (Iron Man), producer of Ghost Rider: Spirit of Vengeance, turned to NVIDIA to help with the production of the Sony Pictures sequel, which is now out on Blu-ray 3D, Blu-ray and DVD....
People are flocking to the theater to take in Pixar’s latest animated film, Brave, which we recommend seeing in 3D, of course. After seeing the movie you can relive the adventure by picking up the gorgeous Brave: The Video Game for PC. The third-person action/adventure game lets you play the role of Princess Merida—Pixar’s first female lead character—as you follow her adventures in a family-friendly storyline based on the film. Engage in bow-and-arrow and sword combat and...

Recent Blog Entries

You’ve got $500 riding on the big game. And you’re just one field goal from cashing in when a big play threatens to sink your carefully considered bet. Do you scream in frustration? Or do you make a contrarian play and double down?

Thanks to Swish Analytics — and GPUs — you can see the odds of winning change just as fast as the players on the field drive the action.

“We want to be the ultimate second-screen experience for bettors and fantasy players to make smarter in-game wagers, track their teams and follow games,” says Corey Beaumont, Swish’s chief operating officer.

That kind of responsiveness opens up huge possibilities. Swish started by bringing the kinds of analytical tools used in the credit card industry to the $1 trillion sports betting market. Then, they accelerated their predictions with GPUs. The result: insights that move fast enough to appeal to hard-core bettors and casual fans alike.

It’s an opportunity that’s taken the 27-year-old entrepreneur and his co-founders, Joe Hagen and Bobby Skoff, out of the world of finance — where they were part of a team that built and sold a startup, ChargeSmart — and into a global sports culture that’s being upended by the proliferation of data.

“We all dabble in sports betting, fantasy sports and the analytics surrounding games,” says Beaumont, who’s a huge fan of the Golden State Warriors. “So we set out to build Swish for people like us.”

Give These Guys Some Credit

Swish takes the same kind of mathematical models lenders use to assess whether a borrower is a good risk and applies them to sports. While sports geeks love statistics, those stats have always consisted of backward-looking information — how a player has performed in his last few games or seasons — in the form of batting averages or quarterback ratings.

Swish’s pitch: as sports betting — and fantasy sports — flourish, and coverage of sports becomes more real-time focused, there’s an increased need for accurate and reliable predictive data. Every year, fans wager $400 billion on sports in the U.S. and upwards of $1 trillion globally.

Swish Analytics gives subscribers a suite of visually appealing, web-based dashboards and tools.

Swish gives subscribers predictions about the outcome of every game, a win confidence percentage, player and referee analysis, plus hundreds of other statistics through visually appealing dashboards and tools. They charge $99 a month per sport for betting analysis. Daily fantasy tools that help fans build winning teams cost $20 per month per sport.

While Swish doesn’t promise to deliver a winning pick every time, its track record is impressive. For example, Swish says it delivered a 30 percent return on investment to bettors who followed its recommendations for every game of the latest NFL season.

Moving Faster with GPUs

But Swish wanted to move faster. Last November, at the L.A. Dodgers’ Accelerator Demo Day, Swish introduced moment-to-moment predictions that move as fast as the on-field action.

The Swish Analytics team with Earvin “Magic” Johnson during the LA Dodgers’ Accelerator Demo Day.

To create its magic, Swish sucks in data from more than 30 different sources. It then feeds that data to NVIDIA GPUs to help project the win probability for each team, expected points for the current drive, and predictions for three primary bet types — money line, point spread and point totals — along with real-time fantasy updates.
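As a rough illustration of what such a projection can look like, here is a minimal logistic win-probability sketch in Python. Swish’s actual models and features are proprietary; the features and weights below are invented purely for illustration.

```python
import math

def win_probability(score_diff, yards_to_goal, seconds_left):
    """Toy logistic win-probability model for the team with the ball.

    The weights are made up; a real model would be fit to historical
    play-by-play data.
    """
    # A lead matters more as the clock runs down, so scale the score
    # differential by the square root of time remaining, a common trick
    # in in-game win-probability models.
    lead_term = 0.9 * score_diff / math.sqrt(seconds_left + 1)
    # Field position relative to midfield (fewer yards to goal is better).
    field_term = -0.004 * (yards_to_goal - 50)
    z = lead_term + field_term
    return 1.0 / (1.0 + math.exp(-z))

# A team up by 3, at midfield, with two minutes left:
print(f"{win_probability(3, 50, 120):.1%}")
```

After every play, a model like this gets re-evaluated with the new game state, which is what makes the displayed odds move with the on-field action.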

To build that service, Swish turned to NVIDIA GPUs hosted by Amazon Web Services (AWS) to hustle through the numbers after every play. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs work on thousands of computing tasks at once (see “What’s the Difference Between a CPU and a GPU?”).

Second screen: Swish Analytics lets you see the numbers driving the game from your smartphone.

Swish’s developers used NVIDIA’s CUDA parallel computing platform to make calls to Amazon’s GPUs from the algorithms they’d built into traditional Python and R code. “It made a real impact in the number of iterations we can achieve when developing our live NFL analytics tool,” Beaumont says.
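To see why that parallelism matters, consider a Monte Carlo version of the problem: estimate a win probability by simulating the rest of the game thousands of times. Each rollout is independent of the others, exactly the one-thread-per-task structure a CUDA kernel exploits. The sketch below is a toy, CPU-only Python version with an invented scoring model; on a GPU, the rollouts would run simultaneously instead of in a loop.

```python
import random

def simulate_rest_of_game(score_diff, possessions_left, rng):
    """One Monte Carlo rollout of the remaining game (toy scoring model)."""
    diff = score_diff
    for _ in range(possessions_left):
        r = rng.random()
        if r < 0.1:        # our team scores a touchdown
            diff += 7
        elif r < 0.2:      # the opponent scores a touchdown
            diff -= 7
    return diff > 0        # did we end up ahead?

def simulated_win_probability(score_diff, possessions_left,
                              n_sims=20_000, seed=42):
    # Every rollout is independent, so a GPU could assign one thread per
    # simulation and run all n_sims at once; here they run sequentially.
    rng = random.Random(seed)
    wins = sum(simulate_rest_of_game(score_diff, possessions_left, rng)
               for _ in range(n_sims))
    return wins / n_sims

print(simulated_win_probability(3, 10))
```

With CUDA bindings such as Numba or PyCUDA, the inner rollout becomes the kernel body, which is what makes thousands of fresh simulations after every play practical.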

That promises to open up sophisticated analytics to average bettors, even as the most sophisticated sports books — and the biggest bettors — use real-time data to wager on ever smaller slices of the action.

The Next Big Play

More’s coming. Soon, Beaumont says, Swish will be providing predictions on potential play types before they happen. So, for example, Swish could let users see what the outcome could be if a football team chooses to run the ball or pass it, and how those outcomes could change the game.
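A hedged sketch of what such a pre-snap, per-play-type prediction could look like: compare the expected outcome of each play choice under some outcome distribution. The distributions below are invented; a real system would fit them from historical play-by-play data and condition on down, distance and personnel.

```python
# Hypothetical outcome distributions for one game situation, as
# (yards gained, probability) pairs. Purely illustrative numbers.
PLAY_OUTCOMES = {
    "run":  [(-2, 0.15), (3, 0.60), (8, 0.20), (40, 0.05)],
    "pass": [(0, 0.35), (7, 0.35), (15, 0.22), (-45, 0.08)],  # -45 ~ turnover
}

def expected_yards(play_type):
    """Expected yards for a play type under the toy distribution."""
    return sum(yards * prob for yards, prob in PLAY_OUTCOMES[play_type])

def better_play():
    """Which play choice has the higher expected outcome?"""
    return max(PLAY_OUTCOMES, key=expected_yards)

for play in PLAY_OUTCOMES:
    print(f"{play}: {expected_yards(play):+.2f} expected yards")
print("suggested:", better_play())
```

Re-running a comparison like this as the situation changes is what would let a viewer see, in an instant, how a run or a pass could swing the game.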

Those kinds of insights could give Swish appeal to even casual sports fans. Imagine real-time predictions in sportscasts that give viewers the ability to know the significance of a change in field position in an instant. Or apps that let fans — like Beaumont’s mother — check on the outlook for tonight’s game on their smartphones.

“It will be like checking the weather forecast,” Beaumont says. “Only more accurate.”

Sounds like a smart bet to us.

Featured photo: Daniel X. O’Neill

The post Take Your Fantasy Football Pals to the Cleaners with GPU Computing appeared first on The Official NVIDIA Blog.

A system of automated electric vehicles, known as WEpods, just made history by becoming the first self-driving shuttles to take to public roads. They’re the first vehicles in the world without a steering wheel to be given license plates.

Unlike other forms of automated transport, these cheery little six-passenger vehicles don’t travel in special lanes, and they’re not guided by rails, magnets or wires.

Instead, they’re steered through traffic between the towns of Wageningen and Ede, in the central Dutch province of Gelderland, by a complex set of systems, including several NVIDIA-powered brains.

To summon a WEpod, passengers just tap on an app on their smartphone.

Hitting the Road with Deep Learning

The story behind this first: a new kind of technology — called deep learning — that lets computers teach themselves about the world through a training process, an approach now widely adopted for vision-based systems.

The WEpods are steered with the help of several NVIDIA-powered brains through the Dutch province of Gelderland.

Deep learning has already given computers the ability to surpass human capabilities on a number of tasks. And it’s critical for autonomous vehicles, where it’s just not possible to hand-code for every possible situation a self-driving car might encounter, especially when it comes to interpreting the objects surrounding the vehicle.
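In miniature, “teaching itself” looks like this: instead of hand-writing a rule, you fit parameters to labeled examples. The sketch below trains a single logistic neuron, the smallest building block of a deep network, on a synthetic “obstacle vs. clear” toy dataset; everything here (features, labels, learning rate) is invented for illustration.

```python
import math

# Synthetic training set: two crude "sensor" features -> label
# (1 = obstacle ahead, 0 = clear). Entirely made up.
DATA = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.3), 0), ((0.2, 0.4), 0),
        ((0.8, 0.9), 1), ((0.9, 0.7), 1), ((0.7, 0.8), 1), ((0.9, 0.9), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One logistic neuron trained by stochastic gradient descent: no rule is
# hand-coded; the weights are learned from the examples.
w1 = w2 = bias = 0.0
learning_rate = 1.0
for _ in range(500):
    for (x1, x2), label in DATA:
        prediction = sigmoid(w1 * x1 + w2 * x2 + bias)
        error = prediction - label       # gradient of the logistic loss
        w1 -= learning_rate * error * x1
        w2 -= learning_rate * error * x2
        bias -= learning_rate * error

def sees_obstacle(x1, x2):
    return sigmoid(w1 * x1 + w2 * x2 + bias) > 0.5
```

A real perception stack wires millions of such units into deep convolutional networks and trains them on GPUs, but the principle is the same: learn the mapping from data rather than hand-code it.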

No wonder, then, that the WEpod team at the Delft University of Technology — along with auto manufacturers such as Audi, BMW, Ford and Mercedes — has turned to deep learning on NVIDIA GPUs.

Data Driven

The result is a vehicle that’s able to build a complete picture of the environment around it as it travels through traffic.

Each WEpod continuously assesses its environment and options at high rates, resulting in a dynamic system able to deal with real-world situations of mixed traffic quickly, reliably and safely.

“This is a massive computing challenge,” said Dimitrios Kotiadis, senior researcher from TU Delft.

A GPU-Powered Supercomputer on Wheels

GPUs have been key in meeting this challenge. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs are built to work on thousands of computing tasks at once.


This parallel architecture — coupled with our software tools — makes GPUs ideal for many kinds of deep learning tasks (see “Accelerating AI with GPUs: A New Computing Model”). And it was key to accelerating the training and deployment of the WEpods’ autonomous driving system.

“NVIDIA technology plays a crucial role in enabling us to meet our computational requirements,” Kotiadis said. “Each WEpod is in many ways a supercomputer on wheels.”

Summoned by a Smartphone

The result is a new kind of public transport concept that offers the convenience of a personal vehicle, without the hassles of car ownership.

Although the vehicles are running on a fixed route for now, the WEpod team hopes other cities will adopt WEpod technology once the trials are complete. The system will start operations in May.

“Autonomous, on-demand transit systems like WEpod have the potential to revolutionize our cities,” said WEpod Project Manager Jan Willem van der Wiel.

We’re glad to be along for the ride.

The post WEpod Becomes First Driverless Car to Play in Traffic appeared first on The Official NVIDIA Blog.