
3D News

If you have a penchant for superhero-themed anything and for playing games in 3D, the Batman: Arkham series has been a match made in heaven. Simply put, when it comes to 3D Vision titles it just doesn’t get much better, and it’s hard to see how it could. We’re happy to report that Batman: Arkham Origins, which releases today, continues this tradition. Out of the box, Origins is rated 3D Vision Ready, so you know it’s going to look spectacular. We’ve played it quite a bit...
Contest closed - stay tuned to 3DVisionlive.com for details about upcoming contests. 3DVisionLive.com is excited to unveil the latest in a series of photo contests aimed at giving you a platform to show off your images and potentially win some cool prizes. Like our most recent Spring Contest, this one will span three months - October, November, and December - and is themed: Your image must be something that captures or shows the essence of "nature" and what...
With sincere apologies for the delay, NVIDIA is pleased to announce the results of the Spring Photo Contest. We received more than 80 submissions from 3DVisionLive members and, for the first time, invited the membership to select the winner. The only criterion for the contest was that the photos had to represent the meaning of Spring in some fashion and be an original image created by the member who submitted it. All submitted photos were put in a gallery and ample time was...
For the third year in a row, NVIDIA worked with the National Stereoscopic Association to sponsor a 3D digital image competition called the Digital Image Showcase, which is shown at the NSA convention - held this past June in Michigan. This year, the 3D Digital Image Showcase competition consisted of 294 images, submitted by 50 different makers. Entrants spanned the range from casual snapshooters to both commercial and fine art photographers. The competition was judged by...
VOTING IS NOW CLOSED - Thanks to all who participated. Results coming soon! The submission period for the Spring Photo Contest is now closed, and we are happy to report we’ve received 80 images from our members for consideration. And, for the first time, we’re opening the judging process to our community as well to help us determine the winners. So, between now and the end of June (11:59 p.m. PST, June 30th), please view all of the images in the gallery and place...

Recent Blog Entries

You’ve got $500 riding on the big game. And you’re just one field goal from cashing in when a big play threatens to sink your carefully considered bet. Do you scream in frustration? Or do you make a contrarian play and double down?

Thanks to Swish Analytics — and GPUs — you can see the odds of winning change just as fast as the players on the field drive the action.

“We want to be the ultimate second-screen experience for bettors and fantasy players to make smarter in-game wagers, track their teams and follow games,” says Corey Beaumont, Swish’s chief operating officer.

That kind of responsiveness opens up huge possibilities. Swish started by bringing the kinds of analytical tools used in the credit card industry to the $1 trillion sports betting market. Then, they accelerated their predictions with GPUs. The result: insights that move fast enough to appeal to hard-core bettors and casual fans alike.

It’s an opportunity that’s taken the 27-year-old entrepreneur and his co-founders, Joe Hagen and Bobby Skoff, out of the world of finance — where they were part of a team that built and sold a startup, ChargeSmart — and into a global sports culture that’s being upended by the proliferation of data.

“We all dabble in sports betting, fantasy sports and the analytics surrounding games,” says Beaumont, who’s a huge fan of the Golden State Warriors. “So we set out to build Swish for people like us.”

Give These Guys Some Credit

Swish takes the same kind of mathematical models lenders use to assess whether a borrower is a good risk and applies them to sports. While sports geeks love statistics, those stats have always consisted of backward-looking information — how a player has performed in his last few games or seasons — in the form of batting averages or quarterback ratings.

Swish’s pitch: as sports betting — and fantasy sports — flourish, and sports coverage becomes increasingly real-time, there’s a growing need for accurate and reliable predictive data. Every year, fans wager $400 billion on sports in the U.S. and upwards of $1 trillion globally.

Swish Analytics gives subscribers a suite of visually appealing, web-based dashboards and tools.

Swish gives subscribers predictions about the outcome of every game, a win confidence percentage, player and referee analysis, plus hundreds of other statistics through visually appealing dashboards and tools. They charge $99 a month per sport for betting analysis. Daily fantasy tools that help fans build winning teams cost $20 per month per sport.

While Swish doesn’t promise to deliver a winning pick every time, their track record is impressive. For example, Swish says it delivered a 30 percent return on investment to bettors who followed its recommendations for every game of the latest NFL season. 

Moving Faster with GPUs

But Swish wanted to move faster. Last November, at the L.A. Dodgers’ Accelerator Demo Day, Swish introduced moment-to-moment predictions that move as fast as the on-field action.

The Swish Analytics team with Earvin “Magic” Johnson during the LA Dodgers’ Accelerator Demo Day.

To create its magic, Swish sucks in data from more than 30 different sources. It then feeds that data to NVIDIA GPUs to help project the win probability for each team, expected points for the current drive, and predictions for three primary bet types — money line, point spread and point totals — along with real-time fantasy updates.

To build that service, Swish turned to NVIDIA GPUs hosted by Amazon Web Services (AWS) to hustle through the numbers after every play. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs work on thousands of computing tasks at once (see “What’s the Difference Between a CPU and a GPU?”).
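The article doesn’t show Swish’s code, but the shape of the workload is easy to sketch: the same independent calculation repeated across many game states, which is exactly the pattern a GPU parallelizes. Here is a pure-Python toy (the logistic model and its coefficients are made up for illustration, not Swish’s):

```python
import math

def win_probability(point_margin: float, minutes_left: float) -> float:
    # Toy logistic model with made-up coefficients: a bigger lead with
    # less time remaining means a higher win probability.
    z = 0.35 * point_margin / math.sqrt(minutes_left + 1)
    return 1 / (1 + math.exp(-z))

# One independent evaluation per game state. A CPU steps through this loop
# a few states at a time; a GPU evaluates thousands of states simultaneously.
states = [(margin, 5.0) for margin in range(-20, 21)]
probabilities = [win_probability(m, t) for m, t in states]
```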

Second screen: Swish Analytics lets you see the numbers driving the game from your smartphone.

Swish’s developers used NVIDIA’s CUDA parallel computing platform to make calls to Amazon’s GPUs from the algorithms they’d built into traditional Python and R code. “It made a real impact in the number of iterations we can achieve when developing our live NFL analytics tool,” Beaumont says.

That promises to open up sophisticated analytics to average bettors, just as the most advanced sports books — and the biggest bettors — are using real-time data to wager on ever smaller slices of the action.

The Next Big Play

More’s coming. Soon, Beaumont says, Swish will be providing predictions on potential play types before they happen. So, for example, Swish could let users see what the outcome could be if a football team chooses to run the ball or pass it, and how those outcomes could change the game.
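The article doesn’t specify how such play-type predictions would be computed. A minimal sketch of the underlying idea, comparing the expected value of a run versus a pass using entirely hypothetical outcome distributions, might look like this:

```python
# Hypothetical outcome distributions for one down: (yards gained, probability).
# A real model would estimate these from historical play-by-play data.
run_outcomes  = [(-2, 0.15), (3, 0.55), (8, 0.25), (20, 0.05)]
pass_outcomes = [(0, 0.40), (7, 0.35), (15, 0.20), (40, 0.05)]

def expected_yards(outcomes):
    # Probability-weighted average of the possible gains.
    return sum(yards * p for yards, p in outcomes)

print(f"run:  {expected_yards(run_outcomes):.2f} expected yards")
print(f"pass: {expected_yards(pass_outcomes):.2f} expected yards")
```

A full model would also weigh turnover risk and clock management, not just yardage.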

Those kinds of insights could give Swish appeal to even casual sports fans. Imagine real-time predictions in sportscasts that give viewers the ability to know the significance of a change in field position in an instant. Or apps that let fans — like Beaumont’s mother — check on the outlook for tonight’s game on their smartphones.

“It will be like checking the weather forecast,” Beaumont says. “Only more accurate.”

Sounds like a smart bet to us.

Featured photo: Daniel X. O’Neill

The post Take Your Fantasy Football Pals to the Cleaners with GPU Computing appeared first on The Official NVIDIA Blog.

A system of automated electric vehicles, known as WEpods, just made history by becoming the first self-driving shuttles to take to public roads. They’re the first vehicles in the world without a steering wheel to be given license plates.

Unlike other forms of automated transport, these cheery little six-passenger vehicles don’t travel in special lanes, and they’re not guided by rails, magnets or wires.

Instead, they’re steered through traffic by a complex set of systems, including several NVIDIA-powered brains, between the towns of Wageningen and Ede in the central Dutch province of Gelderland.

To summon a WEpod, passengers just tap on an app on their smartphone.

Hitting the Road with Deep Learning

The story behind this first: a new kind of technology — called deep learning — that lets computers teach themselves about the world through a training process, an approach now widely adopted for vision-based systems.

The WEpods are steered with the help of several NVIDIA-powered brains through the Dutch province of Gelderland.

Deep learning has already given computers the ability to surpass human capabilities on a number of tasks. And it’s critical for autonomous vehicles, where it’s just not possible to hand-code for every possible situation a self-driving car might encounter, especially when it comes to interpreting the objects surrounding the vehicle.
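The WEpod’s actual networks aren’t described in the article. As a single-neuron toy that illustrates the principle — the braking rule below is learned from labeled examples rather than written by hand — consider this sketch (the scenario and thresholds are invented for illustration):

```python
import math

# Labeled examples: (obstacle distance in metres, should brake?).
# The "rule" (brake when closer than 5 m) is implicit in the labels only.
examples = [(d, 1 if d < 5 else 0) for d in range(1, 11)]

w, b = 0.0, 0.0
lr = 0.1
for _ in range(3000):  # stochastic gradient descent on logistic loss
    for d, label in examples:
        p = 1 / (1 + math.exp(-(w * d + b)))
        w += lr * (label - p) * d
        b += lr * (label - p)

def predict(d):
    # The learned decision rule: nobody hand-coded the 5 m threshold.
    return 1 if 1 / (1 + math.exp(-(w * d + b))) > 0.5 else 0

print([predict(d) for d in range(1, 11)])
```

A real perception stack trains deep networks on vast sensor datasets — the point here is only that the behavior comes from data, not hand-written rules.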

No wonder, then, that the WEpod team at the Delft University of Technology — along with auto manufacturers such as Audi, BMW, Ford and Mercedes — has turned to deep learning on NVIDIA GPUs.

Data Driven

The result is a vehicle that’s able to build a complete picture of the environment around it as it travels through traffic.

Each WEpod continuously assesses its environment and options at high rates, resulting in a dynamic system able to deal with real-world situations of mixed traffic quickly, reliably and safely.

“This is a massive computing challenge,” said Dimitrios Kotiadis, senior researcher from TU Delft.

A GPU-Powered Supercomputer on Wheels

GPUs have been key in meeting this challenge. Unlike CPUs, which sprint through a handful of computing tasks at a time, GPUs are built to work on thousands of computing tasks at once.

Their parallel architecture makes our GPUs ideal for many kinds of deep learning tasks.

This parallel architecture — coupled with our software tools — makes GPUs ideal for many kinds of deep learning tasks (see “Accelerating AI with GPUs: A New Computing Model”). And it was key to accelerating the training and deployment of the WEpods’ autonomous driving systems.

“NVIDIA technology plays a crucial role in enabling us to meet our computational requirements,” Kotiadis said. “Each WEpod is in many ways a supercomputer on wheels.”

Summoned by a Smartphone

The result is a new kind of public transport concept that offers the convenience of a personal vehicle, without the hassles of car ownership.

Although the vehicles are running on a fixed route for now, the WEpod team hopes other cities will adopt WEpod technology once the trials are complete. The system will start operations in May.

“Autonomous, on-demand transit systems like WEpod have the potential to revolutionize our cities,” said WEpod Project Manager Jan Willem van der Wiel.

We’re glad to be along for the ride.

The post WEpod Becomes First Driverless Car to Play in Traffic appeared first on The Official NVIDIA Blog.