

It’s no wonder Dr. Elliot Fishman sounds frustrated when he talks about pancreatic cancer.

As a diagnostic radiologist at Johns Hopkins Hospital, one of the world’s largest centers for pancreatic cancer treatment, he has the grim task of examining pancreatic CT scans for signs of a disease that’s usually too advanced to treat.

Because symptoms seldom show up in the early stages of pancreatic cancer, most patients don’t get CT scans or other tests until the cancer has spread. By then, the odds of survival are low: Just 7 percent of patients live five years after diagnosis, the lowest rate for any cancer.

“Our goal is early detection of pancreatic cancer, and that would save lives,” Fishman said.

Fishman aims to spot pancreatic cancers far sooner than humans alone can by applying GPU-accelerated deep learning to the task. He helps spearhead Johns Hopkins’ Felix project, a multimillion-dollar effort supported by the Lustgarten Foundation to improve doctors’ ability to detect the disease.

This video depicts pancreatic cancer that has invaded the vessels — the branch-like structures at the center of the picture — surrounding the pancreas. That means the disease is too advanced to be treated with surgery. Video courtesy of Dr. Elliot Fishman, Johns Hopkins Hospital.

Deep Learning Aids Hunt for Silent Killer

The pancreas — a six-inch-long organ located behind the stomach — plays an essential role in converting the food we eat into fuel for the body’s cells. Its location deep in the abdomen makes it hard for doctors to feel during routine examinations and makes tumors difficult to detect with imaging tests like CT scans.

Some radiologists, like Fishman, see thousands of cases a year. But others lack the experience to spot the cancer, especially when the lesions — abnormalities in organs and tissue — are at their smallest in the early stages of the disease.

“If people are getting scanned and diagnoses aren’t being made, what can we do differently?” Fishman asked in a recent talk at the GPU Technology Conference in San Jose. “We believe deep learning will work for the pancreas.”

Johns Hopkins is ideally suited to developing a deep learning solution because it has the massive amounts of data on pancreatic cancer needed to teach a computer to detect the disease in a CT scan. Hospital researchers also have our DGX-1 AI supercomputer, an essential tool for deep learning research.

The pancreas, a fish-shaped organ, is pictured here in golden brown, above the kidneys and below the spleen. The dark circle at the center of the image is a tumor. Image courtesy of Dr. Elliot Fishman, Johns Hopkins Hospital.

Detecting Pancreatic Cancer with Greater Accuracy

Working with a team of computer scientists, oncologists, pathologists and other physicians, Fishman is helping train deep learning algorithms to spot minute textural changes in the tissue of the pancreas and nearby organs. These changes are often the first indication of cancer.

The team trained its algorithms on about 2,000 CT scans, including 800 from patients with confirmed pancreatic cancer. It wasn’t easy. Although Johns Hopkins has ample data, the images must be labeled to point out key characteristics that are important in determining the state of the pancreas. At four hours per case, it’s a massive undertaking.

In the first year of the project, the team trained an algorithm to recognize the pancreas and the organs that surround it, achieving a 70 percent accuracy rate. In tests this year, the deep learning model has accurately detected pancreatic cancer about nine times out of 10.
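The article describes this workflow only at a high level, and the Felix team’s code isn’t public, so what follows is a minimal, hypothetical sketch of that kind of pipeline: a small 3D convolutional network trained on labeled CT patches to assign each voxel to background, pancreas or a neighboring organ. Every name, shape and hyperparameter below is an illustrative assumption, not the project’s actual implementation.

```python
# Hypothetical sketch only: a tiny 3D CNN that labels each voxel of a CT patch
# as background, pancreas, or another organ. The Felix project's real models,
# data formats and hyperparameters are not public; everything here is assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 3  # assumed label scheme: 0 = background, 1 = pancreas, 2 = other organ


class TinySegNet3D(nn.Module):
    """A deliberately small encoder-decoder over 3D CT patches."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                               # halve each spatial dim
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv3d(16, num_classes, kernel_size=1),     # per-voxel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, loader, epochs: int = 2, device: str = "cpu"):
    """A plain cross-entropy training loop over labeled CT patches."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for volumes, labels in loader:  # volumes: (B,1,D,H,W), labels: (B,D,H,W)
            volumes, labels = volumes.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(volumes), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")


if __name__ == "__main__":
    # Stand-in for radiologist-labeled CT patches (random data for illustration).
    volumes = torch.randn(8, 1, 32, 32, 32)
    labels = torch.randint(0, NUM_CLASSES, (8, 32, 32, 32))
    loader = DataLoader(TensorDataset(volumes, labels), batch_size=2)
    train(TinySegNet3D(), loader)
```

A production system of the kind the article describes would train on full radiologist-labeled volumes, use a much larger architecture, and add a second stage to classify suspicious regions; this sketch only shows the shape of the per-voxel segmentation step.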

Earlier Diagnosis Possible  

The team is now examining instances where cancer was missed to improve its algorithm. It’s also working to go beyond identifying tumor cells to predict likely survival rates and whether the patient is a candidate for surgery.

Finding an answer is urgent because even though pancreatic cancer is rare, it’s on the rise. Not long ago, it was the fourth-leading cause of cancer deaths in the U.S. Today it’s No. 3. And surgery, the primary treatment for the disease, is an option for fewer than a fifth of patients at the time of diagnosis.

For Fishman, deep learning detection methods could mean earlier diagnosis. He estimates that nearly a third of the cases he sees could have been detected four to 12 months sooner.

“We want to train the computer to be the best radiologist in the world,” Fishman said. “We’re hopeful we can make a difference.”

To learn more about Fishman’s research, watch his GTC talk, The Early Detection of Pancreatic Cancer Using Deep Learning: Preliminary Observations.

Also, here are two of his recent papers:

* Main image for this story pictures a pancreatic cancer cell.

The post Hidden Figures: How AI Could Spot a Silent Cancer in Time to Save Lives appeared first on The Official NVIDIA Blog.

We’re bringing NVIDIA researchers — the brains behind our bots — to the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, from May 21-25. And they want to meet you.

Held annually since 1984, ICRA has become a premier forum for robotics researchers from across the globe to present their work.

The conference is a great opportunity to meet our team, go in-depth with our recent work shaping robotics research and development, and learn how NVIDIA GPUs and AI are powering the biggest advancements in autonomous machines.

From conference talks and poster sessions to two nights of meetups, ICRA will be chock-full of opportunities to connect with some of the sharpest minds in robotics and automation. You can score a deal on an NVIDIA Jetson TX2 Developer Kit, too.

Stop by ICRA stands 7 and 8 to connect with our recruiting team and learn more about careers at NVIDIA.

Get Some Face Time with the Brains Behind Our Bots

Two amazing new members of our robotics team will be at ICRA all week long.

Meet the NVIDIA researchers who are driving the latest robotic innovations.

Claire Delaunay, vice president of engineering, will be hosting evening meetups May 21 and 22 at the iconic Fox Hotel (more on that below). She’ll be joined by Dieter Fox, who heads up our robotics lab.

For more than a decade, Delaunay has led robotics teams at startups, research labs and big companies, including Google, where she was the program lead.

Most recently, she co-founded Otto, which was acquired by Uber, where she served as director of engineering before joining us to develop robotics solutions.

Fox joined NVIDIA to head our robotics research lab in Seattle. The goal of the lab is to develop the next generation of robots that can robustly manipulate the physical world and interact with people naturally.

He also runs the University of Washington Robotics and State Estimation Lab, where his research focuses on robotics with strong connections to AI, computer vision and machine learning.

Join Fox and his colleagues throughout the week at conference talks and poster sessions:

Examples of object detection from image-centric domain randomization, showing the seven detected vertices.

We Want to Meet You at Our Meetups

After the conference on Monday and Tuesday evenings, our Jetson meetups at the iconic Fox Hotel will be the place to be.

Delaunay, Fox and other NVIDIA researchers — along with our developers and partners — will be on hand to connect with you over good drinks and great food.

We’ll have talks from NVIDIA Research and technology demos that you won’t want to miss. It’s a chance for you to listen, learn and connect with industry luminaries and peers in a fun, relaxed setting.

During the meetup, there’ll be special pricing on the Jetson TX2 Developer Kit, at just AUD $599. Space is limited, so register today.

The Jetson TX2 Developer Kit is the best tool for all of your robotics needs.

Follow @NVIDIAEmbedded and #brainsbehindthebots for all of the latest news.

The post Getting Brainy in Brisbane: NVIDIA Talks Robots, Research at ICRA appeared first on The Official NVIDIA Blog.