Deep Learning and You
It’s all the rage today. Everywhere you turn, you’re hearing about deep learning. A topic that seems straight out of a sci-fi movie, deep learning is here and it’s AMAZING! It really is everything you could ever expect it to be… and more.
For those of you who may not be familiar with deep learning, or may have only just scratched the surface of understanding it, I’ll give you a small backgrounder thanks to our friends over at NVIDIA, who have a whole section of their site dedicated to deep learning and deep learning tools. They explain it best (IMHO) by stating, “Deep learning is the fastest growing area of machine learning. [It] uses convolutional neural networks to learn many levels of abstraction. The levels of abstraction range from simple concepts to complex; the more complex require more layers in your network. Each layer categorizes some kind of information, refines it and passes it along to the next. These many layers are what put the ‘deep’ into deep learning. Deep learning enables a machine to use this process to build a hierarchical representation. The first layer might look for simple edges. The next might look for collections of edges that form simple shapes like rectangles, or circles. The third might identify features like eyes and noses. After five or six layers, the neural network can put these features together. The result: a machine that can recognize faces.”
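To make that first step of the hierarchy concrete, here’s a tiny, hand-rolled sketch of what “looking for simple edges” means. Everything below is illustrative only: a real convolutional network learns its kernels from data rather than using a hand-written one, and the `conv2d` helper is our own minimal implementation, not a library API.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding): slide the kernel over the image
    and record how strongly each patch matches it."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical boundary: dark on the left, bright on the right.
image = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# A classic vertical-edge filter -- the kind of pattern detector a first
# convolutional layer often ends up learning on its own.
vertical_edge = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

# The response is large only where the filter overlaps the dark-to-bright
# boundary, and zero over the flat regions on either side.
response = conv2d(image, vertical_edge)
print(response)
```

Deeper layers then repeat this same match-a-pattern operation on the outputs of earlier layers, which is how edges get combined into shapes, and shapes into faces.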
It doesn’t stop there, either. Deep learning is being used across many industries and in many different applications, such as image recognition, speech recognition, natural language processing, neural machine translation, cancer detection, diabetic retinopathy grading, drug discovery, augmented reality, video search and processing, and (one of my all-time favorites) autonomous machines.
One of the many reasons I love working at Cirrascale is the technology we create and deploy daily to companies on the bleeding edge of discovery. In fact, our customer list grows every month to include more and more organizations directly involved with deep learning and artificial intelligence. You may recall the news about New York University working with Cirrascale and NVIDIA to deliver their deep learning computing system, called “ScaLeNet.” It’s an eight-node Cirrascale cluster with 32 top-of-the-line NVIDIA® Tesla® K80 dual-GPU accelerators. Since then, we’ve announced the Cirrascale GB5600, with breakthrough technology enabling us to support up to 16 GPUs on a single root complex, something no one else in the industry has accomplished. Yann LeCun of New York University states, “Multi-GPU machines are a necessary tool for future progress in AI and deep learning. Potential applications include self-driving cars, medical image analysis systems, real-time speech-to-speech translation, and systems that can truly understand natural language and hold dialogs with people.”
How does all this affect you? Well, we are now living in a world where, soon, you’ll walk into your doctor’s office and stand in a machine that will scan your body and spot cancers before doctors would have been able to see them. Early detection is just one of the many things deep learning is being used for. Talking to your appliances, letting your car drive you to work while you read, and having your phone tell you that you’ll be late to work if you “don’t leave now” are all being made possible RIGHT NOW thanks to deep learning.
If you think you’re only just beginning to hear about deep learning, just wait. It will become more and more prevalent over the coming months. NVIDIA and Cirrascale are partnering to give companies the opportunity to advance deep learning into new areas and to utilize the very best multi-GPU solutions in doing so.
If you’d like to learn more about how we may be able to help you with a deep learning solution with NVIDIA GPU accelerators, contact us today or visit us at Supercomputing 2015 in booth #1627. We’ll talk to you there!