Deep learning is an exciting subfield at the cutting edge of machine learning and artificial intelligence. It has led to major breakthroughs in exciting areas such as computer vision, audio processing, and even self-driving cars. In this guide, we’ll review the essential stack of Python deep learning libraries.

These packages support a variety of deep learning architectures such as feed-forward networks, auto-encoders, recurrent neural networks (RNNs), and convolutional neural networks (CNNs).

Why only 5 libraries?

We write every guide with the practitioner in mind, and we don’t want to flood you with options. There are over a dozen deep learning libraries in Python, but you only ever need a couple. This is an opinionated guide that features the 5 Python deep learning libraries we’ve found to be the most useful and popular.

Do I need to learn every library below?

No, in fact, you typically only need to learn 1 or 2 to be able to do what you want. Here’s a summary:

  • Theano is a low-level library that specializes in efficient computation. You’ll only use this directly if you need fine-grained customization and flexibility.
  • TensorFlow is another low-level library that is less mature than Theano. However, it’s supported by Google and offers out-of-the-box distributed computing.
  • Lasagne is a lightweight wrapper for Theano. Use this if you need the flexibility of Theano but don’t want to always write neural network layers from scratch.
  • Keras is a heavyweight wrapper for both Theano and TensorFlow. It’s minimalistic, modular, and awesome for rapid experimentation. This is our favorite Python library for deep learning and the best place to start for beginners.
  • MXNet is another high-level library similar to Keras. It offers bindings for multiple languages and support for distributed computing.

Why the nicknames?

Because each of these libraries has its own personality, as you’ll see below. So without further ado…

The Mentor: Theano

We grant Theano “The Mentor” honor because it paved the way for the other deep learning libraries we know and love.

For example, Lasagne and Keras are both built on Theano.

At its core, Theano is a library for doing math using multi-dimensional arrays. It’s fast, and because it can run computations on a GPU, the Theano team reports speedups of up to 140x over running on a CPU alone.

In other words, it serves as the building blocks for neural networks. (Likewise, NumPy serves as the building blocks for scientific computing.)

From our experience, you’ll rarely be writing Theano code directly. You’ll usually be using a higher level wrapper unless you need low-level customization.

For example, here’s how you would write a logistic activation function in Theano:
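A minimal sketch, adapted from the canonical example in Theano’s own tutorial:

    import theano
    import theano.tensor as T

    # Declare a symbolic matrix input
    x = T.dmatrix('x')

    # Build the logistic (sigmoid) expression elementwise: 1 / (1 + e^-x)
    s = 1 / (1 + T.exp(-x))

    # Compile the symbolic expression into a callable function
    logistic = theano.function([x], s)

    print(logistic([[0, 1], [-1, -2]]))
    # [[ 0.5         0.73105858]
    #  [ 0.26894142  0.11920292]]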

The wrappers we’ll cover later do that for you under the hood with a single function parameter.

The Kid: TensorFlow

TensorFlow is “the new kid on the block,” and it’s getting a lot of buzz.

Google’s own AI team developed TensorFlow, and they recently made it open source.

TensorFlow allows efficient numerical computation using data flow graphs. It’s often described as a “Theano 2.0”: a from-scratch design that has learned from Theano’s lessons. Its support from a powerhouse like Google is also promising.
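To give a feel for the data-flow-graph model, here’s a minimal sketch using the graph-and-session API from TensorFlow’s early releases:

    import tensorflow as tf

    # Nodes in the graph are operations; edges carry tensors
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    c = a * b

    # Nothing is computed until the graph runs inside a session
    with tf.Session() as sess:
        print(sess.run(c, feed_dict={a: 3.0, b: 4.0}))  # 12.0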

Even so, Theano is still faster than TensorFlow on many benchmarks, and it supports a wider range of operations.

However, one benefit of TensorFlow is that it supports distributed computing out-of-the-box. This makes training deep networks on multiple GPUs much easier.
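For example, early TensorFlow lets you pin parts of the graph to specific devices with tf.device (a minimal sketch, assuming a machine with at least one GPU):

    import tensorflow as tf

    # Place this part of the graph on the first GPU
    with tf.device('/gpu:0'):
        a = tf.constant([1.0, 2.0])
        b = tf.constant([3.0, 4.0])
        c = a * b

    with tf.Session() as sess:
        print(sess.run(c))  # [ 3.  8.]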

The Augment: Lasagne

Lasagne is a lightweight wrapper for Theano. It allows you to build and train neural networks using Theano’s optimized computing.

And we do mean lightweight: in Lasagne, you’ll still need to work at a fairly low level and declare each network layer yourself. It simply provides modular building blocks on top of Theano.

The end result is that your code will be verbose… but you can at least program with NN structures instead of multi-dimensional arrays.
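For example, here’s a minimal sketch of declaring a small classifier layer by layer (the layer sizes are illustrative):

    import lasagne
    from lasagne.layers import InputLayer, DenseLayer

    # Each layer is declared explicitly and wired to the previous one
    l_in = InputLayer(shape=(None, 784))
    l_hidden = DenseLayer(l_in, num_units=64,
                          nonlinearity=lasagne.nonlinearities.rectify)
    l_out = DenseLayer(l_hidden, num_units=10,
                       nonlinearity=lasagne.nonlinearities.softmax)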

Treat Lasagne as the compromise between Theano’s flexibility and Keras’s simplicity.

The Cyborg: Keras

Among all the Python deep learning libraries, Keras is our favorite. We love it for 3 reasons:

First, Keras is a wrapper that allows you to use either the Theano or the TensorFlow backend! That means you can easily switch between the two, depending on your application.

Second, it has beautiful guiding principles: modularity, minimalism, extensibility, and Python-nativeness. In practice, this makes working in Keras simple and enjoyable.

Finally, Keras has out-of-the-box implementations of common network structures. It’s fast and easy to get a convolutional neural network up and running.

Here’s an example of a super-quick sequential model:
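A minimal sketch (the layer sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense, Activation

    # Layers are simply stacked one after another
    model = Sequential()
    model.add(Dense(64, input_dim=100))
    model.add(Activation('relu'))
    model.add(Dense(10))
    model.add(Activation('softmax'))

    # One call wires up the loss and optimizer; the model is ready to train
    model.compile(loss='categorical_crossentropy', optimizer='sgd')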

Easy, huh? Keras is the ideal library for rapid experimentation. Currently, the biggest downside to Keras is that it doesn’t support multi-GPU environments for parallel training.

The Polyglot: MXNet

MXNet is a high-level library, like Keras, but it shines in different ways.

On one hand, it takes more effort to build a network using MXNet than using Keras. We’ve also found it to have a steeper learning curve, due in part to its smaller pool of tutorials.
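For a sense of the extra ceremony, here’s a minimal sketch of a small classifier in MXNet’s symbolic Python API (the layer sizes are illustrative):

    import mxnet as mx

    # The network is declared as a symbolic graph, layer by layer
    data = mx.sym.Variable('data')
    fc1 = mx.sym.FullyConnected(data, num_hidden=64, name='fc1')
    act1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
    fc2 = mx.sym.FullyConnected(act1, num_hidden=10, name='fc2')
    net = mx.sym.SoftmaxOutput(fc2, name='softmax')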

However, it makes up for this by supporting bindings for over 7 different languages! These include C++, Python, R, JavaScript, and even MATLAB.

MXNet is a true polyglot, and it’s great for teams that share models across different languages.

Another distinct advantage of MXNet is that it supports distributed computing. That means that if you need the speed of training over multiple CPUs or GPUs, MXNet is your answer.
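As a minimal sketch of what that looks like, MXNet can train a network across several GPUs just by listing multiple device contexts (this assumes a machine with two GPUs):

    import mxnet as mx

    # A small symbolic network, as sketched in the previous example
    data = mx.sym.Variable('data')
    fc = mx.sym.FullyConnected(data, num_hidden=10)
    net = mx.sym.SoftmaxOutput(fc, name='softmax')

    # Listing multiple contexts splits each training batch across the devices
    mod = mx.mod.Module(symbol=net, context=[mx.gpu(0), mx.gpu(1)])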
