Tuesday, June 23, 2015

A Deep Learning Tutorial: From Perceptrons to Deep Networks.

BY IVAN VASILEV - JAVA DEVELOPER @ TOPTAL.
source link

In recent years, there’s been a resurgence in the field of Artificial Intelligence. It’s spread beyond the academic world with major players like Google, Microsoft, and Facebook creating their own research teams and making some impressive acquisitions.

Some of this can be attributed to the abundance of raw data generated by social network users, much of which needs to be analyzed, as well as to the cheap computational power available via GPGPUs.

But beyond these phenomena, this resurgence has been powered in no small part by a new trend in AI, specifically in machine learning, known as “Deep Learning”. In this tutorial, I’ll introduce you to the key concepts and algorithms behind deep learning, beginning with the simplest unit of composition and building to the concepts of machine learning in Java.

(For full disclosure: I’m also the author of a Java deep learning library, available here, and the examples in this article are implemented using the above library. If you like it, you can support it by giving it a star on GitHub, for which I would be grateful. Usage instructions are available on the homepage.)

A Thirty Second Primer on Machine Learning

In case you’re not familiar, check out this introduction to machine learning:

The general procedure is as follows:

We have some algorithm that’s given a handful of labeled examples, say 10 images of dogs with the label 1 (“Dog”) and 10 images of other things with the label 0 (“Not dog”)—note that we’re mainly sticking to supervised, binary classification for this post.
The algorithm “learns” to identify images of dogs and, when fed a new image, hopes to produce the correct label (1 if it’s an image of a dog, and 0 otherwise).
This setting is incredibly general: your data could be symptoms and your labels illnesses; or your data could be images of handwritten characters and your labels the actual characters they represent.

Perceptrons: Early Deep Learning Algorithms

One of the earliest supervised training algorithms is that of the perceptron, a basic neural network building block.

Say we have n points in the plane, labeled ‘0’ and ‘1’. We’re given a new point and we want to guess its label (this is akin to the “Dog” and “Not dog” scenario above). How do we do it?

One approach might be to look at the closest neighbor and return that point’s label. But a slightly more intelligent way of going about it would be to pick a line that best separates the labeled data and use that as your classifier.

A depiction of input data in relation to a linear classifier is a basic approach to deep learning.

In this case, each piece of input data would be represented as a vector x = (x_1, x_2) and our function would be something like “‘0’ if below the line, ‘1’ if above”.

To represent this mathematically, let our separator be defined by a vector of weights w and a vertical offset (or bias) b. Then, our function would combine the inputs and weights with a weighted sum transfer function:
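
The formula image does not survive in this copy; from the description, the weighted sum transfer function is presumably just the dot product of the weights and inputs plus the bias:

f(x) = w_1 x_1 + w_2 x_2 + b = w · x + b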

The result of this transfer function would then be fed into an activation function to produce a labeling. In the example above, our activation function was a threshold cutoff (e.g., 1 if greater than some value):
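
The activation formula image is also missing; a threshold (step) activation of the kind described would be:

y = 1 if f(x) > threshold, otherwise y = 0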

Training the Perceptron

The training of the perceptron consists of feeding it multiple training samples and calculating the output for each of them. After each sample, the weights w are adjusted so as to minimize the output error, defined as the difference between the desired (target) and the actual output. There are other error functions, like the mean square error, but the basic principle of training remains the same.
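
Below is a minimal, self-contained sketch of this training loop in plain Java (it does not use the author's library; the toy data, learning rate, and epoch count are made up for illustration):

// A minimal perceptron sketch: threshold activation, weights updated by the
// classic perceptron rule on a small, linearly separable toy dataset.
public class PerceptronSketch {
    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}, {0.9, 0.8}, {0.1, 0.2}};
        int[] labels     = { 0,      0,      0,      1,      1,          0 };

        double[] w = {0.0, 0.0};  // weights
        double b = 0.0;           // bias
        double learningRate = 0.1;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                // Weighted sum transfer function followed by a threshold activation.
                double sum = w[0] * inputs[i][0] + w[1] * inputs[i][1] + b;
                int output = sum > 0 ? 1 : 0;
                int error = labels[i] - output;
                // Adjust the weights in proportion to the error and the input.
                w[0] += learningRate * error * inputs[i][0];
                w[1] += learningRate * error * inputs[i][1];
                b    += learningRate * error;
            }
        }
        System.out.printf("Learned weights: w = [%.2f, %.2f], b = %.2f%n", w[0], w[1], b);
    }
}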

Single Perceptron Drawbacks

The single perceptron approach to deep learning has one major drawback: it can only learn linearly separable functions. How major is this drawback? Take XOR, a relatively simple function, and notice that it can’t be classified by a linear separator (notice the failed attempt, below):

The drawback to this deep learning approach is that some functions cannot be classified by a linear separator.

To address this problem, we’ll need to use a multilayer perceptron, also known as a feedforward neural network: in effect, we’ll compose a bunch of these perceptrons together to create a more powerful mechanism for learning.

Feedforward Neural Networks for Deep Learning

A neural network is really just a composition of perceptrons, connected in different ways and operating on different activation functions.

Feedforward neural network deep learning is a more complex approach than single perceptrons.

For starters, we’ll look at the feedforward neural network, which has the following properties:

An input, output, and one or more hidden layers. The figure above shows a network with a 3-unit input layer, 4-unit hidden layer and an output layer with 2 units (the terms units and neurons are interchangeable).
Each unit is a single perceptron like the one described above.
The units of the input layer serve as inputs for the units of the hidden layer, while the hidden layer units are inputs to the output layer.
Each connection between two neurons has a weight w (similar to the perceptron weights).
Each unit of layer t is typically connected to every unit of the previous layer t - 1 (although you could disconnect them by setting their weight to 0).
To process input data, you “clamp” the input vector to the input layer, setting the values of the vector as “outputs” for each of the input units. In this particular case, the network can process a 3-dimensional input vector (because of the 3 input units). For example, if your input vector is [7, 1, 2], then you’d set the output of the top input unit to 7, the middle unit to 1, and so on. These values are then propagated forward to the hidden units using the weighted sum transfer function for each hidden unit (hence the term forward propagation), which in turn calculate their outputs (activation function).
The output layer calculates its outputs in the same way as the hidden layer. The result of the output layer is the output of the network. (A minimal forward-propagation sketch follows below.)
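
As a concrete illustration of forward propagation, here is a hedged, stand-alone Java sketch of a 3-4-2 network like the one in the figure, with made-up weights and sigmoid activations (this is not code from the author's library):

// Forward propagation through a 3-4-2 feedforward network.
public class ForwardPass {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Computes the activations of one layer from the previous layer's outputs.
    static double[] layer(double[] in, double[][] weights, double[] bias) {
        double[] out = new double[bias.length];
        for (int j = 0; j < out.length; j++) {
            double sum = bias[j];
            for (int i = 0; i < in.length; i++) {
                sum += weights[j][i] * in[i];   // weighted sum transfer function
            }
            out[j] = sigmoid(sum);              // activation function
        }
        return out;
    }

    public static void main(String[] args) {
        double[] input = {7, 1, 2};             // "clamped" to the input layer
        double[][] wHidden = {                  // 4 hidden units x 3 inputs (arbitrary values)
            {0.1, -0.2, 0.05}, {0.3, 0.1, -0.1}, {-0.25, 0.2, 0.15}, {0.05, -0.05, 0.1}
        };
        double[] bHidden = {0.1, -0.1, 0.05, 0.0};
        double[][] wOutput = {                  // 2 output units x 4 hidden units
            {0.2, -0.3, 0.1, 0.4}, {-0.1, 0.2, 0.3, -0.2}
        };
        double[] bOutput = {0.0, 0.1};

        double[] hidden = layer(input, wHidden, bHidden);
        double[] output = layer(hidden, wOutput, bOutput);
        System.out.println(java.util.Arrays.toString(output));
    }
}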
Beyond Linearity

What if each of our perceptrons is only allowed to use a linear activation function? Then, the final output of our network will still be some linear function of the inputs, just adjusted with a ton of different weights that it’s collected throughout the network. In other words, a linear composition of a bunch of linear functions is still just a linear function. If we’re restricted to linear activation functions, then the feedforward neural network is no more powerful than the perceptron, no matter how many layers it has.

Because of this, most neural networks use non-linear activation functions like the logistic, tanh, binary or rectifier. Without them the network can only learn functions which are linear combinations of its inputs.

Training Perceptrons

The most common deep learning algorithm for supervised training of the multilayer perceptrons is known as backpropagation. The basic procedure:

A training sample is presented and propagated forward through the network.
The output error is calculated, typically the mean squared error:
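
The error formula image is missing in this copy; for a single output, the squared error presumably takes the usual form (the factor of 1/2 is a common convention that simplifies the derivative):

E = 1/2 (t − y)^2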

Where t is the target value and y is the actual network output. Other error calculations are also acceptable, but the MSE is a good choice.
Network error is minimized using a method called stochastic gradient descent.

Gradient descent is universal, but in the case of neural networks the relevant picture is a graph of the training error as a function of the network weights. The optimal value for each weight is the one at which the error reaches its global minimum. During the training phase, the weights are updated in small steps (after each training sample or a mini-batch of several samples) so that they keep moving toward the global minimum, but this is no easy task, as you often end up stuck in a local minimum instead. For example, if a weight currently has a value of 0.6 and the error minimum lies at 0.4, the weight needs to be changed towards 0.4.

This figure represents the simplest case, that in which error depends on a single parameter. However, network error depends on every network weight and the error function is much, much more complex.

Thankfully, backpropagation provides a method for updating each weight between two neurons with respect to the output error. The derivation itself is quite complicated, but the weight update for a given node has the following (simple) form:
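
The update formula image is missing here; given the surrounding description (a gradient descent step on the error with respect to each weight, scaled by a learning rate α), it presumably has the form:

Δw_i = −α ∂E/∂w_i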

Where E is the output error, and w_i is the weight of input i to the neuron.

Essentially, the goal is to move against the gradient of the error with respect to weight i, i.e., in the direction in which the error decreases. The key term is, of course, the derivative of the error, which isn’t always easy to calculate: how would you find this derivative for a random weight of a random hidden node in the middle of a large network?

The answer: through backpropagation. The errors are first calculated at the output units where the formula is quite simple (based on the difference between the target and predicted values), and then propagated back through the network in a clever fashion, allowing us to efficiently update our weights during training and (hopefully) reach a minimum.
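
To make the whole procedure concrete, here is a small, self-contained Java sketch that trains a 2-3-1 sigmoid network on XOR with plain stochastic gradient descent (a hedged illustration with arbitrary hyperparameters, not the author's library code):

import java.util.Random;

// Backpropagation on XOR: forward pass, delta terms, then weight updates.
public class BackpropXor {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    public static void main(String[] args) {
        double[][] x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] t = {0, 1, 1, 0};
        int nHidden = 3;
        double lr = 0.5;
        Random rnd = new Random(42);

        double[][] wHidden = new double[nHidden][2];
        double[] bHidden = new double[nHidden];
        double[] wOut = new double[nHidden];
        double bOut = 0;
        for (int j = 0; j < nHidden; j++) {
            wHidden[j][0] = rnd.nextDouble() - 0.5;
            wHidden[j][1] = rnd.nextDouble() - 0.5;
            wOut[j] = rnd.nextDouble() - 0.5;
        }

        for (int epoch = 0; epoch < 10000; epoch++) {
            for (int s = 0; s < x.length; s++) {
                // Forward pass.
                double[] h = new double[nHidden];
                for (int j = 0; j < nHidden; j++) {
                    h[j] = sigmoid(wHidden[j][0] * x[s][0] + wHidden[j][1] * x[s][1] + bHidden[j]);
                }
                double net = bOut;
                for (int j = 0; j < nHidden; j++) net += wOut[j] * h[j];
                double y = sigmoid(net);

                // Backward pass: error terms for the output and hidden units.
                double deltaOut = (y - t[s]) * y * (1 - y);
                double[] deltaHidden = new double[nHidden];
                for (int j = 0; j < nHidden; j++) {
                    deltaHidden[j] = deltaOut * wOut[j] * h[j] * (1 - h[j]);
                }

                // Gradient descent step on every weight.
                for (int j = 0; j < nHidden; j++) {
                    wOut[j] -= lr * deltaOut * h[j];
                    wHidden[j][0] -= lr * deltaHidden[j] * x[s][0];
                    wHidden[j][1] -= lr * deltaHidden[j] * x[s][1];
                    bHidden[j] -= lr * deltaHidden[j];
                }
                bOut -= lr * deltaOut;
            }
        }

        // Print the learned mapping; outputs should be close to 0, 1, 1, 0.
        for (int s = 0; s < x.length; s++) {
            double[] h = new double[nHidden];
            for (int j = 0; j < nHidden; j++) {
                h[j] = sigmoid(wHidden[j][0] * x[s][0] + wHidden[j][1] * x[s][1] + bHidden[j]);
            }
            double net = bOut;
            for (int j = 0; j < nHidden; j++) net += wOut[j] * h[j];
            System.out.printf("%d XOR %d -> %.3f%n", (int) x[s][0], (int) x[s][1], sigmoid(net));
        }
    }
}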
Hidden Layer

The hidden layer is of particular interest. By the universal approximation theorem, a single-hidden-layer network with a finite number of neurons can be trained to approximate any continuous function to arbitrary accuracy. In other words, a single hidden layer is powerful enough to represent a very wide class of functions. That said, we often learn better in practice with multiple hidden layers (i.e., deeper nets).

The hidden layer is where the network stores its internal abstract representation of the training data, similar to the way that a human brain (greatly simplified analogy) has an internal representation of the real world. Going forward in the tutorial, we’ll look at different ways to play around with the hidden layer.

An Example Network

You can see a simple (4-2-3 layer) feedforward neural network that classifies the IRIS dataset implemented in Java here through the testMLPSigmoidBP method. The dataset contains three classes of iris plants with features like sepal length, petal length, etc. The network is provided 50 samples per class. The features are clamped to the input units, while each output unit corresponds to a single class of the dataset: “1/0/0” indicates that the plant is of class Setosa, “0/1/0” indicates Versicolour, and “0/0/1” indicates Virginica. The classification error is 2/150 (i.e., it misclassifies 2 samples out of 150).

The Problem with Large Networks

A neural network can have more than one hidden layer: in that case, the higher layers are “building” new abstractions on top of previous layers. And as we mentioned before, you can often learn better in practice with larger networks.

However, increasing the number of hidden layers leads to two known issues:

Vanishing gradients: as we add more and more hidden layers, backpropagation becomes less and less useful in passing information to the lower layers. In effect, as information is passed back, the gradients begin to vanish and become small relative to the weights of the networks.
Overfitting: perhaps the central problem in machine learning. Briefly, overfitting describes the phenomenon of fitting the training data too closely, maybe with hypotheses that are too complex. In such a case, your learner ends up fitting the training data really well, but will perform much, much more poorly on real examples.
Let’s look at some deep learning algorithms to address these issues.

Autoencoders

Most introductory machine learning classes tend to stop with feedforward neural networks. But the space of possible nets is far richer—so let’s continue.

An autoencoder is typically a feedforward neural network which aims to learn a compressed, distributed representation (encoding) of a dataset.

An autoencoder is a neural deep learning network that aims to learn a certain representation of a dataset.

Conceptually, the network is trained to “recreate” the input, i.e., the input and the target data are the same. In other words: you’re trying to output the same thing you were given as input, but compressed in some way. This is a confusing approach, so let’s look at an example.

Compressing the Input: Grayscale Images

Say that the training data consists of 28x28 grayscale images and the value of each pixel is clamped to one input layer neuron (i.e., the input layer will have 784 neurons). Then, the output layer would have the same number of units (784) as the input layer and the target value for each output unit would be the grayscale value of one pixel of the image.

The intuition behind this architecture is that the network will not learn a “mapping” between the training data and its labels, but will instead learn the internal structure and features of the data itself. (Because of this, the hidden layer is also called a feature detector.) Usually, the number of hidden units is smaller than that of the input/output layers, which forces the network to learn only the most important features and achieves a dimensionality reduction.

In effect, we want a few small nodes in the middle to really learn the data at a conceptual level, producing a compact representation that in some way captures the core features of our input.

Flu Illness

To further demonstrate autoencoders, let’s look at one more application.

In this case, we’ll use a simple dataset consisting of flu symptoms (credit to this blog post for the idea). If you’re interested, the code for this example can be found in the testAEBackpropagation method.

Here’s how the data set breaks down:

There are six binary input features.
The first three are symptoms of the illness. For example, 1 0 0 0 0 0 indicates that this patient has a high temperature, while 0 1 0 0 0 0 indicates coughing, 1 1 0 0 0 0 indicates coughing and high temperature, etc.
The final three features are “counter” symptoms; when a patient has one of these, it’s less likely that he or she is sick. For example, 0 0 0 1 0 0 indicates that this patient has had a flu vaccine. It’s possible to have combinations of the two sets of features: 0 1 0 1 0 0 indicates a vaccinated patient with a cough, and so forth.
We’ll consider a patient to be sick when he or she has at least two of the first three features and healthy if he or she has at least two of the second three (with ties breaking in favor of the healthy patients), e.g.:

111000, 101000, 110000, 011000, 011100 = sick
000111, 001110, 000101, 000011, 000110 = healthy
We’ll train an autoencoder (using backpropagation) with six input and six output units, but only two hidden units.
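
For reference, here is the same encoding written out as Java arrays, the form in which such a dataset would typically be fed to an autoencoder (a sketch; the exact arrays used by testAEBackpropagation may differ):

// The six-bit symptom encoding from the list above, as training data for a
// 6-2-6 autoencoder. An autoencoder is trained to reproduce its input, so the
// targets are the same array as the inputs.
public class FluData {
    public static final double[][] SAMPLES = {
        // sick: at least two of the first three symptom bits are set
        {1, 1, 1, 0, 0, 0}, {1, 0, 1, 0, 0, 0}, {1, 1, 0, 0, 0, 0},
        {0, 1, 1, 0, 0, 0}, {0, 1, 1, 1, 0, 0},
        // healthy: at least two of the last three "counter" bits are set
        {0, 0, 0, 1, 1, 1}, {0, 0, 1, 1, 1, 0}, {0, 0, 0, 1, 0, 1},
        {0, 0, 0, 0, 1, 1}, {0, 0, 0, 1, 1, 0}
    };
    public static final double[][] TARGETS = SAMPLES;
}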

After several hundred iterations, we observe that when each of the “sick” samples is presented to the machine learning network, one of the two hidden units (the same unit for each “sick” sample) always exhibits a higher activation value than the other. On the contrary, when a “healthy” sample is presented, the other hidden unit has a higher activation.

Going Back to Machine Learning

Essentially, our two hidden units have learned a compact representation of the flu symptom data set. To see how this relates to learning, we return to the problem of overfitting. By training our net to learn a compact representation of the data, we’re favoring a simpler representation rather than a highly complex hypothesis that overfits the training data.

In a way, by favoring these simpler representations, we’re attempting to learn the data in a truer sense.

Restricted Boltzmann Machines

The next logical step is to look at a Restricted Boltzmann machine (RBM), a generative stochastic neural network that can learn a probability distribution over its set of inputs.

In machine learning, Restricted Boltzmann Machines are composed of visible and hidden units.

RBMs are composed of a hidden, visible, and bias layer. Unlike the feedforward networks, the connections between the visible and hidden layers are undirected (the values can be propagated in both the visible-to-hidden and hidden-to-visible directions) and fully connected (each unit from a given layer is connected to each unit in the next—if we allowed any unit in any layer to connect to any other layer, then we’d have a Boltzmann (rather than a restricted Boltzmann) machine).

The standard RBM has binary hidden and visible units: that is, the unit activation is 0 or 1 under a Bernoulli distribution, but there are variants with other non-linearities.

While researchers have known about RBMs for some time now, the recent introduction of the contrastive divergence unsupervised training algorithm has renewed interest.

Contrastive Divergence

The single-step contrastive divergence algorithm (CD-1) works like this:

Positive phase:
An input sample v is clamped to the input layer.
v is propagated to the hidden layer in a similar manner to the feedforward networks. The result of the hidden layer activations is h.
Negative phase:
Propagate h back to the visible layer with result v’ (the connections between the visible and hidden layers are undirected and thus allow movement in both directions).
Propagate the new v’ back to the hidden layer with activation result h’.
Weight update:
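The weight-update formula itself is missing from this copy; in the standard CD-1 formulation it is presumably the difference of the positive-phase and negative-phase products, scaled by the learning rate:

w_new = w + a (v h^T − v' h'^T)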

Where a is the learning rate and v, v’, h, h’, and w are vectors.
The intuition behind the algorithm is that the positive phase (h given v) reflects the network’s internal representation of the real world data. Meanwhile, the negative phase represents an attempt to recreate the data based on this internal representation (v’ given h). The main goal is for the generated data to be as close as possible to the real world and this is reflected in the weight update formula.

In other words, the net has some perception of how the input data can be represented, so it tries to reproduce the data based on this perception. If its reproduction isn’t close enough to reality, it makes an adjustment and tries again.

Returning to the Flu

To demonstrate contrastive divergence, we’ll use the same symptoms data set as before. The test network is an RBM with six visible and two hidden units. We’ll train the network using contrastive divergence with the symptoms v clamped to the visible layer. During testing, the symptoms are again presented to the visible layer; then, the data is propagated to the hidden layer. The hidden units represent the sick/healthy state, a very similar architecture to the autoencoder (propagating data from the visible to the hidden layer).

After several hundred iterations, we can observe the same result as with autoencoders: one of the hidden units has a higher activation value when any of the “sick” samples is presented, while the other is always more active for the “healthy” samples.

You can see this example in action in the testContrastiveDivergence method.

Deep Networks

We’ve now demonstrated that the hidden layers of autoencoders and RBMs act as effective feature detectors; but it’s rare that we can use these features directly. In fact, the data set above is more an exception than a rule. Instead, we need to find some way to use these detected features indirectly.

Luckily, it was discovered that these structures can be stacked to form deep networks. These networks can be trained greedily, one layer at a time, to help to overcome the vanishing gradient and overfitting problems associated with classic backpropagation.

The resulting structures are often quite powerful, producing impressive results. Take, for example, Google’s famous “cat” paper, in which they use a special kind of deep autoencoder to “learn” human and cat face detection based on unlabeled data.

Let’s take a closer look.

Stacked Autoencoders

As the name suggests, this network consists of multiple stacked autoencoders.

Stacked Autoencoders have a series of inputs, outputs, and hidden layers that contribute to machine learning outcomes.

The hidden layer of autoencoder t acts as an input layer to autoencoder t + 1. The input layer of the first autoencoder is the input layer for the whole network. The greedy layer-wise training procedure works like this:

Train the first autoencoder (t=1, or the red connections in the figure above, but with an additional output layer) individually using the backpropagation method with all available training data.
Train the second autoencoder t=2 (green connections). Since the input layer for t=2 is the hidden layer of t=1 we are no longer interested in the output layer of t=1 and we remove it from the network. Training begins by clamping an input sample to the input layer of t=1, which is propagated forward to the output layer of t=2. Next, the weights (input-hidden and hidden-output) of t=2 are updated using backpropagation. t=2 uses all the training samples, similar to t=1.
Repeat the previous procedure for all the layers (i.e., remove the output layer of the previous autoencoder, replace it with yet another autoencoder, and train with back propagation).
Steps 1-3 are called pre-training and leave the weights properly initialized. However, there’s no mapping between the input data and the output labels. For example, if the network is trained to recognize images of handwritten digits it’s still not possible to map the units from the last feature detector (i.e., the hidden layer of the last autoencoder) to the digit type of the image. In that case, the most common solution is to add one or more fully connected layer(s) to the last layer (blue connections). The whole network can now be viewed as a multilayer perceptron and is trained using backpropagation (this step is also called fine-tuning).
Stacked autoencoders, then, are all about providing an effective pre-training method for initializing the weights of a network, leaving you with a complex, multi-layer perceptron that’s ready to train (or fine-tune).

Deep Belief Networks

As with autoencoders, we can also stack Boltzmann machines to create a class known as deep belief networks (DBNs).

Deep belief networks are comprised of a stack of Boltzmann machines.

In this case, the hidden layer of RBM t acts as a visible layer for RBM t+1. The input layer of the first RBM is the input layer for the whole network, and the greedy layer-wise pre-training works like this:

Train the first RBM t=1 using contrastive divergence with all the training samples.
Train the second RBM t=2. Since the visible layer for t=2 is the hidden layer of t=1, training begins by clamping the input sample to the visible layer of t=1, which is propagated forward to the hidden layer of t=1. This data then serves to initiate contrastive divergence training for t=2.
Repeat the previous procedure for all the layers.
Similar to the stacked autoencoders, after pre-training the network can be extended by connecting one or more fully connected layers to the final RBM hidden layer. This forms a multi-layer perceptron which can then be fine tuned using backpropagation.
This procedure is akin to that of stacked autoencoders, but with the autoencoders replaced by RBMs and backpropagation replaced with the contrastive divergence algorithm.

(Note: for more on constructing and training stacked autoencoders or deep belief networks, check out the sample code here.)

Convolutional Networks

As a final deep learning architecture, let’s take a look at convolutional networks, a particularly interesting and special class of feedforward networks that are very well-suited to image recognition.

Convolutional networks are a special class of deep learning feedforward networks. Image via DeepLearning.net

Before we look at the actual structure of convolutional networks, we first define an image filter, or a square region with associated weights. A filter is applied across an entire input image, and you will often apply multiple filters. For example, you could apply four 6x6 filters to a given input image. Then, the output pixel with coordinates 1,1 is the weighted sum of a 6x6 square of input pixels with top left corner 1,1 and the weights of the filter (which is also a 6x6 square). Output pixel 2,1 is the result of the input square with top left corner 2,1, and so on.
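
As a rough illustration (not the library's implementation), the following Java sketch applies a single filter across an image exactly as described, producing one feature map:

// Apply one square filter across a grayscale image (valid convolution, stride 1).
public class FilterSketch {
    static double[][] applyFilter(double[][] image, double[][] filter) {
        int size = filter.length;                       // 6 for a 6x6 filter
        int outH = image.length - size + 1;
        int outW = image[0].length - size + 1;
        double[][] featureMap = new double[outH][outW];
        for (int r = 0; r < outH; r++) {
            for (int c = 0; c < outW; c++) {
                double sum = 0;
                for (int i = 0; i < size; i++) {
                    for (int j = 0; j < size; j++) {
                        // Output pixel (r, c) is the weighted sum of the input
                        // square whose top-left corner is (r, c).
                        sum += image[r + i][c + j] * filter[i][j];
                    }
                }
                featureMap[r][c] = sum;
            }
        }
        return featureMap;
    }

    public static void main(String[] args) {
        double[][] image = new double[32][32];          // dummy 32x32 input
        double[][] filter = new double[6][6];           // one 6x6 filter
        filter[0][0] = 1; filter[5][5] = -1;            // arbitrary example weights
        double[][] fm = applyFilter(image, filter);
        System.out.println(fm.length + "x" + fm[0].length + " feature map"); // 27x27
    }
}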

With that covered, these networks are defined by the following properties:

Convolutional layers apply a number of filters to the input. For example, the first convolutional layer of the image could have four 6x6 filters. The result of one filter applied across the image is called a feature map (FM), and the number of feature maps is equal to the number of filters. If the previous layer is also convolutional, the filters are applied across all of its FMs with different weights, so each input FM is connected to each output FM. The intuition behind the shared weights across the image is that the features will be detected regardless of their location, while the multiplicity of filters allows each of them to detect a different set of features.
Subsampling layers reduce the size of the input. For example, if the input consists of a 32x32 image and the layer has a subsampling region of 2x2, the output value would be a 16x16 image, which means that 4 pixels (each 2x2 square) of the input image are combined into a single output pixel (see the max-pooling sketch after this list). There are multiple ways to subsample, but the most popular are max pooling, average pooling, and stochastic pooling.
The last subsampling (or convolutional) layer is usually connected to one or more fully connected layers, the last of which represents the target data.
Training is performed using modified backpropagation that takes the subsampling layers into account and updates the convolutional filter weights based on all values to which that filter is applied.
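
Here is a correspondingly small Java sketch of the 2x2 max-pooling variant of subsampling mentioned above (again an illustration, not the library's code):

// 2x2 max pooling: every non-overlapping 2x2 square is reduced to its maximum.
public class MaxPooling {
    static double[][] maxPool2x2(double[][] in) {
        double[][] out = new double[in.length / 2][in[0].length / 2];
        for (int r = 0; r < out.length; r++) {
            for (int c = 0; c < out[0].length; c++) {
                out[r][c] = Math.max(
                    Math.max(in[2 * r][2 * c],     in[2 * r][2 * c + 1]),
                    Math.max(in[2 * r + 1][2 * c], in[2 * r + 1][2 * c + 1]));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] image = new double[32][32];   // e.g. a 32x32 feature map
        image[0][1] = 5;                         // one non-zero value for the demo
        double[][] pooled = maxPool2x2(image);   // 16x16 output
        System.out.println(pooled.length + "x" + pooled[0].length + ", pooled[0][0]=" + pooled[0][0]);
    }
}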
You can see several examples of convolutional networks trained (with backpropagation) on the MNIST data set (grayscale images of handwritten digits) here, specifically in the testLeNet* methods (I would recommend testLeNetTiny2 as it achieves a low error rate of about 2% in a relatively short period of time). There’s also a nice JavaScript visualization of a similar network here.

Implementation

Now that we’ve covered the most common neural network variants, I thought I’d write a bit about the challenges posed during implementation of these deep learning structures.

Broadly speaking, my goal in creating a Deep Learning library was (and still is) to build a neural network-based framework that satisfied the following criteria:

A common architecture that is able to represent diverse models (all the variants on neural networks that we’ve seen above, for example).
The ability to use diverse training algorithms (back propagation, contrastive divergence, etc.).
Decent performance.
To satisfy these requirements, I took a tiered (or modular) approach to the design of the software.

Structure

Let’s start with the basics:

NeuralNetworkImpl is the base class for all neural network models.
Each network contains a set of layers.
Each layer has a list of connections, where a connection is a link between two layers such that the network is a directed acyclic graph.
This structure is agile enough to be used for classic feedforward networks, as well as for RBMs and more complex architectures like the convolutional networks used for ImageNet.

It also allows a layer to be part of more than one network. For example, the layers in a Deep Belief Network are also layers in their corresponding RBMs.

In addition, this architecture allows a DBN to be viewed as a list of stacked RBMs during the pre-training phase and a feedforward network during the fine-tuning phase, which is both intuitively nice and programmatically convenient.

Data Propagation

The next module takes care of propagating data through the network, a two-step process:

Determine the order of the layers. For example, to get the results from a multilayer perceptron, the data is “clamped” to the input layer (hence, this is the first layer to be calculated) and propagated all the way to the output layer. In order to update the weights during backpropagation, the output error has to be propagated through every layer in breadth-first order, starting from the output layer. This is achieved using various implementations of LayerOrderStrategy, which takes advantage of the graph structure of the network, employing different graph traversal methods. Some examples include the breadth-first strategy and the targeting of a specific layer. The order is actually determined by the connections between the layers, so the strategies return an ordered list of connections.
Calculate the activation value. Each layer has an associated ConnectionCalculator which takes its list of connections (from the previous step) and input values (from other layers) and calculates the resulting activation. For example, in a simple sigmoidal feedforward network, the hidden layer’s ConnectionCalculator takes the values of the input and bias layers (which are, respectively, the input data and an array of 1s) and the weights between the units (in case of fully connected layers, the weights are actually stored in a FullyConnected connection as a Matrix), calculates the weighted sum, and feeds the result into the sigmoid function. The connection calculators implement a variety of transfer (e.g., weighted sum, convolutional) and activation (e.g., logistic and tanh for multilayer perceptron, binary for RBM) functions. Most of them can be executed on a GPU using Aparapi and are usable with mini-batch training.
GPU Computation with Aparapi

As I mentioned earlier, one of the reasons that neural networks have made a resurgence in recent years is that their training methods are highly conducive to parallelism, allowing you to speed up training significantly with the use of a GPGPU. In this case, I chose to work with the Aparapi library to add GPU support.

Aparapi imposes some important restrictions on the connection calculators:

Only one-dimensional arrays (and variables) of primitive data types are allowed.
Only member-methods of the Aparapi Kernel class itself are allowed to be called from the GPU executable code.
As such, most of the data (weights, input, and output arrays) is stored in Matrix instances, which use one-dimensional float arrays internally. All Aparapi connection calculators use either AparapiWeightedSum (for fully connected layers and weighted sum input functions), AparapiSubsampling2D (for subsampling layers), or AparapiConv2D (for convolutional layers). Some of these limitations can be overcome with the introduction of Heterogeneous System Architecture. Aparapi also allows the same code to be run on both the CPU and the GPU.
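
To make these restrictions concrete, here is a rough, hedged sketch of an Aparapi kernel computing weighted sums with one-dimensional float arrays. This is illustrative code, not taken from the article's library; the package name assumes the original AMD Aparapi distribution, and the class and array names are made up:

import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

// One work item per output unit; only primitive 1-D arrays and Kernel member
// methods (getGlobalId) are used inside run(), as Aparapi requires.
public class WeightedSumKernelSketch {
    public static void main(String[] args) {
        final int inputCount = 4, outputCount = 3;
        final float[] input = {1f, 2f, 3f, 4f};
        final float[] weights = new float[outputCount * inputCount]; // row-major matrix as a 1-D array
        final float[] output = new float[outputCount];
        for (int i = 0; i < weights.length; i++) weights[i] = 0.1f;

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int row = getGlobalId();          // index of the output unit
                float sum = 0f;
                for (int col = 0; col < inputCount; col++) {
                    sum += weights[row * inputCount + col] * input[col];
                }
                output[row] = sum;
            }
        };
        kernel.execute(Range.create(outputCount)); // runs on the GPU if available, else falls back to the CPU
        kernel.dispose();

        for (float v : output) System.out.println(v); // each value should be 1.0
    }
}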

Training

The training module implements various training algorithms. It relies on the previous two modules. For example, BackPropagationTrainer (all the trainers use the Trainer base class) uses the feedforward layer calculator for the feedforward phase and a special breadth-first layer calculator for propagating the error and updating the weights.

My latest work is on Java 8 support and some other improvements, which are available in this branch and will soon be merged into master.

Conclusion

The aim of this Java deep learning tutorial was to give you a brief introduction to the field of deep learning algorithms, beginning with the most basic unit of composition (the perceptron) and progressing through various effective and popular architectures, like that of the restricted Boltzmann machine.

The ideas behind neural networks have been around for a long time; but today, you can’t set foot in the machine learning community without hearing about deep networks or some other take on deep learning. Hype shouldn’t be mistaken for justification, but with the advances of GPGPU computing and the impressive progress made by researchers like Geoffrey Hinton, Yoshua Bengio, Yann LeCun and Andrew Ng, the field certainly shows a lot of promise. There’s no better time to get familiar and get involved than the present.

Appendix: Resources

If you’re interested in learning more, I found the following resources quite helpful during my work:

DeepLearning.net: a portal for all things deep learning. It has some nice tutorials, a software library, and a great reading list.
An active Google+ community.
Two very good courses: Machine Learning and Neural Networks for Machine Learning, both offered on Coursera.
The Stanford neural networks tutorial.
About the author

Ivan Vasilev, Bulgaria
Ivan is an enthusiastic senior developer with an entrepreneurial spirit. His experiences range across a number of fields and technologies, but his primary focuses are in Java and JavaScript, as well as Machine Learning.

Monday, June 22, 2015

deep q-learning

arXiv:1312.5602v1 [cs.LG] 19 Dec 2013
Playing Atari with Deep Reinforcement Learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
DeepMind Technologies
{vlad,koray,david,alex.graves,ioannis,daan,martin.riedmiller} @ deepmind.com
Abstract
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
1 Introduction
Learning to control agents directly from high-dimensional sensory inputs like vision and speech is one of the long-standing challenges of reinforcement learning (RL). Most successful RL applications that operate on these domains have relied on hand-crafted features combined with linear value functions or policy representations. Clearly, the performance of such systems heavily relies on the quality of the feature representation.
Recent advances in deep learning have made it possible to extract high-level features from raw sensory data, leading to breakthroughs in computer vision [11, 22, 16] and speech recognition [6, 7]. These methods utilise a range of neural network architectures, including convolutional networks, multilayer perceptrons, restricted Boltzmann machines and recurrent neural networks, and have exploited both supervised and unsupervised learning. It seems natural to ask whether similar techniques could also be beneficial for RL with sensory data.
However reinforcement learning presents several challenges from a deep learning perspective. Firstly, most successful deep learning applications to date have required large amounts of hand-labelled training data. RL algorithms, on the other hand, must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed. The delay between actions and resulting rewards, which can be thousands of timesteps long, seems particularly daunting when compared to the direct association between inputs and targets found in supervised learning. Another issue is that most deep learning algorithms assume the data samples to be independent, while in reinforcement learning one typically encounters sequences of highly correlated states. Furthermore, in RL the data distribution changes as the algorithm learns new behaviours, which can be problematic for deep learning methods that assume a fixed underlying distribution.
This paper demonstrates that a convolutional neural network can overcome these challenges to learn successful control policies from raw video data in complex RL environments. The network is trained with a variant of the Q-learning [26] algorithm, with stochastic gradient descent to update the weights. To alleviate the problems of correlated data and non-stationary distributions, we use an experience replay mechanism [13] which randomly samples previous transitions, and thereby smooths the training distribution over many past behaviors.
Figure 1: Screen shots from five Atari 2600 Games: (Left-to-right) Pong, Breakout, Space Invaders, Seaquest, Beam Rider
We apply our approach to a range of Atari 2600 games implemented in The Arcade Learning Environment (ALE) [3]. Atari 2600 is a challenging RL testbed that presents agents with a high dimensional visual input (210×160 RGB video at 60Hz) and a diverse and interesting set of tasks that were designed to be difficult for human players. Our goal is to create a single neural network agent that is able to successfully learn to play as many of the games as possible. The network was not provided with any game-specific information or hand-designed visual features, and was not privy to the internal state of the emulator; it learned from nothing but the video input, the reward and terminal signals, and the set of possible actions—just as a human player would. Furthermore the network architecture and all hyperparameters used for training were kept constant across the games. So far the network has outperformed all previous RL algorithms on six of the seven games we have attempted and surpassed an expert human player on three of them. Figure 1 provides sample screenshots from five of the games used for training.
2 Background
We consider tasks in which an agent interacts with an environment E, in this case the Atari emulator, in a sequence of actions, observations and rewards. At each time-step the agent selects an action a_t from the set of legal game actions, A = {1, ..., K}. The action is passed to the emulator and modifies its internal state and the game score. In general E may be stochastic. The emulator’s internal state is not observed by the agent; instead it observes an image x_t ∈ R^d from the emulator, which is a vector of raw pixel values representing the current screen. In addition it receives a reward r_t representing the change in game score. Note that in general the game score may depend on the whole prior sequence of actions and observations; feedback about an action may only be received after many thousands of time-steps have elapsed.
Since the agent only observes images of the current screen, the task is partially observed and many emulator states are perceptually aliased, i.e. it is impossible to fully understand the current situation from only the current screen x_t. We therefore consider sequences of actions and observations, s_t = x_1, a_1, x_2, ..., a_{t-1}, x_t, and learn game strategies that depend upon these sequences. All sequences in the emulator are assumed to terminate in a finite number of time-steps. This formalism gives rise to a large but finite Markov decision process (MDP) in which each sequence is a distinct state. As a result, we can apply standard reinforcement learning methods for MDPs, simply by using the complete sequence s_t as the state representation at time t.
The goal of the agent is to interact with the emulator by selecting actions in a way that maximises future rewards. We make the standard assumption that future rewards are discounted by a factor of γ per time-step, and define the future discounted return at time t as

R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'},

where T is the time-step at which the game terminates. We define the optimal action-value function Q*(s, a) as the maximum expected return achievable by following any strategy, after seeing some sequence s and then taking some action a,

Q^*(s, a) = \max_{\pi} \mathbb{E}[R_t \mid s_t = s, a_t = a, \pi],

where π is a policy mapping sequences to actions (or distributions over actions).
The optimal action-value function obeys an important identity known as the Bellman equation. This is based on the following intuition: if the optimal value Q*(s', a') of the sequence s' at the next time-step was known for all possible actions a', then the optimal strategy is to select the action a' maximising the expected value of r + γ Q*(s', a'),

Q^*(s, a) = \mathbb{E}_{s' \sim \mathcal{E}}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]    (1)

The basic idea behind many reinforcement learning algorithms is to estimate the action-value function by using the Bellman equation as an iterative update, Q_{i+1}(s, a) = E[r + γ max_{a'} Q_i(s', a') | s, a]. Such value iteration algorithms converge to the optimal action-value function, Q_i → Q* as i → ∞ [23]. In practice, this basic approach is totally impractical, because the action-value function is estimated separately for each sequence, without any generalisation. Instead, it is common to use a function approximator to estimate the action-value function, Q(s, a; θ) ≈ Q*(s, a). In the reinforcement learning community this is typically a linear function approximator, but sometimes a non-linear function approximator is used instead, such as a neural network. We refer to a neural network function approximator with weights θ as a Q-network. A Q-network can be trained by minimising a sequence of loss functions L_i(θ_i) that changes at each iteration i,
L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot)}\left[ \left( y_i - Q(s, a; \theta_i) \right)^2 \right],    (2)

where y_i = E_{s' ∼ E}[r + γ max_{a'} Q(s', a'; θ_{i-1}) | s, a] is the target for iteration i and ρ(s, a) is a probability distribution over sequences s and actions a that we refer to as the behaviour distribution.
The parameters from the previous iteration θ_{i-1} are held fixed when optimising the loss function L_i(θ_i). Note that the targets depend on the network weights; this is in contrast with the targets used for supervised learning, which are fixed before learning begins. Differentiating the loss function with respect to the weights we arrive at the following gradient,

\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot);\, s' \sim \mathcal{E}}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) - Q(s, a; \theta_i) \right) \nabla_{\theta_i} Q(s, a; \theta_i) \right].    (3)
Rather than computing the full expectations in the above gradient, it is often computationally expedient to optimise the loss function by stochastic gradient descent. If the weights are updated after every time-step, and the expectations are replaced by single samples from the behaviour distribution ρ and the emulator E respectively, then we arrive at the familiar Q-learning algorithm [26].
Note that this algorithm is model-free: it solves the reinforcement learning task directly using samples from the emulator E, without explicitly constructing an estimate of E. It is also off-policy: it learns about the greedy strategy a = max_a Q(s, a; θ), while following a behaviour distribution that ensures adequate exploration of the state space. In practice, the behaviour distribution is often selected by an ε-greedy strategy that follows the greedy strategy with probability 1 − ε and selects a random action with probability ε.
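For readers following along in code, here is a tiny Java sketch of the ε-greedy action selection described above (illustrative only; the Q-value array is assumed to come from some Q-network):

import java.util.Random;

// Pick a random action with probability epsilon, otherwise the greedy action.
public class EpsilonGreedy {
    static int selectAction(double[] qValues, double epsilon, Random rnd) {
        if (rnd.nextDouble() < epsilon) {
            return rnd.nextInt(qValues.length);       // explore
        }
        int best = 0;                                  // exploit: highest Q-value
        for (int a = 1; a < qValues.length; a++) {
            if (qValues[a] > qValues[best]) best = a;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] q = {0.1, 0.7, 0.3};                  // dummy Q-value estimates
        System.out.println(selectAction(q, 0.1, new Random()));
    }
}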
3 Related Work
Perhaps the best-known success story of reinforcement learning is TD-gammon, a backgammon-playing program which learnt entirely by reinforcement learning and self-play, and achieved a super-human level of play [24]. TD-gammon used a model-free reinforcement learning algorithm similar to Q-learning, and approximated the value function using a multi-layer perceptron with one hidden layer¹.
However, early attempts to follow up on TD-gammon, including applications of the same method to chess, Go and checkers were less successful. This led to a widespread belief that the TD-gammon approach was a special case that only worked in backgammon, perhaps because the stochasticity in the dice rolls helps explore the state space and also makes the value function particularly smooth [19].
Furthermore, it was shown that combining model-free reinforcement learning algorithms such as Q-learning with non-linear function approximators [25], or indeed with off-policy learning [1] could cause the Q-network to diverge. Subsequently, the majority of work in reinforcement learning focused on linear function approximators with better convergence guarantees [25].
¹ In fact TD-Gammon approximated the state value function V(s) rather than the action-value function Q(s, a), and learnt on-policy directly from the self-play games.
More recently, there has been a revival of interest in combining deep learning with reinforcement learning. Deep neural networks have been used to estimate the environment E; restricted Boltzmann machines have been used to estimate the value function [21]; or the policy [9]. In addition, the divergence issues with Q-learning have been partially addressed by gradient temporal-difference methods. These methods are proven to converge when evaluating a fixed policy with a nonlinear function approximator [14]; or when learning a control policy with linear function approximation using a restricted variant of Q-learning [15]. However, these methods have not yet been extended to nonlinear control.
Perhaps the most similar prior work to our own approach is neural fitted Q-learning (NFQ) [20]. NFQ optimises the sequence of loss functions in Equation 2, using the RPROP algorithm to update the parameters of the Q-network. However, it uses a batch update that has a computational cost per iteration that is proportional to the size of the data set, whereas we consider stochastic gradient updates that have a low constant cost per iteration and scale to large data-sets. NFQ has also been successfully applied to simple real-world control tasks using purely visual input, by first using deep autoencoders to learn a low dimensional representation of the task, and then applying NFQ to this representation [12]. In contrast our approach applies reinforcement learning end-to-end, directly from the visual inputs; as a result it may learn features that are directly relevant to discriminating action-values. Q-learning has also previously been combined with experience replay and a simple neural network [13], but again starting with a low-dimensional state rather than raw visual inputs.
The use of the Atari 2600 emulator as a reinforcement learning platform was introduced by [3], who applied standard reinforcement learning algorithms with linear function approximation and generic visual features. Subsequently, results were improved by using a larger number of features, and using tug-of-war hashing to randomly project the features into a lower-dimensional space [2]. The HyperNEAT evolutionary architecture [8] has also been applied to the Atari platform, where it was used to evolve (separately, for each distinct game) a neural network representing a strategy for that game. When trained repeatedly against deterministic sequences using the emulator’s reset facility, these strategies were able to exploit design flaws in several Atari games.
4 Deep Reinforcement Learning
Recent breakthroughs in computer vision and speech recognition have relied on efficiently training deep neural networks on very large training sets. The most successful approaches are trained directly from the raw inputs, using lightweight updates based on stochastic gradient descent. By feeding sufficient data into deep neural networks, it is often possible to learn better representations than handcrafted features [11]. These successes motivate our approach to reinforcement learning. Our goal is to connect a reinforcement learning algorithm to a deep neural network which operates directly on RGB images and efficiently processes training data by using stochastic gradient updates.
Tesauro’s TD-Gammon architecture provides a starting point for such an approach. This architecture updates the parameters of a network that estimates the value function, directly from on-policy samples of experience, s_t, a_t, r_t, s_{t+1}, a_{t+1}, drawn from the algorithm’s interactions with the environment (or by self-play, in the case of backgammon). Since this approach was able to outperform the best human backgammon players 20 years ago, it is natural to wonder whether two decades of hardware improvements, coupled with modern deep neural network architectures and scalable RL algorithms might produce significant progress.
In contrast to TD-Gammon and similar online approaches, we utilize a technique known as experience replay [13] where we store the agent’s experiences at each time-step, e_t = (s_t, a_t, r_t, s_{t+1}), in a data-set D = e_1, ..., e_N, pooled over many episodes into a replay memory. During the inner loop of the algorithm, we apply Q-learning updates, or minibatch updates, to samples of experience, e ∼ D, drawn at random from the pool of stored samples. After performing experience replay, the agent selects and executes an action according to an ε-greedy policy. Since using histories of arbitrary length as inputs to a neural network can be difficult, our Q-function instead works on a fixed length representation of histories produced by a function φ. The full algorithm, which we call deep Q-learning, is presented in Algorithm 1.
This approach has several advantages over standard online Q-learning [23]. First, each step of experience is potentially used in many weight updates, which allows for greater data efficiency.
Algorithm 1 Deep Q-learning with Experience Replay
Initialize replay memory D to capacity N
Initialize action-value function Q with random weights
for episode = 1, M do
    Initialise sequence s_1 = {x_1} and preprocessed sequence φ_1 = φ(s_1)
    for t = 1, T do
        With probability ε select a random action a_t, otherwise select a_t = max_a Q*(φ(s_t), a; θ)
        Execute action a_t in emulator and observe reward r_t and image x_{t+1}
        Set s_{t+1} = s_t, a_t, x_{t+1} and preprocess φ_{t+1} = φ(s_{t+1})
        Store transition (φ_t, a_t, r_t, φ_{t+1}) in D
        Sample random minibatch of transitions (φ_j, a_j, r_j, φ_{j+1}) from D
        Set y_j = r_j for terminal φ_{j+1}, or y_j = r_j + γ max_{a'} Q(φ_{j+1}, a'; θ) for non-terminal φ_{j+1}
        Perform a gradient descent step on (y_j − Q(φ_j, a_j; θ))^2 according to equation 3
    end for
end for
Second, learning directly from consecutive samples is inefficient, due to the strong correlations between the samples; randomizing the samples breaks these correlations and therefore reduces the variance of the updates. Third, when learning on-policy the current parameters determine the next data sample that the parameters are trained on. For example, if the maximizing action is to move left then the training samples will be dominated by samples from the left-hand side; if the maximizing action then switches to the right then the training distribution will also switch. It is easy to see how unwanted feedback loops may arise and the parameters could get stuck in a poor local minimum, or even diverge catastrophically [25]. By using experience replay the behavior distribution is averaged over many of its previous states, smoothing out learning and avoiding oscillations or divergence in the parameters. Note that when learning by experience replay, it is necessary to learn off-policy (because our current parameters are different to those used to generate the sample), which motivates the choice of Q-learning.
In practice, our algorithm only stores the last N experience tuples in the replay memory, and samples uniformly at random from D when performing updates. This approach is in some respects limited since the memory buffer does not differentiate important transitions and always overwrites with recent transitions due to the finite memory size N. Similarly, the uniform sampling gives equal importance to all transitions in the replay memory. A more sophisticated sampling strategy might emphasize transitions from which we can learn the most, similar to prioritized sweeping [17].
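A minimal Java sketch of such a replay memory (a circular buffer of the last N transitions with uniform sampling) might look like the following; it is an illustration of the idea, not the authors' implementation, and the Transition fields simply mirror the (s_t, a_t, r_t, s_{t+1}) tuples:

import java.util.Random;

// Stores the last N transitions and overwrites the oldest ones once full.
public class ReplayMemory<S> {
    public static class Transition<S> {
        public final S state, nextState;
        public final int action;
        public final double reward;
        public final boolean terminal;
        public Transition(S s, int a, double r, S sNext, boolean terminal) {
            this.state = s; this.action = a; this.reward = r;
            this.nextState = sNext; this.terminal = terminal;
        }
    }

    private final Transition<S>[] buffer;
    private int next = 0, size = 0;
    private final Random rnd = new Random();

    @SuppressWarnings("unchecked")
    public ReplayMemory(int capacity) { buffer = new Transition[capacity]; }

    // Old transitions are overwritten once the memory is full.
    public void store(Transition<S> t) {
        buffer[next] = t;
        next = (next + 1) % buffer.length;
        size = Math.min(size + 1, buffer.length);
    }

    // Uniform sampling: every stored transition is equally likely to be picked.
    public Transition<S>[] sampleMinibatch(int batchSize) {
        @SuppressWarnings("unchecked")
        Transition<S>[] batch = new Transition[batchSize];
        for (int i = 0; i < batchSize; i++) batch[i] = buffer[rnd.nextInt(size)];
        return batch;
    }
}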
4.1 Preprocessing and Model Architecture
Working directly with raw Atari frames, which are 210×160 pixel images with a 128 color palette, can be computationally demanding, so we apply a basic preprocessing step aimed at reducing the input dimensionality. The raw frames are preprocessed by first converting their RGB representation to gray-scale and down-sampling it to a 110×84 image. The final input representation is obtained by cropping an 84×84 region of the image that roughly captures the playing area. The final cropping stage is only required because we use the GPU implementation of 2D convolutions from [11], which expects square inputs. For the experiments in this paper, the function φ from Algorithm 1 applies this preprocessing to the last 4 frames of a history and stacks them to produce the input to the Q-function.
There are several possible ways of parameterizing Q using a neural network. Since Q maps history- action pairs to scalar estimates of their Q-value, the history and the action have been used as inputs to the neural network by some previous approaches [20, 12]. The main drawback of this type of architecture is that a separate forward pass is required to compute the Q-value of each action, resulting in a cost that scales linearly with the number of actions. We instead use an architecture in which there is a separate output unit for each possible action, and only the state representation is an input to the neural network. The outputs correspond to the predicted Q-values of the individual action for the input state. The main advantage of this type of architecture is the ability to compute Q-values for all possible actions in a given state with only a single forward pass through the network.
We now describe the exact architecture used for all seven Atari games. The input to the neural network consists of an 84×84×4 image produced by φ. The first hidden layer convolves 16 8×8 filters with stride 4 with the input image and applies a rectifier nonlinearity [10, 18]. The second hidden layer convolves 32 4×4 filters with stride 2, again followed by a rectifier nonlinearity. The final hidden layer is fully-connected and consists of 256 rectifier units. The output layer is a fully-connected linear layer with a single output for each valid action. The number of valid actions varied between 4 and 18 on the games we considered. We refer to convolutional networks trained with our approach as Deep Q-Networks (DQN).
5 Experiments
So far, we have performed experiments on seven popular ATARI games – Beam Rider, Breakout, Enduro, Pong, Q*bert, Seaquest, Space Invaders. We use the same network architecture, learning algorithm and hyperparameter settings across all seven games, showing that our approach is robust enough to work on a variety of games without incorporating game-specific information. While we evaluated our agents on the real and unmodified games, we made one change to the reward structure of the games during training only. Since the scale of scores varies greatly from game to game, we fixed all positive rewards to be 1 and all negative rewards to be −1, leaving 0 rewards unchanged. Clipping the rewards in this manner limits the scale of the error derivatives and makes it easier to use the same learning rate across multiple games. At the same time, it could affect the performance of our agent since it cannot differentiate between rewards of different magnitude.
In these experiments, we used the RMSProp algorithm with minibatches of size 32. The behavior policy during training was ε-greedy with ε annealed linearly from 1 to 0.1 over the first million frames, and fixed at 0.1 thereafter. We trained for a total of 10 million frames and used a replay memory of one million most recent frames.
Following previous approaches to playing Atari games, we also use a simple frame-skipping tech- nique [3]. More precisely, the agent sees and selects actions on every kth frame instead of every frame, and its last action is repeated on skipped frames. Since running the emulator forward for one step requires much less computation than having the agent select an action, this technique allows the agent to play roughly k times more games without significantly increasing the runtime. We use k = 4 for all games except Space Invaders where we noticed that using k = 4 makes the lasers invisible because of the period at which they blink. We used k = 3 to make the lasers visible and this change was the only difference in hyperparameter values between any of the games.
5.1 Training and Stability
In supervised learning, one can easily track the performance of a model during training by evaluating it on the training and validation sets. In reinforcement learning, however, accurately evaluating the progress of an agent during training can be challenging. Since our evaluation metric, as suggested by [3], is the total reward the agent collects in an episode or game averaged over a number of games, we periodically compute it during training. The average total reward metric tends to be very noisy because small changes to the weights of a policy can lead to large changes in the distribution of states the policy visits . The leftmost two plots in figure 2 show how the average total reward evolves during training on the games Seaquest and Breakout. Both averaged reward plots are indeed quite noisy, giving one the impression that the learning algorithm is not making steady progress. Another, more stable, metric is the policy’s estimated action-value function Q, which provides an estimate of how much discounted reward the agent can obtain by following its policy from any given state. We collect a fixed set of states by running a random policy before training starts and track the average of the maximum2 predicted Q for these states. The two rightmost plots in figure 2 show that average predicted Q increases much more smoothly than the average total reward obtained by the agent and plotting the same metrics on the other five games produces similarly smooth curves. In addition to seeing relatively smooth improvement to predicted Q during training we did not experience any divergence issues in any of our experiments. This suggests that, despite lacking any theoretical convergence guarantees, our method is able to train large neural networks using a reinforcement learning signal and stochastic gradient descent in a stable manner.
²The maximum for each state is taken over the possible actions.
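As a rough illustration of this stability metric, the sketch below averages the maximum predicted Q-value over a fixed, held-out set of states. The QFunction interface stands in for the trained network and is our own naming, not an API from the paper:

```java
// Sketch of the average-max-Q metric: for each held-out state, take the
// maximum predicted Q over actions, then average across states.
public final class AverageMaxQ {

    interface QFunction {
        double[] qValues(double[] state); // one Q estimate per possible action
    }

    static double averageMaxQ(QFunction q, double[][] heldOutStates) {
        double sum = 0.0;
        for (double[] state : heldOutStates) {
            double best = Double.NEGATIVE_INFINITY;
            for (double value : q.qValues(state)) {
                best = Math.max(best, value); // maximum over the possible actions
            }
            sum += best;
        }
        return sum / heldOutStates.length;
    }

    public static void main(String[] args) {
        QFunction toy = state -> new double[] {state[0], 2 * state[0]};
        double[][] states = {{1.0}, {2.0}, {3.0}};
        System.out.println(averageMaxQ(toy, states)); // (2 + 4 + 6) / 3 = 4.0
    }
}
```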
[Figure 2: four panels – Average Reward per Episode on Breakout, Average Reward per Episode on Seaquest, Average Action Value (Q) on Breakout, and Average Action Value (Q) on Seaquest – each plotted against Training Epochs.]
Figure 2: The two plots on the left show average reward per episode on Breakout and Seaquest respectively during training. The statistics were computed by running an ε-greedy policy with ε = 0.05 for 10,000 steps. The two plots on the right show the average maximum predicted action-value of a held-out set of states on Breakout and Seaquest respectively. One epoch corresponds to 50,000 minibatch weight updates or roughly 30 minutes of training time.
Figure 3: The leftmost plot shows the predicted value function for a 30 frame segment of the game Seaquest. The three screenshots correspond to the frames labeled by A, B, and C respectively.
5.2 Visualizing the Value Function
Figure 3 shows a visualization of the learned value function on the game Seaquest. The figure shows that the predicted value jumps after an enemy appears on the left of the screen (point A). The agent then fires a torpedo at the enemy and the predicted value peaks as the torpedo is about to hit the enemy (point B). Finally, the value falls to roughly its original value after the enemy disappears (point C). Figure 3 demonstrates that our method is able to learn how the value function evolves for a reasonably complex sequence of events.
5.3 Main Evaluation
We compare our results with the best performing methods from the RL literature [3, 4]. The method labeled Sarsa used the Sarsa algorithm to learn linear policies on several different feature sets hand-engineered for the Atari task, and we report the score for the best performing feature set [3]. Contingency used the same basic approach as Sarsa but augmented the feature sets with a learned representation of the parts of the screen that are under the agent's control [4]. Note that both of these methods incorporate significant prior knowledge about the visual problem by using background subtraction and treating each of the 128 colors as a separate channel. Since many of the Atari games use one distinct color for each type of object, treating each color as a separate channel can be similar to producing a separate binary map encoding the presence of each object type. In contrast, our agents only receive the raw RGB screenshots as input and must learn to detect objects on their own.
In addition to the learned agents, we also report scores for an expert human game player and a policy that selects actions uniformly at random. The human performance is the median reward achieved after around two hours of playing each game. Note that our reported human scores are much higher than the ones in Bellemare et al. [3]. For the learned methods, we follow the evaluation strategy used in Bellemare et al. [3, 5] and report the average score obtained by running an ε-greedy policy with ε = 0.05 for a fixed number of steps. The first five rows of table 1 show the per-game average scores on all games. Our approach (labeled DQN) outperforms the other learning methods by a substantial margin on all seven games despite incorporating almost no prior knowledge about the inputs.
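The evaluation protocol itself is straightforward to sketch: run an ε-greedy policy with ε = 0.05 for a fixed number of steps and average the score over completed episodes. The Agent and Game interfaces below are our own stand-ins, not interfaces from the paper or from any particular emulator:

```java
import java.util.Random;

// Sketch of the evaluation protocol: act epsilon-greedily with epsilon = 0.05
// for a fixed number of steps and report the average score per finished episode.
public final class EpsilonGreedyEvaluation {

    interface Agent {
        int greedyAction(double[] state);
    }

    interface Game {
        double[] state();
        double step(int action); // advance the game, return this step's reward
        boolean isOver();
        void reset();
        int numActions();
    }

    static double averageEpisodeScore(Agent agent, Game game, int steps,
                                      double epsilon, Random rng) {
        double episodeScore = 0.0, totalScore = 0.0;
        int episodes = 0;
        game.reset();
        for (int t = 0; t < steps; t++) {
            int action = rng.nextDouble() < epsilon
                    ? rng.nextInt(game.numActions())    // explore with probability epsilon
                    : agent.greedyAction(game.state()); // otherwise act greedily
            episodeScore += game.step(action);
            if (game.isOver()) {
                totalScore += episodeScore;
                episodeScore = 0.0;
                episodes++;
                game.reset();
            }
        }
        return episodes > 0 ? totalScore / episodes : episodeScore;
    }

    public static void main(String[] args) {
        // Toy game: each step is worth one point and an episode lasts ten steps.
        Game toy = new Game() {
            private int t = 0;
            public double[] state() { return new double[] {t}; }
            public double step(int action) { t++; return 1.0; }
            public boolean isOver() { return t >= 10; }
            public void reset() { t = 0; }
            public int numActions() { return 3; }
        };
        Agent agent = state -> 0; // always picks action 0 when greedy
        System.out.println(averageEpisodeScore(agent, toy, 1000, 0.05, new Random(0)));
    }
}
```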
We also include a comparison to the evolutionary policy search approach from [8] in the last three rows of table 1. We report two sets of results for this method.

                 B. Rider  Breakout  Enduro  Pong   Q*bert  Seaquest  S. Invaders
Random           354       1.2       0       -20.4  157     110       179
Sarsa [3]        996       5.2       129     -19    614     665       271
Contingency [4]  1743      6         159     -17    960     723       268
DQN              4092      168       470     20     1952    1705      581
Human            7456      31        368     -3     18900   28010     3690

HNeat Best [8]   3616      52        106     19     1800    920       1720
HNeat Pixel [8]  1332      4         91      -16    1325    800       1145
DQN Best         5184      225       661     21     4500    1740      1075
Table 1: The upper table compares average total reward for various learning methods by running an ε-greedy policy with ε = 0.05 for a fixed number of steps. The lower table reports results of the single best performing episode for HNeat and DQN. HNeat produces deterministic policies that always get the same score, while DQN used an ε-greedy policy with ε = 0.05.
The HNeat Best score reflects the results obtained by using a hand-engineered object detector algorithm that outputs the locations and types of objects on the Atari screen. The HNeat Pixel score is obtained by using the special 8 color channel representation of the Atari emulator that represents an object label map at each channel. This method relies heavily on finding a deterministic sequence of states that represents a successful exploit. It is unlikely that strategies learnt in this way will generalize to random perturbations; therefore the algorithm was only evaluated on the highest scoring single episode. In contrast, our algorithm is evaluated on ε-greedy control sequences, and must therefore generalize across a wide variety of possible situations. Nevertheless, we show that on all games except Space Invaders, both our max evaluation results (row 8) and our average results (row 4) achieve better performance.
Finally, we show that our method achieves better performance than an expert human player on Breakout, Enduro and Pong, and achieves close to human performance on Beam Rider. The games Q*bert, Seaquest, and Space Invaders, on which we are far from human performance, are more challenging because they require the network to find a strategy that extends over long time scales.
6 Conclusion
This paper introduced a new deep learning model for reinforcement learning, and demonstrated its ability to master difficult control policies for Atari 2600 computer games, using only raw pixels as input. We also presented a variant of online Q-learning that combines stochastic minibatch updates with experience replay memory to ease the training of deep networks for RL. Our approach gave state-of-the-art results in six of the seven games it was tested on, with no adjustment of the architecture or hyperparameters.
References
[1] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Machine Learning (ICML 1995), pages 30–37. Morgan Kaufmann, 1995.
[2] Marc Bellemare, Joel Veness, and Michael Bowling. Sketch-based linear value function approximation. In Advances in Neural Information Processing Systems 25, pages 2222–2230, 2012.
[3] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[4] Marc G. Bellemare, Joel Veness, and Michael Bowling. Investigating contingency awareness using Atari 2600 games. In AAAI, 2012.
[5] Marc G. Bellemare, Joel Veness, and Michael Bowling. Bayesian learning of recursively factored environments. In Proceedings of the Thirtieth International Conference on Machine Learning (ICML 2013), pages 1211–1219, 2013.
[6] George E. Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42, January 2012.
[7] Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In Proc. ICASSP, 2013.
[8] Matthew Hausknecht, Risto Miikkulainen, and Peter Stone. A neuro-evolution approach to general Atari game playing. 2013.
[9] Nicolas Heess, David Silver, and Yee Whye Teh. Actor-critic reinforcement learning with energy-based policies. In European Workshop on Reinforcement Learning, page 43, 2012.
[10] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR 2009), pages 2146–2153. IEEE, 2009.
[11] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
[12] Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Neural Networks (IJCNN), The 2010 International Joint Conference on, pages 1–8. IEEE, 2010.
[13] Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.
[14] Hamid Maei, Csaba Szepesvári, Shalabh Bhatnagar, Doina Precup, David Silver, and Rich Sutton. Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation. In Advances in Neural Information Processing Systems 22, pages 1204–1212, 2009.
[15] Hamid Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S. Sutton. Toward off-policy learning control with function approximation. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pages 719–726, 2010.
[16] Volodymyr Mnih. Machine Learning for Aerial Image Labeling. PhD thesis, University of Toronto, 2013.
[17] Andrew Moore and Chris Atkeson. Prioritized sweeping: Reinforcement learning with less data and less real time. Machine Learning, 13:103–130, 1993.
[18] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pages 807–814, 2010.
[19] Jordan B. Pollack and Alan D. Blair. Why did TD-Gammon work? In Advances in Neural Information Processing Systems 9, pages 10–16, 1996.
[20] Martin Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pages 317–328. Springer, 2005.
[21] Brian Sallans and Geoffrey E. Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5:1063–1088, 2004.
[22] Pierre Sermanet, Koray Kavukcuoglu, Soumith Chintala, and Yann LeCun. Pedestrian detection with unsupervised multi-stage feature learning. In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR 2013). IEEE, 2013.
[23] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[24] Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
[25] John N. Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. Automatic Control, IEEE Transactions on, 42(5):674–690, 1997.
[26] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.