Deep Learning

January 31, 2020 — 31 min

Deep learning is a subfield of machine learning based on artificial neural networks. It has several variants, such as the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), which can be applied to many fields including computer vision, natural language processing and machine translation.

Deep learning is taking off for three main reasons:

  • Automatic feature engineering: while most machine learning algorithms require human expertise for feature engineering and extraction, deep learning handles the choice of variables and their weights automatically
  • Huge datasets: the continuous collection of data has led to large databases, which allow training deeper neural networks
  • Hardware evolution: new GPUs (Graphics Processing Units) allow faster algebraic computation, which is the core of DL

In this blog, we will focus mainly on the Multi-Layer Perceptron (MLP): we will detail the mathematical background behind the success of deep learning and explore the optimization algorithms used to improve its performance.

Table of contents

The summary is as follows:

  1. Definition
  2. Learning algorithm
  3. Parameter Initialization
  4. Forward - Backpropagation
  5. Activation functions
  6. Optimization algorithm

1 - Definition

A neuron

It is a block of mathematical operations linking entities together.

Let's consider the problem of estimating the price of a house based on its size; it can be schematized as follows:

[Figure: a single neuron estimating the house price from its size]

When including more information about the house by adding more variables, the graph becomes as follows:

[Figure: a small network combining several house features to estimate the price]

Each neuron is divided into two main blocks:

  • Computation of $z$ using the inputs $x_i$:

$$z=\sum_i w_i x_i + b$$

  • Computation of $a$, which is equal to $y$ at the output layer, using $z$:

$$a=\psi(z)$$

$w_i$ are the weights, $b$ is the bias and $\psi$ is said to be the activation function.

In general, the neural network better known as the MLP, for 'Multi-Layer Perceptron', is a type of feed-forward neural network organized into several layers in which information flows from the input layer to the output layer only.

Each layer consists of a defined number of neurons; we distinguish:

  • The input layer

  • The hidden layers

  • The output layer

The following graph represents a neural network with 5 neurons at the input, 3 in the first hidden layer, 3 in the second hidden layer and 2 at the output.

[Figure: an MLP with 5 input neurons, two hidden layers of 3 neurons each and 2 output neurons]

Some variables in the hidden layers can be interpreted based on the input features: in the case of the house pricing, and under the assumption that the first neuron of the first hidden layer pays more attention to the variables $x_1$ and $x_2$, it can be interpreted as quantifying, for instance, the family size the house can host.

DL as a supervised task

In most DL problems, we tend to predict an output $y$ using a set of variables $X$; in this case, we suppose that for each row $X_i$ of the database we have the corresponding target $y_i$, hence labeled data.

Applications: Real Estate, Speech Recognition, Image Classification …

The data used can be:

  • Structured: explicit databases with well-defined features
  • Unstructured: audio, images, text, …

Universal approximation theorem

Deep learning in real life amounts to approximating a given function $f$. This approximation is possible and accurate thanks to the following theorem:

A multi-layer perceptron with a single hidden layer containing a finite number of neurons can approximate any continuous function $f$ on compact$^{(*)}$ subsets of $\mathbb{R}^n$.
The class of deep neural networks is a universal approximator $\iff$ the activation function is not polynomial.

$^{(*)}$ In finite dimension, a set is said to be compact if it is closed and bounded.
The main takeaway of this theorem is that deep learning allows solving any problem that can be mathematically expressed.

Data Preprocessing

In any machine learning project in general, we divide our data into 3 sets:

  • Train set: used to train the algorithm and construct batches
  • Dev set: used to finetune the algorithm and evaluate bias and variance
  • Test set: used to generalize the error/precision of the final algorithm

The following table sums up the split of the three sets according to the size of the dataset $m$:

|            | Train | Dev | Test |
|------------|-------|-----|------|
| $m=10^4$   | 60%   | 20% | 20%  |
| $m=10^6$   | 96%   | 2%  | 2%   |

Standard deep learning algorithms require a large dataset, with a number of samples around $500k$ lines.
Usually, before splitting the data, we also normalize the inputs, a step detailed later in this article. Now that the data is ready, we will see in the next section the training algorithm.

2 - Learning algorithm

Learning in neural networks is the step of calculating the weights of the parameters associated with the various regressions throughout the network. In other words, we aim to find the best parameters giving the best prediction/approximation $\hat{y}_i$ of the real value $y_i$, starting from the input $x_i$.
For this, we define an objective function called the loss function, denoted $J$, which quantifies the distance between the real and predicted values over the whole training set.
We minimize J following two major steps:

  • Forward propagation: we propagate the data through the network, either entirely or in batches, and we calculate the loss function on this batch, which is nothing but the sum of the errors committed at the predicted output for the different rows.
  • Backpropagation: consists of calculating the gradients of the cost function with respect to the different parameters, then applying a descent algorithm to update them.

We iterate the same process a number of times called the number of epochs. After defining the architecture, the learning algorithm is written as follows:

  • Initialization of the model parameters, a step equivalent to injecting noise into the model.
  • For i=1,2…N: (N is the number of epochs)

    • Perform forward propagation:

      • $\forall i$, compute the predicted value of $x_i$ through the neural network: $\hat{y}_i^{\theta}$
      • Evaluate the function: $J(\theta)=\frac{1}{m}\sum_{i=1}^m \mathcal{L}(\hat{y}_i^{\theta}, y_i)$ where $m$ is the size of the training set, $\theta$ the model parameters and $\mathcal{L}$ the cost$^{(*)}$ function
    • Perform backpropagation:

      • Apply a descent method to update the parameters: $\theta := G(\theta)$

$^{(*)}$ The cost function $\mathcal{L}$ evaluates the distance between the real and predicted values at a single point.
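To make the loop concrete, here is a minimal NumPy sketch of this algorithm for a one-hidden-layer network on a toy regression task. The layer sizes, learning rate `lr`, squared-error loss and `tanh` activation are arbitrary choices for the illustration, not prescriptions from the article; the weight matrices are stored with shape $[n_i, n_{i-1}]$, i.e. they play the role of ${W^{[i]}}^T$ in the notation introduced below.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 200))          # 4 features, m = 200 samples
y = np.sum(X, axis=0, keepdims=True)   # toy target, shape (1, 200)

# Parameter initialization (small random noise, see the next section)
W1, b1 = rng.normal(size=(5, 4)) * 0.1, np.zeros((5, 1))
W2, b2 = rng.normal(size=(1, 5)) * 0.1, np.zeros((1, 1))
lr, m = 0.1, X.shape[1]

for epoch in range(100):                        # N epochs
    # Forward propagation
    Z1 = W1 @ X + b1; A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2; y_hat = Z2               # linear output layer
    J = np.mean((y_hat - y) ** 2)               # loss J(theta)

    # Backpropagation: gradients of J w.r.t. the parameters
    dZ2 = 2 * (y_hat - y) / m
    dW2, db2 = dZ2 @ A1.T, dZ2.sum(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)          # tanh'(Z1) = 1 - tanh(Z1)^2
    dW1, db1 = dZ1 @ X.T, dZ1.sum(axis=1, keepdims=True)

    # Descent step: theta := G(theta)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(J)   # the loss decreases over the epochs
```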

3 - Parameter initialization

The first step after defining the architecture of the neural network is parameter initialization. It is equivalent to injecting initial noise into the model’s weights.

  • Zero initialization: one can think of initializing the parameters with 0's everywhere, i.e. $W^{[i]}=0, b^{[i]}=0$. Using the forward propagation equations, we note that all the hidden units will be symmetric, which penalizes the learning phase.
  • Random initialization: it's a commonly used alternative and consists of injecting random noise into the parameters. If the noise is too large, some activation functions might saturate, which will later affect the computation of the gradient.

Two of the most famous initialization methods are:

  • He's: it consists of filling the parameters with values randomly sampled from a centered normal distribution $\mathcal{N}(0, \frac{2}{n_i})$, well suited to ReLU activations.
  • Xavier's (Glorot's): same approach with a different variance: $\mathcal{N}(0, \frac{2}{n_i+n_{i+1}})$.

where $n_i$ is the number of nodes in the $i^{th}$ layer.
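As a quick sketch, assuming NumPy and a single layer with `n_in` inputs and `n_out` outputs (illustrative names), the different initializations above could be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 5          # n_{i-1} and n_i for one layer (example values)

W_zero   = np.zeros((n_out, n_in))                               # symmetric hidden units: avoid
W_random = rng.normal(size=(n_out, n_in)) * 0.01                 # small random noise
W_he     = rng.normal(size=(n_out, n_in)) * np.sqrt(2.0 / n_in)  # N(0, 2/n_i)
W_xavier = rng.normal(size=(n_out, n_in)) * np.sqrt(2.0 / (n_in + n_out))  # N(0, 2/(n_i + n_{i+1}))
b = np.zeros((n_out, 1))    # biases can safely start at zero
```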

4 - Forward - Backpropagation

Before diving into the algebra behind deep learning, we will first set the notation that will be used to write out the equations of both the forward and the backward propagation.

Neural Network’s representation

The neural network is a sequence of regressions, each followed by an activation function; together they define what we call the forward propagation. $W^{[i]}$ and $b^{[i]}$ are the parameters learned at each layer $i$. The backpropagation is also a sequence of algebraic operations, carried out from the output towards the input.

Forward propagation

Algebra through the network

Let us consider a neural network having $L$ layers as follows:

[Figure: zoom on the computation carried out by a single node of the network]

We consider the $1^{st}$ node of the $2^{nd}$ hidden layer, denoted $a^{[2]}_1$.
It is computed using all the neurons of the previous layer as follows:

$$z^{[2]}_1=\sum_{l=1}^3 w^{[2]}_{1,l}\, a^{[1]}_l+b^{[2]}_1 \rightarrow a^{[2]}_1=\psi^{[2]}(z^{[2]}_1)$$

In general, considering the $j^{th}$ node of the $i^{th}$ layer, we have the following equations:

$$z^{[i]}_j=\sum_{l=1}^{n_{i-1}} w^{[i]}_{j,l}\, a^{[i-1]}_l+b^{[i]}_j \rightarrow a^{[i]}_j=\psi^{[i]}(z^{[i]}_j)$$

with $n_{i-1}$ being the number of neurons in the $(i-1)^{th}$ layer and ${W}^T$ the transpose of the matrix $W$.

Finally, we denote:

  • $W^{[i]}=[w^{[i]}_1, w^{[i]}_2, ..., w^{[i]}_{n_i}]$ where $dim(w^{[i]}_j)=[n_{i-1},1]$
  • $b^{[i]}={}^T[b^{[i]}_1, b^{[i]}_2, ..., b^{[i]}_{n_i}]$
  • $\mathcal{Z}^{[i]}={}^T[z^{[i]}_1, z^{[i]}_2, ..., z^{[i]}_{n_i}];\ \mathcal{A}^{[i]}={}^T[a^{[i]}_1, a^{[i]}_2, ..., a^{[i]}_{n_i}]$
  • $\mathcal{A}^{[i]}=\psi^{[i]}(\mathcal{Z}^{[i]})={}^T[\psi^{[i]}(z^{[i]}_1), \psi^{[i]}(z^{[i]}_2), ..., \psi^{[i]}(z^{[i]}_{n_i})]$

Thus:

$$\mathcal{A}^{[i]}=\psi^{[i]}(\mathcal{Z}^{[i]})=\psi^{[i]}({W^{[i]}}^T\mathcal{A}^{[i-1]}+b^{[i]})$$

where

$$dim(\mathcal{Z}^{[i]})=dim(\mathcal{A}^{[i]})=[n_i,1],\quad dim({W^{[i]}}^{T})=[n_i,n_{i-1}],\quad dim(b^{[i]})=[n_i,1]$$

Algebra through the training set

Let us consider the prediction of the output for a single data row, denoted $x^{(j)}$, through the neural network. We set $a^{[0]}=x^{(j)}$; at each layer $[i]$, we compute:

$$z^{[i][j]}={W^{[i]}}^{T}a^{[i-1][j]}+b^{[i]}\text{ and } a^{[i][j]}=\psi^{[i]}(z^{[i][j]})$$

until $\hat{y}^{(j)}=a^{[L][j]}$, where $L$ is the number of layers. When dealing with an $m$-row dataset, repeating these operations separately for each row is very costly.
We have, at each layer $[i]$:

$$z^{[i][1]}={W^{[i]}}^{T}a^{[i-1][1]}+b^{[i]}\text{ and } a^{[i][1]}=\psi^{[i]}(z^{[i][1]})\\ \vdots \\ z^{[i][m]}={W^{[i]}}^{T}a^{[i-1][m]}+b^{[i]}\text{ and } a^{[i][m]}=\psi^{[i]}(z^{[i][m]})$$

We can use linear algebra to parallelize it as follows:

$$Z^{[i]}={W^{[i]}}^{T}A^{[i-1]}+b^{[i]},\quad A^{[i]}=\psi^{[i]}(Z^{[i]})$$

Considering $n_i$ the number of neurons in the $i^{th}$ layer:

$$Z^{[i]}=\begin{bmatrix} z^{[i][1]} & \dots & z^{[i][m]} \end{bmatrix},\quad A^{[i]}=\begin{bmatrix} a^{[i][1]} & \dots & a^{[i][m]} \end{bmatrix}$$

Where:

$$dim(Z^{[i]})=dim(A^{[i]})=[n_i,m],\quad dim({W^{[i]}}^{T})=[n_i,n_{i-1}],\quad dim(b^{[i]})=[n_i,1]$$

The parameter $b^{[i]}$ is broadcast, i.e. repeated across the $m$ columns. This can be summarized in the following graph:

[Figure: vectorized forward propagation through the network]
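Here is a minimal NumPy sketch of this vectorized forward pass, assuming ReLU hidden activations and a linear output (arbitrary choices for the example). Each stored matrix has shape $[n_i, n_{i-1}]$, i.e. it corresponds to ${W^{[i]}}^T$ above, so that $b^{[i]}$ broadcasts over the $m$ columns exactly as described:

```python
import numpy as np

def forward(X, params, activations):
    """Vectorized forward pass: A[0] = X, then A[i] = psi(W[i] @ A[i-1] + b[i])."""
    A = X                                   # shape [n_0, m]
    cache = [(None, A)]
    for (W, b), psi in zip(params, activations):
        Z = W @ A + b                       # b broadcasts over the m columns
        A = psi(Z)
        cache.append((Z, A))
    return A, cache

rng = np.random.default_rng(0)
sizes = [5, 3, 3, 2]                        # the example architecture shown earlier
params = [(rng.normal(size=(o, i)) * np.sqrt(2 / i), np.zeros((o, 1)))
          for i, o in zip(sizes[:-1], sizes[1:])]
relu = lambda Z: np.maximum(Z, 0)
identity = lambda Z: Z

X = rng.normal(size=(5, 10))                # m = 10 samples
Y_hat, cache = forward(X, params, [relu, relu, identity])
print(Y_hat.shape)                          # (2, 10)
```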

Backpropagation

The backpropagation is the second step of the learning, which consists of injecting the error committed in the prediction (forward) phase back into the network and updating its parameters to perform better in the next iteration. Hence the optimization of the function $J$, usually through a descent method.

Computational graph

Most descent methods require the computation of the gradient of the loss function, denoted $\nabla_{\theta}J(\theta)$.
In a neural network, the operation is carried out using a computational graph, which decomposes the function $J$ into several intermediate variables.
Let us consider the following function: $f(x,y,z)=(x+y).z$
The main objective is to calculate $\nabla f(x,y,z)$ at $(-2,5,-4)$ where:

$$\nabla f(x,y,z)={}^T \begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} & \frac{\partial f}{\partial z} \end{bmatrix}$$

Let $q=x+y \rightarrow f=q.z$. We carry out the computation in two passes:

  • Forward propagation: computes the value of $f$ from inputs to output:

$$f(-2,5,-4)=-12$$

  • Backpropagation: recursively apply the chain rule to compute the gradients from the output back to the inputs:

$$\frac{\partial f}{\partial f}=1,\quad \frac{\partial f}{\partial q}=z=-4,\quad \frac{\partial f}{\partial z}=q=3$$
$$\frac{\partial f}{\partial x}=\frac{\partial f}{\partial q}\cdot\frac{\partial q}{\partial x}+\frac{\partial f}{\partial z}\cdot\frac{\partial z}{\partial x}=z\cdot 1+q\cdot 0=z=-4$$
$$\frac{\partial f}{\partial y}=\frac{\partial f}{\partial q}\cdot\frac{\partial q}{\partial y}+\frac{\partial f}{\partial z}\cdot\frac{\partial z}{\partial y}=z\cdot 1+q\cdot 0=z=-4$$

Hence:

$$\nabla f(x,y,z)\big|_{(-2,5,-4)}={}^T\begin{bmatrix} -4 & -4 & 3 \end{bmatrix}$$

The derivatives can be summarized in the following computational graph:
[Figure: computational graph of $f(x,y,z)=(x+y).z$ with the forward values and the backward gradients]
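The same two passes can be written as a few lines of plain Python; this is just the manual chain rule on the toy function above, not a general autodiff engine:

```python
# Forward pass on f(x, y, z) = (x + y) * z, then backward pass by the chain rule
x, y, z = -2.0, 5.0, -4.0

# Forward: intermediate variable q, then output f
q = x + y          # q = 3
f = q * z          # f = -12

# Backward: local derivatives combined from the output back to the inputs
df_dq = z          # -4
df_dz = q          #  3
df_dx = df_dq * 1  # dq/dx = 1  -> -4
df_dy = df_dq * 1  # dq/dy = 1  -> -4

print(f, (df_dx, df_dy, df_dz))   # -12.0 (-4.0, -4.0, 3.0)
```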

Equations

Mathematically, we compute the gradients of the cost function $J$ w.r.t. the architecture's parameters $W^{[i]}$ and $b^{[i]}$. For a given parameter $\alpha$, we set $d\alpha^{[i]}=\frac{\partial J}{\partial \alpha^{[i]}}$ and we have at the $i^{th}$ layer:

$$dZ^{[i]}=dA^{[i]}\star\psi'^{[i]}(Z^{[i]})\\ dA^{[i-1]}=W^{[i]}\,dZ^{[i]}\\ dW^{[i]}=\frac{1}{m}A^{[i-1]}\,{dZ^{[i]}}^T\\ db^{[i]}=\frac{1}{m}\sum_{j=1}^m dZ^{[i][j]}$$

where $\star$ is the element-wise multiplication.
We recursively apply these equations for $i=L, L-1, ..., 1$.
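As an illustration, here is a hedged NumPy sketch of these equations for a two-layer network with a mean squared error cost. The shapes, the ReLU hidden activation and the linear output are assumptions for the example, and the matrices are stored as the transposed ${W^{[i]}}^T$, so the stored gradient is computed as `dZ @ A_prev.T / m`:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
X = rng.normal(size=(3, m)); Y = rng.normal(size=(1, m))

relu = lambda Z: np.maximum(Z, 0)
relu_prime = lambda Z: (Z > 0).astype(float)

# Two-layer network, each stored matrix has shape [n_i, n_{i-1}]
W1, b1 = rng.normal(size=(4, 3)) * 0.5, np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)) * 0.5, np.zeros((1, 1))

# Forward pass, caching Z and A at each layer
Z1 = W1 @ X + b1; A1 = relu(Z1)
Z2 = W2 @ A1 + b2; A2 = Z2                      # linear output

# Backward pass: dZ = dA * psi'(Z), then average the parameter gradients over m
dA2 = 2 * (A2 - Y)                              # gradient of the MSE cost w.r.t. A2
dZ2 = dA2 * 1.0                                 # psi'[2] = 1 for the linear output
dW2 = dZ2 @ A1.T / m; db2 = dZ2.sum(axis=1, keepdims=True) / m
dA1 = W2.T @ dZ2
dZ1 = dA1 * relu_prime(Z1)
dW1 = dZ1 @ X.T / m; db1 = dZ1.sum(axis=1, keepdims=True) / m

print(dW1.shape, db1.shape, dW2.shape, db2.shape)   # same shapes as W1, b1, W2, b2
```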

Gradient Checking

When carrying out the backpropagation, an additional check is added to make sure that the algebraic computations are correct. Algorithm:

  • We first reshape and stack all the parameters $W^{[i]}$ and $b^{[i]}$ into one vector denoted $\theta$
  • We carry out the same manoeuvre for their derivatives $dW^{[i]}$ and $db^{[i]}$ and we denote $d\theta$ the resulting vector.
  • $\forall i$, we compute: $d\theta_{approx}^{[i]}=\frac{J(\theta_1,\theta_2,...,\theta_i+\epsilon,...)-J(\theta_1,\theta_2,...,\theta_i-\epsilon,...)}{2\epsilon}$, an $O(\epsilon^2)$ approximation of $\frac{\partial J}{\partial\theta_i}=d\theta^{[i]}$ (where $\epsilon$ is very small, $\approx 10^{-7}$)
  • We check the following quantity: $\frac{\|d\theta_{approx}-d\theta\|_2}{\|d\theta_{approx}\|_2+\|d\theta\|_2}$

It should be close to the value of $\epsilon$; an error is suspected when the value of this quantity is near $10^{-3}$ or larger.
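A minimal sketch of this check on a toy loss function (the quadratic-plus-sine `J` below is just an illustration, not the network's cost):

```python
import numpy as np

def J(theta):
    """Toy differentiable loss used only to illustrate the check."""
    return np.sum(theta ** 2) + np.sin(theta[0])

def grad_J(theta):
    g = 2 * theta
    g[0] += np.cos(theta[0])
    return g

theta = np.array([0.3, -1.2, 0.7])
eps = 1e-7

# Centered finite-difference approximation of each partial derivative
d_approx = np.zeros_like(theta)
for i in range(theta.size):
    plus, minus = theta.copy(), theta.copy()
    plus[i] += eps; minus[i] -= eps
    d_approx[i] = (J(plus) - J(minus)) / (2 * eps)

d_theta = grad_J(theta)
check = np.linalg.norm(d_approx - d_theta) / (np.linalg.norm(d_approx) + np.linalg.norm(d_theta))
print(check)   # should be very small (around 1e-7 or less); values near 1e-3 suggest a bug
```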

Summing up in blocks

We can sum up the Forward and Backward propagation in the following block:

[Figure: forward and backward propagation blocks at layer $i$]

Parameters vs Hyperparameters

  • Parameters, denoted $\theta$, are the elements which we learn through the iterations and on which we apply backpropagation and updates: $W^{[i]}$ and $b^{[i]}$
  • Hyperparameters are all the other variables we define in our algorithm, which can be tuned in order to improve the neural network:

    • Learning rate $\alpha$
    • Number of iterations
    • Choice of activation functions
    • Number of layers $L$
    • Number of units in each layer

5 - Activation functions

Activation functions are a kind of transfer function that gates the data propagated through the neural network. The underlying interpretation is that a neuron propagates the signal it receives (during the learning phase) only if it is sufficiently excited.

Here is a list of the most common functions:

  • ReLU:

$$\psi(x)=x\,\mathbb{1}_{x\geq 0}$$

  • Sigmoid:

$$\psi(x)=\frac{1}{1+e^{-x}}$$

  • Tanh:

$$\psi(x)=\frac{1-e^{-2x}}{1+e^{-2x}}$$

  • LeakyReLU:

$$\psi(x)=x\,\mathbb{1}_{x\geq 0}+\alpha x\,\mathbb{1}_{x<0}$$

Remark: if the activation functions are all linear, the neural network is precisely equivalent to a simple linear regression
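For reference, these four functions are one-liners in NumPy (the default leaky slope `alpha=0.01` is just a common illustrative value):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)                                 # x * 1_{x >= 0}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))    # same as np.tanh(x)

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0, x, alpha * x)

x = np.linspace(-3, 3, 7)
print(relu(x), sigmoid(x), tanh(x), leaky_relu(x), sep="\n")
```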

6 - Optimization algorithm

Risk

Let us consider a neural network denoted by $f$. The real objective to optimize is defined as the expected loss over all the corpora:

$$R(f)=\int p(X,Y)\,\mathcal{L}(f(X),Y)\,dX\,dY$$

where $X$ is an element from a continuous space of observables to which corresponds a target $Y$, and $p(X,Y)$ is the joint probability of observing the couple $(X, Y)$.

Empirical risk

Since we cannot have all the corpora, and hence we do not know the distribution $p$, we restrict the estimation of the risk to a certain dataset that is representative of the overall corpora and consider all the cases equiprobable.
In this case, $\int \rightarrow \sum$ and $p(X,Y)=\frac{1}{m}$, where $m$ is the size of the representative dataset. Hence, we iteratively optimize the loss function defined as follows:

$$J(\theta)=\frac{1}{m}\sum_{i=1}^m \mathcal{L}(\hat{y}_i^{\theta}, y_i)$$

Moreover, we can assert that:

$$\min_f R(f)\approx \min_{\theta} J(\theta)$$

There exist many techniques and algorithms, mainly based on gradient descent, which carry out the optimization. In the sections below, we will go through the most famous ones. It is important to note that these algorithms might get stuck in local minima and nothing assures reaching the global one.

Normalizing inputs

Before optimizing the loss function, we need to normalize the inputs in order to speed up the learning. In this case, the contour lines of $J(\theta)$ become tighter and more symmetric, which helps gradient descent find the minimum faster and thus in fewer iterations.
Standardization is the commonly used approach; it consists of subtracting the mean of the variables and dividing by their standard deviation. Considering $\theta={}^T[\theta_1\ \theta_2]$, the following image illustrates the effect of normalizing the input on the contour lines of $J$ (standardized data on the right):

[Figure: contour lines of $J$ before (left) and after (right) input normalization]

Let $X$ be a variable in our database; we set:

$$X:=\frac{X-\mu}{\sigma}$$

where $\mu=\frac{1}{m}\sum_{i=1}^m x^{(i)}$ and $\sigma^2=\frac{1}{m}\sum_{i=1}^m (x^{(i)}-\mu)^2$.
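In NumPy this standardization is one line per statistic; the toy matrix below (features in rows, samples in columns) is only there to make the snippet runnable:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=10.0, scale=5.0, size=(3, 200))   # 3 features, m = 200 samples

mu = X.mean(axis=1, keepdims=True)       # per-feature mean
sigma = X.std(axis=1, keepdims=True)     # per-feature standard deviation
X_norm = (X - mu) / sigma

# The same mu and sigma (computed on the train set) should be reused on dev/test data
print(X_norm.mean(axis=1).round(6), X_norm.std(axis=1).round(6))
```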

Gradient descent

In general, we tend to construct a convex and differentiable function $J$ where any local minimum is a global one. Mathematically speaking, finding the global minimum of a convex function is equivalent to solving the equation $\nabla J(\theta)=0$; we denote $\theta^{\star}$ its solution. Most of the algorithms used are of the form $\theta_{k+1}=\theta_{k}+\alpha_k d_k$ with $\theta_0$ an initial guess, where $\alpha_k$ is the step size and $d_k$ the descent direction. A first-order expansion gives:

$$J(\theta_{k+1})=J(\theta_{k})+\alpha_k\nabla J(\theta_k)\cdot d_k+o(\alpha_k)$$

Since we seek $J(\theta_{k+1}) \ll J(\theta_{k})$, we need $\nabla J(\theta_k)\cdot d_k$ to be as negative as possible, hence the choice $d_k=-\nabla J(\theta_k)$.

Algorithm:

  • $\theta_0$ is given
  • for $k=1,...$, until a stopping criterion is met:

    • $\theta_{k+1}=\theta_{k}-\alpha_k\nabla J(\theta_k)$

Choice of $\alpha_k$:

  • $\alpha_k=\alpha$, a fixed step size
  • $\alpha_k$ minimizes $t\rightarrow J(\theta_k-t\nabla J(\theta_k))$ (exact line search)
  • $\alpha_k$ follows a certain decay law (see the Learning rate decay section)
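A minimal sketch of this descent loop on a toy convex objective (the quadratic `J`, the fixed step size and the tolerance are illustrative choices):

```python
import numpy as np

def J(theta):
    return (theta[0] - 3) ** 2 + 2 * (theta[1] + 1) ** 2   # convex toy objective

def grad_J(theta):
    return np.array([2 * (theta[0] - 3), 4 * (theta[1] + 1)])

theta = np.zeros(2)                # theta_0, the initial guess
alpha = 0.1                        # fixed step size
for k in range(200):               # stopping criterion: maximum number of iterations
    g = grad_J(theta)
    if np.linalg.norm(g) < 1e-8:   # alternative stopping criterion on the gradient
        break
    theta = theta - alpha * g

print(theta)                       # close to the minimizer [3, -1]
```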

Mini-batch gradient descent

This technique consists of dividing the training set into batches $(X^{\{1\}},y^{\{1\}}), (X^{\{2\}}, y^{\{2\}}), ..., (X^{\{n\}}, y^{\{n\}})$; the training algorithm is as follows:

  • for t=1,…,n:

    • Carry out forward propagation on $X^{\{t\}}$
    • Compute the cost function normalized by the size of the batch
    • Carry out the backpropagation using $(X^{\{t\}}, y^{\{t\}}, \hat{y}^{\{t\}})$
    • Update the weights $W^{[l]}$ and $b^{[l]}$, $\forall l$

Choice of the mini-batch size:

  • For a small training set ($\sim 2000$ lines), use the full batch (plain batch gradient descent)
  • Typical size: a power of 2, which is good for memory
  • The mini-batch should fit in CPU/GPU memory

Remark: in the case where there is only one data line in the batch, the algorithm is called stochastic gradient descent
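Here is a hedged NumPy sketch of mini-batch gradient descent on a toy linear regression; the batch size of 64, the learning rate and the shuffling at every epoch are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, batch_size = 1024, 64                      # batch size: a power of 2
X = rng.normal(size=(3, m))
y = np.array([[1.0, -2.0, 0.5]]) @ X + 0.3    # toy linear target
W, b, lr = np.zeros((1, 3)), np.zeros((1, 1)), 0.1

for epoch in range(20):
    perm = rng.permutation(m)                 # shuffle, then slice into mini-batches
    for t in range(0, m, batch_size):
        idx = perm[t:t + batch_size]
        Xb, yb = X[:, idx], y[:, idx]
        y_hat = W @ Xb + b                    # forward pass on the mini-batch
        dZ = 2 * (y_hat - yb) / idx.size      # gradient of the batch-normalized MSE
        W -= lr * (dZ @ Xb.T)                 # backpropagation + update
        b -= lr * dZ.sum(axis=1, keepdims=True)

print(W, b)   # close to [1, -2, 0.5] and 0.3
```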

Gradient descent with momentum

A variant of gradient descent which includes the notion of momentum; the algorithm is as follows:

  • Initialize $V_{dW}=0_{dW}$, $V_{db}=0_{db}$
  • On iteration k:

    • Compute $dW$ and $db$ on the current mini-batch
    • $V_{dW}=\beta V_{dW}+(1-\beta)dW$; $V_{db}=\beta V_{db}+(1-\beta)db$
    • Update the parameters:

      • $W:=W-\alpha V_{dW}$
      • $b:=b-\alpha V_{db}$

($\alpha, \beta$) are hyperparameters. Since $d\theta$ is calculated on a mini-batch, the resulting gradient $\nabla J$ is very noisy; the exponentially weighted averages introduced by the momentum give a better estimate of the derivatives.
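A small sketch of the momentum update on a toy quadratic objective (the `grad` function and the values of $\alpha$ and $\beta$ are arbitrary choices for the example):

```python
import numpy as np

def grad(theta):                     # gradient of a toy quadratic objective
    return np.array([2 * (theta[0] - 3), 4 * (theta[1] + 1)])

theta = np.zeros(2)
V = np.zeros_like(theta)             # V initialized to 0
alpha, beta = 0.1, 0.9               # hyperparameters

for k in range(200):
    d = grad(theta)                  # would be computed on a mini-batch in practice
    V = beta * V + (1 - beta) * d    # exponentially weighted average of the gradient
    theta = theta - alpha * V        # the update uses the smoothed gradient

print(theta)                         # close to [3, -1]
```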

RMSprop

Root Mean Square prop is very similar to gradient descent with momentum; the only difference is that it uses the second-order moment instead of the first-order one, plus a slight change in the parameters' update:

  • Initialize $S_{dW}=0_{dW}$, $S_{db}=0_{db}$
  • On iteration k:

    • Compute $dW$ and $db$ on the current mini-batch
    • $S_{dW}=\beta S_{dW}+(1-\beta)dW^{2}$; $S_{db}=\beta S_{db}+(1-\beta)db^{2}$
    • Update the parameters:

      • $W:=W-\frac{\alpha}{\sqrt{S_{dW}}+\epsilon}dW$
      • $b:=b-\frac{\alpha}{\sqrt{S_{db}}+\epsilon}db$

($\alpha, \beta$) are hyperparameters and $\epsilon$ ensures numerical stability ($\approx 10^{-8}$)
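The same toy setting with the RMSprop update (all values are illustrative; the squaring is element-wise):

```python
import numpy as np

def grad(theta):                         # gradient of a toy quadratic objective
    return np.array([2 * (theta[0] - 3), 4 * (theta[1] + 1)])

theta = np.zeros(2)
S = np.zeros_like(theta)                 # S initialized to 0
alpha, beta, eps = 0.05, 0.9, 1e-8       # hyperparameters; eps for numerical stability

for k in range(500):
    d = grad(theta)                      # would come from a mini-batch in practice
    S = beta * S + (1 - beta) * d ** 2   # second-order moment (element-wise square)
    theta = theta - alpha * d / (np.sqrt(S) + eps)   # per-coordinate scaled step

print(theta)                             # close to [3, -1]
```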

Adam

Adam is an adaptive learning rate optimization algorithm designed specifically for training deep neural networks. Adam can be seen as a combination of RMSprop and gradient descent with momentum: it scales the learning rate using squared gradients, as RMSprop does, and takes advantage of momentum by using the moving average of the gradient instead of the gradient itself, as gradient descent with momentum does. The main idea is to avoid oscillations during optimization by accelerating the descent in the right direction, say along $dW$, using the moment $V_{dW}$: if the descent is slow in that direction, $V_{dW}$ and $S_{dW}$ are small there, so a larger step $\alpha$ can be afforded, and dividing by $\sqrt{S_{dW}}$ accelerates the optimization further. The algorithm of the Adam optimizer is the following:

  • Initialize: $V_{dW}=0$, $S_{dW}=0$, $V_{db}=0$, $S_{db}=0$
  • On iteration k:

    • Computation of $dW$ and $db$ through backpropagation
    • Momentum:

      • $V_{dW}=\beta_1 V_{dW}+(1-\beta_1)dW$
      • $V_{db}=\beta_1 V_{db}+(1-\beta_1)db$
    • RMSprop:

      • $S_{dW}=\beta_2 S_{dW}+(1-\beta_2)dW^2$
      • $S_{db}=\beta_2 S_{db}+(1-\beta_2)db^2$
    • Correction:

      • $V_{dW}=\frac{V_{dW}}{1-\beta_1^k}$
      • $S_{dW}=\frac{S_{dW}}{1-\beta_2^k}$
      • $V_{db}=\frac{V_{db}}{1-\beta_1^k}$
      • $S_{db}=\frac{S_{db}}{1-\beta_2^k}$
    • Parameters' update:

      • $W=W-\alpha\frac{V_{dW}}{\sqrt{S_{dW}}+\epsilon}$
      • $b=b-\alpha\frac{V_{db}}{\sqrt{S_{db}}+\epsilon}$
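Putting the two pieces together, here is a hedged sketch of the Adam update on the same toy objective; $\beta_1=0.9$, $\beta_2=0.999$ and $\epsilon=10^{-8}$ are the commonly used defaults, while `alpha` and the objective are arbitrary:

```python
import numpy as np

def grad(theta):                                 # gradient of a toy quadratic objective
    return np.array([2 * (theta[0] - 3), 4 * (theta[1] + 1)])

theta = np.zeros(2)
V, S = np.zeros_like(theta), np.zeros_like(theta)
alpha, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8  # common default hyperparameters

for k in range(1, 1001):
    d = grad(theta)                              # would come from a mini-batch in practice
    V = beta1 * V + (1 - beta1) * d              # momentum (1st moment)
    S = beta2 * S + (1 - beta2) * d ** 2         # RMSprop (2nd moment)
    V_hat = V / (1 - beta1 ** k)                 # bias correction
    S_hat = S / (1 - beta2 ** k)
    theta = theta - alpha * V_hat / (np.sqrt(S_hat) + eps)

print(theta)                                     # converges close to [3, -1]
```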

Learning rate decay

The main objective of learning rate decay is to slowly reduce the learning rate over time/iterations. It is justified by the fact that we can afford to take big steps at the beginning of the learning, but when approaching the minimum we slow down, and thus decrease the learning rate. There exist many learning rate decay laws; here are some of the most common:

  • We decrease the learning rate per epoch, i.e. one pass through the data (all the mini-batches):

$$\alpha(\text{epoch\_num})=\frac{1}{1+\beta\cdot \text{epoch\_num}}\,\alpha_0$$

  • We can exponentially decrease the learning rate:

$$\alpha(\text{epoch\_num})=0.95^{\text{epoch\_num}}\,\alpha_0$$

  • We can also consider the following decay law:

$$\alpha(\text{epoch\_num})=\frac{k}{\sqrt{\text{epoch\_num}}}\,\alpha_0$$

($\alpha_0$, $k$, $\beta$) are hyperparameters
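The three decay laws above translate directly into small Python functions; the hyperparameter values chosen below are only examples:

```python
import numpy as np

alpha_0, beta, k = 0.1, 0.95, 1.0     # example hyperparameter values

def inverse_decay(epoch):             # alpha_0 / (1 + beta * epoch_num)
    return alpha_0 / (1.0 + beta * epoch)

def exponential_decay(epoch):         # 0.95^epoch_num * alpha_0
    return 0.95 ** epoch * alpha_0

def sqrt_decay(epoch):                # (k / sqrt(epoch_num)) * alpha_0, epoch >= 1
    return k / np.sqrt(epoch) * alpha_0

for epoch in range(1, 6):
    print(epoch, inverse_decay(epoch), exponential_decay(epoch), sqrt_decay(epoch))
```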

Regularization

Variance/bias

When training a neural network, it might suffer from:

  • High bias, or underfitting: the network fails to capture the patterns in the data; in this case $J_{train}$ is very high, and so is $J_{dev}$. Mathematically speaking, when performing cross-validation, the mean of $J$ over all the considered folds is high.
  • High variance, or overfitting: the model fits the training data perfectly but fails to generalize to unseen data; in this case $J_{train}$ is very low and $J_{dev}$ is relatively high. Mathematically speaking, when performing cross-validation, the variance of $J$ over all the considered folds is high.

Let's consider the dartboard game, where hitting the red target is the best-case scenario. Having a low bias (first row) means that on average we are close to the goal. In case of a low variance, the hits are all concentrated around the target (the variance of the hits' distribution is low). When the variance is high, under the assumption of a low bias, the hits are spread out but still around the red circle. Vice versa, we can define high bias with a low/high variance.

[Figure: dartboard illustration of the four bias/variance combinations]

Mathematically speaking, let $f$ be the true regression function: $y=f(x)+\epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma^2)$. We fit a hypothesis $h(x)=Wx+b$ with MSE and consider a new data point $x_0$, $y_0=f(x_0)+\epsilon$; the expected error can be defined by $\mathbb{E}[(y_0-h(x_0))^2]$ and we can assert that:

$$\mathbb{E}[(y_0-h(x_0))^2]= \underbrace{\mathbb{E}[(h(x_0)-\bar{h}(x_0))^2]}_{\textbf{Variance}} + \underbrace{(\bar{h}(x_0)-f(x_0))^2}_{\textbf{Bias}^2} + \underbrace{\mathbb{E}[(y_0-f(x_0))^2]}_{\textbf{Intrinsic noise}}$$

where $\bar{Z}=\mathbb{E}[Z]$

A trade-off must be found between variance and bias to find the optimal complexity of the model, either by using the $AIC$ criterion or by using cross-validation. Here is a simple flowchart to follow to solve bias/variance issues:

[Figure: flowchart for diagnosing and fixing bias/variance issues]

L1 - L2 regularization

Regularization is an optimization technique which prevents overfitting. It consists of adding a penalty term to the objective function to minimize, as follows:

  • L1 regularization: $J$ becomes:

$$J(\theta)=\frac{1}{m}\sum_{i=1}^m \mathcal{L}(\hat{y}_i^{\theta}, y_i)+\frac{\lambda}{m}\|\theta\|_1$$

where $\|\theta\|_1=\sum_{i}|\theta_i|$

  • L2 regularization: $J$ becomes:

$$J(\theta)=\frac{1}{m}\sum_{i=1}^m \mathcal{L}(\hat{y}_i^{\theta}, y_i)+\frac{\lambda}{2m}\|\theta\|_2^2$$

where $\|\theta\|_2^2=\theta^T\theta$.
$\lambda$ is the regularization hyperparameter.

  • Backpropagation and regularization: the update of the parameters during backpropagation depends on the gradient $\nabla J$, to which a new regularization term is added. With L2 regularization, it becomes as follows:

$$d\theta^{reg}=d\theta+\frac{\lambda}{m}\theta\ \rightarrow\ \theta:=\theta\left(1-\frac{\lambda}{m}\alpha\right)-\alpha\, d\theta$$

Considering $\lambda \gg 1$, minimizing the cost function leads to small parameter values because of the term $\frac{\lambda}{2m}\|\theta\|_2^2$, which simplifies the network and makes it more consistent, hence less exposed to overfitting.
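As an illustration, here is a hedged sketch of L2-regularized gradient descent on a toy linear regression; the value of `lam` and the learning rate are arbitrary, and the last comment restates the equivalent weight-decay form of the update:

```python
import numpy as np

rng = np.random.default_rng(0)
m, lam, alpha = 100, 50.0, 0.1
X = rng.normal(size=(3, m))
y = np.array([[1.0, -2.0, 0.5]]) @ X
W = np.zeros((1, 3))

for k in range(500):
    y_hat = W @ X
    # Cost = MSE + (lambda / 2m) * ||W||_2^2
    J = np.mean((y_hat - y) ** 2) + lam / (2 * m) * np.sum(W ** 2)
    dW = 2 * (y_hat - y) @ X.T / m          # gradient of the data term
    dW_reg = dW + (lam / m) * W             # add the regularization term
    W = W - alpha * dW_reg                  # equivalently: W * (1 - alpha*lam/m) - alpha*dW

print(W)   # noticeably shrunk towards zero compared with the true coefficients [1, -2, 0.5]
```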

Dropout regularization

Roughly speaking, the main idea is to sample a random variable for each node of each layer and have a probability $p$ of keeping the node and $1-p$ of removing it, which thins the network. The main intuition of dropout is that the network shouldn't rely on any specific feature but should instead spread out the weights. Mathematically speaking, when dropout is off and considering the $j^{th}$ node of the $i^{th}$ layer, we have the following equations:

$$z^{[i]}_j={W^{[i]}_j}^T\mathcal{A}^{[i-1]}+b^{[i]}_j \rightarrow a^{[i]}_j=\psi^{[i]}(z^{[i]}_j)$$

When dropout is on, the equations become as follows:

$$r^{[i-1]}\sim \text{Bernoulli}(p^{(i-1)})\\ \hat{\mathcal{A}}^{[i-1]}=\mathcal{A}^{[i-1]}\star r^{[i-1]} \\ \hat{z}^{[i]}_j={W^{[i]}_j}^T\hat{\mathcal{A}}^{[i-1]}+b^{[i]}_j \rightarrow a^{[i]}_j=\psi^{[i]}(\hat{z}^{[i]}_j)$$

where $p^{(i-1)}$ is a hyperparameter.
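A small NumPy sketch of dropout applied to one layer; the keep probability of 0.8 and the ReLU activation are illustrative, and the division by `p_keep` ("inverted dropout", a common practical variant not detailed in the article) keeps the expected activation scale unchanged so that nothing needs rescaling at test time:

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.8                                   # probability of keeping each node
A_prev = rng.normal(size=(5, 10))              # activations A^[i-1], m = 10 samples
W, b = rng.normal(size=(3, 5)) * 0.5, np.zeros((3, 1))

# Training time: sample a Bernoulli mask (per node and per sample) and drop units
r = rng.binomial(1, p_keep, size=A_prev.shape)
A_dropped = A_prev * r / p_keep                # inverted scaling keeps the expected value unchanged
Z = W @ A_dropped + b
A = np.maximum(Z, 0)                           # ReLU activation for the example

# Test time: dropout is switched off, the full activations are used
Z_test = W @ A_prev + b
print(A.shape, Z_test.shape)
```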

Early stopping

This technique is quite simple and consists of stopping the iterations around the point where $J_{train}$ and $J_{dev}$ start diverging:

[Figure: $J_{train}$ and $J_{dev}$ curves with the early stopping point]
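In code, early stopping usually amounts to tracking the dev loss with a patience counter; the "training" step and the dev loss below are stand-ins so the sketch runs on its own:

```python
import numpy as np

rng = np.random.default_rng(0)
best_dev, best_params, patience, bad_epochs = np.inf, None, 5, 0
params = rng.normal(size=4)                    # stand-in for the network parameters

for epoch in range(100):
    params = params - 0.05 * params            # fake training step (placeholder)
    J_dev = float(np.sum(params ** 2)) + 0.01 * epoch   # fake dev loss that eventually rises

    if J_dev < best_dev:                       # dev loss still improving: keep a checkpoint
        best_dev, best_params, bad_epochs = J_dev, params.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:             # J_dev has stopped improving: stop early
            break

print(epoch, best_dev)
```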

Gradient problems

The computation of gradients suffers from two major problems: vanishing gradients and exploding gradients. To illustrate both situations, let's consider a neural network where all the activation functions $\psi^{[i]}$ are linear, $W^{[i]}=\begin{bmatrix} 1.5 & 0\\0 & 1.5 \end{bmatrix}$ and $b^{[i]}=0, \forall i=1,...,L-1$, thus:

$$\hat{y}=W^{[L]}\cdot\begin{bmatrix} 1.5^{L-1} & 0\\0 & 1.5^{L-1} \end{bmatrix}\, x$$

We note that $1.5^{L-1}$ will explode exponentially as a function of the depth $L$. If we use $0.5$ instead of $1.5$, then $0.5^{L-1}$ will vanish exponentially instead.
The same issue occurs with the gradients.
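The two regimes are easy to reproduce numerically; the depth of 50 layers is an arbitrary choice for the illustration:

```python
import numpy as np

L = 50                                        # network depth
for factor in (1.5, 0.5):
    W = np.array([[factor, 0.0], [0.0, factor]])
    prod = np.linalg.matrix_power(W, L - 1)   # product of L-1 identical linear layers
    print(factor, prod[0, 0])                 # 1.5 -> ~4e8 (explodes), 0.5 -> ~2e-15 (vanishes)
```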

References