12.2 Feedforward Neural Network

12.2.1 Logistic Regression as Neural Network

Let's look at logistic regression through the lens of a neural network. For a binary classification problem, for example a spam classifier, given \(m\) samples \(\{(x^{(1)}, y^{(1)}),(x^{(2)}, y^{(2)}),...,(x^{(m)}, y^{(m)})\}\), we need to use the input features \(x^{(i)}\) (for example, the frequency of various words such as "money", special characters such as dollar signs, and the use of capital letters in the message) to predict the output \(y^{(i)}\) (whether the email is spam). Assume that for each sample \(i\), there are \(n_{x}\) input features. Then we have:

\[\begin{equation} X=\left[\begin{array}{cccc} x_{1}^{(1)} & x_{1}^{(2)} & \dotsb & x_{1}^{(m)}\\ x_{2}^{(1)} & x_{2}^{(2)} & \dotsb & x_{2}^{(m)}\\ \vdots & \vdots & \vdots & \vdots\\ x_{n_{x}}^{(1)} & x_{n_{x}}^{(2)} & \dots & x_{n_{x}}^{(m)} \end{array}\right]\in\mathbb{R}^{n_{x}\times m} \tag{12.1} \end{equation}\]

\[y=[y^{(1)},y^{(2)},\dots,y^{(m)}] \in \mathbb{R}^{1 \times m}\]

To predict whether sample \(i\) is a spam email, we first get the inactivated neuron \(z^{(i)}\) by a linear transformation of the input \(x^{(i)}\): \(z^{(i)}=w^Tx^{(i)} + b\). Then we apply a function to "activate" the neuron \(z^{(i)}\); we call this function the "activation function". In logistic regression, the activation function is the sigmoid function, and the "activated" \(z^{(i)}\) is the prediction:

\[\hat{y}^{(i)} = \sigma(w^Tx^{(i)} + b)\]

where \(\sigma(z) = \frac{1}{1+e^{-z}}\). The following figure summarizes the process:

There are two types of layers: the last layer connects directly to the output, and all the rest are intermediate layers. Depending on your definition, logistic regression is a "0-layer neural network", where the layer count only considers intermediate layers. To train the model, you need a cost function, which is defined in equation (12.2).

\[\begin{equation} J(w,b)=\frac{1}{m} \Sigma_{i=1}^m L(\hat{y}^{(i)}, y^{(i)}) \tag{12.2} \end{equation}\]

where

\[L(\hat{y}^{(i)}, y^{(i)}) = -y^{(i)}log(\hat{y}^{(i)})-(1-y^{(i)})log(1-\hat{y}^{(i)})\]

Fitting the model means minimizing the cost function.
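To make the notation concrete, here is a minimal R sketch (with a made-up two-feature toy dataset and assumed parameter values) that computes the sigmoid predictions and the cost in equation (12.2):

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

# toy data: n_x = 2 features, m = 4 samples stored as columns, as in equation (12.1)
X <- matrix(c(1, 0, 2, 1, 0, 3, 1, 1), nrow = 2)
y <- c(1, 1, 0, 1)

# assumed parameter values, for illustration only
w <- c(0.5, -0.2)
b <- 0.1

y_hat <- sigmoid(t(w) %*% X + b)                        # 1 x m vector of predictions
J <- -mean(y * log(y_hat) + (1 - y) * log(1 - y_hat))   # cost in equation (12.2)
J
```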

12.2.2 Gradient Descent

The general approach to minimizing \(J(w,b)\) is gradient descent, where the gradients are computed by back-propagation. In logistic regression, it is easy to calculate the gradients with respect to the parameters \((w, b)\) using the chain rule for differentiation. The optimization process is a forward and backward sweep over the network. Let's look at gradient descent for logistic regression across \(m\) samples. The non-vectorized process is as follows.

First initialize \(w_1\), \(w_2\), … , \(w_{n_x}\), and \(b\). Then plug the initialized values into the forward and backward propagation. The forward propagation takes the current weights and calculates the prediction \(\hat{y}^{(i)}\) and cost \(J^{(i)}\). The backward propagation calculates the gradients of the parameters. After iterating through all \(m\) samples, you can calculate the overall gradients of the parameters. Then update the parameters by: \[w := w - \gamma \frac{\partial J}{\partial w}\] \[b := b - \gamma \frac{\partial J}{\partial b}\] where \(\gamma\) is the learning rate.

Repeat the propagation process using the updated parameters until the cost \(J\) stabilizes.
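Continuing the toy sketch above (sigmoid(), X, y, w, and b as defined there), one vectorized version of the forward and backward sweep plus the parameter update might look like this; the learning rate and iteration count are arbitrary choices:

```r
gamma <- 0.1                            # learning rate (assumed value)

for (iter in 1:1000) {
  # forward propagation: predictions for all m samples
  y_hat <- sigmoid(t(w) %*% X + b)
  # backward propagation: gradients of the cost J w.r.t. w and b
  dz <- as.vector(y_hat) - y            # length-m vector of prediction errors
  dw <- X %*% dz / length(y)            # n_x x 1 gradient for w
  db <- mean(dz)                        # gradient for b
  # gradient descent update
  w <- w - gamma * as.vector(dw)
  b <- b - gamma * db
}
```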

12.2.3 Deep Neural Network

Before people coined the term deep learning, a neural network referred to a single-hidden-layer network. Neural networks with more than one hidden layer are called deep learning. A network with the structure in figure 12.1 is a multilayer perceptron (MLP) or feedforward neural network (FFNN).


FIGURE 12.1: Feedforward Neural Network

Let's look at a simple one-hidden-layer neural network (figure 12.2). First, consider only one sample. From left to right, there is an input layer with 3 features (\(x_1, x_2, x_3\)), a hidden layer with four neurons, and an output layer that produces a prediction \(\hat{y}\).


FIGURE 12.2: 1-layer Neural Network

From input to the first hidden layer

Each inactivated neuron in the first hidden layer is a linear transformation of the input vector \(x\). For example, \(z^{[1]}_1 = w^{[1]T}_1x^{(i)} + b_1^{[1]}\) is the first inactivated neuron of hidden layer one. We use the superscript [l] to denote a quantity associated with the \(l^{th}\) layer and the subscript i to denote the \(i^{th}\) entry of a vector (a neuron or feature). Here \(w_1^{[1]}\) and \(b_1^{[1]}\) are the weight and bias parameters for the first neuron of layer 1. \(w_1^{[1]}\) is a \(3 \times 1\) vector, and hence \(w^{[1]T}_1x^{(i)}\) is a linear combination of the three input features. Then use a sigmoid function \(\sigma(\cdot)\) to activate the neuron \(z^{[1]}_1\) and get \(a^{[1]}_1\).

From the first hidden layer to the output

Next, take a linear combination of the activated neurons from the first layer to get the inactivated output \(z^{[2]}_1\), and then activate the neuron to get the predicted output \(\hat{y}\). The parameters to estimate in this step are \(w^{[2]}\) and \(b_1^{[2]}\).

If you fully write out the process, it is shown in the bottom right of figure 12.2. When you implement a neural network, you need to do a similar calculation four times to get the four activated neurons in the first hidden layer. Doing this with a for loop is inefficient, so people vectorize the four equations: take an input and compute the corresponding \(z\) and \(a\) as vectors. Vectorizing each step gives the following representation:

\[\begin{array}{cc} z^{[1]}=W^{[1]}x+b^{[1]} & \ \ \sigma^{[1]}(z^{[1]})=a^{[1]}\\ z^{[2]}=W^{[2]}a^{[1]}+b^{[2]} & \ \ \ \ \ \sigma^{[2]}(z^{[2]})=a^{[2]}=\hat{y} \end{array}\]

\(b^{[1]}\) is the column vector of the four bias parameters shown above. \(z^{[1]}\) is a column vector of the four inactivated neurons. When you apply an activation function to a matrix or vector, you apply it element-wise. \(W^{[1]}\) is the matrix formed by stacking the four row vectors:

\[W^{[1]}=\left[\begin{array}{c} w_{1}^{[1]T}\\ w_{2}^{[1]T}\\ w_{3}^{[1]T}\\ w_{4}^{[1]T} \end{array}\right]\]

So if you have one sample, you can go through the above forward propagation process to calculate the output \(\hat{y}\) for that sample. If you have \(m\) training samples, you need to repeat this process for each of the \(m\) samples. We use the superscript (i) to denote a quantity associated with the \(i^{th}\) sample. You need to do the same calculation for all \(m\) samples.

For i = 1 to m, do:

\[\begin{array}{cc} z^{[1](i)}=W^{[1]}x^{(i)}+b^{[1]} & \ \ \sigma^{[1]}(z^{[1](i)})=a^{[1](i)}\\ z^{[2](i)}=W^{[2]}a^{[1](i)}+b^{[2]} & \ \ \ \ \ \sigma^{[2]}(z^{[2](i)})=a^{[2](i)}=\hat{y}^{(i)} \end{array}\]

Recall that we defined the matrix X to be our training samples stacked up as column vectors in equation (12.1). We do a similar thing here and stack the vectors with superscript (i) together across the \(m\) samples. This way, the neural network computes the outputs for all the samples at the same time:

\[\begin{array}{cc} Z^{[1]}=W^{[1]}X+b^{[1]} & \ \ \sigma^{[1]}(Z^{[1]})=A^{[1]}\\ Z^{[2]}=W^{[2]}A^{[1]}+b^{[2]} & \ \ \ \ \ \sigma^{[2]}(Z^{[2]})=A^{[2]}=\hat{Y} \end{array}\]

where \[X=\left[\begin{array}{cccc} | & | & & |\\ x^{(1)} & x^{(2)} & \cdots & x^{(m)}\\ | & | & & | \end{array}\right],\]

\[A^{[l]}=\left[\begin{array}{cccc} | & | & & |\\ a^{[l](1)} & a^{[l](2)} & \cdots & a^{[l](m)}\\ | & | & & | \end{array}\right]_{l=1\ or\ 2},\]

\[Z^{[l]}=\left[\begin{array}{cccc} | & | & & |\\ z^{[l](1)} & z^{[l](2)} & \cdots & z^{[l](m)}\\ | & | & & | \end{array}\right]_{l=1\ or\ 2}\]
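As a sanity check on the dimensions, here is a minimal R sketch of this vectorized forward propagation for the network in figure 12.2 (3 input features, 4 hidden neurons, 1 output neuron); the random weights and the sample size are purely illustrative:

```r
set.seed(1)
sigmoid <- function(z) 1 / (1 + exp(-z))

n_x <- 3; n_1 <- 4; m <- 5                    # input features, hidden neurons, samples
X  <- matrix(rnorm(n_x * m), nrow = n_x)      # n_x x m input matrix
W1 <- matrix(rnorm(n_1 * n_x), nrow = n_1)    # 4 x 3 weight matrix for layer 1
b1 <- rep(0, n_1)                             # bias vector for layer 1
W2 <- matrix(rnorm(n_1), nrow = 1)            # 1 x 4 weight matrix for layer 2
b2 <- 0

Z1 <- W1 %*% X + b1        # 4 x m inactivated neurons
A1 <- sigmoid(Z1)          # 4 x m activated neurons
Z2 <- W2 %*% A1 + b2       # 1 x m inactivated output
Y_hat <- sigmoid(Z2)       # predictions for all m samples at once
```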

You can add layers like this to get a deeper neural network as shown in the bottom right of figure 12.1.

When building a neural network with many layers, one of the choices you get to make is the activation function to use in the hidden layers and the output layer. So far, we have only seen the sigmoid activation function, but there are other choices. Intermediate layers usually use a different activation function than the output layer. Let's look at some of the common options in the next section.

12.2.4 Activation Function

  • Sigmoid and Softmax Function

We have used the sigmoid (or logistic) activation function. The function is S-shaped with an output value between 0 and 1, so it is used as the output layer activation function to predict a probability when the response \(y\) is binary. However, it is rarely used as an intermediate layer activation function. One of the main reasons is that when \(z\) is away from 0, the derivative of the function drops quickly, which slows down the optimization process through gradient descent. Even though its differentiability provides some convenience, the vanishing slope can cause a neural network to get stuck during training.


FIGURE 12.3: Sigmoid Function

When the output has more than two categories, people use the softmax function as the output layer activation function.

\[\begin{equation} f_i(\mathbf{z}) = \frac{e^{z_i}}{\Sigma_{j=1}^{J} e^{z_j} } \tag{12.3} \end{equation}\]

where \(\mathbf{z}\) is a vector.
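A minimal R sketch of the two functions; the softmax version subtracts max(z) before exponentiating for numerical stability, which does not change the result:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

softmax <- function(z) {
  e <- exp(z - max(z))    # subtract max(z) for numerical stability
  e / sum(e)
}

softmax(c(2, 1, 0.1))     # class probabilities that sum to 1
```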

  • Hyperbolic Tangent Function (tanh)

Another activation function with a similar S-shape is the hyperbolic tangent function. It works better than the sigmoid function as an intermediate layer activation function.2

\[\begin{equation} tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \tag{12.4} \end{equation}\]


FIGURE 12.4: Hyperbolic Tangent Function

The tanh function crosses the point (0, 0), and the value of the function is between -1 and 1, which makes the mean of the activated neurons closer to 0. The sigmoid function doesn't have that property. When you preprocess the training input data, you sometimes center the data so that the mean is 0. The tanh function is kind of doing that data processing for you, which makes learning for the next layer a little easier. This activation function is used a lot in recurrent neural networks, where you want to polarize the results.

  • Rectified Linear Unit (ReLU) Function

The most popular activation function is the Rectified Linear Unit (ReLU) function. It is a piecewise linear function, also called a half-rectified function:

\[\begin{equation} R(z) = max(0, z) \tag{12.5} \end{equation}\]

The derivative is 1 when \(z\) is positive and 0 when \(z\) is negative. You can define the derivative as either 0 or 1 when \(z\) is 0. When you implement this, it is unlikely that \(z\) equals exactly 0, even though it can be very close to 0.


FIGURE 12.5: Rectified Linear Unit Function

The advantage of ReLU is that when \(z\) is positive, the derivative doesn't vanish as \(z\) gets larger, so it leads to faster computation than sigmoid or tanh. It is non-linear with an unconstrained response. However, the disadvantage is that when \(z\) is negative, the derivative is 0, so it may not map the negative values appropriately. In practice, this doesn't cause too much trouble, but there is another version of ReLU called leaky ReLU that attempts to solve the dying ReLU problem. The leaky ReLU is

\[R(z)_{Leaky}=\begin{cases} z & z\geq0\\ az & z<0 \end{cases}\]

Instead of being 0 when \(z\) is negative, it adds a slight slope such as \(a=0.01\), as shown in figure 12.6 (can you see the leaky part there? :)


FIGURE 12.6: Leaky Rectified Linear Unit Function
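Both functions are one-liners in R; a sketch, using the commonly assumed default slope a = 0.01 for the leaky version:

```r
relu <- function(z) pmax(0, z)

leaky_relu <- function(z, a = 0.01) ifelse(z >= 0, z, a * z)

z <- seq(-3, 3, by = 1)
rbind(relu = relu(z), leaky = leaky_relu(z))   # compare the two on a few values
```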

You may notice that all these activation functions are non-linear. Since the composition of two linear functions is still linear, using a linear activation function doesn't help to capture more information. That is why you don't see people use a linear activation function in the intermediate layers. One exception is when the output \(y\) is continuous; then you may use a linear activation function at the output layer. To sum up, for intermediate layers:

  • ReLU is usually a good choice. If you don't know what to choose, then start with ReLU. Leaky ReLU usually works better than ReLU, but it is not used as much in practice. Either one works fine. Also, people usually use \(a=0.01\) as the slope for leaky ReLU. You can try different parameters, but most people use \(a = 0.01\).
  • tanh is used sometimes, especially in recurrent neural networks. But you almost never see people use the sigmoid function as an intermediate layer activation function.

For the output layer:

  • For binary classification, use the sigmoid function with binary cross-entropy as the loss function.
  • When there are multiple classes, use the softmax function with categorical cross-entropy as the loss function.
  • When the response is continuous, use the identity function (i.e. y = x).

12.2.5 Deal with Overfitting

The biggest problem for deep learning is overfitting.

12.2.5.1 Regularization

For logistic regression,

\[\underset{w,b}{min}J(w,b)= \frac{1}{m} \Sigma_{i=1}^{m}L(\hat{y}^{(i)}, y^{(i)}) + penalty\]

Common penalties are L1 or L2 as follows:

\[L_2\ penalty=\frac{\lambda}{2m}\parallel w \parallel_2^2 = \frac{\lambda}{2m}\Sigma_{i=1}^{n_x}w_i^2\]

\[L_1\ penalty = \frac{\lambda}{m}\Sigma_{i=1}^{n_x}|w_i|\]

For a neural network,

\[J(w^{[1]},b^{[1]},\dots,w^{[L]},b^{[L]})=\frac{1}{m}\Sigma_{i=1}^{m}L(\hat{y}^{(i)},y^{(i)}) + \frac{\lambda}{2m}\Sigma_{l=1}^{L} \parallel w^{[l]} \parallel^2_F\]

where

\[\parallel w^{[l]} \parallel^2_F = \Sigma_{i=1}^{n^{[l]}}\Sigma_{j=1}^{n^{[l-1]}} (w^{[l]}_{ij})^2\]

Here \(n^{[l]}\) and \(n^{[l-1]}\) are the numbers of neurons in layers \(l\) and \(l-1\). Many people call it the "Frobenius norm" instead of the L2-norm.
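In the keras R package, this penalty can be attached to a layer through its kernel_regularizer argument. A minimal sketch, where the layer sizes and lambda = 0.01 are arbitrary illustrative choices:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(784),
              kernel_regularizer = regularizer_l2(l = 0.01)) %>%   # L2 / Frobenius penalty
  layer_dense(units = 10, activation = "softmax")
```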

12.2.5.2 Dropout
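Dropout randomly sets a proportion of the neurons in a layer to zero at each training iteration, so the network cannot rely too heavily on any single neuron; it is the overfitting remedy we use for the MNIST model in section 12.2.7. A minimal keras sketch, with an assumed dropout rate of 25%:

```r
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(784)) %>%
  layer_dropout(rate = 0.25) %>%    # randomly drop 25% of this layer's nodes during training
  layer_dense(units = 10, activation = "softmax")
```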

12.2.6 Optimization

12.2.6.1 Batch, Mini-batch, Stochastic Gradient Descent

In mini-batch gradient descent, the \(m\) training samples are split into smaller mini-batches (here of size 1,000):

\[\underset{(n_{x},\,m)}{x}=[\underbrace{x^{(1)},x^{(2)},\cdots,x^{(1000)}}_{mini\text{-}batch\ 1}\mid\cdots\mid\cdots\, x^{(m)}]\]

\[\underset{(1,\,m)}{y}=[\underbrace{y^{(1)},y^{(2)},\cdots,y^{(1000)}}_{mini\text{-}batch\ 1}\mid\cdots\mid\cdots\, y^{(m)}]\]

  • Mini-batch size = m: batch gradient descent; each iteration takes too long
  • Mini-batch size = 1: stochastic gradient descent; you lose the speed-up from vectorization
  • Mini-batch size in between: mini-batch gradient descent; you make progress without processing the whole training set. Typical batch sizes are \(2^6=64\), \(2^7=128\), \(2^8=256\), and \(2^9=512\) (see the sketch below)
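As a sketch of the bookkeeping behind mini-batch gradient descent, the base-R code below shuffles the sample indices and splits them into batches of 64; in keras this is handled for you by the batch_size argument of fit(), shown later in this chapter:

```r
m <- 60000                 # number of training samples (as in MNIST)
batch_size <- 64

idx <- sample(m)                                             # shuffle the samples
batches <- split(idx, ceiling(seq_along(idx) / batch_size))  # list of index vectors
length(batches)            # number of mini-batches per epoch
# within one epoch, loop over `batches` and do one gradient update per mini-batch
```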

12.2.6.2 Optimization Algorithms

In the history of deep learning, researchers proposed different optimization algorithms and showed that they worked well in specific scenarios. But these optimization algorithms didn't generalize well to a wide range of neural networks, so you will need to try different optimizers in your application. We will introduce three commonly used optimizers here.

Exponentially Weighted Averages
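The exponentially weighted average of a sequence \(\theta_1, \theta_2, \dots\) is \(v_t = \beta v_{t-1} + (1-\beta)\theta_t\), and it is the building block behind momentum, RMSprop, and Adam. A small R sketch of the recursion, with the commonly used \(\beta = 0.9\) (roughly an average over the last 10 values):

```r
# exponentially weighted average: v_t = beta * v_{t-1} + (1 - beta) * theta_t
ewa <- function(theta, beta = 0.9) {
  v <- numeric(length(theta))
  v_prev <- 0
  for (t in seq_along(theta)) {
    v_prev <- beta * v_prev + (1 - beta) * theta[t]
    v[t] <- v_prev
  }
  v
}

ewa(c(10, 12, 9, 11, 13))   # smoothed version of the input sequence
```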

12.2.7 Image Recognition Using FFNN

In this section, we will walk through a toy example of an image classification problem using the keras package. We use R in this section to illustrate the process and also provide a Python notebook on the book website. Please check the keras R package website for the most recent development. We are using the Databricks Community Edition with the following considerations:

  • Minimal language barrier in coding for most users
  • Zero setup, saving time by using a cloud environment
  • Getting familiar with the current trend of cloud computing in corporate settings

Refer to section 4.3 for how to set up an account, create a notebook (R or Python) and start a cluster.

What is an image as data? You can consider a digital image as a set of points in 2-d or 3-d space, where each point is a pixel with a value between 0 and 255. Figure 12.7 shows an example of a grayscale image: a set of pixels in 2-d space, each with a value between 0 and 255. You can process the image as a 2-d array input if you use a Convolutional Neural Network (CNN). Or you can vectorize the array as the input for an FFNN, as shown in the figure.


FIGURE 12.7: Grayscale image is a set of pixels on 2-d space. Each pixel has a value range from 0 to 255.

A color image is a set of pixels in 3-d space, and each pixel has a value between 0 and 255. There are three 2-d panels which represent the red, green, and blue channels. Similarly, you can process the image as a 3-d array, or you can vectorize the array as shown in figure 12.8.


FIGURE 12.8: Color image is a set of pixels on 3-d space. Each pixel has a value range from 0 to 255.

Let's look at how to use the keras R package for a toy deep learning example with the handwritten digit image dataset (i.e. MNIST). keras has many package dependencies, so it takes a few minutes to install. Be patient! In a production cloud environment such as the paid version of Databricks, you can save what you have and resume from where you left off.

As keras is just an interface to popular deep learning frameworks, we have to install a deep learning backend. The default and recommended backend is TensorFlow. Calling install_keras() installs all the needed dependencies for TensorFlow.
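A sketch of the installation steps (run once per environment):

```r
install.packages("keras")   # install the keras R package and its dependencies
library(keras)              # load the package
install_keras()             # install the TensorFlow backend and its Python dependencies
```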

Now we are all set to explore deep learning! It is as simple as three lines of R code, but there is quite a lot going on behind the scenes. If you are using a cloud environment, you do not need to worry about this behind-the-scenes setup and maintenance.

We will use the widely used MNIST handwritten digit image dataset. More information about the dataset and benchmark results from various machine learning methods can be found at http://yann.lecun.com/exdb/mnist/ and https://en.wikipedia.org/wiki/MNIST_database.

This dataset is already included in the keras/TensorFlow installation and we can simply load the dataset as described in the following cell. It takes less than a minute to load the dataset.
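A sketch of loading the data; dataset_mnist() returns a list with train and test components:

```r
library(keras)
mnist <- dataset_mnist()    # load (and, if needed, download) the MNIST data
```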

The data structure of the MNIST dataset is straightforward and well prepared for R. It has two pieces:

  1. training set: x (i.e. features): 60000x28x28 tensor which corresponds to 60000 28x28 pixel greyscale images (i.e. all the values are integers between 0 and 255 in each 28x28 matrix), and y (i.e. responses): a length 60000 vector which contains the corresponding digits with integer values between 0 and 9.

  2. testing set: same as the training set, but with only 10000 images and responses. The detailed structure for the dataset can be seen with str(mnist) below.
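A one-line check of that structure:

```r
str(mnist)    # a list with $train and $test, each containing $x (images) and $y (digits)
```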

Now we prepare the features (x) and the response variable (y) for both the training and testing datasets, and we can check the structure of x_train and y_train using the str() function.
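A sketch of this preparation step, splitting the list returned by dataset_mnist() into features and responses:

```r
x_train <- mnist$train$x    # 60000 x 28 x 28 array of pixel values
y_train <- mnist$train$y    # length-60000 vector of digits 0-9
x_test  <- mnist$test$x     # 10000 x 28 x 28 array
y_test  <- mnist$test$y     # length-10000 vector

str(x_train)
str(y_train)
```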

Now let's plot a chosen 28x28 matrix as an image using R's image() function. The way image() shows a matrix is rotated 90 degrees from the matrix representation, so there are additional steps to rearrange the matrix such that image() shows it in the actual orientation.
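A hedged sketch of such a plotting helper, assuming x_train and y_train are the arrays prepared above; transposing the matrix and reversing its columns undoes the rotation of image():

```r
plot_digit <- function(img) {
  # img is a 28 x 28 matrix; transpose and flip the columns so the digit shows upright
  image(t(img)[, nrow(img):1], col = gray.colors(256, start = 0, end = 1), axes = FALSE)
}

i <- 1                      # pick any training image
plot_digit(x_train[i, , ])
y_train[i]                  # the corresponding label
```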

Here is the original 28x28 matrix for the above image:

There are multiple deep learning methods to solve the handwritten digit problem, and we will start with a simple and generic one, the feedforward neural network (FFNN). An FFNN contains a few fully connected layers, and information flows from the front layers to the back layers without any feedback loop from a back layer to a front layer. It is the most common deep learning model to start with.

12.2.7.1 Data preprocessing

In this section, we will walk through the needed data preprocessing steps. For the MNIST dataset that we just loaded, some preprocessing is already done, so we have relatively "clean" data. But before we feed the data into the FFNN, we still need some additional preparation.

First, for each digit, we have a scalar response and a 28x28 integer matrix with values between 0 and 255. To use the out-of-the-box DNN functions, the features for each response need to be arranged in a single row. For an image in the MNIST dataset, the input for one response y is a 28x28 matrix rather than a single row of many columns, so we need to convert the 28x28 matrix into a single row by appending each row of the matrix after the previous one using a reshape function.

In addition, we also need to scale all features to be in the (0, 1) or (-1, 1) range, or close to it. Scaling or normalizing every feature improves numerical stability in the optimization procedure, as there are a lot of parameters to be optimized.

We first reshape the 28x28 image for each digit (i.e. each row) into 784 columns (i.e. features), and then rescale the values to be between 0 and 1 by dividing the original pixel values by 255, as described in the cell below.
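A sketch of that reshaping and rescaling, using array_reshape() from the keras package (which flattens each image row by row) and a simple division by 255:

```r
# flatten each 28 x 28 image into a single row of 784 features
x_train <- array_reshape(x_train, c(nrow(x_train), 28 * 28))
x_test  <- array_reshape(x_test,  c(nrow(x_test),  28 * 28))

# rescale pixel values from [0, 255] to [0, 1]
x_train <- x_train / 255
x_test  <- x_test / 255
```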

And here is the structure of the reshaped and rescaled features for training and testing dataset. Now for each digit, there are 784 columns of features.

In this example, though the response variable is an integer (i.e. the corresponding digits for an image), there is no order or rank for these integers and they are just an indication of one of the 10 categories. So we also convert the response variable y to be categorical.
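A sketch of the conversion using keras's to_categorical(), which turns each digit into a 10-column one-hot indicator:

```r
y_train <- to_categorical(y_train, num_classes = 10)
y_test  <- to_categorical(y_test,  num_classes = 10)
```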

12.2.7.2 Fit model

Now we are ready to fit the model. It is straightforward to build a deep neural network using keras. For this example, the number of input features is 784 (i.e. the scaled value of each pixel in the 28x28 image) and the number of classes for the output is 10 (i.e. one of the ten categories). So the input size for the first layer is 784 and the output size for the last layer is 10. We can add any number of compatible layers in between.

In keras, it is easy to define a DNN model: (1) use keras_model_sequential() to initiate a model placeholder to which all model structures are attached, and (2) add layers in sequence by calling the layer_dense() function; you can add an arbitrary number of layers through repeated calls of layer_dense(). For a dense layer, every node from the previous layer is connected with each and every node of the current layer. In the layer_dense() function, we define how many nodes are in that layer through the units parameter and the activation function through the activation parameter. For the first layer, we also need to define the input features' dimension through the input_shape parameter. For our preprocessed MNIST dataset, there are 784 columns in the input data. A common way to reduce overfitting is the dropout method, which randomly drops a proportion of the nodes in a layer. We can define the dropout proportion through the layer_dropout() function, called immediately after the layer_dense() function.
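Putting these pieces together, here is a sketch of such a model with layers of 256, 128, 64, and 10 nodes; the dropout rates are illustrative assumptions rather than prescribed values:

```r
dnn_model <- keras_model_sequential() %>%
  layer_dense(units = 256, activation = "relu", input_shape = c(784)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = "softmax")

summary(dnn_model)
```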

The above dnn_model has 4 layers: the first layer has 256 nodes, the 2nd layer 128 nodes, the 3rd layer 64 nodes, and the last layer 10 nodes. The activation function for the first 3 layers is relu, and the activation function for the last layer is softmax, which is typical for classification problems. The model details can be obtained through the summary() function. The number of parameters of each layer can be calculated as: (number of input features + 1) x (number of nodes in the layer). For example, the first layer has (784+1)x256=200960 parameters and the 2nd layer has (256+1)x128=32896 parameters. Please note that dropout only randomly drops a certain proportion of nodes for each batch; it does not reduce the number of parameters in the model. The dnn_model we just defined has 242762 parameters to be estimated in total.

Once a model is defined, we need to compile the model with a few other hyper-parameters: (1) the loss function, (2) the optimizer, and (3) the performance metrics. For multi-class classification problems, people usually use the categorical_crossentropy loss function and optimizer_rmsprop() as the optimizer, which adapts the learning rate during mini-batch gradient descent.
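A sketch of the compile step with those three ingredients:

```r
dnn_model %>% compile(
  loss = "categorical_crossentropy",   # loss for multi-class classification
  optimizer = optimizer_rmsprop(),     # RMSprop optimizer
  metrics = c("accuracy")              # track accuracy during training
)
```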

Now we can feed the data (x and y) into the neural network that we just built to estimate all the parameters in the model. Here we define three hyperparameters for this model: epochs, batch_size, and validation_split. It just takes a couple of minutes to finish.
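A sketch of the fitting step; the particular values of the three hyperparameters are illustrative assumptions:

```r
dnn_history <- dnn_model %>% fit(
  x_train, y_train,
  epochs = 15,             # number of passes over the training data (assumed value)
  batch_size = 128,        # mini-batch size (assumed value)
  validation_split = 0.2   # hold out 20% of the training data for validation
)
```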

There is some useful information stored in the output object dnn_history, and the details can be shown using str(). We can plot the training and validation accuracy and loss as functions of the epoch by simply calling plot(dnn_history).

12.2.7.3 Prediction

Let's check a few misclassified images. The misclassified images can be found using the following code, and we can plot them to see whether a human could have read them correctly.
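A hedged sketch of that check, using base R's max.col() on the predicted class probabilities and the plot_digit() helper sketched earlier:

```r
pred_prob <- predict(dnn_model, x_test)       # one row of 10 class probabilities per image
dnn_pred <- max.col(pred_prob) - 1            # predicted digit (columns correspond to 0-9)

miss_idx <- which(dnn_pred != mnist$test$y)   # indices of misclassified test images
length(miss_idx)                              # how many were misclassified

par(mfrow = c(2, 2))                          # show the first four in a 2 x 2 grid
for (i in head(miss_idx, 4)) plot_digit(mnist$test$x[i, , ])
```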

This completes our simple tutorial on using deep neural networks for handwritten digit recognition with the MNIST dataset. We illustrated how to reshape the original data into the right format and scale it; how to define a deep neural network with an arbitrary number of layers; how to choose the activation function, optimizer, and loss function; how to use dropout to limit overfitting; how to set hyperparameters; and how to fit the model and use a fitted model to predict. Finally, we illustrated how to plot the accuracy/loss as functions of the epoch. It shows the end-to-end cycle of fitting a deep neural network model.

On the other hand, images can be better handled by a Convolutional Neural Network (CNN), and we are going to walk through the exact same problem using a CNN in the next section.


  1. "The tanh function is almost always strictly superior." -- Andrew Ng, from his Coursera course "Neural Networks and Deep Learning".