# A Quick Overview on Implementing Neural Networks Using TensorFlow

Neural networks are modeled on the way neurons in the human brain function. It's easy for us humans to recognize numbers, shapes, images, and language as meaningful pieces of information. To recognize a number, we interpret its lines, loops, and their arrangement. Similarly, our brain can pick out patterns of speech and music from other sounds.

Can a computer do the same? Can it recognize handwritten text, identify objects in an image, and understand speech? What are the applications of these technologies? In machine learning, machines simulate the way the human brain learns, and they accomplish this with the help of large amounts of data.

Deep learning neural networks function similarly to the neurons in the human brain. But how will a neural network tell apart a 3 and an 8, which look similar, or a dog and a lion? To achieve this, neural networks are trained on data sets such as MNIST, which consists of 60,000 training examples of 28×28 handwritten digit images.

Implementing Neural Networks Using TensorFlow:

We will use TensorFlow to teach the neural network to identify these digits. TensorFlow is an open-source library, originally developed by Google, for building and training neural networks. To install TensorFlow, follow the official installation instructions. You will also need Python, as Python is the best-supported language in the TensorFlow library, and numpy, which is useful for handling data in large multi-dimensional arrays.

After installing all of the above, import the libraries with:

import tensorflow as tf

import numpy as np

from tensorflow.examples.tutorials.mnist import input_data

With TensorFlow, let’s now perform a simple calculation (addition) of two numbers:

# import tensorflow
import tensorflow as tf

# build the computational graph: two inputs and the node that adds them
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
addition = tf.add(a, b)

# initialize variables (none are used here, but this is the usual first step)
init = tf.global_variables_initializer()

# create a session and run the graph
with tf.Session() as sess:
    sess.run(init)
    print("Addition: %i" % sess.run(addition, feed_dict={a: 1, b: 2}))

# the with-block closes the session automatically, so no sess.close() is needed

Now, let’s implement a Neural Network using TensorFlow:

To make a Neural Network using MNIST data of 28×28 images, load the data with:

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
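Because we pass one_hot=True, each label is stored not as a digit but as a ten-element vector with a 1 at the digit's index. A quick numpy sketch (the digit value is illustrative):

```python
import numpy as np

# one-hot encode the digit 3 into a 10-element vector
digit = 3
one_hot = np.eye(10)[digit]
print(one_hot)  # 1.0 at index 3, zeros everywhere else
```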

Setting Up Hyper-Parameters:

Set up the hyper-parameters with:

training_epochs = 10

batch_size = 100

learning_rate = 0.1

One training epoch marks one complete pass through the whole training data. The ideal number of epochs depends on whether you are working with a simple or a complex data set.

Batch size is the number of examples that will be passed in one iteration.

The learning rate scales how much the weights change on each update, steering the network toward a lower cost (loss). It can be set higher or lower depending on the data's complexity.
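To make these numbers concrete: the MNIST training split loaded by read_data_sets has 55,000 examples (5,000 are held out for validation), so with a batch size of 100 each epoch runs 550 iterations, and ten epochs perform 5,500 weight updates:

```python
train_examples = 55000   # size of the MNIST training split
batch_size = 100
training_epochs = 10

iterations_per_epoch = train_examples // batch_size
total_updates = iterations_per_epoch * training_epochs
print(iterations_per_epoch, total_updates)  # 550 5500
```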

Setting Up Placeholders:

Now, set up the placeholder to allow input of the training data:

x = tf.placeholder(tf.float32, [None, 784])

y = tf.placeholder(tf.float32, [None, 10])

A placeholder is similar to a variable that can take different values each time; it lets you feed input data into the graph at runtime.

784 = 28×28 is the number of pixels, i.e., the number of input units. 10 is the number of output units, as there are ten digits (0-9) in the MNIST data.
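The 784 comes from flattening each 28×28 image into a single row vector, for example with numpy:

```python
import numpy as np

# a stand-in 28x28 grayscale image (all zeros, values are illustrative)
image = np.zeros((28, 28), dtype=np.float32)

# flatten it into the 784-element row the placeholder expects
flat = image.reshape(1, 784)
print(flat.shape)  # (1, 784)
```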

Weights and Biases

A weight is a value that multiplies the input passing through a connection (synapse). A bias is added to the weighted sum, shifting the output so the network can fit the data better. Set up the weight and bias variables between the input and the hidden layer with:

W1 = tf.Variable(tf.random_normal([784, 600], mean=0, stddev=0.01), name='W1')

b1 = tf.Variable(tf.random_normal([600]), name='b1')

Set up also the weights and biases between the hidden layer and the output layer:

W2 = tf.Variable(tf.random_normal([600, 10], mean=0, stddev=0.01), name='W2')

b2 = tf.Variable(tf.random_normal([10]), name='b2')

We will have 784 nodes (units) in the input layer, 600 in the hidden layer, and 10 in the output layer. The weights are initialized with a mean of 0 and a standard deviation of 0.01.
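A sketch of how these shapes line up, using numpy stand-ins for one batch of inputs (the sizes mirror the variables above):

```python
import numpy as np

batch = np.random.randn(100, 784).astype(np.float32)      # one batch of flattened images
W1 = (np.random.randn(784, 600) * 0.01).astype(np.float32)  # input -> hidden weights
b1 = np.zeros(600, dtype=np.float32)                        # hidden-layer biases

# (100, 784) x (784, 600) -> (100, 600): one hidden activation per example
hidden = batch.dot(W1) + b1
print(hidden.shape)  # (100, 600)
```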

Apply an Activation Function

Then, we apply an activation function to calculate the output of the hidden layer:

hidden_out = tf.add(tf.matmul(x, W1), b1)

hidden_out = tf.nn.relu(hidden_out)

We’ll use a softmax activation for the output layer:

y_ = tf.nn.softmax(tf.add(tf.matmul(hidden_out, W2), b2))
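In plain numpy, the two activations look like this (a sketch of the math, not TensorFlow's implementation): ReLU zeroes out negative values, and softmax turns the output scores into probabilities that sum to 1.

```python
import numpy as np

def relu(z):
    # keep positive values, zero out negatives
    return np.maximum(z, 0)

def softmax(z):
    # subtract the row max for numerical stability, then normalize
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[2.0, -1.0, 0.5]])
probs = softmax(relu(scores))
print(probs)  # a probability distribution over the three classes
```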

Cost Function and Backpropagation

After this, we need to set up a cost function for backpropagation to minimize the cost:

y_clipped = tf.clip_by_value(y_, 1e-10, 0.9999999)

cross_entropy = -tf.reduce_mean(tf.reduce_sum(y * tf.log(y_clipped)
                                + (1 - y) * tf.log(1 - y_clipped), axis=1))
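The same clipped cross-entropy can be sketched in numpy to see what it computes for a single example (the label and prediction values are illustrative):

```python
import numpy as np

y_true = np.array([[0., 1., 0.]])      # one-hot label
y_pred = np.array([[0.1, 0.8, 0.1]])   # softmax output

# clip to avoid log(0), then average the per-example cross-entropy
y_clipped = np.clip(y_pred, 1e-10, 0.9999999)
cross_entropy = -np.mean(np.sum(
    y_true * np.log(y_clipped) + (1 - y_true) * np.log(1 - y_clipped), axis=1))
print(cross_entropy)  # about 0.434
```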

The cost function gets completed with:

optimiser = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)

We have used TensorFlow's GradientDescentOptimizer for the backpropagation step that minimizes the cost. You can also use many of the other optimizers TensorFlow provides.
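Under the hood, each gradient-descent step applies the update w ← w − learning_rate × gradient. A one-line numpy sketch (the weight and gradient values are illustrative):

```python
import numpy as np

learning_rate = 0.1
w = np.array([0.5, -0.3])        # illustrative weights
grad = np.array([0.2, -0.1])     # illustrative gradient of the cost w.r.t. w

# the update rule gradient descent applies on every step
w = w - learning_rate * grad
print(w)  # w is now [0.48, -0.29]
```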

We also need to initialize all the variables using:

init_op = tf.global_variables_initializer()

Calculating Accuracy

Also set up an accuracy calculation that compares the predictions against the true labels to arrive at the fraction predicted correctly:

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name=”Accuracy”)
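What this accuracy graph computes, sketched in numpy: take the argmax of each one-hot label and each prediction, compare them, and average the matches (the values below are illustrative):

```python
import numpy as np

labels = np.array([[0, 1, 0], [1, 0, 0]])               # one-hot ground truth
preds = np.array([[0.1, 0.8, 0.1], [0.2, 0.3, 0.5]])    # softmax outputs

# a prediction is correct when its largest score is at the label's index
correct = np.equal(np.argmax(labels, 1), np.argmax(preds, 1))
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.5: one of the two predictions matches
```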

Running the Session

Now, with everything ready, we are going to run the session with:

with tf.Session() as sess:
    # initialise the variables
    sess.run(init_op)
    total_batch = int(len(mnist.train.labels) / batch_size)
    for epoch in range(training_epochs):
        avg_cost = 0
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size=batch_size)
            # run one optimization step and fetch the cost for this batch
            _, c = sess.run([optimiser, cross_entropy],
                            feed_dict={x: batch_x, y: batch_y})
            avg_cost += c / total_batch
        print("Epoch:", (epoch + 1), "cost =", "{:.3f}".format(avg_cost))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))

Finally, after running the session, we arrive at the average cost, accuracy and output as below:

Epoch: 1 cost = 0.586

Epoch: 2 cost = 0.213

Epoch: 3 cost = 0.150

Epoch: 4 cost = 0.113

Epoch: 5 cost = 0.094

Epoch: 6 cost = 0.073

Epoch: 7 cost = 0.058

Epoch: 8 cost = 0.045

Epoch: 9 cost = 0.036

Epoch: 10 cost = 0.027

Training complete!

0.9787

So, this is how you implement and train a neural network with TensorFlow. As a quick recap: placeholders let you feed in data, the hidden layer transforms it, and the output layer produces the prediction. Training adjusts the weights and biases to increase accuracy and minimize cost.

Visit us at: www.twilightitsolutions.com
