In my experiment, I train a multilayer CNN for street view house number recognition and check its accuracy on the test data. The coding is done in Python using TensorFlow, a powerful library for implementing and training deep neural networks. The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions; a tensor's rank is its number of dimensions. Along with TensorFlow, I used some other libraries such as NumPy, Matplotlib, and SciPy.
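For example, the rank of a tensor can be read off directly from its nesting depth:

```python
import tensorflow as tf

scalar = tf.constant(3.0)               # rank 0: a single value
vector = tf.constant([1.0, 2.0, 3.0])   # rank 1: shape (3,)
matrix = tf.constant([[1.0], [2.0]])    # rank 2: shape (2, 1)
```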

Firstly, as I have technical resource limitations, I perform my analysis using only the train and test datasets and omit the extra dataset, which is 2.7 GB. Secondly, to make the analysis simpler, I find and delete all data points that have more than 5 digits in the image, and I randomly shuffle the valid dataset. For the implementation, I have used the pickle file svhn_multi, which I created by preprocessing the data from the original SVHN dataset. I then used this pickle file to train a 7-layer Convolutional Neural Network. Finally, I used the test data to check the accuracy of the trained model at detecting numbers from street house number images.
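A minimal sketch of this preprocessing follows, assuming the pickle stores NumPy arrays under keys such as train_dataset and train_labels, with label rows padded to a fixed length and a sentinel value of 10 marking empty digit positions; these key names and conventions are assumptions, not taken from the original code.

```python
import pickle

import numpy as np

# Load the preprocessed SVHN data (key names are assumptions).
with open('svhn_multi.pickle', 'rb') as f:
    data = pickle.load(f)

train_dataset, train_labels = data['train_dataset'], data['train_labels']
test_dataset, test_labels = data['test_dataset'], data['test_labels']

# Keep only samples whose label sequence contains at most 5 digits;
# label rows are assumed padded with the sentinel value 10 ("no digit").
digit_counts = np.sum(train_labels != 10, axis=1)
keep = digit_counts <= 5
train_dataset, train_labels = train_dataset[keep], train_labels[keep]

# Shuffle the training data before batching.
perm = np.random.permutation(len(train_dataset))
train_dataset, train_labels = train_dataset[perm], train_labels[perm]
```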

At the very beginning of my experiment, the first convolution layer used 16 feature maps with 5×5 filters, producing a 28x28x16 output. A ReLU layer is added after each convolution layer to bring more non-linearity into the decision-making process. After the first sub-sampling, the output size decreases to 14x14x16. The second convolution has 32 feature maps with 5×5 filters and produces a 10x10x32 output. At this point, sub-sampling is applied a second time, shrinking the output size to 5x5x32.
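These two convolution/pooling stages can be sketched as follows in TensorFlow 1.x style. The 32x32 grayscale input and the 'VALID' convolutions are assumptions chosen so that 5×5 filters and 2×2 max-pooling reproduce the output shapes stated above:

```python
import tensorflow as tf  # TensorFlow 1.x API, matching the era of the experiment

# Batch of 32x32 grayscale crops (the input size is an assumption, see above).
images = tf.placeholder(tf.float32, shape=[None, 32, 32, 1])

# First convolution: 16 feature maps, 5x5 filters, Xavier-initialized weights.
w1 = tf.get_variable('w1', [5, 5, 1, 16],
                     initializer=tf.contrib.layers.xavier_initializer())
conv1 = tf.nn.relu(tf.nn.conv2d(images, w1, strides=[1, 1, 1, 1],
                                padding='VALID'))             # -> 28x28x16
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1],
                       strides=[1, 2, 2, 1], padding='SAME')  # -> 14x14x16

# Second convolution: 32 feature maps, 5x5 filters.
w2 = tf.get_variable('w2', [5, 5, 16, 32],
                     initializer=tf.contrib.layers.xavier_initializer())
conv2 = tf.nn.relu(tf.nn.conv2d(pool1, w2, strides=[1, 1, 1, 1],
                                padding='VALID'))             # -> 10x10x32
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1],
                       strides=[1, 2, 2, 1], padding='SAME')  # -> 5x5x32
```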

Finally, the third convolution has 2048 feature maps with the same filter size. It is worth mentioning that a stride size of 1 is used throughout my experiment, along with zero padding. During my experiment, I used the dropout technique to reduce overfitting. The last layer is a softmax regression layer. Weights are initialized randomly using Xavier initialization, which keeps the weights in the right range: it automatically scales the initialization based on the number of input and output neurons.

Now I train the network and log the accuracy, loss, and validation accuracy in steps of 500. Initially, I used a static learning rate of 0.01 but later switched to an exponentially decaying learning rate with an initial value of 0.05, which decays every 10000 steps with a base of 0.95. To minimize the loss, I used the Adagrad optimizer. When I reached a satisfactory accuracy level on the test dataset, I stopped the learning and saved the parameters in the cnn_multi checkpoint file. When detection needs to be performed, the model is loaded from this checkpoint without training again.
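A sketch of this training setup in TensorFlow 1.x, with a small fully connected softmax layer standing in for the real network's head (the 11 output classes, digits 0-9 plus a "no digit" class, are an assumption):

```python
import tensorflow as tf

# Stand-in for the flattened features feeding the softmax layer; the
# 5*5*32 size follows the pooled shapes above, and the 11 output classes
# (digits 0-9 plus "no digit") are an assumption.
features = tf.placeholder(tf.float32, shape=[None, 5 * 5 * 32])
labels = tf.placeholder(tf.int64, shape=[None])

w_fc = tf.get_variable('w_fc', [5 * 5 * 32, 11],
                       initializer=tf.contrib.layers.xavier_initializer())
logits = tf.matmul(features, w_fc)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))

# Exponential decay: start at 0.05, multiply by 0.95 every 10000 steps.
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    0.05, global_step, decay_steps=10000, decay_rate=0.95)

train_op = tf.train.AdagradOptimizer(learning_rate).minimize(
    loss, global_step=global_step)

# Save the trained parameters so detection can later restore them from the
# checkpoint without retraining.
saver = tf.train.Saver()
# ... inside a tf.Session, once accuracy is satisfactory:
# saver.save(session, 'cnn_multi')
```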

Initially, the model produced an accuracy of 89% with just 15000 steps. That is a great starting point, and certainly, after a few hours of training the accuracy would reach my benchmark of 90%. However, I added some simple improvements to further increase the accuracy within a small number of learning steps. First, I added a dropout layer after the third convolution layer, just before the fully connected layer. This allows the network to become more robust and prevents overfitting.
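This improvement amounts to a single extra op between the third convolution and the fully connected layer; the keep probability of 0.5 is an assumed value, and the conv3 placeholder stands in for the third convolution's output:

```python
import tensorflow as tf

# Stand-in for the third convolution's output (1x1x2048 follows from a 5x5
# 'VALID' convolution with 2048 feature maps over the 5x5x32 input above).
conv3 = tf.placeholder(tf.float32, shape=[None, 1, 1, 2048])

# Fed as 0.5 during training (an assumed value) and 1.0 at test time.
keep_prob = tf.placeholder(tf.float32)

dropped = tf.nn.dropout(conv3, keep_prob=keep_prob)
fc_input = tf.reshape(dropped, [-1, 2048])  # flatten for the fully connected layer
```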

Secondly, I introduced exponential decay for the learning rate instead of keeping it constant. This helps the network take bigger steps at first so that it learns fast, but over time, as we move closer to the global minimum, it takes smaller, noisier steps. With these changes, the model is now able to produce an accuracy of 92.9% on the test set with 15000 steps. Since there is a large training set and about 13068 images in the test set, there is a chance of further improvement if we train the model for a longer duration.