Hand Written Digits Recognizer

Overview

This is a model built with a neural network, specifically a Convolutional Neural Network (CNN). It was made with a pre-built dataset from the TensorFlow and Keras packages. There are other libraries that can be used for this purpose, one of which is PyTorch.

Table of contents:

  1. Importing Libraries

  2. Loading the data

  3. Making the model

  4. Compiling and training the model

  5. Evaluating the model

  6. Testing the model by doing predictions!!

  7. How can you try this model on your custom input?

                             

Importing Libraries

The modules used in creating this model are numpy, os, matplotlib, tensorflow, keras, and cv2.

import os
import cv2
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

Loading the data

MNIST, a built-in dataset from Keras, is used for this model.

mnist = tf.keras.datasets.mnist

(Image source: Kaggle.com)

The data is loaded in the form of NumPy arrays. Each image is 28x28 pixels in size; when we plot one with matplotlib, we get an image like the one above.

The data is split into training images, training labels, test images, and test labels.

(train_x,train_y),(test_x,test_y) = mnist.load_data()
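
As a quick sanity check, you can inspect the array shapes and plot one of the digits with matplotlib (a minimal sketch using the variable names from the line above):

print(train_x.shape, test_x.shape)     # (60000, 28, 28) and (10000, 28, 28)
plt.imshow(train_x[0], cmap='binary')  # show the first training digit
plt.show()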

Now, a colour image is usually split into three channels, but we don't need to extract any colour-based attributes from these images; our model will focus on the arcs and lines used in drawing each digit. By default, every image is presented to our model in the 0-255 range, i.e. the NumPy array holds values from 0 to 255 according to the activation of each pixel. Training on such large values makes our model take longer to learn. To tackle this, we normalise the array before extracting the features to feed our model, which then requires far less time to master the task. Once we've normalised our data, our model sees the image as:

Our image is now an array with values ranging from 0 to 1, which is a smart transformation to make before feeding it to our model. We then apply the same logic to the entire 60,000-image training set.

Before normalization:

After normalization:
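
The exact preprocessing code isn't reproduced here, but a minimal sketch along these lines produces the normalised arrays; the reshape that adds a channel axis and yields train_x_r (and a matching test_x_r) is an assumption on my part, chosen to line up with the fit call further below:

train_x = train_x / 255.0                    # scale pixel values from 0-255 down to 0-1
test_x = test_x / 255.0
train_x_r = train_x.reshape(-1, 28, 28, 1)   # add a channel axis for the Conv2D layers
test_x_r = test_x.reshape(-1, 28, 28, 1)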

Now that we have our data, all we need to do is create a model to feed it to, so it can predict our future inputs.

Making the Model

Now, one of the most important aspects of our model to consider is the layers and how they are organised. For my model, I used three convolutional layers, each followed by a max-pooling layer. After that, I flattened the convolutional output and connected it to the fully connected layers.
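
The exact filter counts and kernel sizes aren't spelled out here, so the sketch below is only an assumption that matches the description: three Conv2D layers, each followed by MaxPooling2D, then Flatten and the fully connected layers ending in a 10-way softmax.

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')   # one output per digit
])
model.summary()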

The image below is the summary of the model.

To understand the CNN employed in this model, the following image, which I found after a lot of online surfing, will be useful!

( Image credits: analyticsindiamag.com )

The image above shows a standard convolution layer; the white boxes around the feature map are image padding, which is not always required in a model. That's why I've left it out here as well.

Compiling and Training Our Model

Now that we've finished building our model, it's time to teach it the numbers. People in this world are incredibly lethargic when it comes to maintaining decent handwriting, so we need to teach the model as many possible ways of writing a digit as we can T_T.

This isn't a one-time activity where our model understands how things work as soon as we show it all the images. Even we humans need some revision in order to remember things. Similarly, our model must be shown the images several times, which is referred to as epochs in deep learning. The greater the number of epochs, the lower the loss when predicting the image (up to the point where the model starts to overfit).

Always keep in mind that a neural network strives to minimise the loss at each epoch; it does not directly increase accuracy. Rather, it reduces the loss, which in turn increases accuracy.

Now, to compile our model we use the Adam optimizer:

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

While training the model I used 5 epochs and validated with a split of 30% of the training data. We don't want the model to overfit the data, which is why I chose 5 epochs; that turned out to be pretty decent for this model.

model.fit(
    train_x_r, train_y,
    epochs=5,
    validation_split=0.3
)

Evaluating the Model

I obtained 98.12 percent accuracy with a loss of 0.069 while evaluating this model, which is a very good result for a CNN model. But I'll surely be working on 'decreasing the loss' (you know what I mean!!).
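
Evaluation itself is a one-liner; this is a sketch that assumes the reshaped test arrays from the preprocessing step above:

loss, accuracy = model.evaluate(test_x_r, test_y)
print(loss, accuracy)   # the run described above reported about 0.069 loss and 98.12% accuracy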

Predicting the digits using our model

Testing the model with the prebuilt test dataset provided.

Let's demonstrate the model. Take an entry from our test labels, say index 63.

Now let's see the corresponding image in test_x, which contains the image arrays of the handwritten digits.

Now it's prediction time! Let's see what our model predicts.

Here, 'p' is the array that contains the predictions for all the test images, and p[63] is the prediction for the image at index 63, whose true label is test_y[63]. Hope this completely makes sense.
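
A sketch of that demonstration (again assuming the reshaped test_x_r from the preprocessing step; p holds one softmax probability vector per test image):

p = model.predict(test_x_r)            # one 10-element probability vector per image
print("Predicted:", np.argmax(p[63]))  # the digit the model thinks it is
print("Actual:   ", test_y[63])        # the true label
plt.imshow(test_x[63], cmap='binary')  # the corresponding handwritten digit
plt.show()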

Overview of the Model

Finally, the model takes an image as input, normalises the image array, predicts the likelihood of the image being each digit using the softmax function, and returns the argmax of that prediction array as the final answer for that image.

How can you try this model on your custom input?

Well, here comes the exciting part. For this version of the model, all you need is the path to the image; just follow these three simple steps.

PS: Clone the repo or download the zip, whichever method you find relevant, and then start following the steps below.


Step-1:-

Draw your digit on your local machine using any simple art tool (how much time is that going to take, though?). Just make sure you draw the digit in a lighter shade on a darker background to get a more accurate result. Here is what I mean:

                        (fig - 1)                                        (fig-2)

In the above figures, fig-1 will give more accurate results than fig-2.

Step-2:-

Copy the path to where you saved the image in any format you want (png, jpg, etc.). It will be easier if you save the image in the same folder as the 'hands-on.py' script.

Step-3:-

Run the hands-on.py script and paste your image path there, and TADA! Your job is done. All you need to do is check the result, praise the model, and most importantly star this repo straight after that 🌚!
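
For reference, here is a rough sketch of what a script like hands-on.py might do with your image path; this is an illustration under my own assumptions, not the actual file, and the preprocessing simply mirrors the training setup:

# Illustration only -- not the actual hands-on.py script.
img = cv2.imread('your_digit.png', cv2.IMREAD_GRAYSCALE)  # placeholder path to your drawing
img = cv2.resize(img, (28, 28))                           # match the MNIST input size
img = img / 255.0                                         # scale to 0-1, as in training
pred = model.predict(img.reshape(1, 28, 28, 1))           # batch of a single image
print("Predicted digit:", np.argmax(pred))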


Trial

This is the procedure to follow. I used MS Paint to create this digit, and this is how it appears (please don't judge!! :-))

                (eight.png)

Now let's run the hands-on.py program, and here's how it works:

And that's how it ends!

If any commits are needed to increase the elegance of this model, I'm always open to a PR.

Happy coding! 🖖🏾
