
Hand Written Digits Recognizer

Overview

This is a model built with a neural network, specifically a Convolutional Neural Network (CNN). It was trained on a pre-built dataset from the tensorflow and keras packages. There are other libraries that can be used for this purpose, one of which is PyTorch.

Table of contents:

  1. Importing Libraries

  2. Loading the data

  3. Making the model

  4. Compiling and training the model

  5. Evaluating the model

  6. Testing the model by doing predictions!!

  7. How can you try this data on your custom input?

                             

Importing Libraries

Modules used in creating this model: numpy, os, matplotlib, tensorflow, keras, and cv2.

import os
import cv2
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

Loading the data

MNIST, a built-in dataset from keras, is used for this model.

mnist = tf.keras.datasets.mnist

                                    (Image source: Kaggle.com)

The data is loaded as a numpy array. Each image is 28x28 pixels in size. When we plot one with matplotlib, we get this image.

The data is divided into training images, training labels, test images, and test labels.

(train_x,train_y),(test_x,test_y) = mnist.load_data()
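To take a quick look at one of these arrays as an image, a small snippet like the one below works (a minimal sketch; the exact plotting call used in the repo isn't shown in this section):

print(train_x.shape)   # (60000, 28, 28) -> 60,000 training images of 28x28 pixels
print(train_y[0])      # the label of the first training image

plt.imshow(train_x[0], cmap='binary')   # render the first digit in grayscale
plt.show()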

Colour images are normally split into three channels, but we don't need to extract any colour-based attributes here; the MNIST images are grayscale, and our model only cares about the arcs and lines used to draw each digit. By default, each pixel is presented to the model on the 0-255 scale: the numpy array holds a value from 0 to 255 according to how strongly each pixel is activated. Values that large make the model take longer to analyse, so to tackle this we normalize the matrix before extracting the features to feed our model, which then requires less time to learn. As a result, once we've normalised our data, our model sees the image as

Our image is now an array with values ranging from 0 to 1, which is a smart transformation to make before feeding it to our model. Now we apply the same logic to our entire 60,000-image training set.

Before normalization:

After normalization:
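A minimal sketch of the normalization step (dividing by 255 is one common way; tf.keras.utils.normalize is another option and may differ slightly from what was actually used here):

train_x = train_x / 255.0   # scale pixel intensities from 0-255 down to 0-1
test_x = test_x / 255.0     # apply the same scaling to the test images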

Now that we have our data, all we need to do is create a model to feed it to, so it can predict the inputs we give it next.

Making the Model

Now, one of the most important aspects of the model to consider is its layers and how they are organised. For my model, I used three convolutional layers, each followed by a max-pooling layer. After that, I flattened the convolutional output and connected it to the fully connected layers.

The image below shows the summary of the model.
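For reference, the sketch below shows one way to build a model with this layout; the filter counts, kernel sizes, and Dense width are assumptions, not necessarily the exact values used in this repo:

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),   # first convolutional layer
    MaxPooling2D((2, 2)),                                             # max-pooling after each conv layer
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),                                                        # flatten into a vector
    Dense(64, activation='relu'),                                     # fully connected layer
    Dense(10, activation='softmax'),                                  # one output per digit 0-9
])
model.summary()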

To understand the CNN used in this model, the following image, which I found after a lot of online surfing, will be useful!

(Image credits: analyticsindiamag.com)

The image above shows a standard convolution layer. The white boxes around the feature map are image padding, which is not always required in a model, and that's why I've left it out here as well.

Compiling and Training Our Model

Now that we've finished building our model, it's time to teach it the digits. People in this world are incredibly lethargic when it comes to maintaining decent handwriting, so we need to teach the model as many possible ways of writing a digit as we can T_T.

This isn't a one-time activity where our model understands how things work as soon as we show it all the images. Even we humans need some revision in order to remember things. Similarly, our model must be shown the images several times, which is referred to as epochs in deep learning. The greater the number of epochs, the lower the loss when predicting an image.

Always keep in mind that an NN strives to minimise the loss at each epoch; it does not increase accuracy directly. Rather, it reduces the loss, which in turn increases accuracy.

Now, to compile our model we use the Adam optimizer.

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

While fitting the model I used 5 epochs and validated on a split of 30% of the training data. We don't want the model to overfit the data, which is why I chose 5 epochs; that's a pretty decent choice for this model.
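One note on the name train_x_r in the snippet below: Conv2D layers expect a channel dimension, so the training images presumably get reshaped before fitting. The reshape isn't shown in this section, but it would look something like this:

train_x_r = train_x.reshape(-1, 28, 28, 1)   # add the channel dimension: (60000, 28, 28, 1)
test_x_r = test_x.reshape(-1, 28, 28, 1)     # same for the test images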

model.fit(
    train_x_r, train_y,
    epochs=5,
    validation_split=0.3
)

Evaluating the Model

I obtained 98.12 percent accuracy with a loss of 0.069 while evaluating this model, which is a very good result for a CNN model. But I'll surely keep working on 'decreasing the loss' (you know what I mean!!).
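For reference, evaluating in Keras is a single call; the sketch below assumes the variable names used earlier (test_x_r being the reshaped test images):

loss, accuracy = model.evaluate(test_x_r, test_y)   # returns the loss and the metrics from model.compile
print(f"Test loss: {loss:.4f}, test accuracy: {accuracy:.4f}")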

Predicting the digits using our model

Testing the model with the pre-built test dataset provided.

Let's demonstrate the model. Take an index from our test labels, say 63.

Now let's look at the corresponding image in test_x, which contains the image arrays of the hand-written numbers.

Now it's prediction time! Let's see what our model predicts.

Here, 'p' is the array that contains all the predictions for the test images, and p[63] is the predicted label for the test_y[63] image. Hope this makes complete sense.
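A minimal sketch of that prediction step, assuming the names used above (p for the prediction array, test_x_r for the reshaped test images):

p = model.predict(test_x_r)        # class probabilities for every test image
print(np.argmax(p[63]))            # the digit the model thinks test image 63 shows
print(test_y[63])                  # the actual label, for comparison

plt.imshow(test_x[63], cmap='binary')   # the corresponding hand-written image
plt.show()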

Overview of the Model

Finally, the model takes the image as input, normalises the image array, predicts the likelihood of the image being each digit using the softmax function, and returns the arg max of that prediction array for that image.

How can you try this data on your custom input?

Well, here comes the exciting part. For this version of the model all you need is the path to the image; just follow these three simple steps.

PS: Clone the repo or download the zip, whichever method you prefer, and then follow the steps below.


Step-1:-

Draw your digit on your local machine using any simple art tool; how much time is that going to take, really? Just make sure you draw the digit in a lighter shade on a darker background to get a more accurate result. What I mean is:

                        (fig-1)                                        (fig-2)

In the figures above, fig-1 will give more accurate results than fig-2.

Step-2:-

Copy the path to where you saved the image in any format you want (png, jpg, etc.). It will be easier if you save the image in the same folder as the 'hands-on.py' script.

Step-3:-

Run the hands-on.py script and paste your image path there, and TADA! Your job is done. All you need to do is check the result, praise the model, and most importantly star this repo straight after that 🌚!
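Under the hood, the preprocessing in hands-on.py presumably looks something like the sketch below (this is an illustration using cv2, not the script's exact code; 'eight.png' stands in for your own image path):

img = cv2.imread('eight.png', cv2.IMREAD_GRAYSCALE)   # read the drawing as a grayscale array
img = cv2.resize(img, (28, 28))                        # match the 28x28 MNIST input size
img = img / 255.0                                      # normalize to the 0-1 range
img = img.reshape(1, 28, 28, 1)                        # add batch and channel dimensions

print(np.argmax(model.predict(img)))                   # the model's guess for your digit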


Trial

This is the procedure to follow. I used MS Paint to create this digit, and this is how it appears (please don't judge!! :-))

                (eight.png)

Now let's run the hands-on.py program, and here's how it works.

And that's how it ends!

If any commits are needed to make this model more elegant, I'm always open to a PR.

Happy coding! 🖖🏾
