---
license: other
language:
  - en
metrics:
  - accuracy
  - code_eval
library_name: keras
pipeline_tag: image-classification
tags:
  - image
  - agriculture
---

# Cotton crop identification system

## Abstract

Technology is extensively used in agriculture, and this repository depicts one such example. The project helps inexperienced users determine whether a crop is cotton: the user takes a picture and uploads it to the platform, and the system reports whether or not it shows a cotton crop.

## Introduction

There are many areas of agriculture where technological assistance may seem unnecessary, but it can still be used to train young, enthusiastic minds. Often we cannot identify a plant just by looking at it. This project helps learners recognise the characteristics of a plant: the user uploads a picture, a deep learning model runs in the backend, and the output is interpreted as either a cotton crop or not a cotton crop.

## Data Collection

Ready-made datasets were not available in plenty, so data had to be gathered from various public-domain sources on the web. The model was trained on a total of 6,962 images with a balanced class distribution: equal numbers of cotton-crop images and non-cotton-crop images.
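The source does not state how the dataset is laid out on disk; a common convention for Keras image classifiers is one subdirectory per class. The sketch below (the `cotton` / `not_cotton` folder names are assumptions) counts images per class to verify the balance described above:

```python
from pathlib import Path

def class_counts(root: str) -> dict:
    """Count image files per class subdirectory.

    Assumes a layout like:
        root/cotton/*.jpg
        root/not_cotton/*.jpg
    """
    exts = {".jpg", ".jpeg", ".png"}
    return {
        d.name: sum(1 for f in d.iterdir() if f.suffix.lower() in exts)
        for d in Path(root).iterdir()
        if d.is_dir()
    }

# Example: print(class_counts("data"))  # e.g. {'cotton': 3481, 'not_cotton': 3481}
```

A quick check like this makes it easy to confirm that both classes contribute the same number of images before training.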

The collected data was carefully inspected because it contained many duplicate images, and this redundancy compromised accuracy. Since most of the images are of leaves and therefore predominantly green, 50 copies of the same cotton image could bias the model toward labelling anything green as a 'cotton crop'. Removing duplicates does not fully resolve this, because both cotton plants and non-cotton leaves are green. This is what motivated the use of transfer learning; of the many pre-trained models available, this project uses the Inception V3 model.
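The source does not describe how duplicates were detected; one simple approach (a sketch, not necessarily the authors' method) is to hash each file's bytes and flag repeated digests:

```python
import hashlib
from pathlib import Path

def find_duplicates(root: str) -> list:
    """Return (duplicate, original) path pairs for byte-identical files.

    Byte-level hashing only catches exact copies; resized or re-encoded
    duplicates would need perceptual hashing instead.
    """
    seen = {}    # digest -> first path seen with that content
    dupes = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            dupes.append((path, seen[digest]))
        else:
            seen[digest] = path
    return dupes
```

Exact duplicates found this way can be deleted before training so that no single image is over-represented.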

## Inception V3

The project was initially built with a plain convolutional neural network, and accuracy and loss were observed for many permutations and combinations of layers. However, the best accuracy came from InceptionV3, a pre-trained model developed by the GoogLeNet team for the ImageNet challenge. Inception V3 is 48 layers deep, consisting of convolution layers, pooling layers, a global average pooling layer, concatenation layers, auxiliary classifiers, dropout layers, and a final softmax classifier. Inception networks have proved computationally efficient, both in the number of parameters the network generates and in the cost incurred in memory and other resources.
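A typical transfer-learning setup with InceptionV3 in Keras freezes the pre-trained convolutional base and trains a small binary classification head on top. The head below (global average pooling, dropout, and a single sigmoid unit) is a minimal sketch of that pattern, not necessarily the exact architecture used in this project:

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

def build_model(input_shape=(299, 299, 3), weights="imagenet"):
    """InceptionV3 base (frozen) + a small binary head: cotton vs. not cotton."""
    base = InceptionV3(include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # keep the pre-trained ImageNet features fixed

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),  # 1 = cotton crop, 0 = not
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the base means only the head's weights are updated during training, which is what makes transfer learning practical on a dataset of a few thousand images.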