Seed Purification using Image-Recognition based on Deep Learning

Abstract:

Most commercial optical sorting systems are designed for high throughput and therefore rely on naive, low-latency image processing for object identification. These naive algorithms struggle to accurately identify objects of varying shape, texture, size, and color, so the purity of the sorted output is degraded. Modern deep learning enables robust image detection and classification, but its inference latency is several milliseconds, so it cannot be applied directly to such real-time, high-throughput applications. We therefore developed a super-high-purity seed sorting system that uses low-latency image recognition based on a deep neural network to remove the seeds of noxious weeds from a mixed seed product accurately and at high throughput. The proposed system partitions the detection task into localization and classification and applies a batch-inference-only-once strategy; it achieved 500-fps image recognition including detection and tracking. Based on the classification and tracking results, air ejectors expel the unwanted seeds. The proposed system eliminates almost all weed seeds with only a small loss of desired seeds, and is superior to current commercial optical sorting systems.

Existing System:

Seed identification by an image-recognition algorithm is the most challenging problem because seeds rotate during free fall and have various colors, shapes, textures, and sizes.

When the image processing determines that a seed is unwanted, the air ejector blows it into the reject box.
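As a rough illustration of this reject step (not the authors' implementation; the class name, the tracker fields, the ejector interface, and the 4-ms travel delay are all assumptions), a Python sketch of the triggering logic could look like this:

EJECTOR_DELAY_S = 0.004  # assumed fall time from the camera line to the ejector nozzle

def handle_tracked_seed(track, ejector):
    """Fire the air ejector only for seeds classified as unwanted (illustrative only)."""
    if track.predicted_class == "weed":
        # Schedule the air pulse for the moment the falling seed reaches the nozzle.
        fire_time = track.last_seen_timestamp + EJECTOR_DELAY_S
        ejector.schedule_pulse(at=fire_time, lane=track.lane)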

Recently, image classification and detection using deep learning have achieved remarkably high accuracy. A convolutional neural network (CNN) can extract features from images autonomously; this ability enables classification with high accuracy. Despite this accuracy, deep learning cannot be directly applied to such high-throughput optical sorting systems because most CNNs have a high computational load and therefore a large inference latency, which prevents high throughput.
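As an illustration of such a CNN classifier (this is not the authors' network; the layer sizes, the 32x32 crop size, and the two-class crop/weed output are assumptions), a minimal PyTorch sketch could be:

import torch
import torch.nn as nn

class SeedClassifier(nn.Module):
    """Small CNN that maps a 32x32 seed crop to crop/weed logits (illustrative only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 input halved twice -> 8x8

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.flatten(self.features(x), 1))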

Disadvantages:

A CNN-based classifier cannot be directly applied to 500-fps throughput applications.

The inference time of a CNN depends on several factors, including the input size, the number of weights, and the GPU capability. CNNs with millions of weights require >5 ms of GPU inference time (measured on a GTX 1080 Ti).
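Such inference times can be checked with a simple timing loop like the one below (a generic measurement sketch, not the authors' benchmark; the model and input batch are placeholders supplied by the caller):

import time
import torch

def measure_inference_ms(model: torch.nn.Module, batch: torch.Tensor, iters: int = 100) -> float:
    """Return the average GPU forward-pass time in milliseconds."""
    model.eval().cuda()
    batch = batch.cuda()
    with torch.no_grad():
        for _ in range(10):          # warm-up passes so lazy CUDA setup is excluded
            model(batch)
        torch.cuda.synchronize()     # wait for queued kernels before starting the clock
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000.0 / iters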

Proposed System:

We propose a real-time image-recognition method that has high accuracy and low latency. The method uses a CNN-based image classifier and a high-throughput inference strategy called batch-inference-only-once. The method partitions the object-detection task into localization and classification to decrease inference latency while preserving classification accuracy.
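A minimal sketch of this two-stage idea is given below. It assumes seeds appear bright against a dark background, uses plain Otsu thresholding for the localization stage, and feeds every crop from a frame to the classifier in a single batched call; the thresholding step, crop size, and function names are assumptions, not the authors' exact method.

import cv2
import numpy as np
import torch

def localize_seeds(gray_frame: np.ndarray, min_area: int = 20):
    """CNN-free localization: threshold the frame and return seed bounding boxes."""
    _, mask = cv2.threshold(gray_frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def classify_frame(model, frame_bgr: np.ndarray, boxes, crop_size: int = 32) -> torch.Tensor:
    """Batch-inference-only-once: crop every localized seed and run the CNN once per frame."""
    crops = []
    for x, y, w, h in boxes:
        crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (crop_size, crop_size))
        crops.append(torch.from_numpy(crop).permute(2, 0, 1).float() / 255.0)
    batch = torch.stack(crops).cuda()          # one batch holds all seeds in the frame
    with torch.no_grad():
        return model(batch).argmax(dim=1)      # a single inference call yields one label per seed

Running the classifier once per frame instead of once per seed keeps the number of GPU inference launches constant, which is what makes a 500-fps budget reachable in principle.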