5–7 Jul 2023
Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti

Fruit detection by collaborative robots and machine learning methods. A case study on pomegranate

7 Jul 2023, 10:30
25m
Aula Magna (Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti)

Speaker

Simone Pascuzzi

Description

The challenges, costs and complexities of agricultural work require the development and adoption of alternative techniques. In this context, collaborative robots can be a key factor in the development of agriculture. These are autonomous machines designed to perform different tasks, make decisions and act in real time without human intervention. Robots are also very useful for the sustainable management of the territory, since they acquire information that helps limit inputs and the related environmental pollution. Collaborative robots can be equipped with visual sensors whose data, when properly processed, support farm management tasks such as fruit counting or monitoring crop growth and health.

A home-built crawler robot equipped with a low-performance camera (Intel RealSense D435) was used to capture images in a pomegranate orchard. These images were then processed by a deep learning segmentation framework to separate the fruits (pomegranates) from the surrounding areas, using a multi-stage transfer learning technique. First, a pre-trained network (DeepLabv3+) was tuned on pomegranate images acquired under controlled conditions and then progressively enhanced to segment images in the field. In particular, images of fruits arranged on a horizontal surface against a neutral background were acquired under both natural and controlled lighting conditions. Images with uniform illumination were labelled using colour thresholds in RGB space followed by morphological operations, and these labels were used to transfer learning from the pre-trained architecture. The improved network was then applied to produce accurate labels for images captured in the presence of shadows. These labels were used to retrain the network, which was finally applied to segment the images captured by the robot in the pomegranate orchard.
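The initial labelling step (colour thresholds in RGB space followed by a morphological operation) can be sketched as below. The threshold values and helper names are illustrative assumptions, not the authors' actual parameters; a morphological opening (erosion then dilation) is shown as one common way to remove isolated false-positive pixels.

```python
import numpy as np

def threshold_rgb(img, r_min=120, g_max=100, b_max=100):
    # Hypothetical thresholds: ripe pomegranates are predominantly red,
    # so keep pixels with a high red channel and low green/blue channels.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= r_min) & (g <= g_max) & (b <= b_max)

def erode(mask):
    # 3x3 binary erosion via shifted logical ANDs (borders wrap around;
    # pure NumPy, no OpenCV/SciPy dependency).
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def dilate(mask):
    # 3x3 binary dilation via shifted logical ORs.
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def label_mask(img):
    # Colour threshold, then morphological opening to suppress
    # isolated noise pixels while preserving large fruit regions.
    return dilate(erode(threshold_rgb(img)))
```

On a synthetic image containing a red square and a single stray red pixel, `label_mask` keeps the square and discards the stray pixel.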
The results of this procedure show that, despite the low quality of the field images, the segmentation of these images was effective, with high values of the adopted metrics: an F1-score of 86.42% and an IoU (Intersection over Union) of 97.94% were achieved.
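For reference, both adopted metrics can be computed from predicted and ground-truth binary masks; for binary masks the two are linked by the identity IoU = F1/(2 − F1). A minimal sketch (the mask values used in the example are illustrative, not the study's data):

```python
import numpy as np

def f1_iou(pred, truth):
    # pred, truth: boolean masks of the same shape.
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou
```

For example, with 8 true positives, 1 false positive and 1 false negative, F1 = 16/18 ≈ 0.889 and IoU = 8/10 = 0.8, which satisfies the identity above.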

Author

Simone Pascuzzi
