Description
Distributed monitoring systems often rely on low-power wireless sensor networks that collect scalar environmental parameters (e.g., temperature, humidity, pressure) and therefore provide only indirect indications of system condition. This work explores the integration of Edge AI-based visual analysis directly into resource-constrained sensor nodes, enabling real-time condition monitoring while minimizing bandwidth, power consumption, and computational requirements.
We present a four-stage inference pipeline deployed on an ESP32-S3 microcontroller with an OV3660 camera module. Images captured at VGA resolution are processed locally by a lightweight YOLO-based detector (ESPDet-Pico, 478 KB INT8) that localizes objects of interest, followed by a quantized MobileNetV2 classifier (2.27 MB INT8). The system was demonstrated on grape leaf disease detection, achieving 56.4% mAP@50 for leaf detection and 99.8 ± 0.065% classification accuracy on individual leaf patches. Frame-level diagnoses are obtained through a weighted aggregation of multiple detections and transmitted via LoRaWAN, so that only compact diagnostic messages, rather than raw images, leave the node. Operating in a duty-cycled mode with deep sleep between acquisitions, the system achieves an estimated 10-month lifetime on a 2400 mAh battery.
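The abstract does not specify the exact weighting scheme used on-device, but the frame-level aggregation step can be sketched as a confidence-weighted vote over the per-leaf classifier outputs; the function and example values below are illustrative assumptions, not the deployed implementation.

```python
from collections import defaultdict

def aggregate_frame(detections):
    """Combine per-leaf (label, confidence) pairs into one frame-level
    diagnosis, weighting each detection's vote by its confidence.

    Illustrative sketch only: labels and weighting are assumptions.
    """
    scores = defaultdict(float)
    for label, conf in detections:
        scores[label] += conf  # each detection votes with weight = confidence
    best = max(scores, key=scores.get)
    # Normalize so the returned score reads as a relative frame-level weight
    return best, scores[best] / sum(scores.values())

# Example: three leaf patches detected in one VGA frame
diagnosis, weight = aggregate_frame(
    [("black_rot", 0.91), ("healthy", 0.55), ("black_rot", 0.87)]
)
print(diagnosis, round(weight, 3))
```

On a LoRaWAN uplink, only the winning label (and optionally its weight) would be transmitted, which is what makes the data reduction relative to raw image transfer so large.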
Beyond the agricultural use case, the proposed architecture provides a general framework for embedded visual monitoring in scientific infrastructures. Similar Edge AI pipelines could support autonomous inspection of HEP experimental facilities, such as those explored in ECFA-DRD8 WP1.2, where compact models running on mobile robotic platforms could detect leaks, cable damage, or structural anomalies. They could equally serve medical physics applications, such as automated quality assurance in radiotherapy systems, monitoring of detector responses in medical imaging instrumentation, and analysis of phantom-based dosimetry measurements.