High-throughput spike detection in greenhouse cultivated grain crops with attention mechanisms based deep learning models


Authors

ULLAH Sajid, PANZAROVÁ Klára, TRTÍLEK Martin, LEXA Matej, MÁČALA Vojtěch, ALTMANN Thomas, NEUMANN Kerstin, HEJÁTKO Jan, PERNISOVÁ Markéta, GLADILIN Evgeny

Year of publication 2024
Type Article in Periodical
Magazine / Source Plant Phenomics
MU Faculty or unit

Faculty of Science

Citation
Web https://spj.science.org/doi/10.34133/plantphenomics.0155
Doi http://dx.doi.org/10.34133/plantphenomics.0155
Keywords spike detection; high-throughput image analysis; attention networks; deep neural networks
Description Detection of spikes is the first important step towards image-based quantitative assessment of crop yield. However, spikes of grain plants occupy only a tiny fraction of the image area and often emerge in the middle of a mass of plant leaves that exhibit colors similar to those of spike regions. Consequently, accurate detection of grain spikes is, in general, a non-trivial task even for advanced, state-of-the-art deep neural networks (DNNs). To improve spike detection, we propose architectural changes to Faster-RCNN (FRCNN): reducing the number of feature extraction layers and introducing a global attention module. The performance of our extended FRCNN-A was compared against conventional FRCNN on images of different European wheat cultivars, including 'difficult' bushy phenotypes, from two different phenotyping facilities and optical setups. Our experimental results show that the architectural adaptations in FRCNN-A improved spike detection accuracy in inner plant regions: the mAP of FRCNN and FRCNN-A on inner spikes is 76.0% and 81.0%, respectively, while the state-of-the-art Swin Transformer detector reaches 83.0%. As a lightweight network, FRCNN-A is faster than both FRCNN and the Swin Transformer on the baseline and augmented training datasets. On the FastGAN-augmented dataset, FRCNN achieved an mAP of 84.24%, FRCNN-A 85.0%, and the Swin Transformer 89.45%. The increase in mAP of the DNNs on the augmented datasets scales with the amount of original and augmented IPK images. Overall, this study indicates the superior performance of attention-based deep learning models in detecting small and subtle features such as grain spikes.
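To illustrate the idea behind a global attention module of the kind the abstract describes, the sketch below reweights every spatial location of a backbone feature map by a softmax score computed over the whole image, so that small, subtle regions (such as spikes hidden among leaves) can be emphasized globally rather than only within a local receptive field. This is a minimal NumPy illustration of the general technique, not the paper's actual FRCNN-A module; the scoring weights `w` stand in for a learned 1x1 convolution and are a hypothetical placeholder.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(feat, w):
    """Reweight a (C, H, W) feature map by global spatial attention.

    feat : (C, H, W) backbone feature map
    w    : (C,) hypothetical learned weights scoring each location
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)      # flatten spatial dimensions
    scores = w @ flat                  # one score per spatial location
    attn = softmax(scores)             # attention distribution over all H*W locations
    weighted = flat * attn             # globally reweight every channel
    return weighted.reshape(C, H, W), attn.reshape(H, W)

# toy usage with random features standing in for a backbone output
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w = rng.standard_normal(8)
out, attn = global_attention(feat, w)
print(out.shape)  # same shape as the input feature map
```

Because the softmax normalizes over the entire spatial extent, the module lets distant context suppress leaf regions that merely resemble spikes in color, which is the intuition the abstract gives for attention helping with inner-canopy spikes.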
