Efficient identification, localization and quantification of grapevine inflorescences and flowers in unprepared field images using Fully Convolutional Networks

Authors

  • R. Rudolph, Department of Computer Science IV, University of Bonn, Bonn, Germany
  • K. Herzog, Institute for Grapevine Breeding Geilweilerhof, Julius Kühn-Institut (JKI), Federal Research Centre for Cultivated Plants, Siebeldingen, Germany
  • R. Töpfer, Institute for Grapevine Breeding Geilweilerhof, Julius Kühn-Institut (JKI), Federal Research Centre for Cultivated Plants, Siebeldingen, Germany
  • V. Steinhage, Department of Computer Science IV, University of Bonn, Bonn, Germany

DOI:

https://doi.org/10.5073/vitis.2019.58.95-104

Keywords:

Vitis vinifera ssp. vinifera; BBCH 59; Convolutional Neural Network (CNN); computer-based phenotyping; semantic segmentation

Abstract

Yield and its prediction are among the most important tasks in grapevine breeding and vineyard management. Commonly, this trait is estimated manually shortly before harvest by extrapolation, a procedure that is labor-intensive, destructive and inaccurate. In the present study, an automated image-based workflow was developed for quantifying inflorescences and single flowers in unprepared field images of grapevines, i.e. images taken without artificial background or lighting. This constitutes a novel approach to non-invasive, inexpensive and objective high-throughput phenotyping.
First, image regions depicting inflorescences were identified and localized by segmenting the images into the classes "inflorescence" and "non-inflorescence" using a Fully Convolutional Network (FCN). Image segmentation is the most challenging step of the workflow, owing to the small size and dense arrangement of single flowers (several hundred per inflorescence), the similar color of all plant organs in the fore- and background, and the fact that only approximately 5 % of an image depicts inflorescences. The trained FCN achieved a mean Intersection over Union (IoU) of 87.6 % on the test data set. Finally, single flowers were extracted from the "inflorescence" regions using the Circular Hough Transform. Based on the segmentation derived from the trained FCN model, the flower extraction achieved a recall of 80.3 % and a precision of 70.7 %.
In summary, the presented approach is a promising strategy for automatically predicting yield potential at the earliest stage of grapevine development and is applicable to the objective monitoring and evaluation of breeding material, genetic repositories and commercial vineyards.
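
To illustrate the evaluation metric and the extraction step described in the abstract, the following minimal Python sketch shows (a) a per-class Intersection over Union computation and (b) flower detection inside predicted "inflorescence" regions using OpenCV's Circular Hough Transform. It assumes OpenCV and NumPy are available; the function names and all Hough parameters are illustrative placeholders and are not taken from the paper.

    import cv2
    import numpy as np

    def binary_iou(pred_mask, gt_mask):
        """Intersection over Union for one class, given boolean masks.
        The paper's mean IoU corresponds to averaging this value over
        the classes 'inflorescence' and 'non-inflorescence'."""
        intersection = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        return intersection / union if union > 0 else 1.0

    def count_flowers(image_bgr, inflorescence_prob, prob_thresh=0.5):
        """Detect candidate single flowers inside the predicted
        'inflorescence' regions with a Circular Hough Transform.

        image_bgr          -- field image as read by cv2.imread (BGR)
        inflorescence_prob -- per-pixel FCN probability for 'inflorescence'
        prob_thresh        -- threshold turning probabilities into a mask

        All Hough parameters below are placeholders, not the values
        used in the study.
        """
        mask = (inflorescence_prob >= prob_thresh).astype(np.uint8)

        # Restrict the circle search to predicted inflorescence pixels.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.bitwise_and(gray, gray, mask=mask)
        gray = cv2.medianBlur(gray, 5)  # suppress noise before Hough voting

        circles = cv2.HoughCircles(
            gray,
            cv2.HOUGH_GRADIENT,
            dp=1,         # accumulator resolution equals image resolution
            minDist=6,    # minimum distance between flower centres (px)
            param1=80,    # upper Canny edge threshold
            param2=12,    # accumulator threshold (lower -> more circles)
            minRadius=3,  # expected flower radius range in pixels
            maxRadius=10,
        )
        if circles is None:
            return 0, np.empty((0, 3))
        # circles has shape (1, N, 3): (x, y, radius) per detection.
        return circles.shape[1], circles[0]

Calling count_flowers(cv2.imread("vine.jpg"), prob_map) would return the number of detected circle centres together with their (x, y, radius) triples; comparing such counts against manual flower counts yields recall and precision figures of the kind reported above.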

Published

2019-08-06

Section

Article