
VITIS: Vol. 63, Art. 3, 7 pp. (2024) | DOI: 10.5073/vitis.2024.63.03 | Huber et al.

Original Article
Florian Huber1*, Benedikt Hofmann2, Hannes Engler3, Pascal Gauweiler2, Benedikt Fischer2, Katja Herzog3, Anna Kicherer3, Reinhard Töpfer3, Robin Gruna2, Volker Steinhage1

A Concept Study for Feature Extraction and Modeling for Grapevine Yield Prediction

Affiliations
1 Department of Computer Science IV, University of Bonn, Bonn, Germany
2 Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (IOSB), Karlsruhe, Germany
3 Julius Kühn-Institut, Federal Research Centre of Cultivated Plants, Institute for Grapevine Breeding, Geilweilerhof, Siebeldingen, Germany
Correspondence
*Florian Huber: huber@cs.uni-bonn.de
(c) The author(s) 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/deed.en).
 
Submitted/accepted for publication: August 22, 2023/April 19, 2024

Summary

Yield prediction in viticulture is an especially challenging direction within the broader field of agricultural yield prediction. The characteristics that determine annual grapevine yields are plentiful, difficult to obtain, and must be captured multiple times throughout the year. The processes currently used in grapevine yield prediction are based mainly on manually captured data and rigid statistical measures derived from historical insights. Experts for data acquisition are scarce, and statistical models cannot meet the requirements of a changing environment, especially in times of climate change. This paper contributes a concept for overcoming these drawbacks by (1) proposing a deep learning driven approach for feature recognition and (2) explaining how Extreme Gradient Boosting (XGBoost) can be utilized for yield prediction based on those features while being explainable and computationally inexpensive. The methods developed will be influential for the future of yield prediction in viticulture.

Keywords

Yield forecasting, viticulture, Deep Learning, Extreme Gradient Boosting, XGBoost

Introduction

The use of grapevine yield prediction is two-fold. First, in grapevine breeding, the creation of new varieties is a process based on the cultivation of thousands of grapevines, and a key factor in the later selection of a variety is the annual yield per grapevine. Determining the expected yield automatically will increase efficiency and reduce the costs of breeding new varieties, as varieties that are not predicted to achieve competitive yields can be discarded. Second, grapevine yield prediction can be used in viticulture to determine yields early in the year. This allows for advanced logistic planning of the harvest and the subsequent processing from grape to wine. A good prediction can lower the financial risk for winegrowers and can even help to select the correct processes, such as grape splitting, to increase the quality of the wine while simultaneously maximizing the yields within the bounds of the German law that restricts the quantity of grapes each winegrower is allowed to process.

In this concept paper, we present our advances in automated grapevine yield prediction based on captured in-field data. An image-based evaluation of yield-relevant phenotypic traits based on deep learning and rule-based feature extraction is proposed. The extracted features are used to predict the expected yield via Extreme Gradient Boosting (XGBoost). Furthermore, this study provides a proof of concept for this modeling approach by predicting grapevine yields for several grape varieties based on features acquired manually throughout the year. These features are similar to the ones that will be extracted from the deep learning pipeline in the future and hence already allow us to evaluate the potential of a fully automated grapevine yield prediction.

The development of resilient estimation presents scientific challenges related to the quality, heterogeneity and small number of site-specific yield and climate data (Laurent et al., 2021). Current practice in grapevine yield prediction often requires that features be collected manually (Linares Torres et al., 2015). As an alternative to traditional methods, computer vision and image processing are the most widely used techniques for attempting an early yield estimation (Barriguinha et al., 2021). The most influential research on grapevine yield prediction so far focuses on plot-level predictions. For example, Sirsat et al. (2019) use manually acquired phenological information and random forest ensemble techniques to predict grapevine yields at the plot level. Araya-Alman et al. (2019) used historical yield data to improve yield estimation for the current season. They furthermore showed that reliable sampling points and sampling distribution are required for better estimation of grapevine yields. In addition to the sampling distribution in the field, the development stage in the growing season also plays an important role in the prediction accuracy of an early yield estimation (La Fuente et al., 2015). Besides historical yield data, climate data such as thermal and hydric conditions of the current year can be used for the estimation by comparing them with the climate conditions of high- and low-production years (Fraga and Santos, 2017). The recent work of Barriguinha et al. (2022) applies a long short-term memory (LSTM) network to predict grapevine yields at the plot level based on remote sensing data in Portugal. In addition, Ballesteros et al. (2020) used vegetation indices to identify the health of the vines and combined them with the vegetated fraction cover, as a measure of plant vigor, for yield prediction. Furthermore, Cunha et al. (2016) achieved good results for the prediction of grapevine yield by measuring the concentration of airborne pollen within the vineyards and applying statistical modeling. To increase accuracy and certainty, yield estimation should take place at several points in the growing season, depending on the development of yield components in highly influential periods such as bud break, blooming, fruit set and veraison (Laurent et al., 2021).

Material and Methods

For an automatic grapevine yield prediction, the proposed method replaces manual counting and measurements in vineyards by using non-destructive image-based evaluation of phenotypic traits. Yield-relevant features are extracted from the acquired images and serve as input for the subsequent single-vine yield prediction.

Phenotyping Platform and Data Acquisition

To acquire the images for in-field phenotyping, a mobile embedded vision platform was developed that captures vine images row by row. The system provides additional position information for each acquisition, allowing images to be matched to single vines. Engler et al. (2023) describe the utilization of an All-Terrain Vehicle (ATV) as the sensor carrier for the system. Images are recorded at a resolution of 2048 × 1536 px, and two flash-equipped LED bars ensure consistent lighting conditions. With this setup, images can be acquired at 10 frames per second at a driving speed of 5 km h⁻¹. Further details on the imaging process are given in that reference. The complete platform, referred to as the PHENOquad, is shown in Fig. 1.
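As a rough plausibility check of the spatial coverage (a back-of-the-envelope calculation derived from the numbers above, not taken from the reference): at 5 km h⁻¹ the platform travels about 1.39 m s⁻¹, so recording at 10 frames per second yields one image roughly every 0.14 m of travel, i.e. multiple overlapping views of every vine.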

Fig. 1: Mobile Embedded Vision Platform in use as PHENOquad.

To assign images to corresponding vines, the images are first analyzed to identify individual vine instances. Images featuring multiple grapevines are disregarded, as the features extracted from them cannot be definitively attributed to a single vine. The remaining images, each showing only a single grapevine, are assigned to the respective vines using the geo-information of the image acquisition and the vine reference coordinates. This georeferencing capability allows extracted features to be attributed to individual vines, providing a detailed and granular basis for the yield prediction.
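A minimal sketch of such a nearest-neighbour assignment is given below. It is an illustration only, assuming projected coordinates in metres; the identifiers, data structures and distance threshold are our own and not taken from the paper.

```python
from dataclasses import dataclass
import math

@dataclass
class VineRef:
    vine_id: str
    x: float  # easting in metres (projected coordinates assumed)
    y: float  # northing in metres

def assign_image_to_vine(image_xy, vine_refs, max_dist_m=0.6):
    """Return the vine whose reference coordinate is closest to the image
    position, or None if no vine lies within max_dist_m (image is discarded)."""
    best, best_d = None, float("inf")
    for ref in vine_refs:
        d = math.hypot(image_xy[0] - ref.x, image_xy[1] - ref.y)
        if d < best_d:
            best, best_d = ref, d
    return best if best_d <= max_dist_m else None
```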

Image Segmentation and Feature Extraction

After georeferencing, the images are processed through a feature extraction pipeline to generate the necessary input data for the yield prediction. This pipeline consists of two main parts. First, the image is segmented using a data-driven deep learning approach to extract phenotypic traits such as grapes, leaves and shoots. Then, yield-relevant features are extracted from this segmentation result using a subsequent, rule-based approach. In this step, numeric features such as the count of instances or the area of specific segmented classes are generated. Later, this data is used as input for the yield prediction. The complete phenotyping workflow for the yield prediction is shown in Fig. 2. The left branch shows the iterative training approach of the data-driven segmentation. A subset of the images is selected and manually annotated with the desired segmentation result. A deep learning segmentation model is then trained and validated with this data serving as the ground-truth. The right branch shows the use of this trained model for the evaluation of the phenotypic traits. First, the images are filtered and attributed to their corresponding vines (Georeferencing). These images are then segmented using the trained model from the left branch. The result of this segmentation is fed into the rule-based feature extraction to generate the numeric input data for the yield prediction. Fig. 3 shows the result of the first evaluation step to segment grape bunches in the image.
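To make the rule-based branch concrete, the following sketch derives the bunch features shown in Fig. 3 (count, total area, average area) from per-instance segmentation masks. It is a simplified illustration; the function and variable names are our own, and the actual extraction rules of the pipeline may differ.

```python
import numpy as np

def bunch_features(instance_masks: list[np.ndarray]) -> dict:
    """Derive numeric yield features from boolean per-instance masks of one image.
    Areas are expressed as a percentage of the image area, as in Fig. 3."""
    if not instance_masks:
        return {"n_bunches": 0, "total_area_pct": 0.0, "avg_area_pct": 0.0}
    img_area = instance_masks[0].size
    areas_pct = [100.0 * float(m.sum()) / img_area for m in instance_masks]
    return {
        "n_bunches": len(instance_masks),
        "total_area_pct": sum(areas_pct),
        "avg_area_pct": sum(areas_pct) / len(areas_pct),
    }
```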

Fig. 2: Overview of Phenotyping Workflow.

Fig. 3: Segmentation results for evaluating grape bunches. From these results, the number of bunches (8), total bunch area (5.45%) and average bunch area (0.68%) can be calculated. Table 1 shows a full overview of all numeric features derived from these and other segmentation results.

For this, a Mask-RCNN instance segmentation model proposed by He et al. (2017) was trained using manual annotations of the grape bunches of the relevant vine in the foreground. By deliberately excluding visible objects in the background from the annotation, the model learns to ignore the rows in the background. This eliminates the need to specifically consider background rows and other irrelevant objects. From this segmentation result, numeric features such as the count of bunches per vine or the average bunch area can be calculated in the subsequent feature extraction. Table 1 presents the full set of numeric features that can be computed using different segmentation models. These features are similar to the manually acquired ones listed in Table 3 but also include additional entries such as bunch and leaf areas. As these are easy to calculate from segmentation results but difficult to obtain manually, they have not yet been incorporated into the yield prediction model described in section 3. However, these additional features hold potential for extending the yield prediction by providing more extensive input data in future developments.
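A minimal torchvision-based sketch of this segmentation step is shown below, purely as an illustration of the approach: the actual training data, hyperparameters and score thresholds used in the study are not reported here, and the threshold values are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_bunch_segmenter(num_classes: int = 2):  # background + grape bunch
    """Mask R-CNN with its heads replaced for a single annotated foreground class."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

# Inference on one 2048 x 1536 px image; masks above an (assumed) confidence
# threshold are binarized and passed to the rule-based feature extraction.
model = build_bunch_segmenter()
model.eval()
with torch.no_grad():
    pred = model([torch.rand(3, 1536, 2048)])[0]
masks = [(m[0] > 0.5).numpy()
         for m, s in zip(pred["masks"], pred["scores"]) if s > 0.7]
```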

Table 1: Overview of the numeric features extracted from the images using instance segmentations.

Feature | Required Segmentation | Unit
Number of shoots per vine | Shoot instances | count
Average of inflorescences per shoot | Inflorescence and shoot instances | count
Number of inflorescences per vine | Inflorescence instances | count
Average of bunches per shoot | Bunch and shoot instances | count
Number of bunches per vine | Bunch instances | count
Average bunch area | Bunch instances | % of image area
Average and total leaf area | Leaf instances | % of image area
Number of berries | Berry midpoints, as described by Engler et al. (2023) | count

Table 3: Overview of the features extracted via manual plant appraisal used to prove our concept, together with the dates on which the appraisals took place in 2021.

Feature | Capturing Dates (DD.MM.) | Data Range
Number of shoots | 03.05., 12.05., 02.06., 20.07. | 1 – 22 shoots
Number of shoots with inflorescences | 02.06. | 1 – 22 shoots
Average inflorescences per shoot | 02.06. | 0.23 – 3.2 inflorescences
Number of inflorescences per vine | 02.06., 16.06. | 1 – 44 inflorescences
Number of shoots with bunches | 20.07. | 1 – 22 shoots
Average bunches per shoot | 20.07. | 1 – 3.9 bunches
Number of bunches per vine | 01.07., 07.07., 20.07. | 1 – 71 bunches

To independently validate the data-driven segmentation and rule-based feature-extraction steps, the manually annotated ground-truth images are used. These annotations reflect the expected result of the segmentation, enabling the development and validation of the rule-based feature-extraction without relying on the actual results of the data-driven segmentation. This allows for parallel development of both pipeline steps and enables the validation of the rule-based approach without the need to consider possible errors in the input data. This usage of ground-truth data to develop and validate the feature-extraction is indicated in Fig. 2 with the gray dashed arrow.

Due to the separation into two branches, the pipeline can be easily adapted to add more features or improve the segmentation results. The two-fold feature extraction allows independent improvement and adaptation of both the data-driven segmentation and the rule-based evaluation, ensuring a flexible basis for future extensions of the approach.

Data Acquired via Manual Plant Appraisal

While the pipeline for automated feature extraction is not yet finished, the final yield prediction step of our automated grapevine yield prediction pipeline can already be evaluated. To showcase the potential of the final yield prediction model, field tests were conducted in 2021 in two experimental vineyard plots at the JKI Institute for Grapevine Breeding Geilweilerhof, located in Siebeldingen, Germany (49°13'07.0''N 8°02'45.0''E). Rows were planted in a north-south direction and vines were cultivated in a vertical shoot positioned trellis system with one cane and around 10 buds per vine for both plots. The data set includes information on four well-established grape varieties, namely ‘Dornfelder’, ‘Pinot noir’, ‘Pinot blanc’, and ‘Riesling’, as well as seven elite breeding lines from the intermediate testing phase (Töpfer and Trapp, 2022). Plants are grafted on SO4 rootstocks; both plots have an inter-row distance of 2 m and a vine spacing of 1.1 m. A complete overview of the grapevines used for data acquisition can be found in Table 2.

To mimic the output data that will be collected by extracting features in the field, as explained in section 2, we use data acquired by manual plant observations. Viticulture experts made optical observations seven times during the growing season. As a result, some features are present multiple times, reflecting the development of the plants. The number of shoots, for example, was captured four times, as it is an integral feature when assessing the grapevine. The yields of the 400 grapevines were manually weighed to serve as the ground truth for our predictions. Table 3 shows the dates at which the features were captured, together with a description of each feature. For the predictions, 30 data points had to be removed due to missing values. The remaining data points have an average yield of 1.34 kg per grapevine.
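As an illustration of this preparation step (a hypothetical sketch: the file name and column names are our own and do not reflect the authors' data format), dropping incomplete records and checking the mean yield could look as follows.

```python
import pandas as pd

# Hypothetical file and column names; one row per grapevine.
df = pd.read_csv("manual_appraisal_2021.csv")
feature_cols = [c for c in df.columns if c not in ("vine_id", "variety", "yield_kg")]

# Remove vines with missing feature or yield values (~30 of the 400 records).
df = df.dropna(subset=feature_cols + ["yield_kg"])
print(len(df), "vines remaining, mean yield:", round(df["yield_kg"].mean(), 2), "kg")
```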

Table 2: Overview of the different grapevine varieties used for our experiments. The abbreviation in the last column will be used again in Table 4.

Variety | VIVC | Accession number | Rows | Number of vines | Year of planting | Abbreviation
Dornfelder | 3659 | DEU098-2008-057 | 1 | 21 | 2008 | Do
Pinot noir | 9279 | DEU098-2008-075 | 1 | 22 | 2008 | PN
Pinot blanc | 9272 | DEU098-2008-072 | 1 | 22 | 2008 | PB
Riesling | 10077 | DEU098-2008-080 | 1 | 24 | 2008 | Ri
Gf.2010-011-0048 | - | - | 2 | 50 | 2015 | BL1
Gf.2001-041-0004 | - | - | 2 | 46 | 2016 | BL2
Gf.2001-041-0003 | - | - | 2 | 46 | 2016 | BL3
Gf.2004-043-0010 | - | - | 2 | 46 | 2016 | BL4
Gf.2004-043-0021 | - | - | 2 | 45 | 2016 | BL5
Gf.2004-043-0034 | - | - | 2 | 40 | 2018 | BL6
Gf.2000-305-0081 | - | - | 2 | 38 | 2019 | BL7

Results

Grapevine yield prediction is a challenging research area due to the difficulties in obtaining data and the high variability within the ground truth values, which depends on a variety of factors. In this section, we examine Extreme Gradient Boosting (XGBoost) for grapevine yield prediction.

Extreme Gradient Boosting for Yield Prediction

A unique challenge of using machine learning for yield prediction tasks is that new data points can only be acquired at harvest, once a year. An algorithm used to build a regression model for yield prediction should therefore achieve good accuracy with limited amounts of training data. Furthermore, we want to retrain the model every year with the new data to increase accuracy and to keep the model adaptable to an ever-changing environment, especially in times of climate change. The selected machine learning algorithm should therefore also be quick to retrain. Ensemble methods meet both requirements by combining many low-complexity machine learning models into one model capable of solving the challenging task of yield prediction. We found that XGBoost is well suited for yield prediction scenarios (Huber et al., 2022). Within XGBoost we use regression trees as individual learners, making the final model a regression forest. A regression tree can be interpreted as a set of cascading questions and is therefore well suited to be explained in human terms. To obtain a prediction from a regression tree for a data point, the tree is traversed by repeatedly asking whether a specific feature's value exceeds a threshold. Which feature to use and the value of each threshold are learned from the training data set to give the best accuracy. The leaf reached by traversing the tree in this way determines the yield prediction of this tree. This procedure is repeated for every tree in the forest, and the results are added together.
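As a minimal sketch of this modeling step (illustrative only: the data are synthetic placeholders and the hyperparameter values are assumptions, not the tuned settings of the study), an XGBoost regression forest can be trained as follows.

```python
import numpy as np
from xgboost import XGBRegressor

# Synthetic placeholder data: one row of yield-relevant features per vine.
rng = np.random.default_rng(0)
X_train = rng.random((300, 10))
y_train = 3.0 * X_train[:, 0] + rng.random(300)  # placeholder yields in kg

model = XGBRegressor(
    n_estimators=200,               # number of regression trees in the ensemble
    max_depth=3,                    # shallow, low-complexity individual learners
    learning_rate=0.1,
    objective="reg:squarederror",
)
model.fit(X_train, y_train)
predicted_yield_kg = model.predict(rng.random((5, 10)))
```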

Yield Prediction on Manual Plant Appraisal Data

To obtain a proof of concept on the described data acquired through manual appraisal, we trained both a regression forest via XGBoost and a linear regression on the same feature set. We evaluate two main metrics, the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE). Both are standard measures in machine learning and are widely used to compare results. The RMSE and the MAE are defined as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$

Here, $y_i$ represents the ground truth values and $\hat{y}_i$ the prediction values; $n$ is the number of data points used for testing. The error metrics behave differently when handling larger differences between the predictions and the ground truth values, with the RMSE squaring the differences.

To emulate real-world conditions in our experiments, even with the small data set, we created 11 testing scenarios, one for each grape variety within our data. Hyperparameters are tuned by optimizing the RMSE in a 5-fold cross-validation on the remaining training data. The average RMSE over the eleven varieties is 0.68 kg for XGBoost and 0.76 kg for linear regression, an improvement of about 11%. For the MAE, we measure an average of 0.55 kg for XGBoost and 0.63 kg for the linear regression, an improvement of about 13%. This shows the capabilities of XGBoost for grapevine yield prediction even on a small data set. A full breakdown of the results for the individual varieties is shown in Table 4. The improvements of XGBoost over the baseline approach are expected to grow with additional training data, since the more powerful XGBoost model will be able to learn more complex relations that determine the yearly yields. Our test scenario is close to the real-world use case, as the tested grape variety is always absent from the training data. Therefore, our reported results hint towards XGBoost as the best choice for automated grapevine yield prediction in the future.
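The evaluation protocol described above can be sketched as follows. This is an illustrative outline under assumed names (a DataFrame `df` with a `variety` column, feature columns and a `yield_kg` target); the actual hyperparameter grid used in the study is not reported here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, mean_absolute_error
from xgboost import XGBRegressor

def evaluate_variety(df, feature_cols, held_out_variety):
    """Hold out one variety as test data and tune XGBoost on the remaining data
    via 5-fold cross-validation optimizing the RMSE, as described above."""
    train = df[df["variety"] != held_out_variety]
    test = df[df["variety"] == held_out_variety]
    search = GridSearchCV(
        XGBRegressor(objective="reg:squarederror"),
        param_grid={"n_estimators": [100, 200], "max_depth": [2, 3, 4]},  # assumed grid
        scoring="neg_root_mean_squared_error",
        cv=5,
    )
    search.fit(train[feature_cols], train["yield_kg"])
    pred = search.best_estimator_.predict(test[feature_cols])
    rmse = float(np.sqrt(mean_squared_error(test["yield_kg"], pred)))
    mae = float(mean_absolute_error(test["yield_kg"], pred))
    return rmse, mae
```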

Table 4: Experimental results for yield prediction on data captured via manual plant appraisal. The table shows the RMSE and MAE values when the different varieties are used as test data, comparing XGBoost and linear regression for yield prediction. Lower values indicate better results.

Model | Metric | Do | PN | PB | Ri | BL1 | BL2 | BL3 | BL4 | BL5 | BL6 | BL7 | AVG
XGB | RMSE (kg) | 1.17 | 0.47 | 0.80 | 0.63 | 1.03 | 0.52 | 0.50 | 0.68 | 0.68 | 0.46 | 0.54 | 0.68
XGB | MAE (kg) | 1.02 | 0.38 | 0.58 | 0.44 | 0.86 | 0.43 | 0.39 | 0.57 | 0.62 | 0.34 | 0.44 | 0.55
Linear Reg. | RMSE (kg) | 1.19 | 0.42 | 1.25 | 0.63 | 1.03 | 0.58 | 0.47 | 0.63 | 0.91 | 0.70 | 0.54 | 0.76
Linear Reg. | MAE (kg) | 1.06 | 0.33 | 0.99 | 0.44 | 0.88 | 0.51 | 0.34 | 0.54 | 0.86 | 0.58 | 0.44 | 0.63

Discussion

The results of our work must be viewed in the context of limited input data and a difficult data set that contains varieties which are not yet fully explored regarding their possible yield quantities and are prone to higher variation. For example, La Fuente et al. (2015) report slightly better results, with an RMSE down to 0.46 kg, when predicting well-established varieties by statistical analysis. However, Fig. 4 indicates that, in general, we already see a positive correlation between the predicted and actual yield. At the same time, we also see that our predictions are spread out and can be improved. Furthermore, Fig. 4 shows that our models are biased towards predicting smaller yield quantities: for ground-truth yields higher than 2.5 kg, we always predict smaller yield quantities. This problem arises because data points with a ground-truth yield greater than 2.5 kg are underrepresented during model training.

Statistical models for grapevine yield prediction are very prevalent due to their ability to produce good predictions with very few reference data points. Machine learning, on the other hand, needs access to more training data to improve predictions. That being said, the expected maximum performance of machine learning-driven yield prediction is higher than that of statistical approaches, since machine learning models can capture even the most complex relations in the data when enough training data points are available. Furthermore, machine learning models are adaptive to changing conditions with regard to grapevines. In times of climate change, statistical models may need to be recalibrated by experts, while a machine learning model can simply include new samples in the training data to adapt to the new situation. For this, the automated data acquisition pipeline presented in section 2 will be crucial. In contrast to the current time-consuming manual plant appraisal, this pipeline will enable the acquisition of large amounts of data based on objective, rule-based criteria. Many individual vines can be evaluated in a short amount of time, which will significantly increase the future data quantity. This will provide the basis for leveraging the advantages of a machine learning model for yield prediction. We expect our future data quantity to be at least ten times the amount used in this study, allowing us to substantially improve our results.

That being said, focusing on the yield prediction of new breeding lines is important for recognizing new varieties capable of producing yields in the desired ranges. For this use case, even in the future, we cannot always assume that the new varieties are sufficiently represented in our training data. However, the results of our work indicate that machine learning models are able to generalize patterns across different grape varieties; therefore, acquiring more training data, even from other varieties, will help our predictions in the future.

Fig. 4: Ground truth yield values vs. predicted yield values. Each point is an individual grapevine.

In the early stages of our project, the architecture of the pipeline as a whole has a high priority. Therefore, Mask-RCNN was chosen as a well-established architecture for image segmentation and detection of various objects in agriculture (Santos et al., 2020; Jia et al., 2020; Wang et al., 2023). This allowed an easy integration of the model into the workflow and quick first results. However, newer model architectures such as YOLOv8 now outperform this model both in terms of speed and accuracy, as shown by Sapkota et al. (2023). This provides the opportunity to improve the image segmentation by exchanging the model architecture in future iterations of the system. Given the modular design of the pipeline and the existing training data, this extension is easily possible.

Conclusion

In this paper, we presented a concept for a fully automated grapevine yield prediction based on two main contributing factors: first, the automated acquisition of in-field data based on deep learning, and second, the use of these data as input for a regression forest created with the XGBoost algorithm. The developed mobile phenotyping platform allows the automatic acquisition of in-field data matched to individual vines. A two-step feature extraction pipeline is used to evaluate yield-relevant traits and generate numeric features as input for the subsequent yield prediction. The XGBoost-based modeling showed its capabilities for the task at hand in experiments performed on manually acquired data, which mimic the expected output of the automated in-field data acquisition. The next steps are to connect the two stages of the proposed pipeline and to experiment with the then fully automated yield prediction. This will also allow us to increase the amount of training and testing data to include more varieties and data captured over multiple years, to further evaluate the modeling approach.

Conflicts of interest

The authors declare that they do not have any conflicts of interest.

Acknowledgements

This work was partially done within the project “Artificial Intelligence for innovative Yield Prediction of Grapevine” (KI-iREPro). The project is supported by funds of the Federal Ministry of Food and Agriculture (BMEL) based on a decision of the Parliament of the Federal Republic of Germany. The Federal Office for Agriculture and Food (BLE) provides coordinating support for artificial intelligence (AI) in agriculture as funding organisation, grant number FKZ 28DK128B20.

References

Araya-Alman, M., Leroux, C., Acevedo-Opazo, C., Guillaume, S., Valdés-Gómez, H., Verdugo-Vásquez, N., La Pañitrur-De Fuente, C., Tisseyre, B., 2019: A new localized sampling method to improve grape yield estimation of the current season using yield historical data. Precision Agriculture 20 (2), 445–459, DOI: 10.1007/s11119-019-09644-y.

Ballesteros, R., Intrigliolo, D. S., Ortega, J. F., Ramírez-Cuesta, J. M., Buesa, I., Moreno, M. A., 2020: Vineyard yield estimation by combining remote sensing, computer vision and artificial neural network techniques. Precision Agriculture 21 (6), 1242–1262, DOI: 10.1007/s11119-020-09717-3.

Barriguinha, A., Castro Neto, M. de, Gil, A., 2021: Vineyard yield estimation, prediction, and forecasting: a systematic literature review. Agronomy 11 (9), 1789, DOI: 10.3390/agronomy11091789.

Barriguinha, A., Jardim, B., Castro Neto, M. de, Gil, A., 2022: Using NDVI, climate data and machine learning to estimate yield in the Douro wine region. International Journal of Applied Earth Observation and Geoinformation 114, 103069.

Cunha, M., Ribeiro, H., Abreu, I., 2016: Pollen-based predictive modelling of wine production: application to an arid region. European Journal of Agronomy 73, 42–54.

Engler, H., Gauweiler, P., Huber, F., Kraus, J., Fischer, B., Hofmann, B., Petra, S., Yushchenko, A., Gruna, R., Steinhage, V., Herzog, K., Töpfer, R., Kicherer, A., 2023: PHENOquad: A new multi sensor platform for field phenotyping and screening of yield relevant characteristics within grapevine breeding research. Vitis 2023 (62), 41–48.

Fraga, H., Santos, J. A., 2017: Daily prediction of seasonal grapevine production in the Douro wine region based on favourable meteorological conditions. Australian Journal of Grape and Wine Research 23 (2), 296–304, DOI: 10.1111/ajgw.12278.

He, K., Gkioxari, G., Dollar, P., Girshick, R., 2017: Mask R-CNN. 2017 IEEE International Conference on Computer Vision (ICCV), 22.10.2017 – 29.10.2017, Venice, 2980–2988, DOI: 10.1109/ICCV.2017.322.

Huber, F., Yushchenko, A., Stratmann, B., Steinhage, V., 2022: Extreme gradient boosting for yield estimation compared with deep learning approaches. Computers and Electronics in Agriculture 202, 107346.

Jia, W., Tian, Y., Luo, R., Zhang, Z., Lian, J., Zheng, Y., 2020: Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot. Computers and Electronics in Agriculture 172, 105380, DOI: 10.1016/j.compag.2020.105380.

La Fuente, M. de, Linares, R., Baeza, P., Miranda, C., Lissarrague, J. R., 2015: Comparison of different methods of grapevine yield prediction in the time window between fruitset and veraison. Journal International des Sciences de la Vigne et du Vin (49), 27–35.

Laurent, C., Oger, B., Taylor, J. A., Scholasch, T., Metay, A., Tisseyre, B., 2021: A review of the issues, methods and perspectives for yield estimation, prediction and forecasting in viticulture. European Journal of Agronomy (130).

Santos, T. T., Souza, L. L. de, dos Santos, A. A., Avila, S., 2020: Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association. Computers and Electronics in Agriculture 170, 105247, DOI: 10.1016/j.compag.2020.105247.

Sapkota, R., Ahmed, D., Karkee, M., 2023: Comparing YOLOv8 and Mask RCNN for object segmentation in complex orchard environments, DOI: 10.48550/arXiv.2312.07935.

Sirsat, M. S., Mendes-Moreira, J., Ferreira, C., Cunha, M., 2019: Machine learning predictive model of grapevine yield based on agroclimatic patterns. Engineering in Agriculture, Environment and Food 12 (4), 443–450.

Töpfer, R., Trapp, O., 2022: A cool climate perspective on grapevine breeding: climate change and sustainability are driving forces for changing varieties in a traditional market. Theoretical and Applied Genetics 135 (11), 3947–3960, DOI: 10.1007/s00122-022-04077-0.

Wang, T., Zhang, K., Zhang, W., Wang, R., Wan, S., Rao, Y., Jiang, Z., Gu, L., 2023: Tea picking point detection and location based on Mask-RCNN. Information Processing in Agriculture 10 (2), 267–275, DOI: 10.1016/j.inpa.2021.12.004.

 
