Article

Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning

Journal

ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING
Volume 174, Pages 265-281

Publisher

ELSEVIER
DOI: 10.1016/j.isprsjprs.2021.02.008

Keywords

PlanetScope; WorldView-3; Deep learning; Convolutional neural network; ResNet; Artificial intelligence; Food security

Funding

  1. National Science Foundation [IIA-1355406, IIA-1430427]
  2. National Aeronautics and Space Administration [NNX15AK03H]


This study developed a deep learning approach that predicts field-scale crop yield directly from raw satellite imagery, demonstrating that deep learning can explain nearly 90% of the variance in yield. WV-3 imagery, with its RedEdge and SWIR bands, outperformed multi-temporal PS data for yield prediction.
Field-scale agricultural management is critical for improving yield to address global food security, as providing enough food for the world's growing population has become a wicked problem for both scientists and policy-makers. County- or regional-scale data do not provide meaningful information to farmers, who are interested in field-scale yield forecasting for effective and timely field management. No previous studies have directly utilized raw satellite imagery for field-scale yield prediction with deep learning. The objectives of this paper were twofold: (1) to develop a raw-imagery-based deep learning approach for field-scale yield prediction, and (2) to investigate the contribution of in-season multi-temporal imagery to grain yield prediction, using hand-crafted features and WorldView-3 (WV-3) and PlanetScope (PS) imagery as the direct input, respectively. Four WV-3 images and 25 PS images collected during the soybean growing season were utilized. Both 2-dimensional (2D) and 3-dimensional (3D) convolutional neural network (CNN) architectures were developed that integrated the spectral, spatial, and temporal information contained in the satellite data. For comparison, hundreds of carefully selected spectral, spatial, textural, and temporal features known to be optimal for crop growth monitoring were extracted and fed into the same deep learning model. Our results demonstrated that (1) deep learning was able to predict yield directly from raw satellite imagery to an extent comparable to feature-fed deep learning approaches; (2) both 2D and 3D CNN models explained nearly 90% of the variance in field-scale yield; (3) a limited number of WV-3 images outperformed multi-temporal PS data collected over the entire growing season, mainly owing to the RedEdge and SWIR bands available with WV-3; and (4) 3D CNN increased the prediction power of PS data compared to 2D CNN, owing to its ability to digest temporal features in the PS time series.
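The abstract's distinction between 2D and 3D CNNs for a multi-temporal image stack can be sketched in terms of tensor shapes. The snippet below is an illustration, not the authors' code: the patch size (64x64) and kernel sizes are hypothetical, while the 25 PS dates and 4 PS bands follow the abstract. A 2D CNN must flatten time and bands into channels, whereas a 3D CNN also convolves along the temporal axis, which is why it can exploit growth-stage dynamics in the PS time series.

```python
# Illustrative sketch (assumed shapes, not the paper's implementation):
# how 2D vs 3D convolutions treat a multi-temporal satellite image stack.

def conv_out(size, kernel, stride=1, padding=0):
    """Standard convolution output-size formula along one axis."""
    return (size + 2 * padding - kernel) // stride + 1

# PlanetScope-like stack: 25 dates, 4 spectral bands, 64x64-pixel patch
T, B, H, W = 25, 4, 64, 64

# 2D CNN: dates and bands are stacked into input channels,
# so the explicit temporal ordering is lost to the convolution.
channels_2d = T * B                       # 100 input channels
out_h = conv_out(H, kernel=3, padding=1)  # spatial size preserved
out_w = conv_out(W, kernel=3, padding=1)

# 3D CNN: a 3x3x3 kernel also slides along the temporal axis,
# so the network can learn temporal (growth-stage) patterns directly.
out_t = conv_out(T, kernel=3)             # temporal extent after one layer

print(channels_2d, out_h, out_w, out_t)
```

Under these assumed settings the 2D model sees 100 channels with no temporal axis, while the 3D model keeps a temporal dimension (25 dates shrink to 23 after one unpadded 3-wide temporal convolution), which matches the abstract's point that the 3D CNN can digest temporal features from the PS data.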

