Sourav Bhadra

and 3 more

Crop leaf angle is a crucial feature of plant architecture that influences photosynthetic efficiency and yield. High-throughput mapping of leaf angle is therefore of great interest for both precision agriculture and crop breeding. In this study, we propose a UAV-based hybrid approach that combines a radiative transfer model (PROSAIL) with deep neural networks to estimate leaf angle from multi-angular hyperspectral and LiDAR data. PROSAIL simulates canopy hyperspectral reflectance from a set of leaf and canopy parameters, one of which is the Average Leaf Inclination Angle (ALIA). The goal is to develop a deep learning-based inversion function that takes UAV hyperspectral reflectance and the other PROSAIL parameters as input and estimates the ALIA of maize as output. The other PROSAIL parameters will be estimated from UAV hyperspectral and LiDAR information using several machine learning pipelines. We also propose a multi-angular reflectance scheme in which each image pixel generates multiple simulated and observed reflectance spectra from the overlapping image regions, using the different angles involved in the PROSAIL simulation (i.e., solar zenith angle, viewing zenith angle, and relative azimuth angle between the sun and sensor). An automated Python-based tool was developed that calculates all three PROSAIL angles for a given hyperspectral data cube and generates the simulated reflectance for every vegetation pixel in each experimental plot. Since the proposed method incorporates both physically based crop information (i.e., PROSAIL) and a data-driven approach (i.e., deep learning), it is readily transferable to other study areas and crops and can rely on less ground truth data.
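The simulation-and-inversion idea can be illustrated with a minimal sketch: simulate a lookup table of canopy reflectance with PROSAIL across a range of ALIA values, then train a neural network to map reflectance (plus the other PROSAIL inputs) back to ALIA. The sketch below assumes the open-source `prosail` Python package and scikit-learn; all parameter values and ranges are illustrative placeholders, not the settings used in the study.

```python
# Hedged sketch: PROSAIL forward simulations -> neural-network inversion of ALIA.
import numpy as np
import prosail                                  # pip install prosail
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples = 500
alia = rng.uniform(20.0, 80.0, n_samples)       # target: average leaf angle (deg)
lai = rng.uniform(0.5, 6.0, n_samples)          # illustrative co-varying parameters
cab = rng.uniform(20.0, 80.0, n_samples)        # chlorophyll a+b (ug/cm^2)

spectra = np.empty((n_samples, 2101))           # 400-2500 nm at 1 nm steps
for i in range(n_samples):
    spectra[i] = prosail.run_prosail(
        n=1.5, cab=cab[i], car=8.0, cbrown=0.0, cw=0.01, cm=0.009,
        lai=lai[i], lidfa=alia[i], hspot=0.01,  # with typelidf=2, lidfa is ALIA
        tts=30.0, tto=10.0, psi=0.0,            # fixed angles here; per-pixel in practice
        typelidf=2, rsoil=1.0, psoil=1.0,
    )

# Inversion function: reflectance + known parameters -> ALIA.
X = np.column_stack([spectra, lai, cab])
inverter = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=500, random_state=0)
inverter.fit(X, alia)
```

In the multi-angular scheme described above, each observed pixel would contribute several such reflectance-plus-angles pairs rather than a single fixed-geometry spectrum.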
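The three PROSAIL angles produced by the automated tool can likewise be sketched. Solar position is obtained here from the `pvlib` package; the function name, the local east-north-up (ENU) coordinate convention, and the sensor and pixel coordinates are hypothetical stand-ins, not the tool's actual interface.

```python
# Hedged sketch of per-pixel acquisition geometry for PROSAIL:
# solar zenith (tts), viewing zenith (tto), relative azimuth (psi).
import numpy as np
import pandas as pd
from pvlib import solarposition                 # pip install pvlib

def prosail_angles(acq_time_utc, lat, lon, sensor_enu, pixel_enu):
    """Return (tts, tto, psi) in degrees for one pixel."""
    sp = solarposition.get_solarposition(
        pd.DatetimeIndex([acq_time_utc], tz="UTC"), lat, lon)
    tts = float(sp["apparent_zenith"].iloc[0])  # solar zenith angle
    saa = float(sp["azimuth"].iloc[0])          # solar azimuth (clockwise from north)

    # Viewing geometry from the pixel up to the UAV sensor (local ENU metres).
    d = np.asarray(sensor_enu, float) - np.asarray(pixel_enu, float)
    tto = np.degrees(np.arctan2(np.hypot(d[0], d[1]), d[2]))   # viewing zenith
    vaa = np.degrees(np.arctan2(d[0], d[1])) % 360.0           # viewing azimuth

    psi = abs(saa - vaa) % 360.0                # relative azimuth, folded to [0, 180]
    if psi > 180.0:
        psi = 360.0 - psi
    return tts, tto, psi

# Example: near-nadir view from a UAV 50 m above a pixel in mid-Missouri.
print(prosail_angles("2021-07-15 17:30", 38.9, -92.2,
                     sensor_enu=(5.0, 0.0, 50.0), pixel_enu=(0.0, 0.0, 0.0)))
```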

Supria Sarkar

and 3 more

Predicting the composition of soybean seeds while the plants are still growing in the field is important for understanding how genotype, field conditions, and environment influence seed composition parameters. Knowing this at a global scale is even more important for understanding the dynamics of food insecurity and the interaction of seed composition with global environmental change. This study aims to develop a machine learning-based soybean seed composition model from the fusion of PlanetScope, Sentinel, and Landsat satellite images. Although satellite images provide global coverage throughout the year, no single sensor offers both fine spatial and rich spectral resolution. PlanetScope provides four-band (i.e., red, green, blue, and near-infrared) multispectral imagery at approximately 3 m spatial resolution daily, whereas Sentinel-2B and Landsat-8 have coarser spatial resolution (10-30 m) but richer spectral resolution. Therefore, the objectives of this study are to 1) fuse the PlanetScope images with corresponding Landsat and Sentinel images, and 2) evaluate several machine learning algorithms (e.g., partial least squares, support vector machine, random forest, and deep neural network) for predicting the protein and oil content of soybean seeds from the fused satellite images. Two soybean fields were established in 2020 and 2021 at Bradford, MO to perform the experiment, and the corresponding PlanetScope, Sentinel, and Landsat images were downloaded and processed for the entire growing seasons. Current results indicate that the deep neural network provides the best performance in predicting both protein and oil content of soybean. Future steps are to assess different fusion algorithms and to predict seed composition at regional or global scales.
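The second objective, comparing candidate machine learning algorithms, can be sketched with scikit-learn as below. The fused feature table (one row per plot with PlanetScope, Sentinel, and Landsat band values and a measured protein column) is a hypothetical file, not the study's data, and the hyperparameters are placeholders.

```python
# Hedged sketch: compare candidate regressors for seed composition prediction.
import pandas as pd
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("fused_plot_spectra.csv")      # hypothetical fused feature table
X = df.drop(columns=["protein"]).to_numpy()     # fused band values per plot
y = df["protein"].to_numpy()                    # measured seed protein (%)

models = {
    "PLS": PLSRegression(n_components=10),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "RF":  RandomForestRegressor(n_estimators=500, random_state=0),
    "DNN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(128, 32),
                                      max_iter=1000, random_state=0)),
}
for name, model in models.items():              # 5-fold cross-validated R^2
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: R2 = {r2.mean():.3f} +/- {r2.std():.3f}")
```

The same comparison would be repeated with oil content as the target.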