Minirhizotron imagery can be used to assess plant root health, and the volume of data generated motivates automating root detection with neural networks. Building upon previous work, we show that transfer learning from our PRMI dataset can be used to assess root health across twelve classes of a new dataset, addressing questions about how root health is affected by large-herbivore access to a tree, site infestation by Pheidole megacephala, and tree location. This dataset was collected from three paired sites at the Ol Pejeta Conservancy in Laikipia, Kenya, and consists of 20,000 images collected between September 2021 and May 2022. Each paired site represents four locations covering all four possible combinations of site infestation by Pheidole megacephala for at least 20 years and presence of a fence excluding large herbivores. 1,332 images spanning all twelve site and treatment combination classes were labeled with ground truth for model training. Our work uses the U-Net architecture with encoder and decoder weights pretrained in 2019 on a dataset of peanut and switchgrass imagery, where the network achieved over 99% accuracy. We found that training the model on our new dataset resulted in consistent performance across all classes, with over 99% accuracy for each class.
High night air temperature (HNT) stress challenges rice production. Findings indicate a 10% yield reduction for every 1 °C increase in night air temperature. The responses of rice to HNT stress have been analyzed in a limited number of genotypes, mostly under greenhouse conditions. One obstacle to such studies under field conditions is imposing HNT stress at a critical rice growth stage. The physiological and metabolic responses of rice to HNT stress under field conditions are not fully understood; thus, field studies are needed. Field-based phenotyping infrastructure that can house rice germplasm and impose stress with a computer-based system driven by ambient temperature does not yet exist. In this study, six high tunnel greenhouses were built at a field experimental station in Harrisburg, AR in a split-plot design. These movable structures fitted 310 rice accessions from the Rice Diversity Panel 1 (RDP1) and 10 hybrids from RiceTec. Each high tunnel greenhouse had heating and a cyber-physical system that recorded ambient air temperature and increased night air temperature relative to ambient temperature at the flowering stage. The system successfully imposed HNT stress of 4.01 °C and 3.94 °C, as recorded by Raspberry Pi sensors, for two weeks in the 2019 and 2020 cropping seasons, respectively. These greenhouses were able to endure constant flooding and to resist heavy rain and 40-50 miles/h winds. Grain quality and other biochemical assays are still ongoing to fully assess the effects of HNT on the rice accessions and the hybrids.
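The abstract does not detail the control rule of the cyber-physical system; as an illustrative sketch (the function name, the fixed offset, and the 19:00-06:00 night window are assumptions, not the system's actual parameters), the core idea of tracking ambient temperature and adding a night-time offset might look like:

```python
from datetime import time

def hnt_setpoint(ambient_c, now, offset_c=4.0,
                 night_start=time(19, 0), night_end=time(6, 0)):
    """Return the target air temperature for the high-tunnel heater.

    During the night window the setpoint tracks the recorded ambient
    temperature plus a fixed offset (the imposed HNT stress); during
    the day no stress is imposed. The window spans midnight, hence
    the `or` in the check below.
    """
    t = now.time()
    is_night = t >= night_start or t < night_end
    return ambient_c + offset_c if is_night else ambient_c
```

For example, with ambient 22 °C at 23:00 the heater would target 26 °C; at noon it would simply hold ambient.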
The structures of roots play an essential role in plant growth, development, and stress responses. Minirhizotron imaging is one of the most widely used approaches to capture and analyze root systems. After segmenting minirhizotron images, every individual root is separated from the others and from the background. Root traits, like root lengths and diameter distributions, can provide information about the health of the plants. Current methods to analyze minirhizotron images usually rely on manually annotated labels and commercial software tools, which is time- and labor-intensive. Moreover, these methods usually generate a statistical analysis of the input image rather than the features of each root. In this work, we propose a pipeline that uses deep neural networks to automatically segment roots from the background and then extracts root features like lengths and diameter distributions from each segmented root. In detail, we first use a pre-trained U-Net to segment the roots in the minirhizotron images. Then, we separate each individual root with the help of connected component analysis. Finally, we extract features like the diameter distribution and root length of every individual root with morphological operations, like skeletonization. For evaluation, we conduct experiments on synthetic roots made of strings and threads as well as on a benchmark dataset (PRMI) of real switchgrass root images, and compare the estimated results with existing commercial software.
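The connected-component step of the pipeline can be sketched in plain NumPy (a minimal 4-connectivity labeling; the authors' actual pipeline would likely use a library routine such as `scipy.ndimage.label`, with skeletonization following for the length and diameter traits):

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary root mask.

    Returns an integer label image (0 = background) and the
    number of components (individual roots) found.
    """
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        q = deque([(i, j)])
        while q:  # breadth-first flood fill of one root
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current
```

Each labeled component can then be passed individually to the morphological feature-extraction step.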
Most current plant phenotyping research focuses primarily on above-ground traits, like leaves and flowers. Roots often receive comparatively less attention because they are challenging to examine and image. Minirhizotron (MR) systems are one of the imaging approaches for studying plant roots underground. In MR systems, a tube is inserted into the ground, through which a camera captures images of the root systems. Unlike minirhizotron imaging, X-ray computed tomography (CT) captures three-dimensional (3D) information from soil cores extracted from the ground. For a better analysis of roots, the first step is always to segment the roots from the background in the images or image sequences. The results of root segmentation play an essential role in further analysis such as root diameter and length estimation. Current fully supervised segmentation methods mainly use pixel/point-level annotated labels, which require much manual effort and time. In this work, we propose a weakly supervised root segmentation approach with graph convolutional networks. Our model only requires image-level annotations to segment roots from images or image sequences. In detail, our model first constructs graphs over neighboring pixels/points and then learns distinguishable features used as hints for segmentation by training a classifier on the image-level annotations. Finally, post-processing procedures like principal component analysis (PCA) are applied to refine the final segmentation results. We conduct experiments on the challenging 2D PRMI minirhizotron benchmark and 3D switchgrass root X-ray CT datasets for evaluation.
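As a toy illustration of the graph-construction step over neighboring pixels (the 4-connectivity neighborhood used here is an assumption; the abstract does not specify the neighborhood definition):

```python
def grid_adjacency(h, w):
    """Edge list connecting each pixel of an h x w image to its
    right and bottom neighbors (4-connectivity), with pixels
    flattened row-major to node indices 0..h*w-1.

    Such an edge list is the usual input for building the sparse
    adjacency matrix of a graph convolutional network.
    """
    edges = []
    for i in range(h):
        for j in range(w):
            u = i * w + j
            if j + 1 < w:          # edge to the right neighbor
                edges.append((u, u + 1))
            if i + 1 < h:          # edge to the bottom neighbor
                edges.append((u, u + w))
    return edges
```

For a 3D CT volume the same idea extends to 6-connectivity over voxels.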
Hyperspectral imaging is a non-destructive imaging technique used in plant phenotyping to collect and analyze an array of electromagnetic information in the visible (380-700 nm) and near-infrared (700-2,500 nm) wavelength regions. Hyperspectral imaging can provide information on plant responses under various biotic and abiotic stresses, e.g., drought, rising temperature, disease, and nutrient deficiency. We present a hyperspectral data processing pipeline designed for the data collected at the Ag Alumni Seed Phenotyping Facility (AAPF) at Purdue University, USA. The procedure consists of initializing a processing session, radiometric calibration with white and dark references, geometric calibration (registration) of visible and near infrared (VNIR) and shortwave infrared (SWIR) images, vegetation and non-vegetation classification, vegetation index calculation over the plant area, exporting data products, and quality control. Given the large size of hyperspectral data, we highlight the need to limit memory usage during computation and disk space for data products. We also address the need for human-interpretable images in the hyperspectral data products for plant scientists without experience in hyperspectral imaging. We expect the developed procedure to improve the robustness of large-scale hyperspectral data processing and to promote the use of hyperspectral data by increasing interpretability.
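The radiometric calibration step with white and dark references follows the standard flat-field formula, reflectance = (raw − dark) / (white − dark); a minimal sketch, computed in float32 in keeping with the memory concern above (function and argument names are illustrative, not the pipeline's actual API):

```python
import numpy as np

def radiometric_calibration(raw, white_ref, dark_ref, eps=1e-9):
    """Convert raw digital numbers to reflectance using white and
    dark reference frames, clipped to [0, 1].

    All inputs are broadcastable arrays (e.g. a (H, W, bands) cube
    and per-band reference frames). float32 keeps memory use modest
    on large hyperspectral cubes.
    """
    raw = np.asarray(raw, dtype=np.float32)
    white = np.asarray(white_ref, dtype=np.float32)
    dark = np.asarray(dark_ref, dtype=np.float32)
    refl = (raw - dark) / np.maximum(white - dark, eps)
    return np.clip(refl, 0.0, 1.0)
```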
Global wheat production needs to increase by 60% to ensure food security in the future. Radiation use efficiency (RUE), defined as dry matter production per unit of light energy consumption, is an important trait that contributes to wheat yield potential. Traditionally, RUE is estimated through sequential biomass cuts evaluated against cumulative light interception, which is imprecise and not genotype-specific. 3D models combined with ray-tracing algorithms have recently shown promise for estimating light interception, though mostly in single-plant models; light interception at the canopy level remains to be explored. In this study, a mobile robotic phenotyping platform equipped with dual multispectral laser sensors was used to generate canopy 3D data. Using this platform, 100 spring wheat genotypes were scanned at heading stage to understand the genetic variation for RUE and its associated traits under field conditions. Ray-tracing algorithms were used to estimate the fraction of intercepted photosynthetically active radiation (FIPAR) for all genotypes, validated with a hand-held light ceptometer. Genotype-specific RUE was calculated as the slope between dry biomass and accumulated PAR. 3D model-based FIPAR was in close agreement with ceptometer-derived FIPAR. 3D model-derived RUE showed large genetic variation across the 100 wheat genotypes and explained more of the variation in grain yield than ceptometer-derived RUE. These results indicate that canopy 3D models can be used as a rapid method for estimating canopy RUE in wheat and are potentially extendable to other cereals.
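Genotype-specific RUE as the slope of dry biomass against accumulated PAR can be sketched with an ordinary least-squares fit (a minimal illustration of the stated definition, not the authors' exact procedure):

```python
import numpy as np

def genotype_rue(cum_par, dry_biomass):
    """RUE (e.g. g dry matter per MJ intercepted PAR) estimated as
    the linear-regression slope of sequential biomass measurements
    against cumulative intercepted PAR for one genotype.
    """
    slope, _intercept = np.polyfit(np.asarray(cum_par, dtype=float),
                                   np.asarray(dry_biomass, dtype=float), 1)
    return slope
```

Repeating the fit per genotype gives the RUE distribution whose genetic variation the study examines.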
Grain and seed properties can be evaluated using near-infrared spectroscopy and other methods for post-harvest quality assessment. Hyperspectral imaging combines spectroscopy with spatial information, which provides additional features that may improve predictive models of seed traits. To assess the ability of deep learning models to use hyperspectral data for predicting phenotypes, we first aimed to predict the genotype of maize seeds. Previous work achieved high identification accuracy between a small set of genotypes using either RGB images or hyperspectral data, and we hypothesized that high spectral resolution (350-1000 nm) hyperspectral data would outperform simple RGB data in our study. Our dataset consisted of hyperspectral images of maize seeds from 47 inbred lines, including the 26 NAM lines, with 96 individual seeds per genotype. We evaluated the difference in genotype identification accuracy using three different representations of the individual seed data: 1) the whole scan, containing the reflectance at 580 different wavelengths, 2) a subset containing the reflectance at 3 wavelengths corresponding to a pseudo-RGB image, and 3) a gray-scale image derived from the pseudo-RGB image. We fine-tuned VGG11, a popular convolutional neural network, using 85% of the individual seed data for each of the representations. We obtained around 90% genotype prediction accuracy on the unseen data for both the whole scan and the pseudo-RGB data, and 72% using the gray-scale data. These results indicate that the shape and color information contained in RGB images might be sufficient for the task of maize seed genotype identification.
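The pseudo-RGB and gray-scale representations can be sketched as band selection followed by a luma-style weighting (the R/G/B target wavelengths and luma weights below are common defaults, not values taken from the study):

```python
import numpy as np

def pseudo_rgb(cube, wavelengths, rgb_nm=(640.0, 550.0, 460.0)):
    """Pick the bands nearest the requested R, G, B wavelengths
    from a (H, W, bands) hyperspectral cube, in R, G, B order."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    idx = [int(np.argmin(np.abs(wavelengths - w))) for w in rgb_nm]
    return cube[:, :, idx]

def to_gray(rgb):
    """Gray-scale image from a pseudo-RGB image using the common
    Rec. 601 luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```

Applying `pseudo_rgb` to the 580-band scans yields representation 2, and `to_gray` on that result yields representation 3.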
Holistic assessment of fruit quality is an essential component of producing strawberry varieties that will succeed in the marketplace and improve consumer satisfaction. However, several key quantitative traits, namely acidity and aroma, are notoriously slow and expensive to assess using standard procedures, requiring titration and gas chromatography-mass spectrometry, in contrast to Brix, anthocyanins, and vitamin C, which are measured by refractometer and parallelized plate-reader assays. Scaling up evaluations for acidity and aroma has been difficult, as the techniques require 5 and 40 min/sample, respectively, and sample preparation is equally intense, requiring multiple trained hands working 10-hour sessions to create the sample series for 100 entries. We evaluated the ability (R², RMSE) of a handheld near-infrared (NIR) spectrometer, measuring 125 wavelengths between 800 and 1600 nm, and an electronic nose, measuring the response of 32 electrochemical sensors to various compounds in gas samples, on 4,000 diverse strawberry accessions to determine whether the 5 and 40 min/sample assays can be replaced with a 1 sec/sample NIR assay (0.33% of the original time) and a 2 min/sample E-nose assay (5%), neither of which requires additional sample prep. We also assess the NIR's ability to predict Brix, anthocyanins, and vitamin C. With these two sensors, we will be able to increase the scale of early-generation evaluation from hundreds to thousands of samples, produce full datasets prior to deadlines in the breeding program, and make more reliable genetic gains for quality traits affecting marketability and consumer acceptance.
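The sensors' predictive ability is evaluated by R² and RMSE; these two metrics can be sketched as:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-square error of
    predictions against reference assay values (e.g. titratable
    acidity predicted from NIR spectra vs. titration)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    return r2, rmse
```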
The California strawberry industry generated more than 2 billion dollars in revenue in 2020 (USDA-ERS). Strawberry breeders develop new varieties to increase productivity in the face of shifting biotic and abiotic stresses. The University of California, Davis maintains a strawberry breeding program that evaluates >10,000 entries yearly to meet the demand for new improved varieties, focusing on plant productivity, fruit quality, and resistance to soil-borne pathogens. One challenge for a breeding program of this scale is efficiently scoring and collecting detailed information on cultivar performance. Traits like plant size and growth rate are rarely collected. It takes a crew of 4 people 20-25 hours to score fruit count, so it is currently done once per week. Correlated traits, e.g., plant size and vigor, assessed by drone imagery could provide high-quality information and replace labor-intensive assessments of phenotypic traits and yield. In 2022 we deployed drones to generate research-grade imagery of nearly 10,000 entries at Wolfskill Experimental Orchard in Winters, CA and another 3,000 entries under induced disease pressure to determine the best predictors of productivity and disease severity from drone imagery. We applied image analytics tools developed by HIPHEN to extract ground coverage, plant height, biovolume, and a range of visual indices from multiple sensors to assess cultivar performance. The extracted traits were then used as independent variables to predict either yield or visual disease severity. We report our initial findings, examine the successes and lessons learned, and propose solutions to ongoing challenges in strawberry breeding.
ORCiD: https://orcid.org/0000-0001-6665-6094 Plant phenotyping is an essential aspect of crop science analytics, providing critical information about plants' genetics, traits, productivity, and other details needed to understand their performance under particular conditions and environments. Various methods quantify this information using models built from several kinds of datasets. In this study, we extract phenotypic information about soybean from UAS-based images captured over growing fields within the selected experiment field. A DJI M300 unmanned aerial system was equipped with Zenmuse P1 and L1 sensors, used to capture RGB and LiDAR imagery, respectively. In addition, a DJI P4 Multispectral UAS was used to collect multispectral information over these fields at various date intervals. The captured data are processed using custom-developed algorithms and automated workflows to obtain biomass, vegetation indices, canopy cover, canopy height, and canopy volume. These indices reveal variation in the traits of the soybean crop under study. The phenotypic information will be compared against field measurements for validation.
Interactive annotation for object delineation can be considered a semi-supervised few-shot learning problem, in which machine learning models learn from a small set of annotated pixels and generalize to the entire picture to extract the object of interest. One aim of interactive annotation is to reduce the effort of manually labeling data. Some existing works attempted to address this problem with deep metric learning, so that the encoding layers in the network extract features that boost discriminability among pixels belonging to different classes. To preserve the data structure in the embedding space, metric losses with prototypes have been proposed. In our work, we improve on existing methods by developing a new objective function that updates the network and the prototypes simultaneously. The prototypes are optimized based on a loss that enhances their dissimilarity, instead of being obtained by clustering or sampling from the dataset. Moreover, we designed a GUI around the proposed method for interdisciplinary collaboration in image-supported plant phenotyping studies.
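The abstract does not give the objective function itself; as a toy sketch of one plausible dissimilarity-enhancing term on the prototypes (a squared hinge on pairwise distances; the margin and the exact form are assumptions, not the authors' loss):

```python
import numpy as np

def prototype_separation_loss(prototypes, margin=1.0):
    """Penalize prototype pairs that lie closer than `margin` in the
    embedding space, pushing class prototypes apart. Gradients of
    this term (in an autodiff framework) would update the prototypes
    jointly with the network weights."""
    k = prototypes.shape[0]
    loss = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(prototypes[i] - prototypes[j])
            loss += max(0.0, margin - d) ** 2
    return loss
```

Prototypes already separated by more than the margin contribute nothing, so the term only acts while classes still overlap in the embedding.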
Urban agriculture has been broadly acknowledged for its potential to reduce carbon emissions, increase food security, and improve economic growth in some of the most vulnerable communities in the United States. Collard (B. oleracea var. viridis) is a diploid leafy green, grown on urban farms and community gardens across the country, including the St. Louis Metro region. Beyond their nutritional importance, collards provide urban and commercial agronomic systems with a plethora of important ecosystem services. They scavenge nitrogen and available resources, suppress weeds, and act as a biofumigant to control soil-borne pests and pathogens. Recently, The Heirloom Collard Project characterized the above-ground growth habits of 18 landrace collard varieties across 250 organic gardens and farms. Little work has been published investigating collard root system architecture, which influences both quality traits and the ecosystem services that contribute to sustainable crop production. The objectives of this research are to 1) quantify root spatial and temporal diversity across 18 landrace collard varieties, and 2) evaluate the relationship between root phenotype and urban farmer crowd-sourced data for key traits such as germination rate, disease resistance, vigor, yield, flavor, and winter hardiness. This work will lead to the development of a participatory framework for urban farmers and chefs to select varieties with improved root architecture based on regional needs.
Automation of plant phenotyping using data from high-dimensional imaging sensors is at the forefront of agricultural research for its potential to improve seasonal yield by monitoring crop health. We developed a mast-mounted hyperspectral imaging polarimeter (HIP) that can image a corn field across multiple diurnal cycles throughout a growing season. Using the polarization data, we present preliminary results demonstrating the potential to use polarization to decouple light reflected from the surface from light scattered within the tissues, thus enabling time of day, solar incidence angle, and viewing angle to be reduced as confounding factors for the spectral measurement. We present two approaches for polarization correction of our image data. The first uses ground-truth Normalized Difference Vegetation Index (NDVI) with linear regression and convolutional neural networks to train a deep learning model capable of compensating for the leaf normal relative to the camera and sun angle. The second approach uses a recently constructed instrument that fits a scattering model of corn leaves by measuring the Bidirectional Reflectance Distribution Function (BRDF). This function models the behavior of light reflected off a leaf relative to its spectrum, polarization, and angle of incidence. Incorporating this model with data collected by the HIP, we estimate that the system will be able to distinguish leaves with surface normals facing towards the camera from leaves facing away from the camera. Preliminary results demonstrate a promising solution to reduce confounding factors in high-throughput systems for applications in plant phenomics and remote sensing.
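The ground-truth NDVI used in the first correction approach is computed band-wise in the usual way from near-infrared and red reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, (NIR - red)/(NIR + red),
    computed element-wise over reflectance arrays; `eps` guards
    against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```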
Using magnetic resonance imaging (MRI), our established root phenotyping platform (van Dusschoten et al., 2016) can visualize and analyze plant roots in natural soil nondestructively (Pflugfelder et al., 2017). Using plant pots of 9 cm diameter and 30 cm height, a root system can be scanned within 1 h, and roots down to diameters of 300 µm can be detected and analyzed using our in-house root extraction software NMRooting (van Dusschoten et al., 2016). Thanks to automation with a pick-and-place robot, the platform routinely achieves a throughput of 24 plants per day. All these values, however, are based on compromises between imaging speed and quality. In our system, the root detection limit is determined by the signal-to-noise ratio (SNR) of our images. The SNR can be increased by using smaller plant pots or by increasing the imaging time. In this contribution we investigate the potential gain in the root detection limit when sacrificing plant throughput in favor of image quality. We acquired low-noise root images using repeated signal averaging during the measurement process. Using this approach, the root detection limit could be lowered, visualizing roots not detected by the standard imaging protocol.
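The throughput-versus-quality trade-off follows from the fact that averaging N repeated acquisitions improves SNR by √N (for uncorrelated noise), so doubling SNR quadruples scan time. A minimal sketch (the scan-time figures in the example are illustrative, not the platform's measured values):

```python
import math

def averages_needed(snr_gain):
    """Number of repeated acquisitions needed for a desired SNR gain,
    from the sqrt(N) averaging law for uncorrelated noise."""
    return math.ceil(snr_gain ** 2)

def scan_time_tradeoff(base_minutes, snr_gain):
    """Total scan time per plant after averaging enough repeats to
    reach the requested SNR gain."""
    return base_minutes * averages_needed(snr_gain)
```

For instance, doubling SNR turns a 60-minute scan into a 240-minute one, cutting daily throughput by a factor of four.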
ORCiD: https://orcid.org/0000-0001-7766-3775 The expanding geographic range of Phyllachora maydis, the fungus that induces Tar Spot infection on corn foliage, is increasingly threatening a Michigan industry that contributes over $1 billion to the state's economy annually. Foliar infection of maize by P. maydis is often difficult to detect early. Visible lesions initially appear tiny, ambiguous, and sparse, making them difficult to identify with the naked eye. Both farmers and breeders of corn desperately need better tools that allow early, definitive detection of lesions and provide more time for management decisions. This tool must verify presence of P. maydis and quantify infection severity as quickly as possible to allow growers the most options for treatment. Advances in machine learning now enable quantification of crop infection presence and severity using powerful object detection packages. With the growing availability of open-source tools, such as the Mask Region-Based Convolutional Neural Network (Mask R-CNN) and PlantCV, the field of plant disease phenotyping has more options for methods than ever before. I propose comparing the accuracy of two potential pipelines to quantify tar spot infection severity: one based on heuristic methods, involving techniques such as dynamic image colorspace thresholding, and the other based on the use of annotations, such as object detection and contour analysis. Comparison of these two methods will provide insight into challenges involved with phenotyping in the field as well as phenotyping foliar diseases using automated methods.
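Whichever pipeline proves more accurate, severity quantification ultimately reduces to lesion area over leaf area; a minimal sketch of that final step, taking the binary masks produced by either the thresholding-based or the annotation-based method (mask names are illustrative):

```python
import numpy as np

def infection_severity(lesion_mask, leaf_mask):
    """Percent of leaf area covered by detected tar spot lesions,
    given boolean masks from any upstream segmentation method."""
    leaf_px = np.count_nonzero(leaf_mask)
    if leaf_px == 0:
        return 0.0  # no leaf pixels: severity undefined, report 0
    lesion_px = np.count_nonzero(np.logical_and(lesion_mask, leaf_mask))
    return 100.0 * lesion_px / leaf_px
```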
X-ray tomography (XRT) is a powerful and versatile tool for generating detailed non-destructive three-dimensional (3D) image data of large and complicated structures. In particular, excavated, cleaned and dried maize root crowns can be rapidly scanned, and the resulting 3D volumes processed in a computational feature extraction pipeline to provide a wide range of root trait measurements. These measurements provide rich data that give insights into how roots occupy 3D space in ways not possible with any 2D imaging/measurement systems. Hundreds of root crowns can be scanned in a moderate-throughput system, and multivariate statistical analyses can provide valuable insight into the role that genes and quantitative trait loci play in selected root traits. Research presented will describe XRT scan parameter optimization and its impact on root trait data generated by the feature extraction pipeline.
Imaging of plants using multi-camera arrays in high-density growth environments is a strategy for affordable high-throughput phenotyping. In multi-camera systems, simultaneous imaging of hundreds to thousands of plants eliminates the time delay in measurements between plants seen in plant-to-camera or camera-to-plant systems, which allows for the analysis of plant growth, development, and environmental responses at a high temporal resolution. On the other hand, high plant density, camera-to-camera variation, and other trade-offs increase the complexity of data analysis. Here we present two recent updates to the PlantCV image analysis package to improve usability when working with multi-plant datasets. First, we introduce a method to automate detection of plants organized in a grid layout, reducing the need to make separate workflows for each camera in a multi-camera system. Second, we reduced the number of input and output parameters for functions handling the shape and location of plants and introduce automatic iteration over multiple objects of interest (e.g. plants), reducing the level of programming needed to build workflows.
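At its simplest, grid-layout detection can be seeded from expected cell centers; the sketch below is an illustrative stand-in for that idea, not PlantCV's actual implementation or API:

```python
def grid_rois(img_w, img_h, rows, cols):
    """Approximate plant ROI centers for a regular grid layout,
    one grid cell per plant, returned as (x, y) pixel coordinates
    in row-major order. A real detector would refine these seeds
    against the segmented plant positions."""
    cell_w = img_w / cols
    cell_h = img_h / rows
    return [(int((c + 0.5) * cell_w), int((r + 0.5) * cell_h))
            for r in range(rows) for c in range(cols)]
```

Iterating analysis functions over the returned centers mirrors the automatic per-plant iteration described above.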