Choosing a suitable breeding strategy is essential to the success of a plant breeding program. Simulation is an important tool that allows plant breeders to propose and assess the merits of alternative breeding strategies. The Python package PyBrOpS provides a highly flexible and modular framework to make optimized breeding selection decisions and perform stochastic simulations of plant breeding programs. PyBrOpS utilizes a customizable scripting-based approach to constructing breeding simulations and optimizations. Through the use of software interfaces that allow for extensibility, the user may implement custom PyBrOpS modules that provide additional functionality. PyBrOpS offers pre-built subroutines for selection strategies such as conventional genomic selection, weighted genomic selection, optimal contribution selection, optimal population value selection, and optimal haploid value selection. Additionally, PyBrOpS is capable of both single- and multi-trait selection. For multi-trait selection scenarios, PyBrOpS offers the novel capability of mapping trade-off frontiers through the use of multi-objective evolutionary algorithms. Here, we describe the main features of PyBrOpS and provide example use cases for breeding program simulation and optimization.
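The abstract above does not show PyBrOpS's own API, but the trade-off frontier-mapping idea can be illustrated with a generic multi-objective evolutionary-algorithm library. The sketch below uses pymoo's NSGA-II (not PyBrOpS) on a toy two-trait parental-contribution problem; the GEBV matrix and the contribution encoding are hypothetical illustrations, not the package's actual interface.

```python
import numpy as np
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class TwoTraitContribution(Problem):
    """Toy problem: choose parental contributions that trade off two trait means."""
    def __init__(self, gebv):
        super().__init__(n_var=gebv.shape[0], n_obj=2, xl=0.0, xu=1.0)
        self.gebv = gebv  # (n_candidates, 2) genomic estimated breeding values

    def _evaluate(self, x, out, *args, **kwargs):
        w = x / (x.sum(axis=1, keepdims=True) + 1e-12)  # normalize rows to contributions
        out["F"] = -(w @ self.gebv)                      # maximize both trait means

gebv = np.random.default_rng(0).normal(size=(200, 2))    # hypothetical GEBVs
res = minimize(TwoTraitContribution(gebv), NSGA2(pop_size=100), ("n_gen", 50), seed=1)
pareto_front = -res.F  # non-dominated trade-off frontier between the two traits
```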
Many correlations exist between spectral reflectance or transmission and various phenotypic responses in plants. Of interest to us are metabolic characteristics; namely, how the various polarimetric components of plants may correlate with underlying environmental, metabolic, and genotypic differences among varieties within a given species, as measured during large field experimental trials. In this presentation, we overview a portable Mueller matrix imaging spectropolarimeter, optimized for field use, that combines a temporal and spatial modulation scheme. Key aspects of the design included minimizing the measurement time while maximizing signal-to-noise ratio by mitigating systematic error. This was achieved while maintaining an imaging capability across multiple measurement wavelengths spanning the blue to near-infrared spectral region (405-730 nm). To this end, we summarize our optimization procedure, simulations, calibration methods, and polarimetric error. Validation results indicated that the polarimeter provides an average absolute error of (5.3 ± 2.2)×10⁻³ or (7.1 ± 3.1)×10⁻³ when using its slow or fast measurement modes, respectively. Finally, we provide preliminary field data (depolarization, retardance, and diattenuation) to establish baselines of barren and non-barren Zea mays hybrids (G90 variety), as captured from various leaf and canopy positions during our summer 2022 field experiments. Results indicated that subtle variations in retardance and diattenuation versus leaf canopy position may be present before they are clearly visible in the spectral transmission. We will also highlight some of our more recent work, from the summer of 2023, measuring the polarization properties of maize lesions and soybean leaves.
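For readers unfamiliar with the polarimetric quantities mentioned, two of them can be computed directly from a measured 4×4 Mueller matrix; the snippet below is a minimal illustration using the standard first-row diattenuation and Gil-Bernabeu depolarization-index definitions (retardance, by contrast, generally requires a full Lu-Chipman decomposition and is not shown). This is textbook math, not code from the presenters.

```python
import numpy as np

def diattenuation(M):
    """Scalar diattenuation of a 4x4 Mueller matrix (first-row definition)."""
    return np.sqrt(M[0, 1]**2 + M[0, 2]**2 + M[0, 3]**2) / M[0, 0]

def depolarization_index(M):
    """Gil-Bernabeu index: 1 for a non-depolarizing element, 0 for an ideal depolarizer."""
    return np.sqrt((np.sum(M**2) - M[0, 0]**2) / 3.0) / M[0, 0]

# Example: an ideal horizontal linear polarizer is non-depolarizing with D = 1
M_pol = 0.5 * np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0]], dtype=float)
print(diattenuation(M_pol), depolarization_index(M_pol))  # 1.0, 1.0
```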
Advances in automated image analysis using open-source computer vision tools, such as PlantCV, have greatly increased the throughput of aboveground phenotyping in a variety of crop species. However, PlantCV was largely optimized to analyze images collected under controlled laboratory conditions and has seldom been used to analyze images collected under field conditions. Further, there are no known applications of PlantCV for analyzing images collected belowground, such as those obtained from minirhizotron imaging devices. In this study, we demonstrated applications of PlantCV for extracting plant trait information from aboveground and belowground images collected in two perennial crop mapping populations. The first population was composed of nearly 1,200 individuals of a potential perennial oilseed crop (Silphium integrifolium × S. perfoliatum), and the second population was composed of nearly 1,700 individuals of a perennial cover crop (Trifolium ambiguum, kura clover). We designed and used a field-based imaging cart to collect overhead and profile images of individuals from both populations in August and October, which improved the efficiency of field-based image capture. Around the time of aboveground image collection, belowground images of root networks were collected using minirhizotron imaging devices. We then assessed the application of PlantCV for measuring aboveground traits (crop canopy area, height, leaf color, and growth rates) and belowground traits (root length and growth rates), and we explored future directions of PlantCV for field-based image analysis of aboveground and belowground crop tissues.
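As a rough illustration of the kind of PlantCV workflow used for aboveground traits such as canopy area, the sketch below thresholds a color image and reports object size traits. It assumes PlantCV v4-style function names; exact signatures, threshold values, and filenames are placeholders and may differ between releases.

```python
from plantcv import plantcv as pcv

# Hypothetical overhead image from the field imaging cart
img, _, _ = pcv.readimage(filename="plot_overhead.png")

# Segment green tissue via the LAB a* channel (threshold value is a placeholder)
a_channel = pcv.rgb2gray_lab(rgb_img=img, channel="a")
mask = pcv.threshold.binary(gray_img=a_channel, threshold=120, object_type="dark")
mask = pcv.fill(bin_img=mask, size=200)  # drop small noise objects

# Measure canopy size/shape traits and save them for downstream analysis
shape_img = pcv.analyze.size(img=img, labeled_mask=mask)
pcv.outputs.save_results(filename="plot_overhead_traits.json")
```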
Phenotype characterization is an exciting new field that links agriculture and data science. Thanks to advances in remote sensing and artificial intelligence, we can now accurately quantify field-scale phenotypic information and integrate this big data into predictive and prescriptive management tools. Our past work has shown that phenotypic information measured from UAS is more precise and reliable than manual field measurements. In addition, we have shown that the vast size of this data can outweigh the complexity of the problem, to the point where even a simple algorithm can outperform a sophisticated one when big enough training data are available. This is because big data captures even very infrequent aspects of the problem of interest, which can then be modeled with simple logic. The availability of UAS data makes it possible to develop digital twin models to forecast future plant growth and develop in-season management plans. A digital twin model is a virtual representation of real-world entities and processes. These models use early growth patterns of a crop as input to artificial intelligence algorithms so that the algorithms can predict crop performance 10, 20, or 30 days beyond the last data point collected by the UAS. Forecasts of crop growth features can be useful for managing irrigation, growth regulators, and crop maturity, and for obtaining early-season yield estimates.
Image segmentation is commonly used to estimate the location and shape of plants and their external structures. Segmentation masks are then used to localize landmarks of interest and compute other geometric features that correspond to the plant’s phenotype. Despite their prevalence, segmentation-based approaches are laborious (requiring extensive annotation to train) and error-prone (derived geometric features are sensitive to instance mask integrity). Here we present a segmentation-free approach that leverages deep learning-based landmark detection and grouping, also known as pose estimation. We use a tool originally developed for animal motion capture called SLEAP (Social LEAP Estimates Animal Poses) to automate the detection of distinct morphological landmarks on plant roots. Using the high-throughput phenotyping method Root Architecture 3-D Imaging Cylinder (RADICYL) across multiple species, we show that our approach can reliably and efficiently recover root system topology at greater accuracy, faster speed, and with fewer annotated samples than segmentation-based approaches. To make use of this landmark-based representation for root phenotyping, we developed a Python library (sleap-roots) for trait extraction directly comparable to existing segmentation-based analysis software. We show that landmark-derived root traits are highly accurate and can be used for common downstream tasks including genotype classification and unsupervised trait mapping. Altogether, this work establishes the validity and advantages of pose estimation-based plant phenotyping. To facilitate adoption of this easy-to-use tool and to encourage further development, we make sleap-roots, all training data, models, and trait extraction code available at: https://github.com/talmolab/sleap-roots.
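To make the landmark-based idea concrete, the toy snippet below computes a few simple root traits directly from predicted landmark coordinates (the kind of points a SLEAP model returns), without any segmentation mask. The coordinate array and the trait choices are illustrative only and are not taken from the sleap-roots API.

```python
import numpy as np

# Hypothetical landmarks for one root, ordered base -> tip,
# shape (n_points, 2) in image coordinates (x, y), y increasing downward.
pts = np.array([[120.0, 40.0], [118.0, 95.0], [110.0, 160.0], [98.0, 230.0]])

root_depth = pts[:, 1].max() - pts[:, 1].min()                        # vertical extent
root_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))    # polyline length
tip_vec = pts[-1] - pts[-2]
tip_angle = np.degrees(np.arctan2(tip_vec[0], tip_vec[1]))            # deviation from vertical
print(root_depth, root_length, tip_angle)
```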
Nitrogen fertilizers are one of the top expenses for corn farmers in North America and the highest cost of any input over a growing season. The success of developing better strategies for nitrogen-use efficiency, such as improved varieties and biologics, depends on an efficient way of measuring in-planta nitrogen content. We are testing a reflectance-based UAV hyperspectral approach and a transmittance-based handheld hyperspectral device called LeafSpec to estimate corn nitrogen from hyperspectral images. The UAV approach is high-throughput but has relatively low spatial resolution and high susceptibility to environmental factors. The LeafSpec is low-throughput but has high spatial resolution and is not significantly affected by environmental factors. In addition, the LeafSpec can account for the nitrogen inside the corn leaves by using transmittance, whereas the UAV can only see surface effects from reflectance. We are testing well-established machine learning and state-of-the-art deep learning models for both approaches. This presentation will share what we have learned from testing UAV and LeafSpec hyperspectral imaging for estimating corn plant nitrogen, including potential use cases for each approach, considering precision, ease of use, throughput, and potential for further development.
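A common baseline among well-established machine learning models for spectra-to-nitrogen prediction is partial least squares regression; the sketch below shows such a baseline with scikit-learn on placeholder data. It illustrates the general approach, not the authors' pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are leaf spectra (reflectance or transmittance per band),
# y is lab-measured nitrogen content (% dry weight); values here are random placeholders.
rng = np.random.default_rng(0)
X = rng.random((120, 300))             # 120 samples x 300 spectral bands
y = rng.normal(3.0, 0.5, size=120)     # placeholder nitrogen values

pls = PLSRegression(n_components=10)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")  # cross-validated performance
print(r2.mean())
```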
Root-root interactions significantly impact the formation of architectural root phenotypes, yet are poorly understood. Phenotype formation is impacted by sensing of soil resources and exudates of neighboring plants (Nord et al., 2011; Wang et al., 2021), which motivates the need to accurately quantify this phenomenon and dissect its underlying causes. Currently, we are developing a complete experimental system for studying root-root interactions. A mesh frame has been designed to support the growth of two mature plant root systems. The frame is inserted into a large mesocosm, filled with a sand/soil mixture, and two plants are grown. To harvest, the mesocosm is disassembled and the sand/soil is gently washed away. Root systems are left suspended in the mesh, and using a Canon EOS Rebel T5, ~500 total photos are taken at 10 different angles ranging from below to above the roots, 360° around the frame. DIRT/3D is used to construct 3D models and extract data from individual root systems. We are in the process of improving our data extraction methods to include spatial traits relative to the two root systems. To do so, we dye the root systems right before harvesting. The difference in coloration allows the use of a deep learning model based on the U-Net architecture to perform image segmentation and separate the roots in the 3D models. Our next step is to run a larger experiment with 10 mesh frames. This will provide statistical power for trait identification and adaptation of DIRT/3D for root-root interaction data extraction and analysis.
The flowering date of sunflowers is a crucial trait that significantly influences crop management practices and product placement. This trait can be measured by counting the number of days from planting until 50% of plants in a given research plot have reached flowering at the R5 developmental growth stage. Traditional ground methods for data collection are labor-intensive and subjective, requiring field scientists to manually estimate and record plots with 50% flowering every 1-2 days. This approach not only consumes considerable time but also potentially overlooks valuable information related to flowering rates and duration. To address these challenges, we leveraged UAVs (Unmanned Aerial Vehicles), which allow for surveying a field in a short span of time, to model flower counts over time and predict the date when a plot reached 50% flowering. The method developed employs a deep learning model trained to detect yellow sunflower heads from UAV imagery and models these counts over time using a logistic function to estimate the 50% flowering date. With this method, flowering date was precisely estimated with high correlation relative to the ground measurements (r > 0.92) across experiments and locations. An increase in heritability for the remote sensing trait was also observed relative to the ground trait, and, more importantly, we were able to gain additional insights into flowering rates and duration. This innovative approach offers a promising avenue for enhancing the efficiency and accuracy of sunflower phenotyping.
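The core of the described method, fitting a logistic curve to per-plot flower counts and reading off the 50% flowering date, can be sketched in a few lines with SciPy. The dates, counts, and starting values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t50):
    """Cumulative flower count: K = plateau, r = rate, t50 = time of 50% flowering."""
    return K / (1.0 + np.exp(-r * (t - t50)))

# Hypothetical per-plot data: days after planting and detected flower-head counts
days = np.array([55, 58, 61, 64, 67, 70, 73], dtype=float)
counts = np.array([2, 8, 21, 45, 70, 84, 88], dtype=float)

(K, r, t50), _ = curve_fit(logistic, days, counts, p0=[counts.max(), 0.3, days.mean()])
print(f"Estimated 50% flowering at {t50:.1f} days after planting")
```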
Corn (Zea mays) is one of the most valuable row crops grown in the United States. Thus, securing and increasing the yield of corn is necessary for the progression of society. However, blindly applying excessive nitrogen fertilizer to reach higher yield has resulted in disastrous consequences. It is therefore crucial to precisely monitor the nitrogen content of the corn plant and provide suggestions on nitrogen application to increase the sustainability of agriculture. Spectroscopic imaging has proved effective in measuring nitrogen content in corn plants. However, the trade-off between throughput and data quality is rarely addressed and has become an obstacle to scaling up the application of spectroscopic imaging. Hence, in this study, we developed a new robotic system that operates a proximal imaging device to capture a spectroscopic image of a single corn leaf. The robot has 3-DoF, including one rotational and two translational actuators. A 3D camera serves as the perception system to detect and localize the targeted corn leaf. The manipulator is a uniquely designed spectroscopic imaging device that captures low-noise, high-resolution spectroscopic images. Once deployed, this robot can capture high-quality images of a single leaf within 30 seconds and free humans from laborious field work.
Fusarium head blight (FHB) is an economically important disease of wheat that can cause yield losses >50%. Breeding for host resistance is the most effective control method; however, time, labor, and human subjectivity limit phenotyping efforts. A novel, high-throughput phenotyping rover was used to collect in-field RGB images of inoculated wheat spikes at multiple time points in 2021 and 2022. A deep neural network pipeline was developed to classify wheat spikes, segment healthy and diseased tissue, and quantify FHB severity as the region of intersection between spike and disease masks. To validate the pipeline, model inferences at the plot and spike scales were compared to five raters who performed disease scoring in the field and on images. The precision and throughput of the phenotyping rover and FHB quantification pipeline exceeded conventional rating methods. The plot-aggregate disease scores based on pipeline outputs correlated strongly with plot-level disease scores by raters in the field and in imagery. When comparing disease annotations on spike images, pipeline-to-human disease correlations were equivalent to correlations between raters; however, location tended to influence disease assessment. The pipeline has strong generalizability and performed well on images taken across environments, with different camera orientations, and throughout disease progression. These results demonstrate a breakthrough in FHB phenotyping and facilitate precise and efficient disease quantification on spikes and plot aggregates across time and imaging conditions that is unachievable using conventional methods.
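The severity definition used by the pipeline, the intersection of spike and disease masks relative to the spike area, reduces to a short mask operation; a minimal sketch on toy boolean masks is shown below (the actual pipeline operates on deep-network segmentation outputs).

```python
import numpy as np

def fhb_severity(spike_mask, disease_mask):
    """Severity = diseased spike area / total spike area, from boolean masks."""
    spike = spike_mask.astype(bool)
    diseased = spike & disease_mask.astype(bool)
    return diseased.sum() / max(spike.sum(), 1)

# Toy example: 3 of 4 spike pixels overlap the disease mask -> severity 0.75
spike = np.array([[1, 1], [1, 1]])
disease = np.array([[1, 1], [1, 0]])
print(fhb_severity(spike, disease))  # 0.75
```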
Wheat (Triticum aestivum L.) is one of the key staple crops worldwide. Even though future demand for wheat is estimated to increase by 6% by 2050, wheat production might drop by 30% due to climate change. Given the growing concern over food security, the purpose of this research is to identify canopy architecture (light capture) and anatomical (stomatal) traits that significantly increase radiation use efficiency (RUE; dry weight biomass produced per unit radiation intercepted) and improve yield under high-temperature and drought stress conditions. This research was conducted with five contrasting wheat genotypes in a new field-based high tunnel system with four treatments (control, heat stress, drought stress, and heat × drought stress) and three replications (12 tunnels) in a randomized complete block design, with stress applied at the heading stage. Canopy architecture was graded according to the UPOV visual scoring scale, and RUE was calculated at three growth stages. New techniques for high-throughput imaging of stomatal number and size using a handheld digital microscope will also be presented.
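As a reminder of how RUE follows from the definition given above, the snippet below estimates it as the slope of cumulative dry biomass against cumulative intercepted radiation; the numbers are placeholders, not data from this experiment.

```python
import numpy as np

# Hypothetical per-stage data: cumulative intercepted PAR (MJ m^-2) and
# cumulative above-ground dry biomass (g m^-2) at three growth stages.
ipar = np.array([150.0, 320.0, 510.0])
biomass = np.array([210.0, 480.0, 760.0])

# RUE (g dry matter per MJ intercepted) as the slope of biomass vs. intercepted radiation
rue = np.polyfit(ipar, biomass, 1)[0]
print(f"RUE ~ {rue:.2f} g/MJ")
```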
In precision agriculture and plant biology, monitoring nutrient stress in crops is of paramount importance for ensuring optimal yield and resource utilization. However, nutrient stress phenotypes can be nuanced, subtle, and displayed in a variety of ways. We propose using Neural Radiance Fields (NeRFs) for the organized reconstruction of plant structures to observe changes in plant structure and color under nutrient stress. Neural Radiance Fields, a cutting-edge technique in computer vision, leverage neural networks to model complex high-frequency geometry directly from 2D images, offering high-fidelity reconstructions. This methodology holds immense potential for plant imaging, as it allows for the creation of detailed and organized 3D models that can capture subtle alterations in plant morphology associated with nutrient stress responses. The proposed methodology involves the acquisition of high-resolution images of plants under different nutrient conditions. These images are input to the Nerfstudio Nerfacto implementation, a NeRF model that aggregates many different existing models. A 3D reconstruction of the scene is output by the model and can be further reduced to a point cloud containing point locations, colors, and normals. Phenotypic traits are then calculated from the point clouds. The reconstructed plant models enable the quantitative analysis of morphological changes associated with nutrient stress, including alterations in leaf size, branching patterns, and overall plant geometry. The utilization of NeRFs allows for non-destructive monitoring, offering a significant advantage over traditional methods that may be labor-intensive or invasive. This research not only contributes to the field of precision agriculture but also presents a powerful tool for plant biologists to deepen their understanding of how nutrient stress impacts plant architecture. The insights gained from this approach have the potential to inform precision nutrient management strategies, leading to more sustainable and efficient agricultural practices.
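Once the NeRF reconstruction has been reduced to a point cloud, basic phenotypic traits can be computed with standard geometry tools; the sketch below (random points standing in for a real reconstruction) shows plant height and a convex-hull volume proxy, as an illustration rather than the authors' trait set.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical plant point cloud (n x 3, metres), e.g. exported from a NeRF reconstruction
pts = np.random.default_rng(1).random((5000, 3)) * [0.3, 0.3, 0.8]

height = pts[:, 2].max() - pts[:, 2].min()   # plant height along the z axis
hull = ConvexHull(pts)
canopy_volume = hull.volume                  # convex-hull volume as a size proxy
print(height, canopy_volume)
```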
Walnuts are the second most produced and consumed tree nut, with over 2.6 million metric tons produced in the 2022-23 harvest cycle alone. The United States is the second-largest producer, accounting for 25% of the total global supply. Nonetheless, producers face ever-growing demand in a more uncertain climate landscape, which requires effective and efficient walnut selection and breeding of new cultivars with increased kernel content and easy-to-open shells. Past and current efforts select for these traits using hand-held calipers and eye-based evaluations. Yet there is plenty of morphology that meets the eye but goes unmeasured, such as the volume of inner air or the convexity of the kernel. Here, we study the shape of walnut fruits based on X-ray CT (computed tomography) 3D reconstructions. We compute 49 different morphological phenotypes for 1,264 individuals comprising 149 accessions. These phenotypes are complemented by traits of breeding interest such as ease of kernel removal and kernel weight. Through allometric relationships (the relative growth of one tissue with respect to another), we identify possible biophysical constraints at play during development. We explore multiple correlations between all morphological and commercial traits, and identify which morphological traits explain the most variability in commercial traits. We show that using only volume- and thickness-based traits, especially inner air content, we can successfully encode several of the commercial traits.
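Allometric relationships of the kind described are typically fit as power laws, i.e., a straight line in log-log space; a minimal sketch with placeholder shell and kernel volumes is shown below.

```python
import numpy as np

# Hypothetical paired measurements: shell volume vs. kernel volume (mm^3) per walnut
shell = np.array([9200.0, 10100.0, 11500.0, 12800.0, 14400.0])
kernel = np.array([3100.0, 3500.0, 4100.0, 4700.0, 5400.0])

# Allometric model y = a * x^b, fit as a line in log-log space; b is the allometric exponent
b, log_a = np.polyfit(np.log(shell), np.log(kernel), 1)
print(f"kernel ~ {np.exp(log_a):.3f} * shell^{b:.2f}")
```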
PlantCV is an open-source, open-development image analysis software package for plant phenotyping written in Python that has been actively developed since 2014. A new version of PlantCV was recently released. Major goals of the version 4 release were to 1) simplify the process of developing workflows by reducing the amount of coding needed; 2) broaden the set of supported data types; and 3) introduce interactive annotation tools that can be used directly in PlantCV workflow notebooks. Here we highlight the use of point annotations, which can be used to quickly collect sets of points for parameterization of functions such as regions of interest or the identification of landmark points. Another application of point annotations is image annotation, which is a major bottleneck in plant phenomics. For example, we have used point annotations to analyze microscopy images aimed at measuring quinoa salt bladders, the number and size of stomata, and scoring pollen germination. These tasks have traditionally been low-throughput and have required manual scoring, but our point annotation tools can be used along with traditional segmentation methods to semi-automatically detect and annotate images. The PlantCV point annotation tools also allow users to correct semi-automated detection results before classification (e.g., germinated vs. non-germinated pollen) and extraction of size and color traits per object. Once images are annotated, the results can be analyzed directly or potentially used as labeled data in supervised learning methods.
We propose AICropCAM, a solution combining edge image processing and long-range connectivity that can be used on drones, on ground platforms, or as part of distributed sensor networks for plant phenotyping. We have successfully run multiple image classification, segmentation, and object detection models on this platform. Classification models help classify images based on image quality, crop type, and phenological stage. Object detection models can detect and count plants, weeds, and insects, and can be extended to count flowers, fruits, and leaves. Segmentation models can separate the canopy from the background and potentially segment traits that indicate nutrient deficit or disease. Canopy segmentation results help estimate leaf area index and chlorophyll content. Because the models run sequentially, like a decision tree, there is flexibility to select the most accurate model given the crop type and the crop’s phenological stage, which helps when scanning fields with multiple crops. The generated information is geo-tagged and transmitted through a low-throughput, long-range communication protocol (e.g., LoRa) to cloud data storage. AICropCAM reduces 2-megabyte image files to around 100 bytes of actionable data, resulting in massive savings in data storage and transmission costs. This edge image capture and processing system is open to improvement with new neural network predictive models and faster edge computers. The system provides plant scientists and crop breeders a low-cost, flexible phenotyping tool to extract multiple crop traits related to abiotic and biotic stress responses.
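The reduction from a 2-megabyte image to roughly 100 bytes of actionable data can be pictured as packing geo-tagged trait estimates into a fixed-size binary record before LoRa transmission; the field layout below is purely illustrative, not AICropCAM's actual message format.

```python
import struct
import time

# Hypothetical payload: pack geo-tagged trait estimates into a fixed-size binary record
# instead of transmitting the raw 2 MB image (field names and layout are illustrative).
record = struct.pack(
    "<Iffhhh",                 # little-endian: uint32 + 2 floats + 3 int16s = 18 bytes
    int(time.time()),          # timestamp (s)
    41.5123, -86.2345,         # latitude, longitude (deg)
    87,                        # plant count in frame
    512,                       # canopy cover, per-mille
    63,                        # predicted phenological stage code
)
print(len(record), "bytes")    # well under a typical LoRa payload limit
```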
This study aimed to investigate the physiological and morphological responses of maize seedlings to environmental stressors. High-throughput imaging analysis was used to characterize the stress response phenotypes of 47 distinct maize genotypes exposed to water limitation (40% field capacity), heat (38 °C), and the combination of the two stresses. RGB and NIR images were collected daily and analyzed with the open-source, open-development software PlantCV. Our investigation focused on quantitative measurements of daily area, daily water loss, estimated water use efficiency (WUE), and near-infrared (NIR) reflectance as an estimate of water content in tissues. Quantitatively comparing two non-normal distributions, like the NIR histogram data, can be challenging but important, since metrics like median or mode values often do not capture variation across a sample. One of the primary obstacles lies in defining an appropriate metric to accurately quantify the variation between two distributions. To analyze NIR reflectance, we evaluated the dissimilarity between pairwise NIR histograms using the earth mover’s distance (EMD). The EMD quantifies the difference between two NIR reflectance distributions and therefore indirectly evaluates the difference in leaf water content between stress and control conditions. Overall, most genotypes displayed growth reduction under drought and heat. Interestingly, specific lines exhibited heightened WUE under water limitation, suggesting a response to water scarcity. EMD results, showing dissimilarities in pairwise NIR reflectance between control and drought conditions, can be used to describe variation in dynamic changes in water content among these groups.
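The pairwise EMD comparison of NIR histograms can be reproduced with SciPy's one-dimensional Wasserstein distance; the two synthetic histograms below stand in for control and drought reflectance distributions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical per-plant NIR reflectance histograms (256 intensity bins, normalized)
bins = np.arange(256)
control_hist = np.exp(-0.5 * ((bins - 150) / 12.0) ** 2)
control_hist /= control_hist.sum()
drought_hist = np.exp(-0.5 * ((bins - 135) / 18.0) ** 2)
drought_hist /= drought_hist.sum()

# 1-D earth mover's distance between the two reflectance distributions
emd = wasserstein_distance(bins, bins, u_weights=control_hist, v_weights=drought_hist)
print(emd)
```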
As the world’s population grows and the demand for food rises, more attention has been paid to increasing crop yields and enhancing global food security. Modern remote sensing technologies enable us to capture spectral features (such as NDVI) of the crop canopy, which are widely used to assess crop growth, health, and stress conditions. We noticed in the literature that crop NDVI shows short-term variation within a day. Therefore, in this study, we leveraged the Spidercam Field Phenotyping Facility at the University of Nebraska-Lincoln to measure and quantify the diurnal variation of canopy NDVI for corn and soybean crops. The experiments and data collection were conducted in 2022 and 2023, with canopy reflectance measured by a spectrometer (400-1000 nm) on multiple days covering different growth stages. On each day, measurements were taken at multiple time points within a window of ±3 hours centered around solar noon. Our analysis showed a clear concave-shaped diurnal trend in NDVI for both crops, with the lowest NDVI at solar noon. More analyses will be performed to quantify this diurnal pattern and dissect the sources of variation due to solar angle and changes in canopy morphology. This research will further improve the accuracy and relevance of NDVI in plant phenotyping and many other scientific disciplines and applications.
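For reference, NDVI at each time point can be computed from the spectrometer reflectance by averaging a red and a near-infrared band; the band windows and synthetic spectrum below are placeholders.

```python
import numpy as np

def ndvi(wavelengths, reflectance, red=(660, 680), nir=(780, 800)):
    """NDVI from a reflectance spectrum by averaging red and NIR band windows."""
    r = reflectance[(wavelengths >= red[0]) & (wavelengths <= red[1])].mean()
    n = reflectance[(wavelengths >= nir[0]) & (wavelengths <= nir[1])].mean()
    return (n - r) / (n + r)

# Hypothetical canopy spectrum from a 400-1000 nm spectrometer
wl = np.linspace(400, 1000, 601)
refl = np.where(wl < 700, 0.05, 0.45) + 0.01 * np.random.default_rng(2).random(wl.size)
print(ndvi(wl, refl))
```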