
A lightweight 3D neural network for plant part segmentation and architectural trait extraction
  • Farah Saeed, University of Georgia
  • Changying Li, University of Georgia

Corresponding Author: [email protected]


Abstract

Plant architecture is an important contributing factor to yield and quality. Architectural traits are analyzed for crop health monitoring and for genetic manipulation to develop high-yielding varieties. Computer vision methods applied to 3D point clouds allow more accurate extraction of architectural traits than 2D images, but they consume more time and memory. This study aims to design a lightweight 3D deep network for cotton plant part segmentation and to derive seven architectural traits: mainstem height, mainstem diameter, branch inclination angle, branch diameter, and the numbers of branches, nodes, and cotton bolls. The point cloud data are collected using a FARO LiDAR scanner, and the mainstem, branches, and cotton bolls are manually annotated using Open3D. Preprocessing steps of denoising, normalization, and downsampling are applied. The 3D deep network is designed to sample 1024, 512, and 256 points, with neighborhood aggregation performed at radius levels of 1 cm, 5 cm, and 30 cm, respectively; features for the remaining points are interpolated. The features from each radius level are concatenated and passed to a multi-layer perceptron for pointwise classification. Results indicate that a mean IoU of 84% and an accuracy of 94% are achieved, along with a 6.5-fold speedup in inference time and a 2.4-fold reduction in memory consumption compared to PointNet++. After postprocessing the part segments, an R² value of more than 0.8 and a mean absolute percentage error of less than 11% are achieved for all derived architectural traits. These trait extraction results indicate the potential utility of this pipeline in plant physiology and breeding programs.
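
As a minimal illustration of the preprocessing stage described above (denoising, normalization, and downsampling), the Python sketch below uses Open3D. The file path and all parameter values (outlier-filter settings, voxel size) are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
import open3d as o3d

# Load a scanned cotton plant point cloud (path is hypothetical).
pcd = o3d.io.read_point_cloud("cotton_plant.ply")

# Denoise: statistical outlier removal (parameter values are assumptions).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Downsample with a voxel grid (voxel size is an assumption).
pcd = pcd.voxel_down_sample(voxel_size=0.005)

# Normalize: center at the origin and scale into a unit sphere.
pts = np.asarray(pcd.points)
pts -= pts.mean(axis=0)
pts /= np.max(np.linalg.norm(pts, axis=1))
pcd.points = o3d.utility.Vector3dVector(pts)
```

The resulting cleaned and normalized cloud would then be passed to the segmentation network for multi-radius neighborhood aggregation and pointwise classification.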