
Self-Supervised Representation Learning for Digital Agriculture
  • Sudhir Sornapudi
Farming Solutions & Digital, Research & Development, Corteva Agriscience

Corresponding Author: [email protected]


Abstract

The true bottleneck of artificial intelligence (AI) is not access to data but the labeling of that data. Enormous volumes of raw agricultural image data arrive from various sources, and manual labeling remains a crucial step in keeping the data well organized, demanding considerable time, money, and labor. This process can be made far more efficient if the raw data can be labeled automatically. We propose AgCLR, a contrastive learning representations model for agriculture images that applies self-supervised representation learning to unlabeled Encirca Notes data (customer-generated field notes) to learn useful image feature representations from real-world agriculture images. Contrastive learning is a self-supervised approach that enables a model to learn attributes by contrasting samples against each other without the use of labels. AgCLR leverages the state-of-the-art SimCLRv2 framework to learn representations by maximizing the agreement between differently augmented views of the same sample. We incorporated critical enablers such as mixed precision, multi-GPU distributed parallel computing, and Google Cloud Tensor Processing Units (TPUs) to optimize the training process. We achieved 80.2% accuracy when classifying the test data. We further applied AgCLR to an unrelated task, determining alleys and rows in corn field videos for corn phenotyping, and observed two distinct cluster formations for alleys and rows when the embeddings were plotted in a three-dimensional space. We also developed a content-based image retrieval tool (pixel affinity) to identify similar images in our database, and its results were visually very promising.
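To make the contrastive objective concrete, the sketch below shows a SimCLR-style NT-Xent loss, in which embeddings from two differently augmented views of the same image form a positive pair and all other samples in the batch act as negatives. This is a minimal illustrative example, not the authors' implementation; the function name, batch size, embedding dimension, and temperature are assumptions.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive (NT-Xent) loss: z1[i] and z2[i] are embeddings of two
    augmented views of the same image; all other rows serve as negatives."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # For row i the positive sits at i + n, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage: projections of two augmented views of the same batch of images.
z_a = torch.randn(32, 128)
z_b = torch.randn(32, 128)
loss = nt_xent_loss(z_a, z_b)

Maximizing agreement under this loss pulls the two views of each image together in embedding space while pushing apart views of different images, which is what allows representations to be learned without labels.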
Submitted to NAPPN 2023 Abstracts: 06 Feb 2023
Published in NAPPN 2023 Abstracts: 07 Feb 2023