
Steven Lee et al.

Traditionally, walnut kernel color is scored by a human on a four-point color scale, ranging from extra light to amber, developed by the California DFA. Two important reasons for using a high-throughput machine vision system for kernel phenotyping are to obtain better quantitative resolution for molecular breeding and to avoid the errors of human visual phenotyping. RGB and LAB values give much more depth to quantitative color measurements, and machine data are far more consistent than human scoring. Previous studies on walnut kernel imaging used a thresholding-based approach to segment kernels from their background; however, detection accuracy can still be improved. In this study, we change the detection methods, primarily by using a PyTorch-based convolutional neural network (CNN) and an improved thresholding method to better segment walnut kernels. The PyTorch CNN pipeline allows us to train a model on a selected set of photos, and the trained model can then segment new photos without the need to determine image thresholds. Our proposed thresholding method uses the magick package in R instead of the ImageJ macros used previously. After taking photos with a computer vision system (CVS), we used the CNN model as well as an R script to identify and segment kernels from the background. Our preliminary data show that, with enough training, the CNN model is more robust in edge cases with overlapping kernels or shifted images. This paper focuses on the differences among the three segmentation methods (ImageJ thresholding, magick-based thresholding in R, and the CNN model) and on choosing the best method for future breeding projects.
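
To make the CNN-based approach concrete, the following is a minimal, illustrative PyTorch sketch of the kind of binary segmentation setup described above: a small convolutional network is trained on photos paired with kernel/background masks, and the trained model then produces masks for new photos without any hand-tuned intensity thresholds. The architecture, image sizes, and training settings here are hypothetical placeholders, not the model actually used in the study.

```python
# Illustrative sketch only; the architecture, image sizes, and training settings
# below are hypothetical placeholders, not the model used in the study.
import torch
import torch.nn as nn

class TinyKernelSegNet(nn.Module):
    """Small fully convolutional network that outputs a per-pixel kernel logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # kernel-vs-background logit map
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, images, masks):
    """One optimization step on a batch of RGB photos and binary kernel masks."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Dummy batch standing in for annotated training photos (sizes are arbitrary).
    images = torch.rand(4, 3, 128, 128)                  # RGB photos
    masks = (torch.rand(4, 1, 128, 128) > 0.5).float()   # binary kernel masks

    model = TinyKernelSegNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(3):
        print(f"step {step}: loss = {train_step(model, optimizer, images, masks):.4f}")

    # At inference time the sigmoid output is cut at 0.5, so no image-specific
    # intensity threshold has to be chosen for each photo.
    with torch.no_grad():
        predicted_masks = torch.sigmoid(model(images)) > 0.5
```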
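
For comparison, the threshold-based approach can be sketched as follows. The study's own scripts use ImageJ macros and the magick package in R; to keep the examples in a single language, this sketch uses Python with scikit-image and Otsu's method as a stand-in for the same idea: pick a global threshold, clean up the resulting mask, and extract per-kernel color statistics. The function names, minimum-area cutoff, and threshold direction are illustrative assumptions.

```python
# Illustrative Python analogue of a threshold-based segmentation step; the study
# used ImageJ macros and R's magick package. The cutoff and threshold direction
# below are assumptions.
import numpy as np
from skimage import color, filters, measure, morphology

def segment_kernels(rgb: np.ndarray, min_area: int = 500) -> np.ndarray:
    """Boolean kernel mask from an RGB photo via an automatic global threshold."""
    gray = color.rgb2gray(rgb)
    thresh = filters.threshold_otsu(gray)    # threshold picked from the histogram
    mask = gray > thresh                      # assumes kernels brighter than background
    mask = morphology.remove_small_objects(mask, min_size=min_area)      # drop specks
    mask = morphology.remove_small_holes(mask, area_threshold=min_area)  # fill holes
    return mask

def mean_lab_per_kernel(rgb: np.ndarray) -> list:
    """Mean CIELAB color of each segmented kernel, for quantitative color scoring."""
    lab = color.rgb2lab(rgb)
    labels = measure.label(segment_kernels(rgb))
    return [lab[labels == region.label].mean(axis=0)
            for region in measure.regionprops(labels)]

if __name__ == "__main__":
    # Synthetic stand-in for a kernel photo: dark background with one bright square.
    demo = np.zeros((100, 100, 3), dtype=float)
    demo[30:70, 30:70, :] = 0.8
    print(mean_lab_per_kernel(demo))
```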