Automated Segmentation of Insect Anatomy from Micro-CT Images Using Deep
Learning
Abstract
Three-dimensional (3D) imaging, such as micro-computed tomography
(micro-CT), is increasingly being used by organismal biologists for
precise and comprehensive anatomical characterization. However, the
segmentation of anatomical structures remains a bottleneck in research,
often requiring tedious manual work. Here, we propose a pipeline for the
fully automated segmentation of anatomical structures in micro-CT images
using state-of-the-art deep learning methods, with the ant brain as a
test case. We implemented a convolutional neural network (CNN) based on
the U-Net architecture for 2D image segmentation, combined with
pixel-island detection. For training and validation of the network, we
assembled a dataset of semi-manually segmented brain images of 76 ant
species. The trained network predicted the brain area in ant images
quickly and accurately; its performance on validation sets showed good
agreement between prediction and target, scoring 80% Intersection over
Union (IoU) and 90% Dice coefficient (F1).
While manual segmentation usually takes many hours for each brain, the
trained network takes only a few minutes. Furthermore, our network
generalizes to segmenting the whole nervous system in full-body scans
and performed well in tests on distantly related and morphologically
divergent insects (e.g., fruit flies). The latter suggests that methods like the
one presented here generally apply across diverse taxa. Our method makes
the construction of segmented maps and the morphological quantification
of different species more efficient and scalable to large datasets, a
step toward a big data approach to organismal anatomy.
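
As a point of reference for the reported scores, the sketch below shows
one common way to compute the Intersection over Union (IoU) and Dice (F1)
metrics for binary segmentation masks, together with a toy
connected-component filter standing in for a pixel-island detection step.
The function names (iou, dice, keep_largest_island) and the small example
masks are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from scipy import ndimage

def iou(pred, target):
    """Intersection over Union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

def dice(pred, target):
    """Dice coefficient (F1) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0

def keep_largest_island(mask):
    """Toy 'pixel-island' post-processing: keep only the largest
    connected component of a binary mask (hypothetical stand-in for
    the pixel-island detection described in the abstract)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0  # ignore the background label
    return labels == sizes.argmax()

# Example: evaluate a cleaned-up prediction against a manual target mask.
pred = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True   # main predicted region
pred[55, 55] = True         # spurious isolated pixel
target = np.zeros((64, 64), dtype=bool)
target[12:42, 12:42] = True
clean = keep_largest_island(pred)
print(f"IoU = {iou(clean, target):.2f}, Dice = {dice(clean, target):.2f}")

In practice, the same metric functions can be applied slice by slice or
to the full 3D volume; the connected-component filter is one simple way
to suppress isolated false-positive pixels before computing the scores.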