A potential Mars Sample Return (MSR) architecture is being jointly studied by NASA and ESA. As currently envisioned, the MSR campaign consists of a series of three missions: sample caching, fetch, and return to Earth. In this paper, we focus on the fetch part of MSR, and more specifically on the problem of autonomously detecting and localizing sample tubes deposited on the Martian surface. Towards this end, we study two machine-vision-based approaches: first, a geometry-driven approach based on template matching that uses hard-coded filters and a 3D shape model of the tube; and second, a data-driven approach based on convolutional neural networks (CNNs) and learned features. Furthermore, we present a large benchmark dataset of sample-tube images, collected in representative outdoor environments and annotated with ground-truth segmentation masks and locations. The dataset was acquired systematically across different terrains, illumination conditions, and dust coverage, and benchmarking was performed to study the feasibility of each approach, their relative strengths and weaknesses, and their robustness in the presence of adverse environmental conditions.
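To make the geometry-driven idea concrete, below is a minimal, hypothetical sketch of classical template matching with OpenCV. It is not the paper's pipeline (which uses hard-coded filters and templates derived from a 3D shape model of the tube); the file names, single template, and score threshold are all assumptions for illustration only.

```python
# Hypothetical sketch: single-template matching for sample-tube candidates.
# Not the authors' implementation; paths and threshold are illustrative.
import cv2
import numpy as np


def detect_tube(image_path, template_path, threshold=0.7):
    """Return candidate (x, y, w, h) boxes where the tube template matches."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    h, w = template.shape

    # Normalized cross-correlation is insensitive to global brightness shifts,
    # which helps under varying outdoor illumination.
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

    boxes = []
    ys, xs = np.where(scores >= threshold)
    for x, y in zip(xs, ys):
        boxes.append((int(x), int(y), w, h))
    return boxes


if __name__ == "__main__":
    candidates = detect_tube("mars_scene.png", "tube_template.png")
    print(f"{len(candidates)} candidate detections")
```

In practice one would match many templates rendered from the tube's 3D model over a range of poses and scales, and suppress overlapping detections; the data-driven alternative described above instead learns a CNN segmentation/detection model from the annotated benchmark images.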

Reference

[pdf]

@inproceedings{daftry2021machine,
  title={Machine Vision based Sample-Tube Localization for Mars Sample Return},
  author={Daftry, Shreyansh and Ridge, Barry and Seto, William and Pham, Tu-Hoa and Ilhardt, Peter and Maggiolino, Gerard and Van der Merwe, Mark and Brinkman, Alex and Mayo, John and Kulczycki, Eric and others},
  booktitle={IEEE Aerospace Conference},
  pages={1--12},
  year={2021},
  organization={IEEE}
}