Terrain-aware autonomous ground navigation in unstructured environments informed by human demonstrations: a dissertation in Electrical Engineering
Dissertation · Open access


Christian C. Ellis
Doctor of Philosophy (PhD), University of Massachusetts Dartmouth
2024
DOI: https://doi.org/10.62791/2000

Abstract

Mobile robots equipped with the capability to perform autonomous waypoint navigation can replace humans in applications such as humanitarian assistance, nuclear cleanup, reconnaissance, and transportation. In such tasks, the robot must perform complex navigation behaviors, including navigating accurately and reliably over unstructured terrain while responding to unseen situations, much as a human would. To successfully complete missions in unstructured natural environments, agents must (i) respond to previously unseen environmental features, and (ii) develop an accurate perception representation of the current environment. This dissertation provides a solution to each of these two sub-problems using human teleoperated demonstrations. Learning from demonstration has been shown to be advantageous for navigation tasks, as it allows machine-learning non-experts to quickly provide the information robots need to learn complex traversal behaviors. First, I present a Bayesian methodology that quantifies uncertainty over the weights of a linear reward function given a dataset of minimal human demonstrations, enabling safe operation in dynamic environments. This uncertainty is incorporated into a risk-averse set of weights used to generate cost maps for trajectory planning. The result is a robot that follows risk-averse trajectories by expressing uncertainty in the designed reward function while considering all reward functions consistent with the training environment and the human demonstrations. Second, I present a methodology to obtain a perception subsystem for an autonomous ground vehicle by mapping a set of non-semantic classes from an unsupervised algorithm to a set of high-level semantic classes, using only unlabeled images and human demonstrations.
This enables the robot to obtain a unique, fine-tuned set of abstract and semantic classes that represent the current operational environment while avoiding time-consuming and expensive ground-truth data labeling. Lastly, I outline a framework that combines the two sub-problems, yielding a methodology that takes a stream of unlabeled images as input and provides as output (i) a refined perception representation, and (ii) a risk-averse reward function for unstructured terrain-aware navigation. This formulation enables engineers to drop a robot into an environment it has never encountered before, learn an appropriate perception representation, and learn the costs associated with the environment's terrains, using only unlabeled data and a minimal set of human demonstrations.
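The first contribution, turning posterior uncertainty over linear reward weights into a risk-averse cost map, can be sketched as follows. This is an illustrative sketch only: the weight posterior, the terrain features, and the use of a low quantile across posterior samples as the risk measure are assumptions for the example, not the dissertation's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a posterior over linear reward weights,
# e.g. samples drawn by a Bayesian inverse-reinforcement-learning step
# conditioned on human demonstrations.
n_samples, n_features = 500, 3
weight_samples = rng.normal(loc=[-1.0, -0.2, 0.5], scale=0.3,
                            size=(n_samples, n_features))

# Terrain feature map: an H x W grid with one feature vector per cell
# (e.g. slope, roughness, vegetation score) -- illustrative values.
H, W = 4, 4
features = rng.random((H, W, n_features))

# Reward of every cell under every sampled weight vector.
rewards = features @ weight_samples.T          # shape (H, W, n_samples)

# Risk-averse reward: a low quantile across the posterior samples,
# so cells whose reward is uncertain are penalized.
risk_averse_reward = np.quantile(rewards, 0.05, axis=-1)

# A trajectory planner consumes a cost map (lower reward = higher cost).
cost_map = -risk_averse_reward
```

Because the quantile is taken per cell across all sampled reward functions, a cell only receives a low cost if it scores well under (nearly) every reward function consistent with the demonstrations, which is the risk-averse behavior described above.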
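The second contribution, grounding non-semantic unsupervised clusters in high-level semantic classes via demonstrations, can be illustrated with a minimal sketch. The cluster map, the demonstration mask, the traversal-rate statistic, and the 0.3 threshold are all hypothetical choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical output of an unsupervised segmenter: a non-semantic
# cluster id for every pixel of an image.
H, W, n_clusters = 8, 8, 5
cluster_map = rng.integers(0, n_clusters, size=(H, W))

# Pixels the human drove over during teleoperated demonstrations.
demo_mask = np.zeros((H, W), dtype=bool)
demo_mask[4:6, :] = True   # e.g. a driven path across the image

# Fraction of each cluster's pixels covered by demonstrations.
counts = np.bincount(cluster_map.ravel(), minlength=n_clusters)
demo_counts = np.bincount(cluster_map[demo_mask], minlength=n_clusters)
traversal_rate = demo_counts / np.maximum(counts, 1)

# Map non-semantic clusters to high-level semantic classes: clusters
# the demonstrator frequently drove over are labeled traversable.
semantic = {c: ("traversable" if traversal_rate[c] > 0.3 else "obstacle")
            for c in range(n_clusters)}
```

No ground-truth pixel labels are used anywhere: the only supervision signal is which pixels the demonstrations passed over, matching the abstract's claim of learning from unlabeled images plus demonstrations alone.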
PDF: Ellis C.C. COE PhD Dissertation 2024 (9.19 MB)
License: Open Access, CC BY-NC-ND 4.0
