EVORA: Deep EVidential Traversability Learning for Risk-Aware Off-Road Autonomy

1Massachusetts Institute of Technology, 2DEVCOM Army Research Laboratory,
3Boston Dynamics AI Institute, 4Northeastern University

Abstract

Traversing terrain with good traction is crucial for achieving fast off-road navigation. Instead of manually designing costs based on terrain features, existing methods learn terrain properties directly from data via self-supervision, but challenges remain in properly quantifying and mitigating risks due to uncertainties in the learned models. This work efficiently quantifies both aleatoric and epistemic uncertainties by learning discrete traction distributions and the probability densities of the traction predictor's latent features. Leveraging evidential deep learning, we parameterize Dirichlet distributions with the network outputs and propose a novel uncertainty-aware squared Earth Mover's distance loss with a closed-form expression that improves learning accuracy and navigation performance. The proposed risk-aware planner simulates state trajectories with the worst-case expected traction to handle aleatoric uncertainty, and penalizes trajectories that traverse terrain with high epistemic uncertainty. Our approach is extensively validated in simulation and on wheeled and quadruped robots, showing improved navigation performance compared to methods that assume no slip, assume the expected traction, or optimize for the worst-case expected cost.
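To make the distance underlying the loss concrete: for 1-D discrete distributions over ordered bins (such as traction values binned from low to high), the squared Earth Mover's distance reduces to a sum of squared differences of cumulative distributions. The sketch below illustrates that standard closed form on toy distributions; it shows plain squared EMD only, not the paper's uncertainty-aware Dirichlet expectation, and all names are ours.

```python
import numpy as np

def squared_emd(p, q):
    """Squared Earth Mover's distance between two 1-D discrete
    distributions over the same ordered bins (cumulative-difference form)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum((np.cumsum(p) - np.cumsum(q)) ** 2))

# Identical distributions have zero distance.
uniform = np.full(4, 0.25)
print(squared_emd(uniform, uniform))  # 0.0

# Unlike cross-entropy, EMD respects bin ordering: moving probability
# mass to a farther bin costs more than moving it to a nearby one.
p = np.array([1.0, 0.0, 0.0, 0.0])
near = np.array([0.0, 1.0, 0.0, 0.0])
far = np.array([0.0, 0.0, 0.0, 1.0])
print(squared_emd(p, near) < squared_emd(p, far))  # True
```

This ordering sensitivity is why an EMD-style loss suits traction distributions: predicting slightly wrong traction is penalized less than predicting wildly wrong traction.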

intro_figure_overall_architecture

Overview of the EVORA uncertainty-aware traversability learning and risk-aware navigation pipeline. To handle aleatoric uncertainty, EVORA learns empirical traction distributions and uses the conditional value at risk (CVaR) of traction to forward simulate robot states. To handle epistemic uncertainty, EVORA estimates the densities of the traction predictor's latent features to identify and avoid out-of-distribution (OOD) terrains.
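To make the CVaR-of-traction idea concrete, here is a minimal sketch of left-tail CVaR for a discrete traction distribution: the expected traction over the worst (lowest-traction) fraction of probability mass. The function name and the convention used here (smaller alpha averages over a smaller, more pessimistic tail) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def left_cvar(values, probs, alpha):
    """Left-tail CVaR of a discrete distribution: the expected value
    over the worst (lowest) alpha fraction of probability mass."""
    order = np.argsort(values)                 # lowest traction first
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cum = np.cumsum(p)
    start = cum - p                            # mass before each bin
    # Portion of each bin's mass that falls inside the alpha tail.
    tail = np.clip(np.minimum(cum, alpha) - start, 0.0, None)
    return float(np.dot(v, tail) / alpha)

# Discrete traction distribution: mostly good traction, some slip.
traction = [0.2, 0.5, 0.9]
probs = [0.3, 0.4, 0.3]
print(left_cvar(traction, probs, 1.0))  # plain expectation, ~0.53
print(left_cvar(traction, probs, 0.5))  # pessimistic tail average, ~0.32
```

Forward simulating with the pessimistic value instead of the mean makes the planner slow down where the traction distribution has a heavy low-traction mode, e.g. over vegetation.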

Indoor Planner Benchmark (2 laps)

rc_env_highlevel

The training and test environments used for the indoor racing experiments. Note that the bi-modality of the traction distribution over the vegetation could cause the robot to slow down significantly. During testing, the robot was tasked to drive two laps following a carrot goal along the reference path while deciding between a vegetation-free detour and a shorter path through vegetation.


CVaR-Dyn (proposed)

  1. Best time-to-goal and success rate
  2. Risk tolerance alpha = 0.8
  3. The robot turned early to take the shortcut through the vegetation.

CVaR-Cost

  1. Risk tolerance alpha = 1.0
  2. Conservative. The robot consistently took the safer detours.

Baseline

  1. Soft vegetation penalty = 10
  2. The robot suffered from understeering and collided with obstacles.

Outdoor Planner Benchmark (3 round trips)

spot_env_highlevel

The outdoor environment consisted of vegetated terrain with different heights and densities. Unlike wheeled robots, a legged robot typically has good linear traction through vegetation, but its angular traction may be multi-modal because turning is more difficult. During testing, two start-goal pairs were used to benchmark the planners and analyze the benefits of avoiding OOD terrain.


CVaR-Dyn (proposed)

  1. Best time-to-goal
  2. Risk tolerance alpha = 0.9
  3. The robot walked very close to the tall grass.

CVaR-Cost

  1. Risk tolerance alpha = 0.8
  2. Made many unnecessary turns due to poor solution quality.

Baseline

  1. Penalty for grass and bushes = 20
  2. The robot walked away from tall grass to stay on easier terrain.

Outdoor OOD Avoidance Demonstration


  1. By avoiding OOD terrains, the planner was less prone to getting stuck in local minima caused by imperfect map information and unreliable traction estimation.
  2. Without OOD avoidance, the planner got stuck in local minima and a human operator had to intervene.
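The OOD-avoidance mechanism above can be sketched as density estimation in the predictor's latent space: fit a density model to in-distribution latent features, then flag terrain cells whose features fall in low-density regions. The snippet below uses a single Gaussian fit and synthetic 2-D features purely for illustration; EVORA learns the densities of the traction predictor's latent features, and all names and numbers here are our own stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for in-distribution latent features of the
# traction predictor: samples cluster near the origin.
train_latents = rng.normal(0.0, 1.0, size=(500, 2))

# Fit a single Gaussian density to the training latents.
mu = train_latents.mean(axis=0)
cov = np.cov(train_latents, rowvar=False)
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_density(z):
    """Log-density of a latent feature under the fitted Gaussian."""
    d = z - mu
    return -0.5 * (d @ cov_inv @ d + logdet + len(z) * np.log(2 * np.pi))

# Threshold at a low percentile of the training log-densities.
threshold = np.percentile([log_density(z) for z in train_latents], 5)

def is_ood(z):
    """Flag a terrain cell's latent feature as out-of-distribution."""
    return bool(log_density(z) < threshold)

print(is_ood(np.zeros(2)))      # near the training data -> False
print(is_ood(np.full(2, 8.0)))  # far from the training data -> True
```

A planner can then add a penalty (or a hard constraint) for trajectories whose terrain cells are flagged by `is_ood`, which is what keeps it away from terrain where the learned traction model is unreliable.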

BibTeX

@article{cai2023evora,
  title={EVORA: Deep EVidential Traversability Learning for Risk-Aware Off-Road Autonomy},
  author={Cai, Xiaoyi and Ancha, Siddharth and Sharma, Lakshay and Osteen, Philip R. and Bucher, Bernadette and Phillips, Stephen and Wang, Jiuguang and Everett, Michael and Roy, Nicholas and How, Jonathan P.},
  journal={arXiv preprint arXiv:2311.06234},
  year={2023}
}