Tillman Radmer, Fabian Hüger & Nico Schmidt

Uncertainty Estimation of Neural Networks

How can a neural network know what it doesn't know? Discover how uncertainty estimation creates a critical safety net for autonomous driving.

#1 · about 5 minutes

Understanding uncertainty through rare events in driving

Neural networks are more uncertain in rare situations, such as unusual vehicles on the road, because such events are underrepresented in the training data.

#2 · about 3 minutes

Differentiating aleatoric and epistemic uncertainty

Uncertainty falls into two types: aleatoric, caused by noise in the data itself (such as blurry object edges), and epistemic, caused by gaps in the model's knowledge. Only the epistemic part can be reduced by collecting more data.
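One common way to make this split concrete (an illustrative sketch, not code from the talk) is to decompose the predictive entropy over repeated stochastic forward passes, e.g. from an ensemble or Monte Carlo dropout: the entropy of the mean prediction is the total uncertainty, the mean entropy of the individual predictions approximates the aleatoric part, and the difference (the mutual information) is the epistemic part.

```python
import numpy as np

def decompose_uncertainty(probs, eps=1e-12):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    probs: softmax outputs from repeated stochastic forward passes
           (e.g. an ensemble or MC dropout), shape (n_samples, n_classes).
    """
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))        # entropy of the mean
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    epistemic = total - aleatoric                         # mutual information
    return aleatoric, epistemic
```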

#3 · about 3 minutes

Why classification scores are unreliable uncertainty metrics

Neural network confidence scores are often miscalibrated, showing overconfidence at high scores and underconfidence at low scores, making them poor predictors of true accuracy.
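Miscalibration of this kind can be quantified with a reliability measure such as the expected calibration error. The sketch below is a minimal, generic version (the bin count and input names are illustrative assumptions, not from the talk):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Gap between mean confidence and empirical accuracy, averaged over bins.

    confidences: top softmax score per prediction, shape (N,)
    correct:     1 where the prediction was right, 0 otherwise, shape (N,)
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap   # weight the gap by bin population
    return ece
```

A well-calibrated model has near-zero ECE; the over- and underconfidence pattern described above shows up as large per-bin gaps.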

#4 · about 2 minutes

Using a simple alert system to predict model failure

The alert system approach uses a second, simpler model trained specifically to predict when the primary neural network is likely to fail on a given input.
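A minimal sketch of this idea, assuming we have held-out inputs plus a record of where the primary model was wrong (the function names and the logistic-regression choice are illustrative, not prescribed by the talk):

```python
from sklearn.linear_model import LogisticRegression

def train_alert_model(features, failed):
    """Fit a small failure predictor on held-out data.

    features: inputs or intermediate activations of the primary network,
              shape (N, D)
    failed:   1 where the primary network's prediction was wrong, shape (N,)
    """
    return LogisticRegression(max_iter=1000).fit(features, failed)

def raise_alert(alert_model, features, threshold=0.5):
    """Flag inputs whose estimated failure probability exceeds the threshold."""
    p_fail = alert_model.predict_proba(features)[:, 1]
    return p_fail > threshold
```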

#5 · about 15 minutes

Using Monte Carlo dropout and student networks for estimation

Monte Carlo dropout estimates uncertainty by sampling multiple stochastic predictions per input; because this is slow at inference time, a smaller student network can be trained to mimic the sampled distribution in a single forward pass.
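A minimal PyTorch sketch of the sampling step (the helper names and sample count are assumptions; the talk's exact setup may differ):

```python
import torch
import torch.nn as nn

def enable_dropout(model: nn.Module) -> None:
    """Put only the dropout layers back into train mode so they stay stochastic."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Average n_samples stochastic forward passes; the spread is the uncertainty."""
    model.eval()            # freeze BatchNorm and other layers...
    enable_dropout(model)   # ...but keep dropout sampling active
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)
```

The student-network speed-up then amounts to training a second model to regress the mean (and optionally the variance) produced by this loop, so deployment needs only one forward pass.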

#6 · about 14 minutes

Applying uncertainty for active learning and corner case detection

An active learning framework uses uncertainty scores to intelligently select the most informative data (corner cases) from vehicle sensors for labeling and retraining models.
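The selection step itself can be as simple as ranking the unlabeled pool by uncertainty and taking the top of the list; a hedged sketch (the scoring function and budget are hypothetical):

```python
import numpy as np

def select_for_labeling(uncertainty_scores, budget):
    """Return indices of the `budget` most uncertain unlabeled samples."""
    ranked = np.argsort(uncertainty_scores)[::-1]  # most uncertain first
    return ranked[:budget]

# Typical loop (illustrative): score the pool, pick corner cases,
# send them to annotators, retrain, repeat.
# scores = mc_dropout_uncertainty(model, unlabeled_pool)  # hypothetical scorer
# picked = select_for_labeling(scores, budget=500)
```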

#7 · about 4 minutes

Challenges in uncertainty-based data selection strategies

Key challenges for active learning include determining the right amount of data to select, evaluating performance on corner cases, and avoiding model-specific data collection bias.

#8 · about 7 minutes

Addressing AI safety and insufficient generalization

Deep neural networks in autonomous systems pose safety risks due to insufficient generalization, unreliable confidence, and brittleness to unseen data conditions.

#9 · about 8 minutes

Building a safety argumentation framework for AI systems

A safety argumentation process involves identifying DNN-specific concerns, applying mitigation measures like uncertainty monitoring, and providing evidence through an iterative, model-driven development cycle.
