Generalized Regularized Evidential Deep Learning Models

Published: Dec 27, 2025 11:26
1 min read
ArXiv

Analysis

This paper addresses a key limitation of Evidential Deep Learning (EDL) models, which are designed to make neural networks uncertainty-aware. It identifies and analyzes an activation-dependent learning-freeze behavior caused by the non-negativity constraint on evidence: when a hard activation such as ReLU clamps a unit's evidence to zero, the gradient through that unit also vanishes, so it stops learning. The authors propose a generalized family of activation functions and regularizers to overcome this issue, offering a more robust and consistent approach to uncertainty quantification. A comprehensive evaluation across benchmark problems supports the effectiveness of the proposed method.
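
The freeze is easiest to see at the gradient level. Below is a minimal, illustrative PyTorch sketch of a standard EDL head (evidence e = act(logits), Dirichlet parameters α = e + 1, digamma-form loss), contrasting a hard ReLU evidence activation with a smooth softplus one. This shows the baseline behavior the paper fixes, not the paper's proposed activation family:

```python
import torch
import torch.nn.functional as F

# Toy logits for a 3-class problem; the first logit is strongly negative.
logits = torch.tensor([[-4.0, 0.5, 1.2]], requires_grad=True)
target = 2  # index of the true class

for name, act in [("relu", F.relu), ("softplus", F.softplus)]:
    evidence = act(logits)        # non-negative evidence
    alpha = evidence + 1.0        # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1)  # total evidence S = sum_k alpha_k
    # Digamma-form EDL loss (expected cross-entropy under the Dirichlet)
    # for a one-hot target; illustrative only.
    loss = (torch.digamma(strength) - torch.digamma(alpha[:, target])).mean()
    (grad,) = torch.autograd.grad(loss, logits)
    print(f"{name:8s} d loss / d logits = {grad}")

# relu:     the gradient on the clamped (negative) logit is exactly zero,
#           so that unit receives no learning signal -- the learning freeze.
# softplus: the gradient there is small but nonzero, so learning can proceed.
```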

Reference

The paper identifies and addresses 'activation-dependent learning-freeze behavior' in EDL models and proposes a solution through generalized activation functions and regularizers.
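
For context on the regularizer side: the term commonly generalized here is, in standard EDL, the KL divergence from the predicted Dirichlet (typically with true-class evidence removed) to the uniform Dirichlet. A minimal sketch of that standard term follows; the paper's generalized regularizer family is not reproduced here:

```python
import torch

def kl_dirichlet_to_uniform(alpha: torch.Tensor) -> torch.Tensor:
    # KL( Dir(alpha) || Dir(1, ..., 1) ): the standard EDL regularizer,
    # which shrinks evidence that pushes the Dirichlet away from uniform.
    K = alpha.shape[-1]
    S = alpha.sum(dim=-1)
    log_norm = (torch.lgamma(S)
                - torch.lgamma(alpha).sum(dim=-1)
                - torch.lgamma(torch.tensor(float(K))))
    digamma_term = ((alpha - 1.0)
                    * (torch.digamma(alpha) - torch.digamma(S).unsqueeze(-1))
                    ).sum(dim=-1)
    return log_norm + digamma_term

# In standard EDL this is applied to alpha with the true-class evidence
# removed; here we just evaluate the term directly.
alpha = torch.tensor([[1.0, 1.0, 1.0],    # uniform Dirichlet -> KL = 0
                      [1.0, 2.5, 7.0]])   # concentrated      -> KL > 0
print(kl_dirichlet_to_uniform(alpha))
```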