Generalized Regularized Evidential Deep Learning Models
Analysis
This paper addresses a key limitation of Evidential Deep Learning (EDL) models, which are designed to make neural networks uncertainty-aware. It identifies and analyzes an activation-dependent learning-freeze behavior caused by the non-negativity constraint placed on evidence in EDL. The authors propose a generalized family of evidence activation functions and regularizers to overcome this issue, offering a more robust and consistent approach to uncertainty quantification. A comprehensive evaluation across benchmark problems supports the effectiveness of the proposed method.
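To make the learning-freeze issue concrete, the sketch below contrasts a ReLU-based evidence activation, where negative network outputs receive exactly zero gradient and can stop contributing to learning, with a smooth strictly positive alternative such as softplus. This is a minimal illustration of the general mechanism under assumed toy values, not the paper's specific formulation; the variable names and numbers are purely illustrative.

```python
import torch
import torch.nn.functional as F

# Raw network outputs for three hypothetical classes; values are illustrative.
logits = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)

# ReLU-style evidence: negative outputs map to exactly zero evidence and
# receive a zero gradient, so those outputs can stop learning entirely.
evidence_relu = F.relu(logits)

# A smooth, strictly positive alternative (softplus) also satisfies the
# non-negativity constraint but keeps gradients nonzero everywhere.
# In EDL, such evidences typically parameterize a Dirichlet via alpha = evidence + 1.
evidence_softplus = F.softplus(logits)

# Compare gradients of a toy scalar objective with respect to the logits.
grad_relu = torch.autograd.grad(evidence_relu.sum(), logits, retain_graph=True)[0]
grad_softplus = torch.autograd.grad(evidence_softplus.sum(), logits)[0]

print(grad_relu)      # tensor([0., 1., 1.])  -> the negative logit gets no gradient
print(grad_softplus)  # all entries strictly positive (sigmoid of the logits)
```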
Key Takeaways
- EDL training can freeze because of the non-negativity constraint on evidence; identifying and addressing this behavior improves EDL models.
- A generalized family of evidence activation functions and regularizers is proposed to overcome the issue.
- The approach is validated on multiple benchmark datasets.
“The paper identifies and addresses 'activation-dependent learning-freeze behavior' in EDL models and proposes a solution through generalized activation functions and regularizers.”
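For reference, the sketch below shows the classic regularized EDL objective that such generalized regularizers build on: evidence parameterizes a Dirichlet, and a KL term toward the uniform Dirichlet penalizes misleading evidence. This reflects the standard EDL formulation, not the paper's generalized variants; the function names and the annealing weight `lam` are illustrative assumptions.

```python
import torch
from torch.distributions import Dirichlet, kl_divergence

def uniform_dirichlet_kl(alpha: torch.Tensor) -> torch.Tensor:
    # KL(Dir(alpha) || Dir(1, ..., 1)): penalizes evidence that pushes the
    # predictive Dirichlet away from the flat, maximally uncertain prior.
    uniform = Dirichlet(torch.ones_like(alpha))
    return kl_divergence(Dirichlet(alpha), uniform)

def edl_regularized_loss(alpha: torch.Tensor, y_onehot: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    # Expected squared error under Dir(alpha) (the classic EDL fit term) plus the
    # KL regularizer applied only to "misleading" evidence: evidence assigned
    # to the true class is removed before computing the KL term.
    strength = alpha.sum(-1, keepdim=True)
    prob = alpha / strength
    err = ((y_onehot - prob) ** 2).sum(-1)
    var = (prob * (1.0 - prob) / (strength + 1.0)).sum(-1)
    alpha_misleading = y_onehot + (1.0 - y_onehot) * alpha
    return (err + var + lam * uniform_dirichlet_kl(alpha_misleading)).mean()

# Toy usage: a batch of 2 examples with 3 classes.
alpha = torch.tensor([[2.0, 1.2, 1.1], [1.0, 4.0, 1.5]])
y = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(edl_regularized_loss(alpha, y))
```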