Research · #llm · Analyzed: Jan 4, 2026 07:58

MASE: Interpretable NLP Models via Model-Agnostic Saliency Estimation

Published:Dec 4, 2025 02:20
1 min read
ArXiv

Analysis

This article introduces MASE, a method for building interpretable NLP models via model-agnostic saliency estimation. Because the approach does not depend on any particular model's internals, it should apply broadly across NLP architectures. The title states the core contribution plainly: interpretability.
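The note does not describe MASE's actual algorithm, but the general idea behind model-agnostic saliency can be illustrated with a simple occlusion-based sketch: treat the model as a black box, remove each token in turn, and score it by how much the prediction changes. The `occlusion_saliency` and `toy_predict` names below are illustrative assumptions, not part of MASE.

```python
from typing import Callable, List, Tuple

def occlusion_saliency(predict: Callable[[str], float], text: str) -> List[Tuple[str, float]]:
    """Score each token by the drop in the model's output when that token is removed.

    Model-agnostic: `predict` is any black-box function mapping text to a score.
    """
    tokens = text.split()
    base = predict(text)
    scores = []
    for i in range(len(tokens)):
        # Rebuild the input with token i occluded (deleted).
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        scores.append((tokens[i], base - predict(occluded)))
    return scores

# Toy stand-in for a real classifier: counts positive keywords.
def toy_predict(text: str) -> float:
    positives = {"great", "excellent", "good"}
    return float(sum(1 for t in text.lower().split() if t in positives))

saliency = occlusion_saliency(toy_predict, "the movie was great")
# The token "great" receives the highest saliency score.
```

Gradient-based saliency methods are typically cheaper but require access to model internals; occlusion trades compute for the ability to explain any model behind an opaque API.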