ProfASR-Bench: A Benchmark for Context-Conditioned ASR
Analysis
This paper introduces ProfASR-Bench, a benchmark designed to evaluate Automatic Speech Recognition (ASR) systems in professional settings. It addresses the limitations of existing benchmarks by focusing on domain-specific terminology, register variation, and accurate recognition of named entities, where a single misrecognized term can carry real cost. The paper highlights a 'context-utilization gap': current systems fail to leverage contextual side information effectively, even when given oracle prompts that contain the relevant terms. The benchmark provides a valuable tool for researchers working to improve ASR performance in high-stakes applications.
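To make the context-utilization gap concrete, here is a minimal sketch of how such a gap could be measured: transcribe each utterance with and without an oracle prompt and compare word error rates. The promptable `asr.transcribe` interface and the sample field names are assumptions for illustration, not the benchmark's actual API; `jiwer` is a commonly used WER library.

```python
import jiwer  # pip install jiwer; provides a standard WER implementation


def context_utilization_gap(asr, samples):
    """Compare WER without context vs. with an oracle context prompt.

    `samples` is an iterable of dicts with keys (illustrative names):
      audio          -- input accepted by `asr.transcribe`
      reference      -- ground-truth transcript
      oracle_context -- side information, e.g. a speaker's term list
    """
    refs, plain_hyps, oracle_hyps = [], [], []
    for s in samples:
        refs.append(s["reference"])
        plain_hyps.append(asr.transcribe(s["audio"]))
        oracle_hyps.append(asr.transcribe(s["audio"], prompt=s["oracle_context"]))

    wer_plain = jiwer.wer(refs, plain_hyps)
    wer_oracle = jiwer.wer(refs, oracle_hyps)
    # A context-sensitive system should show a large positive gap;
    # a gap near zero means the prompt is effectively being ignored.
    return {
        "wer_no_context": wer_plain,
        "wer_oracle": wer_oracle,
        "gap": wer_plain - wer_oracle,
    }
```

Under this framing, a gap near zero even with oracle prompts is exactly the underuse of available side information the paper reports.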
Key Takeaways
- Introduces ProfASR-Bench, a new benchmark for evaluating ASR in professional settings.
- Highlights the 'context-utilization gap' in current ASR systems.
- Provides a standardized context ladder and entity-aware reporting (see the sketch after this list).
- Offers a reproducible testbed for comparing ASR systems.
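The context ladder and entity-aware reporting could be combined in an evaluation loop along the following lines. This is a hedged sketch: the rung names, sample fields, and `entity_recall` metric are hypothetical stand-ins, and the benchmark's actual ladder and scoring may differ.

```python
from typing import Iterable


def entity_recall(gold_entities: Iterable[str], hypothesis: str) -> float:
    """Fraction of gold entity strings found verbatim (case-insensitive)
    in the hypothesis; returns 1.0 if the sample has no gold entities."""
    gold = list(gold_entities)
    if not gold:
        return 1.0
    hyp = hypothesis.lower()
    return sum(e.lower() in hyp for e in gold) / len(gold)


def evaluate_ladder(asr, samples, rungs=("none", "domain_hint", "oracle")):
    """Mean entity recall at each rung of an assumed context ladder.

    Each sample is expected to carry `audio`, `entities` (gold entity
    strings), and a `context` dict mapping rung name -> prompt text
    (None for the no-context rung). All field names are illustrative.
    """
    report = {}
    for rung in rungs:
        recalls = [
            entity_recall(
                s["entities"],
                asr.transcribe(s["audio"], prompt=s["context"][rung]),
            )
            for s in samples
        ]
        report[rung] = sum(recalls) / len(recalls)
    return report
```

Reporting entity recall per rung, rather than a single corpus-level WER, makes it visible whether added context actually improves the terms that matter most in professional use.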
“Current systems are nominally promptable yet underuse readily available side information.”