LLM-Guided Exemplar Selection for Few-Shot HAR
Published: Dec 26, 2025 21:03 · 1 min read · ArXiv
Analysis
This paper addresses the challenge of few-shot Human Activity Recognition (HAR) with wearable sensors. It leverages Large Language Models (LLMs) for semantic reasoning, improving exemplar selection and recognition performance over traditional methods. The key contribution is the use of LLM-generated knowledge priors to guide exemplar scoring and selection, which is particularly helpful for distinguishing similar activities.
Key Takeaways
- Proposes an LLM-Guided Exemplar Selection framework for few-shot HAR.
- Uses LLM-generated knowledge priors for semantic reasoning.
- Achieves state-of-the-art performance on the UCI-HAR dataset under few-shot conditions.
- Combines semantic priors with structural and geometric cues for exemplar selection.
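To make the last takeaway concrete, here is a minimal sketch of what combining a semantic prior with structural and geometric cues for exemplar selection could look like. This is illustrative only, not the paper's actual method: the function name, the specific cues (mean intra-class cosine similarity as the structural cue, closeness to the class centroid as the geometric cue), and the weights are all assumptions; `llm_prior` stands in for a per-sample score derived from LLM knowledge priors.

```python
import numpy as np

def select_exemplars(features, labels, llm_prior, k=5,
                     w_sem=0.5, w_struct=0.3, w_geom=0.2):
    """Pick k exemplars per class by a weighted score combining:
    - a semantic prior (hypothetical LLM-derived relevance per sample),
    - a structural cue (mean cosine similarity to same-class samples),
    - a geometric cue (closeness to the class centroid).
    Cues and weights are illustrative, not from the paper."""
    selected = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        X = features[idx]
        # geometric cue: normalized inverse distance to the class centroid
        dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
        geom = 1.0 - dist / (dist.max() + 1e-8)
        # structural cue: mean cosine similarity to other class members
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
        sim = Xn @ Xn.T
        struct = (sim.sum(axis=1) - 1.0) / max(len(idx) - 1, 1)
        # weighted combination; highest-scoring samples become exemplars
        score = w_sem * llm_prior[idx] + w_struct * struct + w_geom * geom
        selected[c] = idx[np.argsort(-score)[:k]].tolist()
    return selected
```

In a few-shot setting, the selected exemplars per class would then serve as the support set for the downstream classifier.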
Reference
“The framework achieves a macro F1-score of 88.78% on the UCI-HAR dataset under strict few-shot conditions, outperforming classical approaches.”