Analysis

This article likely proposes a method to improve the performance of CLIP (Contrastive Language-Image Pre-training) models in few-shot learning scenarios. The core idea appears to be mitigating the bias introduced by the hand-crafted template prompts (e.g., "a photo of a {class}") used to build the text-side classifier. The mention of "empty prompts" suggests the method isolates the template's own contribution to the text embedding and removes it, potentially yielding more robust and generalizable image-text alignment; a sketch of one plausible reading follows.
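The abstract alone does not specify the mechanism, so the following is a minimal sketch of one plausible interpretation, not the paper's confirmed procedure: embed the bare template with no class name (the "empty prompt") and subtract that embedding from each class prompt embedding to cancel the shared template component. The model checkpoint, template string, and class list here are illustrative assumptions.

```python
# Hedged sketch: debias CLIP text embeddings by subtracting the
# embedding of the class-free "empty prompt" (the bare template).
# This is one plausible reading of the article, not its verified method.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

template = "a photo of a {}."            # assumed template
classes = ["cat", "dog", "car"]          # illustrative few-shot classes

def embed(texts):
    # Tokenize and encode text, then L2-normalize the features.
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

class_feats = embed([template.format(c) for c in classes])
empty_feat = embed([template.format("")])  # "empty prompt": template only

# Remove the shared template component, then renormalize so the
# debiased embeddings remain unit vectors for cosine similarity.
debiased = class_feats - empty_feat
debiased = debiased / debiased.norm(dim=-1, keepdim=True)
```

If the article's actual approach differs (e.g., learning the debiasing direction or averaging several empty templates), the same structure still applies: compute a template-only reference embedding and factor it out of the class prompts.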
Reference

The article's abstract or introduction likely contains a concise statement of the problem (template bias) and the proposed solution (empty prompts), which would confirm or correct the reading sketched above.