Analysis

This research examines a privacy vulnerability in fine-tuned language models: a membership inference attack, in which an adversary determines whether a specific example was part of the model's training data. The study's findings underscore the need for stronger privacy protections and further research into the robustness of fine-tuned models against such attacks.
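To make the threat concrete, the sketch below shows a generic loss-threshold membership inference baseline, not the In-Context Probing method the referenced work studies. The model name, the LOSS_THRESHOLD value, and the is_likely_member helper are illustrative assumptions: an attacker scores a candidate text with the fine-tuned model and flags unusually low loss as evidence the text was seen during training.

```python
# Minimal, generic loss-threshold membership inference sketch.
# NOT the paper's in-context probing method; names and values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"       # placeholder for the fine-tuned model under attack
LOSS_THRESHOLD = 3.0      # hypothetical threshold, calibrated on held-out data

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def is_likely_member(text: str) -> bool:
    """Flag a sample as a likely training-set member if the model's
    average per-token loss on it is unusually low."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # Lower loss suggests the model has memorized or closely fit this text.
    return outputs.loss.item() < LOSS_THRESHOLD

print(is_likely_member("Example sentence suspected to be in the training data."))
```

In practice the threshold is calibrated against a reference set of texts known not to be in the training data; the attack succeeds when member and non-member loss distributions are separable.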
Reference

In-Context Probing for Membership Inference.