ProSocialAlign: Preference Conditioned Test Time Alignment in Language Models
Published: Dec 6, 2025 18:00 · 1 min read · ArXiv
Analysis
This article introduces ProSocialAlign, a method for aligning language model outputs with human preferences at test time, i.e., during inference rather than through retraining. The approach conditions the model's behavior on stated preferences, steering generation without modifying the model's weights. The source is a research paper on ArXiv.
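The summary does not describe ProSocialAlign's actual mechanism, so the following is only a generic, hypothetical sketch of the broader idea of preference-conditioned test-time alignment: injecting a plain-text preference specification into the prompt so a frozen model can condition on it at inference. The function name, prompt format, and example preferences are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: preference-conditioned prompting at inference time.
# This illustrates the general idea of test-time alignment via conditioning,
# NOT the specific algorithm proposed in the ProSocialAlign paper.

def build_preference_prompt(user_query: str, preferences: list[str]) -> str:
    """Prepend a plain-text preference specification to the query so a
    frozen model can condition its response on it, with no retraining."""
    pref_block = "\n".join(f"- {p}" for p in preferences)
    return (
        "Follow these user preferences when responding:\n"
        f"{pref_block}\n\n"
        f"User: {user_query}\nAssistant:"
    )

prompt = build_preference_prompt(
    "Summarize today's news.",
    ["be concise", "avoid harmful or toxic content"],
)
print(prompt)
```

In practice, test-time alignment methods in the literature range from this kind of prompt conditioning to reward-guided decoding; which family ProSocialAlign belongs to is not specified in this summary.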