OLS Robustness to Sample Removals: Theoretical Analysis

Published: Dec 28, 2025 20:29
1 min read
ArXiv

Analysis

This paper investigates the robustness of Ordinary Least Squares (OLS) to the removal of training samples, a property that matters for trustworthy machine learning models. It establishes theoretical guarantees for OLS robustness under certain conditions and highlights the estimator's limitations and potential vulnerabilities. The analysis clarifies when OLS remains reliable and when it may be sensitive to such data perturbations, which is important for practical applications.

Reference

OLS remains robust to the removal of up to $k \ll \sqrt{np}/\log n$ training samples, achieving the same error rate as on the full sample.
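
As a rough illustration of this claim (not the paper's construction), the sketch below fits OLS on synthetic data, removes a set of roughly $\sqrt{np}/\log n$ samples chosen by a simple largest-residual heuristic, and compares the refit coefficients with the full-data fit. The problem sizes, the removal heuristic, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 5000, 20                       # illustrative sample size and dimension
k = int(np.sqrt(n * p) / np.log(n))   # removal budget suggested by the stated bound

# Synthetic well-conditioned design and response
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

def ols(X, y):
    """Ordinary least squares fit (lstsq for numerical stability)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

beta_full = ols(X, y)

# Remove the k largest-residual points -- a crude proxy for an influential
# subset; the theoretical guarantee is stated for the removal budget k itself.
residuals = np.abs(y - X @ beta_full)
keep = np.argsort(residuals)[:-k]
beta_removed = ols(X[keep], y[keep])

print(f"removal budget k = {k} out of n = {n}")
print(f"parameter error, full data : {np.linalg.norm(beta_full - beta):.4f}")
print(f"parameter error, k removed : {np.linalg.norm(beta_removed - beta):.4f}")
print(f"shift between the two fits : {np.linalg.norm(beta_full - beta_removed):.4f}")
```

Under these benign synthetic conditions the two fits stay close, which is consistent with the stated robustness regime; adversarially chosen removals or ill-conditioned designs are exactly the cases the paper's conditions are meant to delimit.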