Research Paper · Causal Inference, Randomized Experiments, Monotonicity
Testing Monotonicity in Randomized Experiments: Limited Learnability
Published: Dec 31, 2025 · 1 min read · ArXiv
Analysis
This paper investigates the testability of monotonicity (all unit-level treatment effects sharing the same sign) in randomized experiments from a design-based perspective. Although the distribution of treatment effects is formally identified in this framework, the authors argue that learning about monotonicity in practice is severely limited: frequentist tests have little power, and Bayesian updating is largely insensitive to whether monotonicity holds. The paper highlights how difficult it is to draw strong conclusions about the sign pattern of treatment effects in finite populations.
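In standard potential-outcomes notation (a sketch of the usual design-based setup, not necessarily the paper's exact formulation), the unit-level effects and the monotonicity hypothesis can be written as

$$
\tau_i \;=\; Y_i(1) - Y_i(0), \qquad i = 1, \dots, n,
$$

$$
H_{\text{mono}}:\ \tau_i \ge 0 \ \text{for all } i \quad \text{(or } \tau_i \le 0 \ \text{for all } i\text{)},
$$

where the finite population of $n$ units is held fixed and the only randomness comes from the treatment assignment $W_i \in \{0,1\}$, so each unit reveals only one of $Y_i(0)$ and $Y_i(1)$.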
Key Takeaways
- Monotonicity of treatment effects is a key concept in causal inference.
- The design-based perspective permits formal identification of the distribution of treatment effects.
- Frequentist tests of monotonicity have limited power.
- Bayesian updating can be insensitive to whether monotonicity holds.
- Learning about monotonicity from data is practically challenging (see the simulation sketch after this list).
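A minimal illustrative sketch, not taken from the paper, of why a single realized experiment constrains monotonicity so weakly: the same observed data are consistent with both a monotone and a non-monotone finite population, because each unit's counterfactual outcome is never observed. All numbers and the counterfactual fill-ins below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population of n units under complete randomization.
n = 10
W = rng.permutation([1] * (n // 2) + [0] * (n // 2))       # treatment assignment
Y_obs = rng.normal(loc=1.0, scale=1.0, size=n).round(2)    # observed outcomes

# Fill in the unobserved potential outcomes in two different ways,
# both perfectly consistent with the observed data (W, Y_obs).

# World A: monotone -- every unit's treatment effect is >= 0.
Y0_A = np.where(W == 1, Y_obs - 0.5, Y_obs)   # imputed control outcomes
Y1_A = np.where(W == 1, Y_obs, Y_obs + 0.5)   # imputed treated outcomes
tau_A = Y1_A - Y0_A

# World B: non-monotone -- some units are helped, others harmed.
shift = np.where(np.arange(n) % 2 == 0, 0.5, -0.5)
Y0_B = np.where(W == 1, Y_obs - shift, Y_obs)
Y1_B = np.where(W == 1, Y_obs, Y_obs + shift)
tau_B = Y1_B - Y0_B

# Both worlds reproduce the observed outcomes exactly...
assert np.allclose(np.where(W == 1, Y1_A, Y0_A), Y_obs)
assert np.allclose(np.where(W == 1, Y1_B, Y0_B), Y_obs)

# ...yet monotonicity holds in world A and fails in world B.
print("World A effects:", tau_A, "monotone:", bool((tau_A >= 0).all()))
print("World B effects:", tau_B, "monotone:", bool((tau_B >= 0).all()))
```

Running this prints identical observed data reconstructions for both worlds while the sign pattern of the effects differs, which is one intuition for why tests and posterior updates about monotonicity move so little in practice.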
Reference
“Despite the formal identification result, the ability to learn about monotonicity from data in practice is severely limited.”