[D] NOMA update: reproducible self-growing XOR benchmark (shared init, N=10) + optimizer-state “preserve vs reset” ablation
Research / Blog · r/MachineLearningAnalysis
Published: Dec 27, 2025 22:14 · 1 min read
This post details an update to NOMA, a systems language and compiler that implements reverse-mode autodiff as a compiler pass. The key addition is a reproducible benchmark for a "self-growing XOR" problem: a network that adds parameters mid-training when a fixed growth trigger fires. Because all implementations start from shared initial weights and grow at the same trigger, the benchmark supports controlled comparisons, in particular an ablation of whether optimizer state is preserved or reset when parameters grow. XOR is deliberately simple; the point is not real-world speed but validating the methodology around growth events, checking correctness parity across implementations, and measuring the effect of optimizer-state preservation.
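The preserve-vs-reset ablation can be illustrated outside NOMA. Below is a minimal Python/NumPy sketch (not NOMA code; the function name and shapes are hypothetical) of growing a weight matrix by adding rows while either carrying over or zeroing the Adam moment buffers for the surviving parameters:

```python
import numpy as np

def grow_params(w, m, v, new_rows, preserve=True):
    """Grow a weight matrix by `new_rows` fresh rows.

    w      -- current weights, shape (rows, cols)
    m, v   -- Adam first/second moment buffers, same shape as w
    preserve -- if True, keep moments for surviving params (new rows
                start at zero); if False, reset all moments to zero.
    """
    rng = np.random.default_rng(0)  # fixed seed mirrors the shared-init setup
    extra = rng.normal(scale=0.1, size=(new_rows, w.shape[1]))
    w2 = np.vstack([w, extra])
    if preserve:
        # Surviving parameters keep their accumulated optimizer state;
        # only the newly added rows start from zero moments.
        m2 = np.vstack([m, np.zeros_like(extra)])
        v2 = np.vstack([v, np.zeros_like(extra)])
    else:
        # Reset: every parameter, old and new, restarts Adam from scratch.
        m2 = np.zeros_like(w2)
        v2 = np.zeros_like(w2)
    return w2, m2, v2
```

Running both branches from the same pre-growth state is what makes the ablation controlled: the only difference between runs is the optimizer-state handling at the growth event.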
Reference / Citation
"The goal here is methodology validation: making the growth event comparable, checking correctness parity, and measuring whether preserving optimizer state across resizing has a visible effect."