Audited Skill-Graph Self-Improvement for Agentic LLMs
Published: Dec 28, 2025 19:39 • 1 min read • ArXiv
Analysis
This paper addresses security and governance challenges in self-improving agentic LLMs. It proposes Audited Skill-Graph Self-Improvement (ASG-SI), a framework for making improvements auditable and verifiable. The core idea is to treat self-improvement as iteratively compiling an agent into a growing skill graph: each improvement is extracted from successful trajectories, normalized into a skill with a clear interface, and admitted only after verifier-backed checks pass. This is intended to mitigate reward hacking and behavioral drift, making the self-improvement process more transparent and manageable. Integrating experience synthesis and continual memory control further supports scalability and long-horizon performance.
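As a rough illustration of that loop, here is a minimal Python sketch of verifier-gated admission into a skill graph. All names and interfaces below (`Skill`, `SkillGraph`, `admit`, the audit-log format) are assumptions for illustration; the summary does not specify the paper's actual APIs.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only: class and field names are assumptions,
# not the paper's actual interfaces.

@dataclass
class Skill:
    """A normalized capability with an explicit interface and provenance."""
    name: str
    description: str
    run: Callable[[dict], dict]       # inputs -> outputs, per the declared interface
    source_trajectory_id: str         # provenance: the successful trajectory it was extracted from
    depends_on: List[str] = field(default_factory=list)

@dataclass
class SkillGraph:
    """Growing graph of audited skills; nodes are skills, edges are dependencies."""
    skills: Dict[str, Skill] = field(default_factory=dict)
    audit_log: List[dict] = field(default_factory=list)

    def admit(self, skill: Skill, verifiers: List[Callable[[Skill], bool]]) -> bool:
        # Verifier-backed check: every verifier must pass before the skill is added.
        results = {v.__name__: v(skill) for v in verifiers}
        accepted = all(results.values())
        # Record the decision so each improvement stays auditable after the fact.
        self.audit_log.append({"skill": skill.name, "checks": results, "accepted": accepted})
        if accepted:
            self.skills[skill.name] = skill
        return accepted
```

The audit log is the key design point in this sketch: every accepted or rejected improvement leaves a traceable record, which is what makes the self-improvement process reviewable and governable.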
Key Takeaways
- Proposes Audited Skill-Graph Self-Improvement (ASG-SI) for agentic LLMs.
- Focuses on creating auditable and verifiable improvements.
- Treats self-improvement as iterative compilation of an agent into a skill graph.
- Integrates experience synthesis and continual memory control (see the sketch after this list).
- Aims to address security and governance challenges in self-improving agents.
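The continual memory control mentioned above could, for example, amount to bounded retention of verified, recently used skills. The sketch below is hypothetical: `SkillRecord`, `prune_to_budget`, and the scoring rule are assumptions, not details given in the summary.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: the scoring rule and budget are assumptions,
# not the paper's described memory-control mechanism.

@dataclass
class SkillRecord:
    name: str
    verified: bool      # did the latest verifier-backed check pass?
    recent_uses: int    # how often the skill appeared in recent trajectories

def prune_to_budget(records: List[SkillRecord], budget: int) -> List[str]:
    """Keep at most `budget` skills: verified ones first, then the most recently used."""
    keep = sorted(records, key=lambda r: (r.verified, r.recent_uses), reverse=True)[:budget]
    return [r.name for r in keep]
```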
Reference
“ASG-SI reframes agentic self-improvement as accumulation of verifiable, reusable capabilities, offering a practical path toward reproducible evaluation and operational governance of self-improving AI agents.”