LLA: Securing Generative Models with Logic-Locked Accelerators

Published: Dec 26, 2025 05:47
1 min read
ArXiv

Analysis

This paper addresses the critical issue of intellectual property (IP) protection for generative AI models. It proposes LLA, a hardware-software co-design that defends against model theft, corruption, and information leakage. By combining logic-locked accelerators with software-based key embedding and invariance transformations, LLA offers a promising way to protect the IP of generative models, and the minimal overhead reported is a significant advantage.
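The hardware half of the co-design relies on logic locking: key-controlled gates are inserted into the accelerator's datapath so that it computes correctly only when the right key is supplied. The following is a minimal toy sketch of the general XOR-locking idea, not the paper's actual circuit design; the function names, the 4-bit key, and the multiply-accumulate example are all illustrative assumptions.

```python
# Toy illustration of XOR-based logic locking (hypothetical example,
# not the LLA circuit from the paper; the paper uses 7,168 key bits).
CORRECT_KEY = 0b1011  # secret key baked into the locked netlist

def locked_mac(a: int, b: int, acc: int, key: int) -> int:
    """Multiply-accumulate with an XOR key gate on the datapath.

    The XOR perturbation cancels only when key == CORRECT_KEY,
    so a wrong key silently corrupts the arithmetic result.
    """
    product = a * b
    return acc + (product ^ key ^ CORRECT_KEY)

# Correct key: perturbation cancels and the MAC is exact.
right = locked_mac(3, 5, 10, CORRECT_KEY)   # 10 + 15 = 25
# Wrong key: the result is corrupted, degrading any model run on it.
wrong = locked_mac(3, 5, 10, 0b0000)
```

A wrong key does not crash the accelerator; it quietly degrades the computation, which is why a stolen model without the key is effectively useless.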

Reference

LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.