
Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

Published: Feb 27, 2023 18:26
1 min read
Practical AI

Analysis

This article from Practical AI discusses privacy and security concerns around Stable Diffusion and large language models (LLMs). It features an interview with Nicholas Carlini, a research scientist at Google Brain, covering adversarial machine learning, privacy issues in black-box versus accessible models, privacy attacks on vision models, and data poisoning. The conversation explores the challenges posed by training-data memorization and the potential impact of malicious actors manipulating training data. The article underscores the importance of understanding and mitigating these risks as AI models become more prevalent.

Reference

In our conversation, we discuss the current state of adversarial machine learning research, the dynamic of dealing with privacy issues in black-box vs. accessible models, what privacy attacks in vision models like diffusion models look like, and the scale of “memorization” within these models.