
Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678

Published: Apr 1, 2024 19:15
1 min read
Practical AI

Analysis

This podcast episode from Practical AI examines the vulnerabilities of Large Language Models (LLMs) and the risks of deploying them in real-world applications. The guest, Jonas Geiping, a research group leader, explains how LLMs can be manipulated and exploited through adversarial inputs. The discussion covers the importance of open models for security research, the challenges of ensuring robustness, and the need for better defenses against adversarial attacks. The episode underscores the critical need for stronger AI security measures.

Reference

Jonas explains how neural networks can be exploited, highlighting the risks of deploying LLM agents that interact with the real world.