Experiment Reveals Modern LLMs Converge into Two Stylistic Families
Published: Apr 21, 2026 04:34
r/ArtificialInteligenceAnalysis
A new experiment sheds light on how modern Large Language Models (LLMs) group into two distinct stylistic families. Researchers embedded the outputs of 25 different models with Google's Gemma 4 and used the resulting vectors to build a heatmap of model 'personalities'. The clustering suggests that many newer, cheaper models share the stylistic signature of one of two industry leaders: GPT or Claude.
Key Takeaways
- 25 different Large Language Models (LLMs) were evaluated on the same 50 prompts to analyze their stylistic outputs.
- Researchers extracted high-dimensional vectors (107,520 dimensions) from Gemma 4 to compute cosine similarity and map model relationships.
- The resulting heatmap shows that modern models converge into either a 'GPT resemblance' or a 'Claude resemblance' family.
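The pipeline described above (per-model embedding vectors → pairwise cosine similarity → heatmap matrix) can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual code: the random `embeddings` array stands in for whatever per-model vectors they extracted from Gemma 4, and only the dimensions (25 models, 107,520-dimensional vectors) come from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, dim = 25, 107_520

# Placeholder for the real per-model vectors (e.g. pooled Gemma 4
# activations averaged over the 50 shared prompts) -- random here.
embeddings = rng.normal(size=(n_models, dim))

# Cosine similarity: L2-normalize each row, then take the Gram matrix.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
unit = embeddings / norms
similarity = unit @ unit.T  # (25, 25) matrix, entries in [-1, 1]

print(similarity.shape)
```

Plotting `similarity` (e.g. with `matplotlib.pyplot.imshow`) after reordering rows by cluster is what would produce the block-diagonal red regions the post describes.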
Reference / Citation
"A very clear two cluster split: Top left red/orange block → 'GPT resemblance' family (GPTs, Grok 4.x, DeepSeek, MiniMax, Kimi, Trinity, etc.). Bottom right red block → 'Claude resemblance' family (Claude Opus/Sonnet, GLM, Qwen, Gemini 3.1 Pro)"