Tencent Unveils High-Performance 'Hy3 Preview': A Highly Efficient 295B MoE Model

Product · LLM · Blog | Analyzed: Apr 24, 2026 02:40
Published: Apr 24, 2026 02:32
1 min read
Gigazine

Analysis

Tencent has entered the latest round of the generative AI race with the launch of its new model, Hy3 preview. The model uses a Mixture of Experts (MoE) architecture with 295 billion total parameters, of which only 21 billion are active per inference step, trading a large parameter pool for a much smaller per-token compute cost. It is an efficiency-focused design that aims to keep the capability of a very large model while making high-performance inference far cheaper.
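To illustrate why a 295B-A21B model is cheap to run, here is a minimal sketch of MoE routing. This is not Hy3's actual architecture; the expert count, top-k value, and dimensions are hypothetical, chosen only to show the mechanism: a small router scores all experts per token, but only the top-k expert weight matrices are ever multiplied.

```python
import numpy as np

# Illustrative Mixture-of-Experts (MoE) routing sketch -- NOT Hy3's
# actual architecture. All sizes below are made-up for demonstration.
rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the layer (hypothetical)
TOP_K = 2         # experts activated per token (hypothetical)
D_MODEL = 16      # hidden dimension (hypothetical)

# Each expert is a simple feed-forward weight matrix; the router is a
# small linear map that scores each expert for a given token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w                # one score per expert
    top = np.argsort(logits)[-TOP_K:]    # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen k only
    # Only TOP_K of NUM_EXPERTS expert matrices are touched for this token,
    # so active compute scales with TOP_K, not with total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_layer(token)
print(out.shape)  # (16,)
print(f"active experts per token: {TOP_K}/{NUM_EXPERTS}")
```

In a real 295B-A21B model the same principle applies at scale: all 295B parameters exist in memory, but each token's forward pass only exercises the roughly 21B that its routers select.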
Reference / Citation
"Tencent has released the high-performance inference model "Hy3 preview", a 295B-A21B MoE model demonstrating high efficiency."
Gigazine, Apr 24, 2026 02:32
* Cited for critical analysis under Article 32.