Tencent Releases WeDLM 8B Instruct on Hugging Face
Published: Dec 29, 2025 07:38
• 1 min read
• r/LocalLLaMA
Analysis
This announcement covers Tencent's release of WeDLM 8B Instruct, a diffusion language model, on Hugging Face. The headline claim is speed: the model reportedly runs 3-6× faster than vLLM-optimized Qwen3-8B on math reasoning tasks, which matters because inference speed is a major factor in LLM usability and deployment. The post originates from Reddit's r/LocalLLaMA, suggesting interest from the local LLM community. The announcement itself offers little detail, so the performance claims, the model's capabilities beyond math reasoning, and its architecture and training data all need independent verification; the Hugging Face page is the natural starting point for that.
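Since the post only links to Hugging Face, a minimal loading sketch is shown below. The repo id `tencent/WeDLM-8B-Instruct` and the need for `trust_remote_code` are assumptions for illustration, not details confirmed by the announcement; check the actual model card before use.

```python
# Minimal sketch of pulling the model from Hugging Face, assuming the repo id is
# "tencent/WeDLM-8B-Instruct" and that the checkpoint ships custom diffusion-LM code
# (hence trust_remote_code=True). Neither detail is confirmed by the announcement.
from transformers import AutoTokenizer, AutoModel

repo_id = "tencent/WeDLM-8B-Instruct"  # hypothetical repo id; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True, device_map="auto")

# Diffusion language models usually expose generation through model-specific methods
# defined in the remote code; consult the model card rather than assuming .generate().
```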
Key Takeaways
- Tencent releases WeDLM 8B Instruct on Hugging Face.
- The model claims significant speed improvements in math reasoning.
- Further research is needed to validate performance and capabilities.
Reference
“A diffusion language model that runs 3-6× faster than vLLM-optimized Qwen3-8B on math reasoning tasks.”