Port of OpenAI's Whisper model in C/C++
Hacker News analysis · Published: Dec 6, 2022
This Hacker News post highlights a C/C++ implementation of OpenAI's Whisper model. The developer reimplemented the inference from scratch, resulting in a lightweight, dependency-free version. Performance is strong, particularly on Apple Silicon, where the author reports inference roughly 2-3x faster than the original PyTorch implementation. Portability is another key feature, with examples for iPhone, Raspberry Pi, and WebAssembly.
Key Takeaways
- A C/C++ implementation of OpenAI's Whisper model is available.
- The implementation is lightweight and dependency-free.
- It offers significant performance improvements, especially on Apple Silicon.
- The model is portable and runs on various devices, including iPhone, Raspberry Pi, and WebAssembly.
Reference / Citation
"The implementation runs fully on the CPU and utilizes FP16, AVX intrinsics on x86 architectures and NEON + Accelerate framework on Apple Silicon. The latter is especially efficient and I observe that the inference is about 2-3 times faster compared to the current PyTorch implementation provided by OpenAI when running it on my MacBook M1 Pro."