Breaking New Ground: StepFun and Qianli Technology Join Forces to Build a Native Autonomous Driving Foundation Model from Scratch!
Business | Autonomous Driving | Blog
Analyzed: Apr 29, 2026 07:57 | Published: Apr 29, 2026 15:42
1 min read | InfoQ China Analysis
This is a significant development for the autonomous vehicle industry: Qianli Technology and StepFun are moving beyond standard end-to-end systems. By building a native driving foundation model from scratch, rather than fine-tuning an existing large language model (LLM), they aim to give the model a deeper understanding of the physical world. The approach bridges the gap between advanced AI research and large-scale commercial deployment in the automotive sector.
Key Takeaways
- Qianli Technology and StepFun are building a "native autonomous driving foundation model" that understands driving from its inception, rather than layering driving capabilities onto an existing model.
- The collaboration combines massive general corpora with real-world driving perception data during the pre-training phase, giving the model an innate grasp of 3D space and vehicle dynamics.
- Qianli Technology plans to scale its ASD system to over 1 million vehicles by the end of 2026, creating a data-and-model feedback loop.
Reference / Citation
"Not all large models are suitable for moving towards L4. Using an open-source large language model for post-training to obtain an intelligent driving assistance model has a relatively limited upper limit of capability."