At NVIDIA GTC in San Jose, Calif., DeepRoute.ai unveiled a 40-billion-parameter Vision-Language-Action Foundation Model that it said integrates perception, reasoning and vehicle control into a single architecture.
Tongyi Cao said the model performs three roles at once: acting as the driver to execute real-time actions, as the analyst to identify critical events and explain decisions, and as the critic to evaluate trajectories for safety and comfort.
DeepRoute.ai said the unified model automates large portions of the data pipeline, autonomously spotting high-value events such as near-misses and rare scenarios, performing root-cause analysis, and generating reasoning annotations without manual intervention.
Scale And Commercial Momentum
The firm said the architecture compresses a traditional closed-loop data cycle from more than five days to approximately 12 hours by automating data mining, diagnosis and behavior scoring, creating what it described as a self-evolving data flywheel.
Tongyi Cao said the reconstructed workflow compounds improvements directly into the model, and that each iteration enhances the system's ability to curate training data and accelerate capability growth.
DeepRoute.ai said it has delivered its advanced autonomous driving systems to more than 250,000 production vehicles and deployed them across more than 200,000 mass-produced consumer vehicles.
The company said it captured nearly 40 percent market share among third-party suppliers in the high-level autonomous driving segment in a single month, and that it is targeting deployment across one million vehicles on the timeline it announced.
DeepRoute.ai described the 40-billion-parameter Foundation Model as the cornerstone of its next-generation autonomous driving assistance and as a fundamental AI framework for the physical world, noting it is backed by top-tier investors with more than $700 million in funding.
At NVIDIA GTC, the company demonstrated how the model, combined with automated data curation and rapid iteration cycles, is intended to speed development toward scalable, safety-focused autonomous driving.