PUBLISHER: ResearchInChina | PRODUCT CODE: 1744403
Research on intelligent cockpit platforms: in the first year of mass production of L3 AI cockpits, the supply chain accelerates deployment of new products
An intelligent cockpit platform primarily refers to the software and hardware architecture that implements the various subsystems and functions of an intelligent cockpit. It consists of two parts: hardware and software. The hardware part is mainly the underlying hardware platform composed of a variety of heterogeneous chips; the software part is mainly the software platform product composed of the underlying operating system, middleware, and an application layer, which enables cockpit system functions.
In this report, referencing the cockpit classification standards of China SAE, we further categorize vehicle intelligent cockpit platforms into:
L0, Functional Cockpits, providing basic IVI services;
L1, Perceptive Cockpits, equipped with connection and OTA capabilities, and basic in-cabin perception capabilities (e.g., voice);
L2, Partially Cognitive Intelligent Cockpits, which boast enhanced in-cabin and out-of-cabin cognitive capabilities. Some support DMS/OMS, V2X, biometrics, etc., and can actively interact with the driver; most support on-device foundation models with 100 million to 2 billion parameters;
L3, Highly Cognitive AI Cockpits, featuring robust in-cabin AI performance, all-scenario active perception, and the ability to make intelligent decisions and autonomously execute tasks in certain scenarios, and supporting on-device foundation models with 5 to 15 billion parameters;
L4, Fully Cognitive AI Cockpits, supporting driverless operation, active execution, remote cloud control, emotional interaction, and on-device foundation models with 30+ billion parameters.
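The five-level classification above maps cleanly onto a small lookup structure. A minimal sketch: the level names and on-device parameter ranges are the figures quoted above, while the classifier function and the handling of the uncovered 2–5 billion gap are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class CockpitLevel:
    """One level of the China SAE-referenced cockpit classification."""
    level: str
    name: str
    # On-device foundation-model parameter range (min, max) in billions;
    # None where the report quotes no figure (L0/L1).
    on_device_params_b: Optional[Tuple[float, float]]

COCKPIT_LEVELS = [
    CockpitLevel("L0", "Functional Cockpit", None),
    CockpitLevel("L1", "Perceptive Cockpit", None),
    CockpitLevel("L2", "Partially Cognitive Intelligent Cockpit", (0.1, 2.0)),
    CockpitLevel("L3", "Highly Cognitive AI Cockpit", (5.0, 15.0)),
    CockpitLevel("L4", "Fully Cognitive AI Cockpit", (30.0, float("inf"))),
]

def classify_by_model_size(params_b: float) -> str:
    """Return the highest level whose on-device model range contains params_b."""
    for lv in reversed(COCKPIT_LEVELS):
        rng = lv.on_device_params_b
        if rng and rng[0] <= params_b <= rng[1]:
            return lv.level
    return "L0/L1"  # no on-device model range quoted, or in a gap between ranges
```

For example, a cockpit running a 7-billion-parameter on-device model falls in the L3 band.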
Specifically, this research divides intelligent cockpit platform products by framework into 7 major categories and over 40 subcategories, exploring product innovation directions, cockpit architecture evolution, and the supply chain construction of OEMs and Tier 1s.
Cockpit platform innovation directions: the pace of deploying new hardware and software products like Qualcomm 8397 Platform and AI Agent quickens
As intelligent cockpits evolve into a "third living space" that integrates multimodal interaction, real-time data processing, and immersive experiences, higher requirements are placed on cockpit hardware platforms. In general, on the basis of cost control, innovation and upgrade can be carried out mainly in the following aspects:
High Computing Power Requirement: Driven by requirements for on-device AI foundation models, 3D HMI, in-vehicle gaming, immersive interaction, and cockpit-driving integration, cockpits need high computing power to support complex algorithms. Mainstream high-end cockpit SoCs in mass production now offer CPU compute of 200+ KDMIPS, with continuous improvements in GPU and NPU performance.
High Bandwidth Requirement: Functions such as on-device AI foundation models, multiple sensors, multi-screen displays, and multimodal interaction impose higher requirements on multi-source data transmission speed, smooth multi-screen interaction, and real-time multimedia playback, so high bandwidth is needed to support intelligent cockpit functions. As parameter counts grow, on-device multimodal models demand ever more memory capacity and read bandwidth: DRAM bandwidth is evolving from tens of GB/s to 100+ GB/s, and specifications are being upgraded to LPDDR5/5X.
High-Speed Transmission Requirement: The requirements of cockpits for high-speed communication lie in ensuring fast, low-latency, reliable, stable, and secure data transmission to support ever more cockpit functions and applications.
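The bandwidth requirement above can be made concrete with a back-of-the-envelope estimate: for a memory-bandwidth-bound LLM, decoding each token requires streaming roughly the full weight set from DRAM once, so tokens/s ≈ bandwidth ÷ weight bytes. A minimal sketch; the model sizes and quantization widths below are illustrative assumptions, not figures from the report.

```python
def decode_tokens_per_s(params_b: float, bytes_per_param: float, dram_gb_s: float) -> float:
    """Upper-bound decode rate for a memory-bandwidth-bound LLM:
    each generated token reads all weights from DRAM once."""
    weight_gb = params_b * bytes_per_param  # e.g. 7B params at INT8 -> ~7 GB
    return dram_gb_s / weight_gb

# A 7B model quantized to INT8 on a cockpit SoC with ~100 GB/s of DRAM
# bandwidth tops out around 14 tokens/s; FP16 weights halve that.
```

This is why the jump from tens of GB/s to 100+ GB/s (and from LPDDR4X to LPDDR5/5X) matters directly for on-device foundation-model responsiveness.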
For instance, amid the rapid development of intelligent cockpits, PATEO CONNECT+ is expanding its portfolio with a diverse range of innovative products to meet the demand for higher-performance cockpits, such as in-vehicle AI foundation models and immersive 3D experiences. Examples include its new-generation intelligent cockpit solution powered by the Qualcomm Snapdragon Cockpit Elite (QAM8397P) and its device-cloud integrated AI Agent platform.
PATEO's Next-Gen Cockpit Platform: Based on the Qualcomm Snapdragon Cockpit Elite (QAM8397P)
In April 2025, PATEO CONNECT+ announced that its next-gen intelligent cockpit solution, powered by the Qualcomm Snapdragon Cockpit Elite (QAM8397P), will help global OEMs offer top-tier cockpit products for central computing architectures and deliver high-performance, highly scalable, and highly secure cockpit solutions. With CPU compute of 660K DMIPS and AI compute of 360 TOPS, the fifth-gen Qualcomm cockpit SoC can support up to 16 4K displays, real-time ray tracing, and immersive 3D experiences. Its built-in Hexagon NPU can not only run large language models to provide services such as a local user manual and Q&A, but also learn and adapt to user preferences and even execute certain actions automatically, enhancing the in-operation performance of on-device AI foundation models.
PATEO's AI Solution: PATEO Device-Cloud Integrated AI Agent Platform
PATEO CONNECT+ is developing a device-cloud integrated AI Agent platform for intelligent cockpits, built on powerful foundation models and the unique requirements of the Internet of Vehicles. Through tight integration of cloud and vehicle, the platform enables more advanced features, for example, precise vehicle-manual recognition and intelligent Q&A, role-playing, and casual conversation, bringing a new human-vehicle interaction experience and driving intelligent cockpit evolution.
Cockpit platform innovation directions: on-device AI foundation model parameters grow from 1 billion to 7 billion, and even 10 billion
AI foundation models are transforming the automotive industry. They are enabling a new generation of immersive cockpit experience in terms of human-computer interaction, driving and riding experience, and in-vehicle entertainment services, transforming cockpits from a tool into an "emotional and soulful partner".
In intelligent cockpits, AI foundation models typically rely on a device-cloud cooperative architecture: the cloud handles large-scale computation and data storage, while the device ensures low-latency real-time responses and user privacy protection. Because cloud foundation models are constrained by privacy and security, latency, stability, cost, and so on, part of the computation and storage for AI foundation models will in the future need to be handled by in-vehicle computing resources, which requires adding on-device AI foundation models. In the past two years, suppliers have launched new hardware and software products to facilitate this evolution toward AI cockpit solutions.
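A device-cloud cooperative setup of the kind described above is often implemented as a simple request router: privacy-sensitive or latency-critical requests stay on-device, while heavy open-ended requests go to the cloud model. A hypothetical sketch; the routing criteria, latency figure, and names are illustrative assumptions, not PATEO's or any vendor's actual design.

```python
from dataclasses import dataclass

@dataclass
class CockpitRequest:
    text: str
    privacy_sensitive: bool   # e.g. involves cabin-camera or location data
    latency_budget_ms: int    # how quickly the cockpit must respond

CLOUD_ROUND_TRIP_MS = 300  # assumed network + cloud inference latency

def route(req: CockpitRequest) -> str:
    """Decide where to run inference for one cockpit request."""
    if req.privacy_sensitive:
        return "device"   # raw cabin data never leaves the vehicle
    if req.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "device"   # a cloud round trip would miss the budget
    return "cloud"        # large-model quality, no hard constraint
```

The same structure also covers the stability concern mentioned above: on network loss, everything simply falls back to the on-device branch.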
On-device AI hardware requirement: single-chip solutions like MediaTek's 3nm MT8678 and dual Qualcomm Snapdragon 8295 hardware platforms
Currently, in-vehicle cockpit AI foundation models have only about 1 billion parameters or fewer, leaving performance somewhat lacking. Future intelligent cockpits will need to support AI large language models with over 7 billion parameters, requiring a higher-compute, higher-performance hardware base for cockpit products.
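Why the jump from 1 billion to 7+ billion parameters strains cockpit hardware can be seen from the weight footprint alone. A minimal sketch; the bit widths are standard quantization levels, chosen here as illustrative assumptions.

```python
def model_memory_gb(params_b: float, bits_per_param: int) -> float:
    """Approximate weight-storage footprint of an on-device model,
    ignoring KV cache, activations, and runtime overhead."""
    return params_b * bits_per_param / 8  # GB

# A 7B model needs 14 GB of weights at FP16 -- more than many cockpit
# SoCs' entire DRAM budget; INT4 quantization cuts that to 3.5 GB.
```

This is why larger on-device models arrive hand in hand with aggressive quantization and with the memory-capacity upgrades noted earlier.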
For example, BICV's MARS 06 Intelligent Cockpit, based on the MT8678 platform with NPU compute of 46+ TOPS, supports on-device AI foundation models (e.g., LLMs and Stable Diffusion) with 13 billion parameters, and allows for integration of IVI, cluster, AVM, DMS, IMS, AI foundation models, and APA.
In addition to improving single-chip performance, some OEMs even adopt dual cockpit SoCs to ensure low-latency, efficient, and stable operation of cockpits. For example, the cockpit of Lynk & Co 900 launched in 2025 uses dual Qualcomm Snapdragon 8295 chips to build a high-compute base, enabling such cockpit features as eight-screen interconnection (including a 30-inch 6K integrated main screen, rear 30-inch 6K floating entertainment screens, and a 95-inch AR-HUD), 3D visual desktop, cross-screen gestures, immersive audio effects, and emotional interaction via the "AI Assistant".
However, hardware upgrades are often accompanied by surging costs. How to find a balance between performance improvement and cost control has become a core issue for OEMs and suppliers.
On-device AI software requirement: cockpits are connected to AI foundation models and integrate capabilities of foundation models like DeepSeek
In the past two years, OEMs have deployed cockpit AI foundation models in their vehicles, significantly enhancing interaction naturalness, adaptability to scenarios, and personalization. For example, Geely has introduced its full-stack self-developed Xingrui AI Foundation Model into vehicles, covering three foundation models: a large language model, a large multimodal model, and a digital twin model.
In February 2025, Geely's Xingrui AI Foundation Model was deeply coupled with DeepSeek-R1. Basic intents are processed by the Xingrui AI Foundation Model as System 1, while complex intents are processed by System 2, consisting of the Xingrui VLM perception model and the DeepSeek reasoning model. After either system makes a decision, commands are sent to the execution system to control functions including intelligent driving, the intelligent cockpit, generative interaction, and UI Agent interaction. The decision-making process also draws on signal libraries, function libraries, and external information sources.
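The two-system split described above amounts to an intent-complexity dispatch: simple intents take a fast path, complex ones a slower perception-plus-reasoning path, and both feed the execution system. A hypothetical sketch of that control flow; the function names and stand-in logic are illustrative, not Geely's implementation.

```python
def handle_intent(intent: str, is_complex: bool) -> str:
    """Route a cockpit intent as the report describes:
    System 1 (fast foundation model) for basic intents,
    System 2 (VLM perception + reasoning model) for complex ones."""
    if is_complex:
        decision = system2_reason(intent)   # multi-step perception + reasoning
    else:
        decision = system1_respond(intent)  # direct low-latency response
    return execute(decision)                # commands go to the execution system

# Illustrative stand-ins for the three subsystems:
def system1_respond(intent: str) -> str:
    return f"fast:{intent}"

def system2_reason(intent: str) -> str:
    return f"deliberate:{intent}"

def execute(decision: str) -> str:
    return f"exec({decision})"
```

In practice the `is_complex` flag would itself come from an intent classifier; it is passed in explicitly here to keep the sketch self-contained.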
Cockpit platform innovation directions: cockpit applications like acoustics and HMI pursue immersive and high-quality experiences
As cockpit intelligence advances, the application layer of intelligent cockpits is progressing toward immersive, high-quality experiences. In cockpit acoustics, OEMs have in recent years shifted from relying on suppliers to a "self-development + supplier" model, and from traditional single sound-effect output to multi-dimensional sensory interaction. With software and hardware upgrading in tandem, acoustic systems are developing toward immersion, emotionality, scenario-based interaction, and personalization.
NIO ET9, launched in 2025, is equipped with the NIO Lyre 8.2.4.8 Sound System, an immersive system featuring 35 speakers and 2,800 W of power at 10% distortion. It introduces headrest speakers, a rear center surround unit, and a front-and-rear dual-subwoofer combination, creating a 360° sound field that envelops every seat and delivers cinema-level immersion.
At Auto Shanghai 2025, Zeekr showcased the Zeekr 9X, its new flagship SUV equipped with the latest Zeekr Sound MAX audio system. This 32-speaker system, with a maximum power of over 4,000 W, uses new materials and technologies such as graphene diaphragms and NANO-CELL, and supports 7.1.4 panoramic immersive sound.
Beyond the acoustic hardware upgrade, the Zeekr Sound system also offers a "Tuning Geek" function that enables independent adjustment of gain, delay, and equalization on every channel, plus whole-vehicle reverb adjustment, allowing users to create personalized audio effects. The Tuning Geek function also supports sharing: users can share their custom sound profiles with others, or import others' profiles via shareable codes.
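The per-channel adjustments such a tuning function exposes (gain, delay, equalization) map onto standard DSP operations. A minimal sketch of gain and delay applied to one channel's samples; the function and parameter names are illustrative, and this is not Zeekr's implementation.

```python
def apply_gain_and_delay(samples, gain_db, delay_samples):
    """Per-channel tuning step: scale a channel by gain_db decibels and
    delay it by a whole number of samples (zero-padded at the front)."""
    gain = 10 ** (gain_db / 20)  # dB -> linear amplitude factor
    return [0.0] * delay_samples + [s * gain for s in samples]
```

Delaying individual channels by a few samples is how time alignment compensates for each speaker's distance to the listening position; equalization would add a per-band filter on top of the same per-channel pipeline.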
As OEMs pay ever more attention to the functionality and experience of intelligent cockpit applications like in-car acoustics, corresponding products require innovation and upgrade. In response, suppliers are working to develop new products to meet market demands swiftly.
PATEO CONNECT+ has introduced an all-scenario intelligent interaction system for vehicles. It uses body panels to produce sound and integrates body-panel vibration sensing to enable innovative exterior interactions. By lightly tapping the body panels, users can activate voice recognition, open the trunk or doors, and project sound externally on the corresponding side. The system combines sound vibrators, vibration sensors, and control units, and offers multimodal interaction capabilities such as omnidirectional sound emission and perception outside the vehicle. It supports over 50 new application scenarios covering leisure and entertainment, exterior interaction, safe vehicle control, and more, including collision detection, external charging control, local rescue, interior-exterior intercom, and external stereo music playback.