PUBLISHER: ResearchInChina | PRODUCT CODE: 1797625
Patent Trend: Three Major Directions of Intelligent Cockpits in 2025
This report explores the development trends of cutting-edge intelligent cockpits from the perspective of patents. The research scope covers 8 categories of products, i.e., new cockpit display, cockpit interaction, in-cabin monitoring, infotainment, digital key, smart seat, smart speaker, and panoramic sunroof.
This report obtains a relatively effective patent dataset for intelligent cockpits by way of keyword search, data cleaning, and duplicate checking in different segments, and conducts cross-over analysis to characterize patents in the intelligent cockpit field at the current stage, covering the number of patents, patent distribution, hot patent technology maps, rankings of OEMs and suppliers by number of patents, tracking of featured patents, and development trends of each segment.
About 5,000 Intelligent Cockpit Patents Are Published in China Every Year
As of May 31, 2025, intelligent cockpit patents in China totaled 53,673, of which 44,267 have been published since 2015. Since 2021, around 5,000 new patents have been published each year.
In terms of patent types, intelligent cockpit patents are concentrated in two categories: cockpit interaction and infotainment. In addition, smart seat, digital key, and cockpit display are also key research areas for supply chain companies.
In terms of applicant types, suppliers are the largest holders of intelligent cockpit patents, accounting for as high as 57.1% of the total number of patents; OEMs rank second, making up 17.0%; universities/research institutions take a 13.0% share.
The patent layout of suppliers shows the following characteristics:
Comprehensive cockpit suppliers such as Baidu and PATEO specialize in infotainment and cockpit interaction businesses, generally with over 50 infotainment patents, and over 20 cockpit interaction patents.
FUTURUS, Jingwei Hirain, and others focus their layout on cockpit display.
Huawei, Shenzhen Yinwang Intelligent Technology, and others place emphasis on intelligent interiors such as smart seats.
The patent layout of OEMs shows the following characteristics:
Infotainment and cockpit interaction, as the foundation of intelligent cockpits, have become the R&D focus of various OEMs.
FAW, Changan, and Dongfeng have applied for more patents in cockpit displays. They use technologies such as AR HUD, 3D display, and holographic images to create immersive cockpit experiences.
Geely, Chery, FAW, Dongfeng, etc. have invested more in R&D of smart seats to improve ride comfort, generally with over 50 patents.
As a convenience feature, digital keys have become a focus of competing patent filings by multiple OEMs such as Changan, Chery, Geely, and BESTUNE.
The intelligent cockpit acts as the core carrier through which users perceive their vehicles. Its future development will pay more attention to technology integration, user experience, and ecosystem construction.
Technology Integration: Intelligent cockpits are deeply integrated with technologies such as autonomous driving, the Internet of Vehicles, and AI, gradually breaking down the computing power barriers (e.g., cockpit-driving integration), time-space barriers (e.g., the Internet of Vehicles, satellite communication, V2X), and function barriers (e.g., scenario linkage), realizing functional collaboration, and enabling full-scenario coverage of cockpit services.
Enhanced User Experience: User experience gets improved by personalized and customized services and scenario-based applications. For example, accurate user identification is achieved through multi-modal perception, and the cockpit environment is adjusted according to the user's status and emotion. Based on user profile and historical data, proactive content and service recommendations are provided to achieve a personalized and customized experience. For the functional requirements in different scenarios such as rest mode, sightseeing mode, office mode, and child-care mode, multiple functions are linked to achieve convenient operation and an immersive user experience.
The development of AI has achieved fruitful results in both intelligent driving and intelligent cockpits. However, today's isolated designs cannot integrate different sensors, and cross-domain integration still has a long way to go. At the current stage, the integration of intelligent cockpit and intelligent driving mainly stays at the level of simple domain information interaction, where intelligent driving scenarios are displayed through the cockpit, or driving and riding instructions are transmitted through it. Baidu, Changan, and others have taken important steps toward integrating intelligent cockpit and intelligent driving based on business needs.
Case 1: Baidu's Patent on the Integration of Autonomous Driving and Intelligent Cockpit
In July 2025, Baidu disclosed a patent: Method, Device, Equipment and Medium for Determining Passenger Status Based on Unmanned Driving.
Patent Number: CN120308129A
Characteristics: Applied to unmanned vehicles, the method determines whether a passenger has alighted according to multi-dimensional environmental monitoring information (door, seat, vision) obtained from intelligent cockpit sensors, thereby improving the accuracy of that judgment.
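The patent text does not disclose its exact fusion logic, but the multi-source idea can be sketched as a simple majority vote over the three signal families it names (door, seat, vision). The field names and the two-of-three voting rule below are illustrative assumptions, not the patented algorithm.

```python
from dataclasses import dataclass

@dataclass
class CabinSnapshot:
    door_opened_then_closed: bool   # door sensor: an open/close cycle occurred
    seat_occupied: bool             # seat pressure sensor reading
    person_visible: bool            # in-cabin camera detection

def passenger_has_alighted(s: CabinSnapshot) -> bool:
    """Require agreement of at least two of the three sensor signals,
    so a single noisy sensor cannot decide the passenger's status alone."""
    votes = [
        s.door_opened_then_closed,
        not s.seat_occupied,
        not s.person_visible,
    ]
    return sum(votes) >= 2

# Door cycled and seat is empty, but the camera still "sees" a person
# (e.g. a bag misdetected): two of three votes still say "alighted".
print(passenger_has_alighted(
    CabinSnapshot(door_opened_then_closed=True, seat_occupied=False, person_visible=True)
))
```

The point of the vote is robustness: any one of the three monitoring channels can fail without flipping the overall decision.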
Case 2: Changan's Data Transmission Method for Intelligent Driving and Intelligent Cockpit
In January 2025, Changan Automobile announced a patent: Intelligent Driving Data Transmission Method, Device, Intelligent Cockpit System and Vehicle.
Patent Number: CN119636612A
Characteristics: It includes receiving intelligent driving data based on an external Ethernet interface, and receiving demand information of each functional application in the intelligent cockpit system based on an internal Ethernet interface. Finally, it realizes the internal data distribution of the intelligent cockpit system based on Ethernet, supporting real-time data interaction for functions such as navigation, entertainment, and driving assistance.
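The external-receive / internal-fan-out pattern described above resembles a topic-based publish-subscribe dispatcher. The sketch below is an illustrative model of that pattern only, with hypothetical topic names; it is not the patented implementation and omits the actual Ethernet transport.

```python
from collections import defaultdict
from typing import Callable

class CockpitDistributor:
    """Fans intelligent-driving data out to cockpit applications.

    Driving-domain data arrives via the "external" interface; each
    functional application registers its demand via the "internal"
    interface, and matching data is distributed to all registrants.
    """

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def register_demand(self, topic: str, callback: Callable[[dict], None]) -> None:
        # Internal interface: an application declares the data it needs.
        self._subscribers[topic].append(callback)

    def on_driving_data(self, topic: str, payload: dict) -> None:
        # External interface: data received from the driving domain.
        for callback in self._subscribers[topic]:
            callback(payload)

dist = CockpitDistributor()
dist.register_demand("lane_info", lambda d: print("navigation got", d))
dist.register_demand("lane_info", lambda d: print("assist HMI got", d))
dist.on_driving_data("lane_info", {"lane": 2})
```

One payload received externally reaches every internal application that declared a demand for it, which is what enables the real-time data sharing across navigation, entertainment, and driving-assistance functions.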
The intelligent cockpit is evolving from "function stacking" to a "cognitive hub". Deep integration with AI technology is reconstructing the human-vehicle interaction model. With the application of AI foundation models, cockpit interaction becomes more intelligent, proactive, and emotional, and cockpit functions tend toward simpler operation.
Case 1: XPeng's Patent on Vision-Language Model
In July 2025, XPeng disclosed a patent: Request Processing Method, Device, Equipment and Medium Based on Vision Language Model.
Patent Number: CN120259425A
Technical Features: It realizes in-depth understanding of graphic and text information, and task execution through a vision-language model (VLM), aiming to solve the problem of low recognition accuracy of target objects (such as icons, characters, and wireframes) on the IVI screen, especially in recognition of small targets and complex-feature objects. Its core value lies in the deep adaptation of the vision-language model to IVI scenarios, providing a feasible technical solution for precise interaction in intelligent cockpits and promoting the upgrade of vehicle systems from "passive response" to "precise understanding".
Case 2: iGentAI Computing's Intelligent Cockpit System Based on AI Foundation Model
In July 2025, iGentAI Computing disclosed a patent: Vehicle Intelligent Cockpit Terminal and System Based on AI Foundation Model.
Patent Number: CN120096602B
Technical Features: Its core is to analyze multi-dimensional sensor data in real time through an AI foundation model, predict dangerous states in advance, and trigger hierarchical emergency measures to improve vehicle safety performance.
The sensor data collected in real time in this patent includes:
Environmental data: interior and exterior vehicle temperature (accuracy: ±0.5°C), smoke concentration, ambient noise;
Mechanical data: battery internal pressure, vehicle body deformation (in millimeters), collision acceleration (tri-axis G-value);
Visual data: postures of in-vehicle occupants collected by cameras, images of external obstacles (resolution: 1280×720, 15fps).
The data is collected via the CAN/LIN bus at a 100ms cycle. After filtering and normalization, it is concatenated with the previous 5 seconds of data to form a temporal sequence, providing historical context for model prediction.
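The windowing arithmetic above implies that a 5-second history at a 100 ms cycle is 50 samples. The following is a minimal sketch of that buffering step under stated assumptions: the patent does not specify its filter or normalization, so a simple exponential low-pass filter and min-max scaling stand in for them here.

```python
from collections import deque

WINDOW_SAMPLES = 50  # 5 s of history at one sample per 100 ms

class TemporalBuffer:
    """Filters, normalizes, and windows one sensor channel for the model."""

    def __init__(self, lo: float, hi: float) -> None:
        self.lo, self.hi = lo, hi          # expected physical range
        self.window = deque(maxlen=WINDOW_SAMPLES)
        self._last: float | None = None

    def push(self, raw: float) -> list[float]:
        # Exponential low-pass filter (stand-in for the unspecified filtering).
        filtered = raw if self._last is None else 0.8 * self._last + 0.2 * raw
        self._last = filtered
        # Min-max normalization to [0, 1] (stand-in for the normalization step).
        norm = (filtered - self.lo) / (self.hi - self.lo)
        self.window.append(norm)
        # The returned window is the temporal sequence fed to the model.
        return list(self.window)

buf = TemporalBuffer(lo=-40.0, hi=85.0)  # e.g. a cabin-temperature channel
for reading in [20.0, 21.0, 60.0]:
    seq = buf.push(reading)
print(len(seq), round(seq[-1], 3))
```

Because `deque(maxlen=50)` discards the oldest sample automatically, the buffer always holds exactly the previous 5 seconds once full, giving the model a fixed-length historical context.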
Scenario-based application is one of the core development directions of intelligent cockpits. Its essence is to transform the vehicle into a "third living space" that can actively perceive, understand, and respond to user needs through the collaboration of software and hardware.
The scenario-based functions of intelligent cockpits mainly involve three aspects:
Scenario definition: combining users' functional requirements with real-time scenarios to provide active control of the hardware system.
Dynamic scenario switching: allowing users to customize trigger conditions (e.g., temperature and time) that automatically link hardware modules such as the air conditioner and seats.
Emotional interaction: using fuzzy intention recognition (e.g., "It's stuffy in the car") and edge learning to realize natural interaction in specific scenarios.
Case 1: Foryou's Intelligent Cockpit Control Method Based on Scenario Mode
In May 2025, Foryou (Huayang) disclosed a patent: A Method for Controlling An Automobile Intelligent Cockpit Based on A State Machine.
Patent Number: CN120029128A
Technical Features: According to users' functional requirements, cockpit scenarios are divided into 7 categories (vehicle dormancy, regular vehicle use, remote control, OTA update, sentry mode, charging/discharging, and life detection), and 5 corresponding target working modes are defined. Each mode specifies the power supply state, network state, and functional system state, enabling the cockpit system to operate flexibly, efficiently, and with low power consumption in different scenarios.
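The 7-scenarios-to-5-modes structure can be sketched as two lookup tables: scenario to target working mode, and mode to power/network states. The mode names and state assignments below are illustrative assumptions; the patent's actual tables are not disclosed in this summary.

```python
# Seven cockpit scenarios map onto five target working modes (assumed names).
SCENARIO_TO_MODE = {
    "dormancy":       "sleep",
    "regular_use":    "full",
    "remote_control": "standby_net",
    "ota_update":     "standby_net",
    "sentry":         "low_power_monitor",
    "charging":       "low_power_monitor",
    "life_detection": "safety",
}

# Each mode fixes the power supply state and network state (assumed values).
MODE_STATES = {
    "sleep":             {"power": "off", "network": "down"},
    "full":              {"power": "on",  "network": "up"},
    "standby_net":       {"power": "low", "network": "up"},
    "low_power_monitor": {"power": "low", "network": "intermittent"},
    "safety":            {"power": "on",  "network": "up"},
}

def enter_scenario(scenario: str) -> dict:
    """Resolve a scenario to its working mode and the states that mode implies."""
    mode = SCENARIO_TO_MODE[scenario]
    return {"mode": mode, **MODE_STATES[mode]}

print(enter_scenario("sentry"))
```

Collapsing seven scenarios into five modes is what yields the low-power behavior: scenarios with identical resource needs (e.g., sentry and charging here) share one mode instead of each keeping the full system awake.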
Case 2: SERES' Scenario Classification Model Training Method
In February 2025, SERES disclosed a patent: Scenario Classification Model Training Method, Device, Computer Equipment and Storage Medium.
Patent Number: CN119399532A
Technical Features: Focused on the in-cabin viewing scenario, the patent trains a scenario classification model to recognize wind-direction scenarios in videos and dynamically adjusts the air conditioner's air supply direction accordingly, enhancing the immersive in-vehicle viewing experience.
Case 3: NIO's Multimedia Content Interaction Based on Vehicle Cockpit
In June 2025, NIO disclosed a patent: Multimedia Content Interaction Method, Device, Equipment and Storage Medium Based on Vehicle Cockpit.
Patent Number: CN120191303A
Technical Features: For scenarios such as audio-visual media, interactive games, and music playback, connect multimedia content with vehicle hardware through an interface encapsulation platform system, constructing a real-time "perception-decision-execution" interaction process. For example, when playing a movie, the system automatically dims the lights, adjusts the seat angle, and releases fragrances corresponding to the scenario via the air conditioner (e.g., fresh fragrance for ocean scenario).
The hardware that can be called in this patent includes:
Suspension system: Simulate motion effects (e.g., vibration and tilt) in videos/games, with parameters including response intensity and frequency.
Air conditioning system: Adjust temperature, air volume, and air direction according to scenarios.
Ambient light system: Synchronize with screen brightness, color, or audio rhythm.
Fragrance system: Release scents matching the scenario (e.g., forest and restaurant).
Audio system: Spatial sound positioning to enhance immersion.
Seating system: Trigger massage and heating/ventilation functions according to content.
Steer-by-wire system: Simulate steering wheel feedback during game interaction (decoupled from tire steering to ensure safety).
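The encapsulation idea behind the hardware list above can be sketched as a table mapping a content scenario tag to coordinated commands for the devices involved. The scenario tags, device names, and parameters below are illustrative assumptions, not the patent's actual interface.

```python
# Each content scenario maps to a coordinated set of hardware commands
# behind one encapsulated interface (hypothetical tags and parameters).
SCENARIO_ACTIONS = {
    "movie_ocean": [
        ("ambient_light", {"brightness": 0.1, "color": "deep_blue"}),
        ("seat",          {"recline_deg": 30}),
        ("fragrance",     {"scent": "fresh"}),
    ],
    "racing_game": [
        ("suspension",    {"effect": "vibration", "intensity": 0.6}),
        ("steer_by_wire", {"feedback": "force", "decoupled": True}),
    ],
}

def play_scenario(tag: str) -> list[str]:
    """Translate a content scenario into hardware commands (stubbed as strings)."""
    return [f"{device}: {params}" for device, params in SCENARIO_ACTIONS[tag]]

for cmd in play_scenario("movie_ocean"):
    print(cmd)
```

The value of the encapsulation is that content producers address one scenario interface rather than seven separate device APIs; adding a device means extending the table, not changing the content.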