PUBLISHER: ResearchInChina | PRODUCT CODE: 1872685
Cross-Domain Integration Strategy Research: Automakers' Competition Extends to Cross-Domain Innovative Function Scenarios such as Cockpit-Driving, Powertrain, and Chassis
Cross-domain integration has become a key way for automakers to break through technical bottlenecks and build differentiated advantages, and it is the core trend in current intelligent-vehicle development. Although major players differ in focus and approach to inter-domain collaboration, they share the same development path: first consolidate the functions of a few domains onto a high-performance computing unit, then gradually aggregate the functions of more domains.
The core of cross-domain integration lies in "computing-power sharing", "data intercommunication", and "scenario extension". By integrating and restructuring domains, it achieves shared hardware resources, interoperable software services, and coordinated scenario linkage, ultimately enabling full-domain perception and intelligent decision-making across "human-vehicle-environment". It improves vehicle energy efficiency, driving pleasure, and ride comfort while simplifying the system and reducing cost, and through scalable, modular design it opens up a variety of innovative functions and scenarios.
Cross-Domain Integration Innovative Function and Scenario Exploration 1: Cockpit-Driving Integration Enables Smooth "Lane Change Collaboration"
In October 2025, the 2026 BAIC Arcfox αT5 went into mass production and on sale. It is equipped with Zhuoyu's cockpit-driving integrated solution built on a single Qualcomm SA8775P chip, relying on a 144 TOPS computing platform to deliver an advanced cockpit, urban and highway navigation assistance, automatic parking assistance, cross-floor memory parking, and other functions.
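To make the compute-sharing idea concrete, the sketch below shows a minimal single-SoC budget check in Python. The workload names and per-workload TOPS figures are illustrative assumptions, not Zhuoyu's actual allocation; only the 144 TOPS total comes from the text above.

```python
# Minimal sketch of compute budgeting on a single cockpit-driving SoC.
# Workload names and TOPS figures are illustrative assumptions only;
# they do not reflect Zhuoyu's actual allocation on the SA8775P.

SOC_BUDGET_TOPS = 144  # total AI compute of the platform (from the text)

workloads = {
    "urban_highway_noa": 90,   # assumed: perception + planning for NOA
    "parking_assist": 20,      # assumed: APA + cross-floor memory parking
    "cockpit_multimodal": 24,  # assumed: voice assistant, HMI rendering
}

def check_budget(budget, loads, headroom=0.05):
    """Verify that allocated workloads fit the SoC with a safety headroom."""
    used = sum(loads.values())
    limit = budget * (1 - headroom)
    print(f"allocated {used} / {budget} TOPS (limit {limit:.1f})")
    return used <= limit

assert check_budget(SOC_BUDGET_TOPS, workloads)
```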
Through the cockpit-driving integrated domain controller, the Arcfox αT5 deeply integrates core components, merging intelligent driving and the cockpit into one intelligent whole. Compared with the domain-centralized stage, where cockpit and intelligent driving ran independently, cockpit-driving integration brings shorter communication links, higher bandwidth, lower latency, and more responsive interaction. Urban NOA scenarios such as full-scenario detours, navigation lane changes, urban left and right turns, and cut-in response become smoother; the intelligent driving system and the cockpit voice assistant can run multiple tasks in parallel; and intelligent driving and the cockpit link up efficiently across all scenarios, delivering a more intelligent, integrated service experience.
Based on the cockpit-driving integration solution, the BAIC Arcfox αT5 supports a "lane change collaboration" scenario: once the intelligent driving system identifies a safe lane-change opportunity, it synchronizes the data to the cockpit in real time. The AR-HUD immediately projects a steering arrow, the audio system plays a prompt tone from the lane-change side, and the instrument cluster shows the traffic behind on that side. The whole interaction responds in less than 230 ms, and information transmission efficiency is improved by 40%.
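Architecturally, "lane change collaboration" amounts to one driving-domain event fanned out to several cockpit endpoints. The following is a minimal publish-subscribe sketch in Python; the topic and handler names are hypothetical, and a production vehicle would use middleware such as DDS or SOME/IP rather than in-process callbacks.

```python
# Minimal in-process pub/sub sketch of the "lane change collaboration" linkage.
# Topic and handler names are hypothetical; a real vehicle would use automotive
# middleware (e.g. DDS or SOME/IP) instead of Python callbacks.

from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self._subs[topic]:
            handler(msg)

bus = Bus()
# Cockpit endpoints react to one driving-domain event in parallel.
bus.subscribe("lane_change", lambda m: print(f"AR-HUD: arrow {m['side']}"))
bus.subscribe("lane_change", lambda m: print(f"Audio: chime from {m['side']} speakers"))
bus.subscribe("lane_change", lambda m: print(f"Cluster: show rear traffic on {m['side']}"))

# Driving domain publishes once a safe gap is confirmed.
bus.publish("lane_change", {"side": "left", "gap_m": 35.0})
```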
Cross-Domain Integration Innovative Function and Scenario Exploration 2: Driven by Cockpit-Driving Integration and AI Large Models, Evolving from "Voice Vehicle Control -> Voice Driving Control -> Seamless Human-Machine Co-Driving"
AI large models have steadily improved the capabilities of voice assistants. Voice control has moved from non-driving functions such as seats, windows, and air conditioning to actual driving control, and automakers such as Xpeng and Li Auto have launched voice driving control and similar scenario functions to further enhance the driving experience.
In May 2025, the Xpeng MONA M03 Max was officially launched. Its assisted driving supports voice-controlled left and right lane changes, with wake phrases strictly designed to avoid false triggering. It also debuts seamless human-machine co-driving, letting the intelligent assisted driving adapt to the driver's style: NGP does not compete with the driver for control, and the AI assisted driving does not exit when the steering wheel is turned, enabling smooth human-machine cooperation. The driver can adjust NGP's trajectory and speed by turning the steering wheel or lightly pressing the accelerator, changing lanes and accelerating at will for a smoother assisted driving experience.
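One way to read "NGP does not compete with the driver for control" is as continuous blending rather than an on/off handover. The sketch below illustrates such an arbiter under assumed gains; it is schematic, not Xpeng's algorithm.

```python
# Schematic human-machine co-driving arbiter: driver input biases the planned
# command instead of disengaging assistance. Gains are illustrative
# assumptions, not Xpeng's calibration.

def co_drive_steering(planned_deg, driver_torque_nm, gain_deg_per_nm=4.0):
    """Driver hand-wheel torque biases the planned angle; assist stays engaged."""
    return planned_deg + gain_deg_per_nm * driver_torque_nm

def co_drive_speed(planned_mps, accel_pedal_pct, gain_mps_per_pct=0.05):
    """A light accelerator press raises the target speed instead of exiting."""
    return planned_mps + gain_mps_per_pct * accel_pedal_pct

print(co_drive_steering(2.0, 0.0))    # 2.0: pure planner command
print(co_drive_steering(2.0, 1.5))    # 8.0: driver nudge biases, never cancels
print(co_drive_speed(25.0, 20.0))     # 26.0: gentle pedal raises target speed
```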
In July 2025, the Li Auto i8 was officially launched, debuting the VLA driver large model. It makes the vehicle's assisted-driving behavior more human-like, and users can even command the vehicle by voice, as if they had a dedicated chauffeur: they can tell the vehicle their intention directly, such as turning left, turning right, or pulling over.
The Lixiang Tongxue intelligent agent can call on the vehicle's apps and sensor devices to complete tasks independently, making the voice assistant a genuine assistant rather than a mere robot.
According to Li Auto's plan, the evolution of Lixiang Tongxue will go through three stages:
The first stage is "Enhancing my capabilities". At this stage, Lixiang Tongxue provides users with auxiliary functions and, in the autonomous driving field, realizes L3 supervised intelligent driving.
The second stage is "Becoming my assistant". At this stage, AI advances Lixiang Tongxue to L4 autonomous driving and a true Agent (intelligent agent).
The third stage is "Becoming my silicon-based family member". Without needing any instructions, the AI makes independent decisions and proactively performs multiple tasks, becoming an important member of the family. Silicon-based family members are the ultimate product of general artificial intelligence.
In the future, seamless human-machine co-driving will become one of the main development directions of cross-domain integration scenarios.
With AI large models, a large language model interprets the user's semantics: for example, from the voice command "Overtake for me", the system recognizes and executes the intended overtaking maneuver. Through vision large models, the autonomous driving system perceives the surrounding environment and responds at the appropriate moment, such as decelerating, turning left or right, or pulling over nearby. Beyond voice, a slight force on the steering wheel also lets the vehicle understand the driver's intention and complete actions such as lane changes on its own, making human-machine interaction more timely and natural.
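The pipeline described above can be sketched as: parse an utterance into a driving intent, then let the planner decide when it is safe to execute. The snippet below is a minimal illustration in which a keyword stub stands in for the large language model; all names and thresholds are hypothetical.

```python
# Hypothetical voice-to-driving-intent pipeline. In a real stack an LLM parses
# the utterance; a keyword stub stands in here so the sketch is self-contained.

from dataclasses import dataclass

@dataclass
class DrivingIntent:
    action: str       # "overtake", "turn_left", "turn_right", "pull_over"
    confirmed: bool   # execution is gated behind safety checks

def parse_utterance(text):
    rules = {"overtake": "overtake", "left": "turn_left",
             "right": "turn_right", "pull over": "pull_over"}
    for key, action in rules.items():
        if key in text.lower():
            return DrivingIntent(action=action, confirmed=False)
    return None

def execute(intent, scene_is_safe):
    # The planner, not the voice layer, decides *when* to act.
    if intent and scene_is_safe:
        print(f"executing: {intent.action}")
    else:
        print("deferred: waiting for a safe window")

execute(parse_utterance("Overtake for me"), scene_is_safe=True)
```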
Cross-Domain Integration Innovative Function and Scenario Exploration 3: Cross-Domain Collaborative Integration of Intelligent Driving, Chassis, and Powertrain to Achieve More Efficient Parking Assistance, etc.
Cross-domain integration of the powertrain and chassis domains coordinates control of acceleration and deceleration, steering, braking, and suspension. It improves energy efficiency and enhances driving pleasure and comfort while simplifying the system and reducing cost, and its scalable, modular design adapts to diverse needs.
With new technologies such as by-wire chassis and intelligent driving entering continuous application, and driven by the ongoing upgrade of AI large models and software algorithms, the cross-domain integration of intelligent driving, chassis, and powertrain keeps evolving and has produced a variety of innovative functional scenarios.
In September 2025, Huawei Qiankun Intelligent Driving ADS 4 released its latest parking-assistance function, coordinating parking assistance with rear-wheel steering. The parking algorithm and rear-wheel steering work together to accurately identify parking spaces and plan paths, and rear-wheel steering enables quick maneuvering into place, handling narrow passages and tight parking spaces with ease and making parking simpler and more convenient.
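The benefit of rear-wheel steering in tight spaces follows from single-track (bicycle-model) kinematics: steering the rear wheels in opposite phase to the front shrinks the turning radius. A minimal sketch, with an assumed wheelbase and illustrative steering angles:

```python
# Kinematic (bicycle-model) turning radius with and without rear-wheel
# steering. Vehicle dimensions and angles are illustrative assumptions.

import math

def turn_radius_m(wheelbase_m, front_deg, rear_deg=0.0):
    """R ~= L / (tan(delta_f) - tan(delta_r)); opposite-phase rear steer
    (rear angle of opposite sign) shrinks the radius."""
    return wheelbase_m / (math.tan(math.radians(front_deg))
                          - math.tan(math.radians(rear_deg)))

L = 3.0  # assumed wheelbase in metres
print(f"front only : {turn_radius_m(L, 35):.2f} m")       # ~4.29 m
print(f"with rear  : {turn_radius_m(L, 35, -10):.2f} m")  # ~3.42 m, opposite phase
```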
In April 2025, BYD released the Yisifang (e-4WD) Parking 2.0 system, which realizes transverse parking. Built on Yisifang's four-wheel independent torque vectoring technology, it drives each wheel independently and precisely controls the torque at each wheel, allowing the vehicle to move sideways without relying on the steering system.
It can achieve transverse movement of ±30 cm, enough for most narrow parking spaces. Combined with the rear-wheel steering system, which turns up to 20 degrees in both directions, flexibility is further improved: in tight spaces the vehicle can move sideways like a crab, easily completing parking and similar maneuvers.
In actual tests, the Yangwang U7 took only 8 seconds to park in a 2.1-meter-wide mechanical parking space, 60% less time than traditional models.
Cross-Domain Integration Innovative Function and Scenario Exploration 4: Active Anti-Carsickness
Active anti-carsickness is an innovative function automakers have launched in recent years to alleviate motion sickness. Through cross-domain integration and linkage of intelligent driving, chassis, and cockpit, it adaptively adjusts the vehicle's motion according to occupant state and reconciles the mismatch between visual and bodily perception to reduce carsickness symptoms.
In September 2025, the Denza N9 was equipped with the Dingxuan Intelligent Anti-Carsickness System. With the Yunnian intelligent body control system at its core, it links the chassis, powertrain, and cockpit and coordinates kinesthetic, visual, olfactory, and tactile cues, helping users bid farewell to carsickness.
With the Yunnian system at the core, it deeply integrates the steering, braking, and electric drive systems, monitors vehicle status in real time, and, through millisecond-level adjustment of damping, stiffness, and body posture, markedly suppresses pitch under acceleration and braking, roll in corners, and bounce on rough roads.
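The chassis side of such a system can be pictured as a comfort loop that firms the dampers as body motion grows. The sketch below is schematic, with assumed thresholds and a simple proportional law; it is not BYD's Yunnian control algorithm.

```python
# Schematic comfort loop: tighten damping as body motion grows. Gains and
# thresholds are illustrative; this is not BYD's Yunnian algorithm.

def damping_command(pitch_rate_dps, roll_rate_dps,
                    base=0.3, gain=0.02, max_cmd=1.0):
    """Return a normalised damper command (0 = soft, 1 = firm)."""
    motion = abs(pitch_rate_dps) + abs(roll_rate_dps)
    return min(base + gain * motion, max_cmd)

# Smooth cruising keeps the ride soft; hard braking firms the dampers to
# suppress pitch, one of the motion cues linked to carsickness.
print(damping_command(pitch_rate_dps=1.0, roll_rate_dps=0.5))   # ~0.33
print(damping_command(pitch_rate_dps=18.0, roll_rate_dps=4.0))  # ~0.74
```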
Visually, the system introduces a vehicle-mounted dynamic dot-matrix screen whose floating points move in sync with the vehicle, alleviating the dizziness caused by a mismatch between vision and vestibular perception.
For the sense of smell, Denza's exclusive "Lemon Snow Scene" fragrance, composed mainly of natural limonene, freshens the cabin air and reduces anxiety and motion-sickness symptoms, which is especially helpful for odor-sensitive users. Combined with the comfortable tactile feel of the 24°C constant-temperature fresh-air conditioning system, the anti-carsickness effect is further optimized.
The Dingxuan Intelligent Anti-Carsickness System has passed rigorous certification by the China Automotive Technology and Research Center and the PLA Air Force Characteristic Medical Center.
Cross-Domain Integration Innovative Function and Scenario Exploration 5: Intelligent Driving Perception Linked with Airbags to Achieve Active Risk Prediction
Cross-domain linkage between the vehicle's perception system and the airbags enables active risk prediction and advance response, putting safety protection in place ahead of impact to reduce injuries in accidents.
In September 2025, Tesla upgraded its front airbag system again, launching an enhanced version. Using the Tesla Vision visual processing system, it optimizes the deployment timing of the front airbags: pure-vision prediction lets the system pre-inflate in advance, greatly reducing occupant injuries in frontal collisions.
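Vision-predicted deployment timing can be read as a time-to-collision (TTC) gate that pre-arms the restraint system before crash sensors confirm an impact. The sketch below is a generic gate with assumed thresholds, not Tesla's implementation.

```python
# Generic time-to-collision (TTC) pre-arm gate. Thresholds are illustrative
# assumptions; this is not Tesla's implementation.

def ttc_s(range_m, closing_speed_mps):
    """Time to collision; infinite when the gap is opening."""
    return range_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def airbag_state(range_m, closing_speed_mps,
                 prearm_ttc_s=0.8, confidence=0.95, min_conf=0.9):
    """Pre-arm restraints when vision predicts an imminent frontal impact."""
    if confidence >= min_conf and ttc_s(range_m, closing_speed_mps) < prearm_ttc_s:
        return "PRE-ARMED"   # inflators readied before crash sensors confirm
    return "STANDBY"

print(airbag_state(range_m=10.0, closing_speed_mps=20.0))  # TTC 0.5 s -> PRE-ARMED
print(airbag_state(range_m=60.0, closing_speed_mps=20.0))  # TTC 3.0 s -> STANDBY
```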
Overall, the cross-domain integration of intelligent vehicles is a comprehensive, in-depth integration from the underlying hardware architecture through the software ecosystem to top-level AI applications. It ultimately points toward a mobile intelligent space that is actively safe, scenario-continuous, service-accompanying, emotionally aware, and continuously evolvable.