PUBLISHER: ResearchInChina | PRODUCT CODE: 1187324


China Autonomous Driving Algorithm Research Report, 2023

PUBLISHED:
PAGES: 220
DELIVERY TIME: 1-2 business days
LICENSE OPTIONS
  • Unprintable PDF (Single User License): USD 4,000
  • Printable & Editable PDF (Enterprise-wide License): USD 6,000

Autonomous Driving Algorithm Research: BEV Drives Algorithm Revolution, AI Large Model Promotes Algorithm Iteration

The technical framework of autonomous driving algorithms is divided into three core parts: environment perception, decision planning, and control execution (a minimal code sketch of this split follows the list):

  • Environment perception: converts sensor data into a machine-readable description of the scene around the vehicle, covering object detection, recognition and tracking, environment modeling, motion estimation, etc.;
  • Decision planning: based on the output of the perception algorithms, issues the final behavioral instructions, including behavioral decisions (following, stopping, overtaking), action decisions (steering, speed, etc.), path planning, etc.;
  • Control execution: according to the output of the decision layer, invokes the underlying modules to issue commands to core control components such as the accelerator and brake, driving the vehicle along the planned route.
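
A minimal sketch of this three-stage split in Python; every class, field, and threshold here is an illustrative assumption, not the report's implementation:

```python
# Toy perception -> planning -> control pipeline (illustrative only).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:              # environment perception output
    kind: str                 # e.g. "vehicle", "pedestrian"
    distance_m: float         # longitudinal gap to the ego vehicle
    speed_mps: float          # estimated object speed

@dataclass
class Plan:                   # decision planning output
    behavior: str             # "follow", "stop", or "cruise"
    target_speed_mps: float

def plan(detections: List[Detection], cruise_mps: float = 16.7) -> Plan:
    """Decision planning: derive a behavior and action from perception output."""
    lead: Optional[Detection] = min(
        (d for d in detections if d.kind == "vehicle"),
        key=lambda d: d.distance_m, default=None)
    if lead is None:
        return Plan("cruise", cruise_mps)
    if lead.distance_m < 5.0:            # too close: behavioral decision "stop"
        return Plan("stop", 0.0)
    return Plan("follow", min(cruise_mps, lead.speed_mps))

def control(p: Plan) -> dict:
    """Control execution: map the plan onto throttle/brake commands."""
    return {"throttle": p.target_speed_mps > 0.0, "brake": p.behavior == "stop"}

# Perception output is stubbed in; a real stack would produce it from sensors.
detections = [Detection("vehicle", 32.0, 14.0)]
print(control(plan(detections)))         # {'throttle': True, 'brake': False}
```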

BEV drives algorithm revolution

In recent years, BEV (bird's-eye view) perception has received extensive attention. A BEV model provides a unified space that makes it easier to fuse multiple tasks and sensors. It has the following advantages:

BEV unifies the multimodal data processing dimension and makes multimodal fusion easier

The BEV perception system converts information captured by multiple cameras or radars into a bird's-eye view and then performs tasks such as object detection and instance segmentation; the size and orientation of objects can be displayed more intuitively in BEV space.
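
To make the view transformation concrete, here is a minimal sketch of flat-ground inverse perspective mapping, one common way to lift camera pixels into BEV (the report does not prescribe this exact method). The intrinsics, camera height, and grid extents are all assumed values:

```python
# Project every cell of a BEV grid onto the image plane, so image
# features can be gathered into a bird's-eye-view feature map.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # assumed pinhole intrinsics
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])
cam_height = 1.5                      # assumed camera height above ground (m)

# BEV grid: 0.5 m cells, x forward 0.5-49.5 m, y lateral -25 to 24.5 m
fwd, lat = np.meshgrid(np.arange(0.5, 50, 0.5), np.arange(-25, 25, 0.5))

# Ground-plane points in camera coordinates (x right, y down, z forward).
ground = np.stack([lat, np.full_like(fwd, cam_height), fwd], axis=-1)

uvw = ground @ K.T                    # pinhole projection of each cell
uv = uvw[..., :2] / uvw[..., 2:3]     # pixel coordinates per BEV cell
# Sampling image features at uv yields a BEV feature map for this camera.
print(uv.shape)                       # (100, 99, 2): one pixel per cell
```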

In 2022, Peking University and Alibaba proposed BEVFusion, a fusion framework for LiDAR and vision. LiDAR point clouds and camera images are processed independently, encoded by neural networks, projected into a unified BEV space, and then fused there.
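
A rough sketch of the fusion step under assumed shapes: both modalities arrive as BEV feature maps and are merged by concatenation plus a small convolution. The channel counts and fuser design are illustrative, not BEVFusion's exact architecture:

```python
# Fuse camera and LiDAR features once both live in the same BEV grid.
import torch
import torch.nn as nn

B, H, W = 2, 128, 128                  # batch and BEV grid size (assumed)
cam_bev = torch.randn(B, 80, H, W)     # camera features already in BEV space
lidar_bev = torch.randn(B, 64, H, W)   # LiDAR features already in BEV space

fuser = nn.Sequential(                 # stand-in BEV-space fusion module
    nn.Conv2d(80 + 64, 128, kernel_size=3, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
)
fused_bev = fuser(torch.cat([cam_bev, lidar_bev], dim=1))  # (B, 128, H, W)
# Downstream heads (detection, segmentation) consume fused_bev.
```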

Realize temporal information fusion and build a 4D space

In the 4D space, the perception algorithm can better complete tasks such as speed estimation and can pass motion prediction results on to the decision and control modules.

In 2022, PhiGent Robotics proposed BEVDet4D, a version of BEVDet extended with temporal fusion. BEVDet4D retains the intermediate BEV features of past frames and fuses them with the current frame by alignment and concatenation, so that temporal cues can be obtained by querying the two candidate features.
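
A minimal sketch of this align-and-concatenate step, assuming dense BEV feature tensors and an affine ego-motion warp; all shapes and the identity motion are placeholders, not BEVDet4D's exact code:

```python
# Warp the previous frame's BEV features into the current ego frame,
# then concatenate them with the current features for temporal cues.
import torch
import torch.nn.functional as F

B, C, H, W = 2, 64, 128, 128
bev_prev = torch.randn(B, C, H, W)   # cached BEV features from frame t-1
bev_curr = torch.randn(B, C, H, W)   # BEV features from frame t

# 2x3 affine ego-motion (rotation + translation) between the frames,
# in normalized BEV grid coordinates; identity used as a stand-in.
theta = torch.eye(2, 3).unsqueeze(0).repeat(B, 1, 1)

grid = F.affine_grid(theta, bev_prev.shape, align_corners=False)
bev_prev_aligned = F.grid_sample(bev_prev, grid, align_corners=False)

# Concatenating both frames gives the network access to temporal cues.
bev_4d = torch.cat([bev_prev_aligned, bev_curr], dim=1)  # (B, 2C, H, W)
```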

Imagine occluded objects to realize object prediction

In BEV space, the algorithm can use prior knowledge to predict occluded areas and "imagine" whether objects are present in them.
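
One way to picture this imagination step (an illustrative sketch, not FIERY or any specific paper's method): zero out occluded BEV cells, append a visibility mask, and let a small network act as a learned prior that fills them in:

```python
# "Imagine" features behind an occlusion with a learned prior.
import torch
import torch.nn as nn

B, C, H, W = 1, 32, 100, 100
bev_feat = torch.randn(B, C, H, W)                 # observed BEV features
visible = (torch.rand(B, 1, H, W) > 0.3).float()   # 1 = visible, 0 = occluded

inpaint = nn.Sequential(                           # stand-in learned prior
    nn.Conv2d(C + 1, C, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(C, C, kernel_size=3, padding=1),
)
completed = inpaint(torch.cat([bev_feat * visible, visible], dim=1))
# `completed` is the network's guess at the scene, occluded cells included.
```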

FIERY, proposed by Wayve in cooperation with the University of Cambridge in 2021, is an end-to-end dynamic-object instance prediction algorithm that does not rely on HD maps and works only from bird's-eye views derived from monocular cameras.

Promoting development of an end-to-end autonomous driving framework

In BEV space, perception and prediction can be optimized end-to-end by neural networks in one unified space, yielding their results simultaneously. Beyond the perception module, BEV-based planning and decision-making modules are also an active direction of academic research.
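
A minimal sketch of such joint optimization, assuming a shared BEV backbone with three heads trained under one combined loss; every module and loss weight here is an illustrative stand-in, not a specific paper's design:

```python
# Train perception, prediction, and planning heads end-to-end on one
# shared BEV feature space, so gradients flow through all three tasks.
import torch
import torch.nn as nn

class E2EDriving(nn.Module):
    def __init__(self, c: int = 64):
        super().__init__()
        self.backbone = nn.Conv2d(3, c, 3, padding=1)   # stand-in BEV encoder
        self.perception = nn.Conv2d(c, 10, 1)           # e.g. semantic map head
        self.prediction = nn.Conv2d(c, 2, 1)            # e.g. future flow head
        self.planning = nn.Linear(c, 2)                 # e.g. steering/accel head

    def forward(self, bev_input):
        feat = self.backbone(bev_input)
        seg = self.perception(feat)
        flow = self.prediction(feat)
        plan = self.planning(feat.mean(dim=(2, 3)))     # pooled global feature
        return seg, flow, plan

model = E2EDriving()
seg, flow, plan = model(torch.randn(1, 3, 64, 64))
# One combined objective (placeholder terms and weights) trains all heads.
loss = seg.abs().mean() + 0.5 * flow.abs().mean() + 0.1 * plan.abs().mean()
loss.backward()
```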

In 2022, the autonomous driving team of Shanghai Artificial Intelligence Laboratory and the team of Associate Professor Yan Junchi at Shanghai Jiao Tong University jointly published the paper ST-P3, proposing a spatiotemporal feature learning scheme that simultaneously provides a set of more representative features for perception, prediction, and planning tasks.

AI large models drive algorithm iteration

Since 2012, deep learning algorithms have been widely applied in the autonomous driving field. To support larger and more complex AI computing needs, AI large models characterized by "huge data, huge computing power, and huge algorithms" have emerged, accelerating the iteration of algorithms.

Large Model and Intelligent Computing Center

In 2021, HAOMO.AI began research on and deployment of large-scale Transformer models, and then gradually applied them at scale in projects such as multimodal perception data fusion and cognitive model training. In December 2021, HAOMO.AI released MANA (Chinese name "Snow Lake"), an autonomous driving data intelligence system that integrates perception, cognition, labeling, simulation, computing, and other functions. In January 2023, HAOMO.AI and Volcano Engine jointly unveiled MANA OASIS, a supercomputing center with a total computing power of 670 PFLOPS. With HAOMO.AI's training platform deployed, OASIS can run applications including cloud-based large-model training, vehicle-side model training, annotation, and simulation. With the help of MANA OASIS, HAOMO.AI's five large models have all been upgraded.

In August 2022, Xpeng Motors built "Fuyao", an autonomous driving intelligent computing center based on Alibaba Cloud's intelligent computing platform and dedicated to training autonomous driving models. In October 2022, Xpeng also announced the introduction of a Transformer large model.

In November 2022, Baidu released the Wenxin Big Model. With more than 1 billion parameters, it can recognize thousands of objects, helping to enlarge the scope of semantic recognition. At present it is mainly used in three areas: long-range vision, multimodality, and data mining.

Product Code: WWJ003

Table of Contents

1. Overview of Autonomous Driving Algorithms

  • 1.1. Overview of Autonomous Driving Algorithms
    • 1.1.1. Overview of Environment Perception Algorithms - Vision
    • 1.1.2. Overview of Environment Perception Algorithms - LiDAR
    • 1.1.3. Overview of Environment Perception Algorithms - Radar
    • 1.1.4. Overview of Environment Perception Algorithms - Multi-Sensor Fusion
  • 1.2. Overview of Decision Planning and Control Actuation Algorithms
  • 1.3. Development of Neural Networks
  • 1.4. Autonomous Driving Algorithm Supply Mode

2. Research on Chip Vendor Algorithm

  • 2.1. Huawei
    • 2.1.1. Smart Vehicle Solutions Department
    • 2.1.2. ADS Autonomous Driving Full-Stack Solution
    • 2.1.3. Core Algorithms
    • 2.1.4. Autonomous Driving Algorithm Development Plan and Ecological Partners
  • 2.2. Horizon Robotics
    • 2.2.1. Profile
    • 2.2.2. Cooperation Model
    • 2.2.3. On-board Computing Platform and Monocular Front-View Solution Algorithm
    • 2.2.4. Autonomous Driving Perception Algorithm Design
    • 2.2.5. Core Algorithm Model
    • 2.2.6. Pilot Assisted Driving Solution and Super Driving Solution Algorithm
    • 2.2.7. Software Open API
    • 2.2.8. Mass Production Results and Algorithm Planning
    • 2.2.9. Cooperation
  • 2.3. Black Sesame
    • 2.3.1. Profile
    • 2.3.2. Perception Algorithm
    • 2.3.3. Latest Algorithm Achievements
    • 2.3.4. Shanhai Tool Chain
    • 2.3.5. Partners
    • 2.3.6. Cooperation
  • 2.4. Genesys Microelectronics
  • 2.5. Mobileye
    • 2.5.1. Profile
    • 2.5.2. Object Recognition Technology
    • 2.5.3. Chip Algorithm Development Process
    • 2.5.4. Vision Algorithms
    • 2.5.5. Current Development and Cooperation
  • 2.6. Qualcomm Arriver
    • 2.6.1. Intro of Arriver
    • 2.6.2. Arriver Visual Perception Algorithm
  • 2.7. NXP
  • 2.8. NVIDIA
    • 2.8.1. Profile
    • 2.8.2. Cooperation Model
    • 2.8.3. Autonomous Vehicle Software Stack
    • 2.8.4. Perception Algorithm
    • 2.8.5. Perception Algorithm Model
    • 2.8.6. Latest Cooperation and Partners

3. Research on Tier 1 & Tier 2 Algorithm

  • 3.1. Momenta
    • 3.1.1. Profile
    • 3.1.2. Core Technology and Products
    • 3.1.3. Application of Momenta Algorithm
    • 3.1.4. Cooperation
  • 3.2. Nullmax
    • 3.2.1. Profile
    • 3.2.2. Visual Perception Module and Product Landing Process
    • 3.2.3. Introduction to the Latest Visual Perception Algorithm
    • 3.2.4. The Landing Process of Algorithm Products
    • 3.2.5. Cooperation and Development Plan
  • 3.3. ArcSoft
    • 3.3.1. Profile
    • 3.3.2. ADAS Technology
    • 3.3.3. BSD and AVM Technologies
    • 3.3.4. One-Stop Vehicle Vision Solution
    • 3.3.5. Recent Dynamics and Major Customers
  • 3.4. JueFX
    • 3.4.1. Profile
    • 3.4.2. Visual Feature Fusion Positioning Solution
    • 3.4.3. Development History of BEV Perception Technology
    • 3.4.4. LiDAR Fusion Location Solution
    • 3.4.5. LiDAR-based Fusion Solution
    • 3.4.6. Cooperation
  • 3.5. ThunderSoft
  • 3.6. Holomatic
    • 3.6.1. Profile
    • 3.6.2. HoloPilot and Its Main Algorithms
    • 3.6.3. HoloParking and Its Main Algorithms
    • 3.6.4. Middleware
  • 3.7. Enjoy Move
    • 3.7.1. Profile
    • 3.7.2. Autonomous Driving Software
    • 3.7.3. Cooperation
  • 3.8. Haomo.ai
    • 3.8.1. Profile
    • 3.8.2. Product Portfolio
    • 3.8.3. Latest Dynamics
    • 3.8.4. MANA system
    • 3.8.5. MANA System - Vision, LiDAR Perception Module
    • 3.8.6. MANA System - Fusion Sensing Module
    • 3.8.7. MANA System - Cognitive Module
    • 3.8.8. Evolution of Perception
    • 3.8.9. Evolution of Cognitive Abilities
    • 3.8.10. New Technology Practice
    • 3.8.11. Recent Algorithm Achievements
  • 3.9. Huanyu Zhixing
    • 3.9.1. Profile
    • 3.9.2. Autonomous Driving Software
    • 3.9.3. Athena 5.0
    • 3.9.4. Development Achievements and Planning
  • 3.10. Valeo
    • 3.10.1. Profile
    • 3.10.2. Typical Algorithm Models
  • 3.11. StradVision
    • 3.11.1. Profile
    • 3.11.2. Vision Product Category & Customers & Timeline
    • 3.11.3. Autonomous Driving Algorithm
    • 3.11.4. Development Trends of Vision Products

4. Algorithm Research of Emerging Automakers and OEMs

  • 4.1. Tesla
    • 4.1.1. Profile
    • 4.1.2. Tesla Algorithm
    • 4.1.3. Multi-camera Fusion Algorithm
    • 4.1.4. Environment Awareness Algorithm
    • 4.1.5. Latest Planning and Decision-making Algorithm
  • 4.2. NIO
    • 4.2.1. Profile
    • 4.2.2. Evolution of NIO Autonomous Driving System
    • 4.2.3. Comparison of NIO Pilot System and NAD System
  • 4.3. Li Auto
    • 4.3.1. Profile
    • 4.3.2. Intelligent Driving Route
    • 4.3.3. Algorithm History
    • 4.3.4. AD Max Intelligent Driving Algorithm Architecture
    • 4.3.5. Layout in Intelligent Driving
    • 4.3.6. Future Development Plan
  • 4.4. Xpeng
    • 4.4.1. Profile
    • 4.4.2. Algorithm and Autonomous Driving Ability Evolution Route
    • 4.4.3. Autonomous Driving Algorithm Architecture
    • 4.4.4. New Perception Architecture
    • 4.4.5. Data Collection, Labeling and Training
  • 4.5. Rising Auto
    • 4.5.1. Profile
    • 4.5.2. RISING PILOT
    • 4.5.3. Full Fusion Algorithm
    • 4.5.4. Full Fusion Algorithm: Application Effect
  • 4.6. Leapmotor
    • 4.6.1. Profile
    • 4.6.2. Full Domain Self-Research
    • 4.6.3. Algorithm Capabilities and Future Planning
  • 4.7. ZEEKR
    • 4.7.1. Profile
    • 4.7.2. ZEEKR's Mobileye Solution
    • 4.7.3. Cooperation between ZEEKR and Waymo and Self-Developed Algorithm Solution
  • 4.8. BMW
    • 4.8.1. Profile
    • 4.8.2. Algorithms for BMW
    • 4.8.3. Cooperation in Autonomous Driving
  • 4.9. SAIC
    • 4.9.1. SAIC Motor Autonomous Driving Layout
    • 4.9.2. Introduction to Z-ONE Tech
    • 4.9.3. Z-ONE Tech Computing Platform
    • 4.9.4. SAIC Artificial Intelligence Laboratory
  • 4.10. General Motors
    • 4.10.1. General Motors Autonomous Driving Layout
    • 4.10.2. Introduction to Cruise
    • 4.10.3. Cruise Perception Algorithm
    • 4.10.4. Cruise Decision Algorithm
    • 4.10.5. Cruise Autonomous Driving Development Tool Chain
    • 4.10.6. Cruise's Robotaxi and Future Plans

5. Research on Robotaxi Algorithm for L4 Autonomous Driving

  • 5.1. Baidu Apollo
    • 5.1.1. Profile
    • 5.1.2. Driverless Technology Architecture History
    • 5.1.3. Introduction to Perception Algorithm
    • 5.1.4. Autonomous Vehicle Positioning Technology
    • 5.1.5. Latest Highlights Technology
  • 5.2. Pony.ai
    • 5.2.1. Profile
    • 5.2.2. Main Business and Business Model
    • 5.2.3. Core Technology and the Latest Autonomous Driving System Configuration
    • 5.2.4. Sensor Fusion Solution
    • 5.2.5. Cooperation
  • 5.3. WeRide
    • 5.3.1. Profile
    • 5.3.2. WeRide One
    • 5.3.3. Algorithm Modules for WeRide One
    • 5.3.4. Cooperation
  • 5.4. Deeproute.ai
    • 5.4.1. Profile
    • 5.4.2. Technology
    • 5.4.3. Self-Developed Algorithm
    • 5.4.4. Cooperation and Latest Dynamics
  • 5.5. QCraft
    • 5.5.1. Profile
    • 5.5.2. Products
    • 5.5.3. Hyperfusion Perception Solution
    • 5.5.4. Prediction Algorithm
    • 5.5.5. Planning Algorithm
    • 5.5.6. Classical Algorithm Model
    • 5.5.7. Cooperation
  • 5.6. UISEE Technology
    • 5.6.1. Profile
    • 5.6.2. U-Drive Intelligent Driving System
    • 5.6.3. Visual Positioning Technology
    • 5.6.4. Latest Algorithm
    • 5.6.5. R&D Planning and Partners
  • 5.7. AutoX
    • 5.7.1. Profile
    • 5.7.2. Self-Driving Technology
    • 5.7.3. Self-Driving Fusion Perception System xFusion
  • 5.8. DiDi Autonomous Driving
    • 5.8.1. Profile
    • 5.8.2. Autonomous Driving Technology
  • 5.9. Waymo
    • 5.9.1. Profile
    • 5.9.2. Sensor Product Portfolio
    • 5.9.3. Technology
    • 5.9.4. Behavior Prediction Algorithm
    • 5.9.5. Latest News

6. Development Trend of Autonomous Driving Algorithms

  • 6.1. Algorithm Trend I
  • 6.2. Algorithm Trend II
  • 6.3. Algorithm Trend III
  • 6.4. Algorithm Trend IV
  • 6.5. Algorithm Trend V
  • 6.6. Algorithm Trend VI
  • 6.7. Algorithm Trend VII