MLCommons® and the Autonomous Vehicle Computing Consortium (AVCC) have achieved the first step toward a comprehensive MLPerf® Automotive Benchmark Suite for AI systems in vehicles with the release of the MLPerf Automotive benchmark proof-of-concept (POC). The POC was developed by the Automotive Benchmark Task Force (ABTF), which includes representatives from Arm, Bosch, Cognata, cKnowledge, Marvell, NVIDIA, Qualcomm Technologies, Inc., Red Hat, Sacramento State University, Samsung, Siemens EDA, Tenstorrent, and UC Davis, among others.
The demand for AI-based systems in vehicles is exploding - and not just for controlling fully autonomous cars. Automotive manufacturers (OEMs) are incorporating AI in a number of other in-car systems to enhance the driving experience and increase vehicle safety, with features including:
- Speech-controlled infotainment systems and online vehicle user manuals
- Route/direction guidance systems
- Optimizing stops for charging, refueling, and maintenance
- Collision avoidance systems
- Driver monitoring for drowsiness or lack of attention
Each of these features requires trained AI models and appropriate input sensors, plus underlying computing infrastructure powerful enough to meet performance demands.
Establishing common reference points
Automotive OEMs must decide which combinations of features to include in their vehicles, and need to choose where to source both the AI systems and the underlying computing infrastructure they will run on, all while ensuring that the systems can run simultaneously with acceptable performance. As they issue requests for information (RFIs) and requests for quotation (RFQs) to Tier 1 and other suppliers for system components, they need common reference points to understand the collective computing demand of the systems and the resources required to meet it. The MLPerf Automotive benchmark will provide those common reference points, enabling OEMs to select or design the solution best suited to their system requirements and to provision resources accordingly.
“The adoption of AI in automotive is helping to enhance user experiences and enable safer vehicles, but it is also one of the most complex and compute intensive parts of an automotive software stack.” - Kasper Mecklenburg, ABTF Co-Chair and Principal Autonomous Driving Solution Engineer, Automotive Line of Business, Arm
“The ABTF aims to help OEMs and Tier 1s determine what hardware and software is most suitable for these applications by providing a comprehensive benchmark suite, and this POC is the first step towards delivering on this goal.” - Kasper Mecklenburg, ABTF Co-Chair and Principal Autonomous Driving Solution Engineer, Automotive Line of Business, Arm
The POC includes a subset of the full v1.0 MLPerf Automotive Benchmark Suite, which is targeted for release at the end of 2024. It focuses on a camera-based object detection capability, which is commonly found in collision-avoidance systems and autonomous driving systems. Reaching the POC is a critical milestone that allows the task force to gather additional feedback from the automotive industry to ensure a comprehensive approach for the v1.0 release.
The POC release includes a fully-functioning reference implementation for a trained object recognition system, including:
- An SSD-ResNet50 object detector model trained on the Cognata dataset and a runtime engine
- The Collective Mind automation system for scripting and managing the execution of the model
- A small subset of the Cognata dataset for demo purposes
- Additional software components necessary to run the benchmark in a PC-based environment
The model is an adapted version of the SSD+ResNet50 detector, built on the residual neural network architecture first introduced in 2015, and is representative of the majority of visual object recognition systems in use today. SSD+ResNet50 provides a performance baseline that is well known and well understood by the industry. The version included in the POC has been retrained to work with 8-megapixel images, the emerging standard for camera-based systems in vehicles, to ensure the benchmark remains relevant for years to come. The model and runtime are paired with the software framework and dependencies necessary to run conveniently in a Docker container, supporting a wide range of computing environments.
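To make the workload concrete, here is a minimal sketch of the kind of single-frame measurement the benchmark wraps. It uses torchvision's RetinaNet/ResNet50 detector purely as a publicly available stand-in; the POC ships its own SSD-ResNet50 checkpoint, retrained on the Cognata dataset, together with its own runtime.

```python
# Minimal sketch: run a ResNet50-backboned detector on one ~8-megapixel frame
# and time it. The torchvision RetinaNet/ResNet50 model is a stand-in only;
# the POC provides its own SSD-ResNet50 checkpoint and runtime.
import time

import torch
from torchvision.models.detection import (
    RetinaNet_ResNet50_FPN_Weights,
    retinanet_resnet50_fpn,
)

model = retinanet_resnet50_fpn(
    weights=RetinaNet_ResNet50_FPN_Weights.DEFAULT
).eval()

# Synthetic ~8-megapixel frame (3840 x 2160 pixels); in the POC this would be
# one of the Cognata images.
frame = torch.rand(3, 2160, 3840)

with torch.no_grad():
    start = time.perf_counter()
    result = model([frame])[0]
    latency_ms = (time.perf_counter() - start) * 1000

print(f"{len(result['boxes'])} detections in {latency_ms:.1f} ms")
```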
As part of the commitment to delivering a rigorous and robust benchmark suite, the POC includes a small subset of the MLCommons Cognata dataset of 120,000 8-megapixel images used to train the model. The images are synthesized street-level views from a vehicle, representing scenarios that a typical collision-avoidance system would need to process. Using synthesized images allows for the inclusion of scenarios that are too dangerous to capture in reality, such as a child running in front of a car. In addition to the subset of images included in the POC, MLCommons members will have access to the full MLCommons Cognata dataset.
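For illustration, the following is a minimal sketch of how a flat directory of images like the POC's subset could be loaded for evaluation. The directory layout and file extension are assumptions; the actual schema and annotation handling are defined by the reference implementation.

```python
# Hypothetical loader for a directory of Cognata-style images; the file
# extension and flat layout are assumptions for illustration only.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class ImageFolderSubset(Dataset):
    """Loads every image in a flat directory, optionally applying a transform."""

    def __init__(self, root: str, transform=None):
        self.files = sorted(Path(root).glob("*.png"))  # assumed extension
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        image = Image.open(self.files[idx]).convert("RGB")
        return self.transform(image) if self.transform else image
```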
“AVCC and MLCommons have invested in dedicated compute to train the full suite of models planned for the v1.0 release.” - David Kanter, MLCommons Executive Director
“The release of the POC and the acquisition of the Cognata dataset and training resources demonstrate our full commitment to building a comprehensive automotive benchmark suite.” - David Kanter, MLCommons Executive Director
Collective Mind, also included in the POC, is a portable, extensible framework for automation and reproducibility. It manages the process of running the benchmark, scripting the execution of the complex machine learning and AI applications involved.
Suppliers will run the automotive benchmark, optimized for their own products, and share the results with their potential customers. Automotive OEMs and suppliers can independently verify benchmark results; they can also re-run the benchmark after assembling their own combination of system components. Additionally, they can substitute in their own proprietary models and/or data and generate their own benchmark results.
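As a sketch of that substitution idea, the snippet below keeps a fixed measurement loop and swaps detectors behind a small interface. The DetectorBackend protocol and run_detection_loop harness are hypothetical and not part of the POC's actual API.

```python
# Hypothetical interface showing how a proprietary model could be swapped into
# a fixed benchmark loop; this is not the POC's actual API.
from typing import Dict, List, Protocol

import torch


class DetectorBackend(Protocol):
    def predict(self, frame: torch.Tensor) -> Dict[str, torch.Tensor]:
        """Return boxes, labels, and scores for a single frame."""
        ...


def run_detection_loop(
    backend: DetectorBackend, frames: List[torch.Tensor]
) -> List[Dict[str, torch.Tensor]]:
    # A real harness would also time each call and score accuracy against
    # ground-truth annotations.
    return [backend.predict(frame) for frame in frames]
```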
Seeking comprehensive input for a v1.0 release
The POC release is an initial step toward a complete v1.0 release that allows for benchmarking a full set of AI components representative of those included in vehicles. The POC is not optimized for performance, as suppliers will have their own optimized solutions. It is intended to provide an opportunity for automotive OEMs, Tier 1 and other suppliers, and tech industry stakeholders to provide feedback on components and benchmark key performance indicators.
“MLCommons and AVCC partnered to deliver a community-driven and open approach to benchmarking for automotive systems.” - James Goel, ABTF Co-Chair
“This affords all stakeholders an opportunity to access and run the POC, provide feedback and incorporate the latest industry requirements to create the best performance benchmark suite for AI systems in vehicles.” - James Goel, ABTF Co-Chair
The ABTF invites the community of automotive OEMs, Tier 1s and other component suppliers, and tech industry stakeholders to join the conversation by downloading the Automotive POC, evaluating it, and providing feedback through the ABTF working group. Additional AVCC technical reports on benchmarking AI systems in an automotive environment are also available for reference.
About AVCC
AVCC is a global automated and autonomous vehicle (AV) consortium that specifies and benchmarks solutions for AV computing, cybersecurity, functional safety, and building block interconnects. AVCC is a not-for-profit membership organization building an ecosystem of OEMs, automotive suppliers, and semiconductor and software suppliers in the automotive industry. The consortium addresses the complexity of the intelligent-vehicle software-defined automotive environment and promotes member-driven dialogue within technical working groups to address non-differentiable common challenges. AVCC is committed to driving the evolution of autonomous and automated solutions up to L5 performance. For additional information on AVCC membership and technical reports, please visit www.avcc.org or email outreach@avcc.org.
About MLCommons
MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI Safety.