Data Labeling as a Continuous Service

6 September 2024

Artificial Intelligence (AI) is unquestionably the prevailing subject of interest in software development, the automotive industry included. Tools such as ChatGPT have progressively become an integral part of everyday life, attracting attention from companies and individuals alike, whether to learn more about the world or to improve their processes and workflows.

I asked an AI to summarize the current major trends in automotive software for the year 2024, and this is what came out:

  1. Data and AI: Data and AI continue to be vital enablers for automakers.
  2. Electric Vehicles (EVs): In 2024, electric vehicles are expected to dominate the automotive market more than ever before.
  3. Sustainability in Automotive Manufacturing: With an emphasis on eco-friendly practices, the automotive industry is making strides toward a more sustainable future.
  4. Connected Vehicles: The integration of technology into vehicles goes beyond the engine and safety features.
  5. Autonomous Vehicles, AI, and ML: The majority of automotive developers (75%) are extensively working on autonomous vehicles.
  6. Cybersecurity: As vehicles become more connected, the risk of cyber-attacks increases.

The answers received only underscore the increasing importance of data in the future of autonomous driving for our vehicles. Let’s take a closer look at its history:

ADAS & Autonomous Driving  

As we contemplate the potential of autonomous vehicles, it is essential to acknowledge the achievements of the past. The forerunners of Autonomous Driving are encapsulated in what is known as Advanced Driver Assistance Systems (ADAS). These systems represent a suite of technologies designed with the primary objective of aiding drivers in the safe operation of vehicles. 

The journey of ADAS began in 1948 with the invention of cruise control by Ralph Teetor, who was the president of The Perfect Circle Co., an Indiana-based manufacturer of piston rings. By the mid-1950s, the automotive industry shifted its focus towards the development of new technologies, leading to the invention of power steering and automatic transmission systems. 

The 1970s marked a significant era characterized by the introduction of high-tech devices and practices into the automotive industry. This period saw the development of early versions of electronic stability control, blind-spot information systems, adaptive cruise control, and traction control. 

In the 1980s, the “VaMoRs” project (short for “Versuchsfahrzeug für autonome Mobilität und Rechnersehen”, a test vehicle for autonomous mobility and computer vision) was initiated, laying a foundational stone for today’s ADAS systems. Ernst Dickmanns, a German scientist and professor at Bundeswehr University Munich, embarked with the assistance of his team on the development of this futuristic project. “VaMoRs” aimed to create a computer vision system capable of detecting and tracking road markings and vehicles.

By the 2000s, digital camera technologies had advanced and become accessible enough for the automotive industry to start using them for vision-based detection of the surrounding environment. 

Fast forward to the present day, the industry is primarily focused on integrating existing ADAS systems with Artificial Intelligence (AI), paving the way for a new era of smart and autonomous vehicles. 

The evolution of the Autonomous Driving initiative is deeply rooted in the past four decades. The first significant initiatives emerged in the 1980s: Carnegie Mellon University’s Navlab semi-autonomous vehicle projects, funded by DARPA (the US Department of Defense), and the parallel collaboration between Mercedes-Benz and Bundeswehr University Munich.

A decade later, in 1995, the Navlab 5 prototype completed its inaugural autonomous journey across the US, known as “No Hands Across America”. It maintained an average speed of 63.8 mph (102.7 km/h), with 98.2% of the trip driven autonomously (the system steered, while throttle and brakes remained under human control).

Fast forward to 2019, when the European Commission adopted the legislative framework for autonomous transport: STRIA (Strategic Transport Research and Innovation Agenda), a roadmap for Connected and Automated Transport.

Presently, driverless taxi services are operational in certain regions of the US, and the entire automotive industry is committed to fostering innovation and laying the foundation for a future where autonomous vehicles make up more than half of the transportation fleet.

Overview of the Machine Learning Process in Autonomous Driving 

The Machine Learning (ML) process leverages Data & AI across the entire processing chain of autonomous driving. The process commences with an AI Developer, who collaborates with a Requirements Specialist to prepare a comprehensive list of technical requirements for the data necessary for the ML process. 

Data Collection follows, and subsequently Data Labeling, where the previously collected data is labeled according to the defined technical requirements. The labeled data is then stored in Datasets, which are subsequently utilized for AI training by feeding the algorithm.

This systematic approach ensures a seamless and efficient Machine Learning process in the realm of autonomous driving. 
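
As a rough sketch of how these stages chain together, here is a minimal Python outline. Every function name and data structure below is a hypothetical placeholder for illustration, not a real tool’s API:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One technical requirement agreed between the AI Developer and the
    Requirements Specialist (hypothetical structure)."""
    sensor: str        # e.g. "front_camera"
    object_class: str  # e.g. "traffic_sign"
    min_samples: int   # amount of labeled data needed

def collect_data(reqs):
    """Data Collection: gather raw recordings matching the requirements."""
    return [{"frame_id": i, "sensor": r.sensor, "object_class": r.object_class}
            for r in reqs for i in range(r.min_samples)]

def label_data(raw_frames):
    """Data Labeling: attach labels to each collected frame (stubbed)."""
    return [dict(frame, labels=[]) for frame in raw_frames]

def build_dataset(labeled_frames):
    """Store the labeled data as a dataset ready for training."""
    return {"samples": labeled_frames}

def train_model(dataset):
    """AI training: feed the dataset to the learning algorithm (stubbed)."""
    return f"model trained on {len(dataset['samples'])} samples"

def ml_pipeline(requirements):
    # Requirements -> Collection -> Labeling -> Dataset -> Training
    return train_model(build_dataset(label_data(collect_data(requirements))))

print(ml_pipeline([Requirement("front_camera", "traffic_sign", 100)]))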

Enabling Autonomous Driving through Environment Perception 

The Labeled Data, which is utilized for training and validating the Machine Learning model, is derived from a blend of semantic information gathered by various sensors. The primary categories of information that serve as inputs include: 

  • Metadata, such as weather, road condition, recording info and GPS. 
  • Environmental data, such as traffic signs, curbstones, poles & fences, objects on the road and built surroundings.
  • Traffic participants, such as pedestrians, two-wheel vehicles, scooters, and so forth. A significant advancement in this field is the recognition of actions performed by these traffic participants. It is not merely about identifying the presence of a pedestrian on the road; it is also beneficial to discern whether, for instance, they are distracted while talking on the phone or are exiting a vehicle to load groceries. This level of detail significantly enhances the effectiveness and safety of the Machine Learning model in autonomous driving scenarios.
  • Driver state, captured through in-cabin sensing technology, where we could have, for example, 2D/3D cameras facing the driver and a radar sensor in the backseat area for occupant monitoring.
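
As a rough illustration of how these input categories might be modeled in code, here is a sketch using Python dataclasses. All field names are assumptions made for illustration, not a real schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Metadata:
    weather: str         # e.g. "rain"
    road_condition: str  # e.g. "wet"
    recording_info: str  # e.g. a vehicle or campaign identifier
    gps: tuple           # (latitude, longitude)

@dataclass
class EnvironmentObject:
    kind: str  # "traffic_sign", "curbstone", "pole", "fence", ...

@dataclass
class TrafficParticipant:
    kind: str                     # "pedestrian", "two_wheeler", "scooter", ...
    action: Optional[str] = None  # e.g. "talking_on_phone", "loading_groceries"

@dataclass
class DriverState:
    attentive: bool          # from in-cabin 2D/3D cameras
    occupants_detected: int  # from the backseat radar sensor

@dataclass
class PerceptionInput:
    """One labeled unit of semantic information fed to the ML model."""
    metadata: Metadata
    environment: list = field(default_factory=list)
    participants: list = field(default_factory=list)
    driver: Optional[DriverState] = None

frame_input = PerceptionInput(
    metadata=Metadata("rain", "wet", "campaign_07", (48.137, 11.575)),
    participants=[TrafficParticipant("pedestrian", action="talking_on_phone")],
)
```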

Data Labeling: What, How and Why? 

The most commonly used data for labeling comes from different types of car sensors, such as cameras, lidar, and radar.

The formats used are either single image frames, sequences (bundles of images spanning multiple video frames), or point-cloud data. These types of data are labeled in either a 2D or 3D environment, using proprietary or supplier tools.
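
To make the three data shapes concrete, here is a hedged sketch of how they might be represented; the (x, y, z, intensity) point layout is a common convention assumed here, not taken from the article:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Frame:
    """A single 2D camera image."""
    image: np.ndarray  # shape (H, W, 3), dtype uint8
    timestamp_ms: int

@dataclass
class Sequence:
    """A bundle of frames spanning multiple video frames."""
    frames: list

@dataclass
class PointCloud:
    """3D sensor output, one row per point; an (x, y, z, intensity)
    layout is assumed here, which is common but not universal."""
    points: np.ndarray  # shape (N, 4), dtype float32

# A blank 720p frame and a tiny point cloud, for illustration
frame = Frame(image=np.zeros((720, 1280, 3), dtype=np.uint8), timestamp_ms=0)
seq = Sequence(frames=[frame])
cloud = PointCloud(points=np.zeros((1024, 4), dtype=np.float32))
```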

The purpose of pushing research and innovation in this field is clearly backed up by today’s evolution trends in the industry and the desire to contribute to the ADAS & Autonomous Driving future of our vehicles.

Data Selection & Preparation 

During the Data Labeling process, the initial phase involves the selection of data, which is immediately followed by the preparation of the chosen data.  

Two primary types of data could potentially be utilized:

  1. Real-world data: recorded from specially equipped vehicles, designed explicitly for this purpose, using the most advanced sensors available.
  2. Synthetic data: generated for testing purposes, intended to mimic real-world data as accurately as possible.

Data extraction is typically performed on demand, often through a proprietary tool that offers several filtering and sorting options, including a sensor-based one. 

The method of data selection can vary greatly and is highly specific to each individual set of technical requirements and the sensor used. This tailored approach ensures the most effective use of data in the Machine Learning process.  

Here are a few examples:  

  • Automatic selection, where the current state of the ML model guides the choice: the data fed back into the model targets the areas where false positives were identified.
  • Manual selection, where the development team, after analyzing the performance of the ML model, identifies areas of underperformance and decides to concentrate on enhancing one of them. Using, for example, a search-by-image or search-by-description method, the team extracts specific data from the existing data bank and forwards it for labeling. This specifically labeled data is then reintroduced into the ML model, and its performance is reassessed to gauge improvement. This iterative process ensures continuous refinement and optimization of the ML model’s performance.
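
Here is a hedged sketch of what the automatic selection step could look like in code; the catalog entries and the false-positive report are hypothetical stand-ins for a real data bank and a real model-evaluation output:

```python
def select_for_relabeling(catalog, fp_report, sensor="front_camera", limit=500):
    """Automatic selection: pick frames from the data bank that match the
    conditions under which the model produced false positives."""
    # fp_report: e.g. [{"object_class": "traffic_sign", "weather": "rain"}, ...]
    wanted = {(fp["object_class"], fp["weather"]) for fp in fp_report}
    picked = [
        frame for frame in catalog
        if frame["sensor"] == sensor
        and (frame["object_class"], frame["weather"]) in wanted
    ]
    # Cap the batch so labeling effort stays bounded
    return picked[:limit]

# Usage with toy data
catalog = [
    {"frame_id": 1, "sensor": "front_camera", "object_class": "traffic_sign", "weather": "rain"},
    {"frame_id": 2, "sensor": "front_camera", "object_class": "pedestrian", "weather": "sunny"},
]
report = [{"object_class": "traffic_sign", "weather": "rain"}]
print(select_for_relabeling(catalog, report))  # -> frame 1 only
```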

The next step in the data flow is conversion & post-processing. Depending on the type of data, several operations can happen at this level. These potentially include:

  • converting frames to sequences (for sequence-based labeling projects)
  • preparing reference data for 3D labeling (quite often, the point-cloud images generated by 3D sensors need supporting 2D visual images for a better understanding of the situation)
  • pre-labeling (AI-generated detections for relevant objects, environment, or traffic participants, with the main purpose of reducing the manual labeling effort)
  • anonymization (all human faces and license plates need to be anonymized to respect individuals’ privacy)
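
Two of these operations, grouping frames into sequences and anonymization, could look roughly like the sketch below; the obscuring step assumes face and license-plate boxes are already supplied by an upstream detector:

```python
import numpy as np

def frames_to_sequences(frames, seq_len=20):
    """Group consecutive frames into fixed-length bundles for
    sequence-based labeling projects."""
    return [frames[i:i + seq_len] for i in range(0, len(frames), seq_len)]

def anonymize(image, regions):
    """Obscure detected faces and license plates to respect privacy.
    `regions` is a list of (x, y, w, h) boxes from an upstream detector."""
    out = image.copy()
    for x, y, w, h in regions:
        # Replace each sensitive region with its own mean color
        out[y:y + h, x:x + w] = out[y:y + h, x:x + w].mean(axis=(0, 1))
    return out

img = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
clean = anonymize(img, regions=[(600, 300, 80, 40)])  # one plate-sized box
```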

Case Study: Traffic Sign Recognition  

Traffic sign recognition is a highly intricate component of Data Labeling for Autonomous Driving. Labeling data for such a component is usually a manual process, and companies engaged in this activity often prefer to use proprietary tools that can be easily customized and configured to meet the technical requirements, as well as the specific rules and constraints of the country and region where the data to be labeled was collected. 

The task of labeling traffic signs is commonly undertaken by highly skilled professionals who possess an in-depth understanding of the complex technical requirements. The end product of their work is a collection of files in one of the Advanced List Formatter (ALF) formats. These file types are primarily used to store structured data, which can later be easily manipulated and formatted as input for various parts of the ML algorithm. 

The most pertinent sensor type for this kind of data labeling is the “front camera” sensor, which outputs 2D image data in the form of single frames that can subsequently be bundled into sequences. A significant challenge in using this type of data for the ML algorithm pertains to the accuracy of the position of each labeled object (traffic sign), due to the perspective view generated by the “front camera” sensor. 
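
Since the exact label file layout is tool- and project-specific, the record below is only a generic stand-in illustrating the kind of fields a single labeled traffic sign might carry in a 2D frame:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrafficSignLabel:
    """One labeled traffic sign in a single front-camera frame.
    Generic stand-in; real label files are tool- and project-specific."""
    frame_id: int
    # 2D bounding box in pixel coordinates; the perspective view of the
    # front camera is what makes precise positioning difficult.
    x: int
    y: int
    width: int
    height: int
    sign_type: str        # per-country catalog, e.g. "speed_limit"
    value: Optional[str]  # e.g. "80" for a speed-limit sign
    country: str          # rules and sign sets differ per country/region

label = TrafficSignLabel(frame_id=42, x=611, y=187, width=34, height=34,
                         sign_type="speed_limit", value="80", country="DE")
```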

The Machine Learning Model for Traffic Sign Recognition

The Machine Learning model typically has two main parts:

  • the training of the ML model, where the labeled data is the key factor in ensuring the model performs according to the pre-established parameters
  • the validation of the ML model, where different selected datasets are used to validate the performance parameters of the model

The cycle of feeding new labeled datasets to the model and monitoring the resulting improvements is usually repeated until the model’s performance reaches the agreed threshold values.
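
Very schematically, that cycle could be expressed as below; the `model` interface (`train`, `validate`) and the 0.95 threshold are hypothetical placeholders, not values from a real project:

```python
def improve_until_threshold(model, new_batches, threshold=0.95):
    """Repeat the label -> train -> validate cycle until the model's
    validation score reaches the agreed threshold (placeholder logic)."""
    for batch in new_batches:      # each batch is a freshly labeled dataset
        model.train(batch)         # training: labeled data is the key factor
        score = model.validate()   # validation: held-out selected datasets
        print(f"validation score: {score:.3f}")
        if score >= threshold:
            return model           # agreed performance reached
    return model                   # ran out of data before the threshold
```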

For traffic sign recognition, a possible effective way of obtaining good results is to use two different ML models in parallel: one that is focused on detection and another one that is focused on classification. 

The challenge of precisely determining the position of each detected object, as previously mentioned, can be addressed by employing a complex detection model.  

This model receives input from other sensors, which map the surrounding environment of the Ego vehicle and assist in determining the position of the labeled traffic signs. This type of detection model can be trained using frame-based labels of the complete camera view (frontal/rear/side). 

Given that the labeled data provided is often in the format of sequences composed of multiple frames, these sequences must be deconstructed into individual frames, and label views are subsequently created from them.  

Traffic sign recognition is merely one aspect of what this model can learn, as it also covers all other environmental components and traffic participants. This complex model could then be deployed in an embedded environment, where it accurately recognizes and relays the position and bounding box of the existing traffic signs with a precision of a few dozen pixels.

Specifically for recognizing traffic signs, a classifier model could be used to identify the types of traffic signs present in the vicinity of the Ego vehicle. This model could be trained using image crops of specific traffic signs, as opposed to the full camera view used by the complex detection model mentioned earlier.
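
The detector/classifier split could be sketched as follows; both model interfaces and the crop helper are hypothetical stand-ins for real trained networks:

```python
def crop(image, box):
    """Cut an (x, y, w, h) region out of the full camera frame."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def recognize_signs(image, detector, classifier):
    """Two models in parallel roles: the detector finds where signs are
    in the full frame, the classifier decides what each cropped sign is."""
    results = []
    for box in detector.detect(image):                     # full-frame detection
        sign_class = classifier.predict(crop(image, box))  # crop-based classification
        results.append({"box": box, "class": sign_class})
    return results
```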

Here are some examples of the properties of each traffic sign that this model would need to recognize: 

  • type of traffic sign (defined per each country’s legislation) and the action category it is part of: restriction, warning, danger, directional, road equipment
  • text & values inside the sign (for example the value of a speed-limit sign) 
  • relevance for the Ego-vehicle (if the sign directly applies to the Ego vehicle or to other traffic participants) 
  • possible defects (structural or environmental damage, faded paint, stickers, graffiti) 

The final results would combine the output of the complex detection model with that of the classifier model, stabilized across multiple frames. This combination should be tested in a controlled environment before later being deployed to consumer vehicles.
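
One simple way to stabilize the combined output across frames is a majority vote over a sliding window of per-frame classifications; this is a generic smoothing technique offered as illustration, not necessarily what a production system would use:

```python
from collections import Counter, deque

class SignStabilizer:
    """Smooth per-frame classifications of a tracked sign by majority
    vote over the last `window` frames (generic illustration)."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, frame_class):
        self.history.append(frame_class)
        # The most common class in the window wins
        return Counter(self.history).most_common(1)[0][0]

# Usage: a flickering detection settles on the dominant class
stab = SignStabilizer()
for c in ["speed_80", "speed_30", "speed_80", "speed_80"]:
    stable = stab.update(c)
print(stable)  # -> "speed_80"
```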

How Might the Future with Autonomous Vehicles Potentially Alter Our Daily Lives? 

At present, we are in the process of teaching prospective autonomous vehicles how to adhere to our infrastructure and our traffic rules and regulations. However, have you considered the possibility that we might need to adapt our current rules?

The fact that we have been following the same traffic rules for decades suggests that change is not easily embraced, and the old rules were never initially conceived with such a futuristic vision in mind. 

Let us consider a simple example: the traffic light. It was first introduced in the US in 1914, and by 1930, all major American cities had such lights. Subsequently, no significant changes or updates have been made to most of them, as more than half of the US traffic light infrastructure still operates on fixed timers for each light, without sensors or cameras that can adjust the duration to accommodate the flow of cars. 

A recent proposal from a team at North Carolina State University, led by Associate Engineering Professor Ali Hajbabaie, encourages us to rethink our current traffic lights.  

The proposal begins with the premise that we will be living in an era of autonomous driving and a network of interconnected vehicles (comprising at least 40% of the existing fleet). This opens new possibilities to update and adapt our current infrastructure to facilitate the safe deployment of these autonomous vehicles.  

The proposed solution is the introduction of a “white phase” – a potential fourth light in the traffic light system. Its primary function is to signal to autonomous vehicles that it is safe to follow the movements of the car in front. This innovative approach could significantly enhance the efficiency and safety of our future transportation systems. 

Final Considerations 

To conclude, let us revisit our initial discussion: the current major trends in automotive software. What future insights could we obtain by asking the AI to summarize the current major trends in automotive software for the year 2044 (20 years later)?  

  1. Fully Autonomous Vehicles: By 2044, we could see the widespread adoption of Level 5 autonomous vehicles, which can operate without human intervention under all conditions. The software for these vehicles would be incredibly complex, capable of making split-second decisions for safe operation.
  2. Vehicle-to-Everything (V2X) Communication: Cars will not only communicate with each other (V2V) but also with infrastructure (V2I), pedestrians (V2P), and networks (V2N). This will allow for more efficient traffic management, improved safety, and a smoother driving experience.
  3. AI and Machine Learning: Advanced AI and machine learning algorithms will be commonplace in automotive software, enabling vehicles to learn from their experiences, predict potential hazards, and even personalize the driving experience for each user.
  4. Cybersecurity: As cars become more connected, the importance of automotive cybersecurity will increase. We can expect sophisticated software solutions to protect vehicles from hacking attempts and ensure the privacy and safety of passengers.
  5. Software Over-the-Air (SOTA) Updates: The ability to update automotive software wirelessly will become standard, allowing for real-time updates, bug fixes, and even new feature rollouts without the need for a physical service appointment.
  6. Integration with Smart Cities: Automotive software will likely be integrated with smart city infrastructure, contributing to traffic management, energy efficiency, and urban planning.
  7. Green and Energy-Efficient Software: As the push for sustainability continues, we can expect to see software designed specifically for electric vehicles (EVs) and hybrid vehicles, focusing on energy management, efficiency, and reducing environmental impact.
  8. Digital Twins: The use of digital twins, or virtual replicas of physical vehicles, will become more prevalent. These digital twins can be used for testing, simulation, and optimization in a risk-free virtual environment.
  9. Augmented Reality (AR) and Virtual Reality (VR): AR could be used to provide drivers with enhanced information about their surroundings, while VR could be used for training and simulation purposes.

The responses underscore that data continues to be a pivotal factor in the evolution of the automotive sector. Autonomous driving is becoming more accessible, enhancing the safety conditions for all traffic participants. Undeniably, this progress necessitates significant modifications to our existing road infrastructure, regulations, and driving legislation. 
