
How Engineers Can Prevent Bias in AI-Based Autonomous Vehicles

Olutayo Adegoke / Jun 13, 2024

Anton Grabolle / Better Images of AI / Autonomous Driving / CC-BY 4.0

Autonomous vehicles (AVs) are becoming a reality, aided by the incorporation of artificial intelligence (AI) into their systems. Depending on the level of automation, AVs perform driving tasks with either partial or no human intervention. The technology works by using sensors and cameras to capture real-time traffic and environmental data. The information from this perception layer is processed by an AI system that proposes real-time driving actions and traffic paths. These decisions are then executed through an activation subsystem that steers the vehicle, engages the gears, applies the brakes, and so on.
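To make the three layers concrete, here is a minimal Python sketch of how such a pipeline might be wired together. Every type name and all the placeholder logic below are illustrative assumptions, not any manufacturer's actual software.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """Hypothetical raw input from the perception layer's sensors."""
    camera_image: bytes   # raw pixels from the camera
    lidar_points: list    # 3D points from the lidar
    timestamp: float

@dataclass
class DrivingDecision:
    """Hypothetical output of the AI processing layer."""
    steering_angle: float  # radians; positive = left
    throttle: float        # 0.0 to 1.0
    brake: float           # 0.0 to 1.0

def perceive(frame: SensorFrame) -> dict:
    """Perception: turn raw sensor data into a scene description."""
    # A real AV would run object detection, lane finding, etc. here.
    return {"pedestrians": [], "lanes": [], "obstacles": []}

def decide(scene: dict) -> DrivingDecision:
    """Processing: an ML system proposes real-time driving actions."""
    # Placeholder rule; real systems use learned models and planners.
    if scene["pedestrians"]:
        return DrivingDecision(steering_angle=0.0, throttle=0.0, brake=1.0)
    return DrivingDecision(steering_angle=0.0, throttle=0.3, brake=0.0)

def actuate(decision: DrivingDecision) -> None:
    """Activation: send the commands to steering, gears, and brakes."""
    print(f"steer={decision.steering_angle:+.2f} "
          f"throttle={decision.throttle:.1f} brake={decision.brake:.1f}")

# One tick of the control loop:
actuate(decide(perceive(SensorFrame(camera_image=b"", lidar_points=[], timestamp=0.0))))
```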

AVs rely on complex machine learning (ML) systems for real-time driving decisions. Vehicles were once the domain of conventional fields like mechanical engineering; today, computer systems powered by ML are enabling new robotic applications. Engineers developing these systems have traditionally employed advanced methods and tools to solve problems related to quality, reliability, and safety. A highly regulated transportation industry benefits from such tools while ensuring compliance. However, the current practice of embedding AI introduces new challenges and hence requires new expertise. Notably, scholars have identified bias, or a lack of fairness, as a major reason AI systems fail. Applying fairness/bias criteria involves dividing the population into privileged and unprivileged groups. The problem is that unprivileged groups do not enjoy the benefits of AI in proportion to those enjoyed by privileged groups, which also means that unprivileged groups are disproportionately exposed to risks and harms.
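A worked example may help. One common way to quantify this disproportion is the disparate impact ratio: the unprivileged group's rate of favorable outcomes divided by the privileged group's. The sketch below uses invented toy outcomes purely for illustration; a widely cited rule of thumb flags ratios below 0.8.

```python
def benefit_rate(outcomes: list[bool]) -> float:
    """Fraction of a group receiving the favorable outcome."""
    return sum(outcomes) / len(outcomes)

# Toy data: True = the AI produced the correct/beneficial result.
privileged   = [True, True, True, False, True, True, True, True]     # 7/8
unprivileged = [True, False, True, False, True, False, False, True]  # 4/8

# Disparate impact ratio; values below ~0.8 are commonly flagged.
ratio = benefit_rate(unprivileged) / benefit_rate(privileged)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.57 -> potential bias
```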

For example, a growing literature documents the susceptibility of women and Black people to AI risks. Research has demonstrated that facial recognition systems misclassify Black women at far higher rates than other groups. Hence, a question arises: how do we prevent vulnerable groups from becoming more susceptible to AV accidents? Regulations must recognize and address this question. Other stakeholders, including vehicle manufacturers, must likewise be cautious and ensure that vehicles are equipped with comprehensive safety features that prevent discrimination. The question is particularly relevant considering the new EU AI Act, which classifies AI applications in transport as high risk.

Quality tools have always been available to engineers to test for safety. One such tool is quality function deployment (QFD). QFD documents customer requirements and matches them against the design features intended to satisfy them, so that customer needs are considered early in the product development process. To address risks related to AI bias in AVs, fairness can be incorporated into QFD.

A QFD process should begin by identifying customer needs. In the case of AVs, likely customer requirements include quality, safety, affordability, reliability, security, performance, and comfort. Additional requirements connected to AI include transparency, trust, privacy, cybersecurity, and fairness. This list is non-exhaustive, and engineers should research these requirements to inform their judgment; whatever the final list, fairness must be on it.

The next stage in QFD is to define the technical features that will satisfy these requirements. The perception, processing, and activation subsystems are the relevant technical features, and each must be designed so that the customer requirements can be attained. The task here is to prioritize and quantitatively rank how strongly each technical feature correlates with each customer requirement. For instance, the camera's ability to capture images of dark skin at a level of performance comparable to how it captures images of lighter skin is crucial to fairness. Therefore, the correlation between camera functionality and fairness should be weighted high (for example, 10 on a scale of 1 to 10). A high weight means the camera must be designed to meet a defined performance level; cheap alternatives adopted as cost-saving measures that compromise safety become less attractive.
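A minimal sketch of this weighting exercise follows. All the importance and correlation scores are invented for illustration; in practice they would come from the engineering team's analysis and customer research.

```python
# House-of-quality sketch: customer requirements scored against technical
# features. Camera-fairness is weighted 10, as argued above; the activation
# subsystem's link to fairness is weak, so it gets a low score.
requirement_importance = {"safety": 10, "fairness": 9, "comfort": 6}  # customer priority, 1-10
features = ["camera", "ml_processing", "activation"]

correlation = {  # feature-requirement correlation, 1-10
    "safety":   {"camera": 9,  "ml_processing": 9,  "activation": 8},
    "fairness": {"camera": 10, "ml_processing": 10, "activation": 2},
    "comfort":  {"camera": 3,  "ml_processing": 5,  "activation": 7},
}

# Technical priority = sum over requirements of importance x correlation.
priority = {
    f: sum(imp * correlation[req][f] for req, imp in requirement_importance.items())
    for f in features
}
for feature, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score}")
# ml_processing: 210, camera: 198, activation: 140
```

Ranking the features this way makes explicit how a high fairness weight pulls the camera and ML subsystems to the top of the design agenda.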

The ML processing step also has a significant relationship to fairness. If the concepts or data used to train the AI are not representative of a certain race or gender, the system may be limited in its ability to predict correct outcomes for that demographic group. The activation system, by contrast, is unlikely to have a strong relationship to fairness and should be weighted accordingly. Similar exercises are performed for the other customer requirements and technical features.
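One practical way to catch this kind of gap is disaggregated evaluation: measuring the model's performance separately for each demographic group in the validation data. The records and group labels below are invented purely to illustrate the idea.

```python
from collections import defaultdict

# Hypothetical validation records: did the perception model detect the person?
records = [
    {"group": "lighter_skin", "detected": True},
    {"group": "lighter_skin", "detected": True},
    {"group": "lighter_skin", "detected": True},
    {"group": "darker_skin",  "detected": True},
    {"group": "darker_skin",  "detected": False},
    {"group": "darker_skin",  "detected": False},
]

hits, totals = defaultdict(int), defaultdict(int)
for rec in records:
    totals[rec["group"]] += 1
    hits[rec["group"]] += rec["detected"]

for group in totals:
    print(f"{group}: detection rate {hits[group] / totals[group]:.0%}")
# lighter_skin: 100%, darker_skin: 33% -> collect more representative data
```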

By performing this exercise, the key design understanding needed to build a quality, beneficial autonomous vehicle system can be gained early in the development phase. Like QFD, other engineering quality tools can be adapted to incorporate fairness/bias criteria. This will increase the chance that eventual regulations reflecting these requirements are complied with, to the benefit of all, including vulnerable groups. These adapted tools can also be applied to other AI systems, such as those used in health, finance, education, security, defense, and other important applications.

Authors

Olutayo Adegoke
Olutayo Adegoke is the founder of Decolonise AI, a Sweden-based social innovation lab that specializes in developing processes for auditing bias in AI systems. He holds a PhD in Production Technology and has over 10 years of industry experience.
