Building the Future: Fairness, Bias & Ethics in AI

The recent rapid adoption of Artificial Intelligence (AI) in organizations of all sizes and across industries is exciting for the AI industry, but it must also come with a side of caution. 

AI professionals must ensure that AI models not only act in ways that fulfill their use cases, but also account for fairness, bias and ethics. Here’s a closer look at these three concepts from the talk “Fairness, Bias and Ethics in AI” given by DefinedCrowd CTO João Freitas at Microsoft’s 2021 Building the Future event.

Fairness is mainly a social concept, based on individual perception and cultural norms. A fair system performs comparably no matter the situation and does not favor one person or group over another. Unfairness has surfaced in many AI applications, including image recognition, customer service, and automated processes such as job application screening.
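To make that idea concrete, here is a minimal sketch of one common way to check whether a system favors one group over another: comparing the rate of favorable outcomes across groups (sometimes called the demographic parity difference). The metric choice, the toy data and the group labels are illustrative assumptions, not material from the talk.

```python
# A minimal sketch: compare the rate of favorable decisions across two groups.
# All data and names below are illustrative, not from the talk.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Difference in favorable-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favorable, e.g. "invite to interview")
    groups:   list of group labels, aligned with outcomes
    """
    def positive_rate(group):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(decisions) / len(decisions)

    return positive_rate(group_a) - positive_rate(group_b)

# Toy example: screening decisions for applicants from two groups.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups, "A", "B")
print(f"Favorable-rate gap between groups: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests comparable treatment; a large gap is a signal worth investigating, though no single metric captures fairness on its own.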

It’s important to note that unfairness can be introduced at any point in the data collection and annotation process, and from there it propagates through the rest of the model.

Bias is based in mathematics and comes from the data we use to train our AI models. Issues with bias arise when a model is trained on data from the real world, because that data most likely already perpetuates a bias.
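Because the bias already lives in the data, it can often be surfaced before a model is ever trained. The sketch below is a hedged illustration with made-up column names (“gender”, “hired”) and a toy dataset; it computes the positive-label rate per group, and whatever skew shows up here is exactly what a model trained on this data would learn.

```python
# A minimal sketch of auditing training data for skew before training a model.
# The dataset, column names and values are illustrative assumptions.

from collections import Counter

def label_rate_by_group(rows, group_key="gender", label_key="hired"):
    """Positive-label rate per group in the raw training data."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: round(positives[g] / totals[g], 2) for g in totals}

training_rows = [
    {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 1},
    {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 0},
]

print(label_rate_by_group(training_rows))
# {'F': 0.33, 'M': 0.67}: the skew is in the data before any model sees it
```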

Ethics is based in philosophy and brings accountability, along with questions of right and wrong, into the conversation. When examining ethics in data collection, the PAPA framework (Privacy, Accuracy, Property and Accessibility) is a strong standard on which to base decisions.

Watch the full video below for more insights on fairness, bias and ethics in AI.