A company is about to implement its first data science use case, and nothing seems to stand in the way of a data-driven business. Yet the fairness of the models is often overlooked.
Imagine that your company is at the end of its journey towards a first successful use case in data science. The infrastructure has been created, the models trained, and the process transformed. The results are excellent, and the step towards a data-driven business is proudly announced. What can go wrong now? Most data science use cases have a crucial blind spot: the fairness of the models and the resulting consequences.
Almost all companies face the task of creating added value from their data. Many still fail at selecting, implementing, and operationalizing the right data science and artificial intelligence use cases. Often the results fall short because of poor data quality, or the processes have not been adapted because data science expertise is lacking. It is therefore not surprising that the question of whether an algorithm's decisions are fair is left out in most cases.
On the one hand, there is a lack of awareness of the topic; on the other hand, there is a lack of know-how on how algorithms can be designed with fairness in mind. The question therefore arises as to how companies can ensure that their data science and AI use cases are free from discrimination.
The basic assumption is often that algorithms make objective decisions, i.e., decisions based on numbers, data, and facts. This assumption is not wrong, but it ignores the fact that the data on which algorithms are trained often reflects real, existing discrimination, which is then carried over during training. If, for example, an algorithm is supposed to filter applications for the most promising candidates, it will learn from the company's previous hiring decisions, which form the training data.
Amazon experienced this as early as 2014: the algorithmic system used in the company's recruitment process did not evaluate applicants for software development roles in a gender-neutral manner and instead reproduced the previous recruitment pattern, in which male applicants and hires were overrepresented. When this became known, it caused an outcry in the media and considerable reputational damage for Amazon.
Whether made by a human or a machine, it is not always clear what makes a decision fair. Take a credit decision as an example. One could call the decision fair if both groups receive positive or negative decisions in equal proportions, i.e., if men and women are granted loans at similar rates. It is also conceivable to call the decision fair if the approval rates of both groups are at the same level among those applicants who actually qualify for a loan.
In the loan example, this would mean that loan approvals are not distributed equally across both groups across the board, but only among those who qualify for a loan. For this use case, that is a more realistic and more economically sensible definition of fairness, although even this definition is not optimal for the example described. How the right fairness definition is selected for a given use case is explained in the whitepaper "Relevance of fair algorithms for Company."
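To make the two criteria tangible, here is a minimal sketch with invented toy data (the groups, outcomes, and qualification labels are purely illustrative and not from the article): the first function compares approval rates across groups regardless of qualification, the second compares them only among qualified applicants.

```python
# Toy comparison of two fairness criteria for a hypothetical credit decision.
import numpy as np

group     = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])
approved  = np.array([1,   1,   0,   1,   1,   0,   0,   1])  # 1 = loan granted
qualified = np.array([1,   0,   0,   1,   1,   0,   1,   1])  # 1 = actually creditworthy

def demographic_parity(group, approved):
    """Approval rate per group, regardless of qualification."""
    return {g: approved[group == g].mean() for g in np.unique(group)}

def equal_opportunity(group, approved, qualified):
    """Approval rate per group, restricted to qualified applicants."""
    return {g: approved[(group == g) & (qualified == 1)].mean()
            for g in np.unique(group)}

print(demographic_parity(group, approved))            # {'f': 0.5, 'm': 0.75}
print(equal_opportunity(group, approved, qualified))  # {'f': 0.67, 'm': 1.0}
```

In this toy data the same decisions look different depending on the criterion: approval rates differ across the board, and they also differ among the qualified applicants, which is why the choice of criterion matters.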
The credit example shows that different fairness criteria can be applied to the same question. Which standard a model's fairness is measured against depends heavily on the business context, and the sensitive attributes that need protection can also differ from application to application.
To ensure the fairness of algorithms, there are technical options during development on the one hand and organizational levers in the company on the other. The technical options essentially attach to the three steps of a classic machine learning pipeline: data pre-processing, modeling, and (result) post-processing:
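As an illustration of the first of these steps, the following sketch shows a simple pre-processing intervention in the spirit of reweighing (Kamiran & Calders): training records are weighted so that the sensitive attribute and the historical outcome become statistically independent. The function name and toy data are assumptions for illustration, not part of a specific library; analogous interventions exist for the modeling step (fairness constraints in the training objective) and for post-processing (group-specific decision thresholds).

```python
# Minimal reweighing sketch: weight = P(group) * P(label) / P(group, label).
# Records from over-represented (group, label) combinations are down-weighted.
from collections import Counter

def reweighing_weights(sensitive, labels):
    n = len(labels)
    group_counts = Counter(sensitive)               # occurrences per group
    label_counts = Counter(labels)                  # occurrences per label
    joint_counts = Counter(zip(sensitive, labels))  # occurrences per (group, label)
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(sensitive, labels)
    ]

# Toy data: group membership and historical outcome (1 = hired / approved)
sensitive = ["m", "m", "m", "f", "f", "f"]
labels    = [1,   1,   0,   1,   0,   0]
print(reweighing_weights(sensitive, labels))  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```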
In the company organization, too, there are important levers and guard rails for fairness awareness, which pave the way for non-discriminatory algorithms even before actual solution development begins:
The aim should be to develop a company- and business-specific bias impact statement that becomes an integral part of algorithmic solution development. It can be adjusted to the organization's requirements, but it must be followed consistently and stringently in all development processes. It ensures that decision recommendations produced by algorithms, as well as fully automated decisions, are handled responsibly within the company, and it helps prevent the algorithms in use from acting unethically or violating applicable law.
Automated fairness takes work and must be checked regularly. It is true that human decision-making processes also contain errors and sometimes systematically disadvantage groups or individuals. However, these cases are often accepted or dismissed as isolated incidents, especially because, unlike with AI, they are usually not systematically recorded or analyzed. Algorithms can make decisions more fairly and more transparently than humans, but this requires intensive observation and constant adjustment of the models. If a company neglects this, existing inequalities in the data can be exacerbated, a risk that companies can no longer afford. Regulation of artificial intelligence can also be expected to intervene more strongly at the national and transnational levels in the future, as the EU's recent actions show.
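What such ongoing observation could look like in practice is sketched below: a periodic check that recomputes a simple approval-rate gap over recent decisions and raises an alert when the gap exceeds a threshold. The metric, data, and threshold are illustrative choices, not a prescribed standard.

```python
# Hedged sketch of a recurring fairness check alongside regular model monitoring.
def approval_rate_gap(decisions):
    """decisions: list of (group, approved) tuples from recent model output."""
    counts = {}
    for group, approved in decisions:
        total, positive = counts.get(group, (0, 0))
        counts[group] = (total + 1, positive + approved)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

recent = [("m", 1), ("m", 1), ("m", 0), ("f", 1), ("f", 0), ("f", 0)]
gap = approval_rate_gap(recent)
if gap > 0.2:  # alert threshold chosen purely for illustration
    print(f"Fairness alert: approval-rate gap of {gap:.2f} between groups")
```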