
Equitable AI - Addressing Algorithmic Biases in Talent Screening

Updated: Mar 8, 2023


The contents of this article do not constitute legal advice; they are our interpretation and are provided for general information purposes only.


The rapid development of AI driven by large, powerful transformer models like ChatGPT will not only amplify the performance of the AI tools that leverage them but also exacerbate their inherent biases, reintroducing them into important decision-making processes such as candidate screening. The Society for Human Resource Management (SHRM) found that 88% of companies globally already use AI in some way for HR purposes. Yet most AI solutions currently in use are built on foundation models trained on large unlabelled datasets, which, as a result, most certainly host biases.


Can we have our cake and eat it too?


Various de-biasing techniques can alleviate the problem outside the foundation model, but it is important to look at the context to see which technique is a good fit. TEXpert AI took a different approach to reducing bias in talent screening: rather than following existing solutions that promote equal treatment by explicitly ignoring protected information such as gender, ethnicity, and sexual orientation, TEXpert AI's screening tool strives to create more equitable outcomes in the talent screening process with an Equitable AI.
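To make the "outside the foundation model" point concrete, the sketch below shows one generic post-processing technique: group-aware shortlisting that equalises selection rates across groups. The function, data, and rate are hypothetical illustrations, not a description of TEXpert AI's actual method.

```python
# Illustrative sketch of one post-processing de-biasing technique:
# group-aware shortlisting that equalises selection rates. Names and data
# are hypothetical; this is not TEXpert AI's actual method.
from typing import Dict, List, Tuple

def equalise_selection_rates(
    scored_candidates: List[Tuple[str, str, float]],  # (candidate_id, group, model_score)
    target_rate: float,
) -> List[str]:
    """Shortlist roughly the same proportion of candidates from every group.

    A single global score cut-off inherits any bias in the upstream model's
    scores, so instead each group is ranked internally and the top
    `target_rate` share of that group is shortlisted.
    """
    by_group: Dict[str, List[Tuple[str, float]]] = {}
    for candidate_id, group, score in scored_candidates:
        by_group.setdefault(group, []).append((candidate_id, score))

    shortlisted: List[str] = []
    for members in by_group.values():
        members.sort(key=lambda m: m[1], reverse=True)     # best scores first
        quota = max(1, round(target_rate * len(members)))  # per-group quota
        shortlisted.extend(candidate_id for candidate_id, _ in members[:quota])
    return shortlisted

# Example: shortlist ~50% of each group, regardless of how skewed the scores are.
pool = [("c1", "group_a", 0.91), ("c2", "group_a", 0.83),
        ("c3", "group_b", 0.78), ("c4", "group_b", 0.55)]
print(equalise_selection_rates(pool, target_rate=0.5))  # ['c1', 'c3']
```

Whether such a technique is appropriate, or even lawful, depends heavily on the jurisdiction and the use case, which is exactly why context matters when choosing a de-biasing approach.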


What’s an Equitable AI according to us?


The term “Equitable AI” does not yet have a legal definition, but it can be thought of as AI that is developed and deployed in a manner that recognises each group’s different circumstances and allocates opportunities proportionately to reach an equal outcome, with fairness and transparency, while respecting privacy rights.


Equitable AI in the context of a workplace

Equitable AI in the workplace aims to produce fairer outcomes, increase opportunities, and improve workplace success for all groups, including those underrepresented owing to race, ethnicity, disability, age, gender identity or expression, religion, sexual orientation, or economic status.


How does TEXpert AI achieve this?


TEXpert AI is a trusted third-party data intermediary that collects, analyses, and stores sensitive data, including sexual orientation, gender, socio-economic background, and ethnicity, to facilitate legitimate and secure data analysis. The collected data is then leveraged to remove potential biases ingrained in candidate-screening algorithms. At the last Open Algorithms Network meeting on Inclusive AI, the UK Government suggested using third parties to handle protected data as a potential solution to enable more gender-informed data and create inclusive datasets that would ultimately lead to inclusive AI in the public sector. TEXpert AI is a third-party diversity data intermediary currently providing this solution in the private sector.


Is there a framework for Equitable AI?


As of now, the UK does not have an AI framework that addresses Equitable AI. The current landscape presents fragmented legislation that deals with different aspects of the ethical and responsible use of AI. The key regulations include the UK Data Protection Act and the General Data Protection Regulation (GDPR), which relate to the data side of AI models. In the US, biases in algorithms have long been a core issue and are being addressed in the White House’s Blueprint for an AI Bill of Rights. In Europe, the EU AI Act proposal takes a deeper look at the model side of AI and classes CV-scanning tools that rank job applicants as a high-risk application of AI, even though 40% of companies globally are actively using AI in that manner.


Frameworks seem to take different shapes and colours in different jurisdictions, but the general tendency is to agree on the equitable design, development, deployment and monitoring of AI to achieve ethical and responsible outcomes.


The World Economic Forum (WEF) issued a Blueprint on Equitable AI off the back of “growing concerns about bias, data privacy and lack of representation [which means we must ensure] that all affected stakeholders and communities reap the benefits of the technology, rather than any harm.”


How is TEXpert AI achieving this?


Embracing diversity, inclusion & belonging

The WEF Blueprint on Equitable AI stressed that it is important “to acknowledge the wide spectrum of human identity across dimensions of diversity, including race, gender, age, sex, socio-economic status, and religion for employers to support inclusive AI ecosystems. It is also necessary to create space for workers throughout all levels to explore their own implicit and explicit biases”. TEXpert AI gives employers visibility into their diversity landscape across seniority levels and captures inclusion levels and feedback through its DEI data collection and analytics solutions.


Transparency in Data Collection & Processing

TEXpert AI handles data in a manner that conforms with the applicable local regulations such as country-specific data protection acts and GDPR.


Data Security in Sourcing

An Equitable AI is not built on data scraped from the internet. TEXpert AI is a trusted third-party data intermediary that collects, analyses, and stores sensitive data including sexual orientation, gender, and ethnicity in a non-personally identifiable manner and shares data insights to promote diversity and inclusion.
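As an illustration of what "non-personally identifiable" storage can look like in practice, here is a minimal sketch assuming a salted-hash pseudonymisation scheme where only aggregated insights are shared onwards. The identifiers, scheme, and fields are assumptions for illustration and do not describe TEXpert AI's actual architecture.

```python
# Minimal sketch of pseudonymised storage of sensitive attributes, assuming a
# salted one-way hash and aggregate-only sharing. Illustrative names only;
# this does not describe TEXpert AI's actual architecture.
import hashlib
import secrets
from collections import Counter

SALT = secrets.token_bytes(16)  # stored separately from the data itself

def pseudonymise(direct_identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a salted hash."""
    return hashlib.sha256(SALT + direct_identifier.encode("utf-8")).hexdigest()

# Sensitive attributes are keyed by pseudonym, never by name or email.
records = {
    pseudonymise("alice@example.com"): {"ethnicity": "group_a", "gender": "woman"},
    pseudonymise("bob@example.com"):   {"ethnicity": "group_b", "gender": "man"},
}

# Only aggregated insights (counts, proportions) are shared onwards.
ethnicity_breakdown = Counter(record["ethnicity"] for record in records.values())
print(dict(ethnicity_breakdown))  # e.g. {'group_a': 1, 'group_b': 1}
```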


Moral & Ethical

Ethical use of AI entails far more than adhering to legal requirements. Morality is not codified in law, but it ensures that the human experience is balanced with economic benefit when AI is developed and deployed. TEXpert AI has been designed to enable companies to take an objective, proportionate, and targeted data-driven approach to implementing DEI across their workforce and recruitment.


Contextual Application

TEXpert AI works towards promoting equity in the workforce by offering an AI solution that matches skilled, diverse candidates with employment opportunities. It does this by countering biases against groups identified as underrepresented in the workforce, enabling organisations to implement targeted DEI strategies more efficiently. The model not only highlights underrepresented candidates but also indicates their matching skill set, which can be compared against the matching skill sets of overrepresented candidates in a “tie-break situation”.
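The hypothetical sketch below illustrates the kind of tie-break comparison described above: candidates are ranked by skill match first, and underrepresentation only decides between equally matched candidates. All names, fields, and the ranking rule are illustrative assumptions, not TEXpert AI's actual model.

```python
# Hypothetical sketch of a tie-break comparison: candidates are ranked by skill
# match first, and underrepresentation only decides between equally matched
# candidates. Illustrative assumptions only, not TEXpert AI's actual model.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Candidate:
    name: str
    skills: Set[str]
    underrepresented: bool   # relative to the employer's workforce baseline
    match_score: int = field(default=0)

def rank_with_tie_break(candidates: List[Candidate], role_skills: Set[str]) -> List[Candidate]:
    for candidate in candidates:
        candidate.match_score = len(candidate.skills & role_skills)  # overlapping skills
    # Primary key: skill match. Secondary key (tie-break only): underrepresentation.
    return sorted(candidates, key=lambda c: (c.match_score, c.underrepresented), reverse=True)

role = {"python", "sql", "stakeholder management"}
pool = [
    Candidate("A", {"python", "sql"}, underrepresented=False),
    Candidate("B", {"python", "sql"}, underrepresented=True),   # equal skills: tie-break applies
    Candidate("C", {"python"}, underrepresented=True),
]
for candidate in rank_with_tie_break(pool, role):
    print(candidate.name, candidate.match_score, candidate.underrepresented)
```

Restricting the underrepresentation signal to genuine ties is what keeps this kind of selection within the scope of the lawful mechanisms discussed next.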


Legal Compliance

Employers can then select skilled underrepresented candidates fairly and legitimately under the Positive Action provisions of the Equality Act 2010 in the UK, under similar principles in the EU, and under Affirmative Action in the US.


If equality is the destination, then equity has to be the path.









Mercy Wambui Mungai

Co-author

Q-Legal

LLM in Technology, Media & Telecommunication Law








Drishdey Caullychurn

Co-author

Founder

TEXpert AI



