On behalf of

Tata Consultancy Services

Ethical AI: A New Strategic Imperative for Recruiting and Staffing

The content on this page was commissioned by our sponsor, TCS.

MIT SMR Connections

MIT SMR Connections is an independent content creation unit within MIT Sloan Management Review. We develop high-quality content commissioned and funded by sponsors. We welcome sponsor input during the development process but retain control over the final product. MIT SMR Connections operates independently of the MIT Sloan Management Review editorial group.

AI systems have the potential to greatly reduce the unconscious bias inherent in recruiting and hiring processes while increasing efficiency and freeing up human recruiters to work on higher-value tasks. At the same time, it’s crucial to ensure that AI is being used responsibly, fairly, and ethically. In this Executive Conversation, thought leaders from TCS and Randstad discuss how to successfully achieve both goals.

Artificial intelligence already touches our lives daily, given its use in everything from entertainment to e-commerce. But not everyone knows that companies are using AI for recruitment and staffing as well. This approach presents an enormous opportunity to improve the fairness of current hiring processes, especially when it comes to removing unconscious bias. AI systems also promise to automate tedious processes, such as job-description creation and interview scheduling, allowing the people involved to spend more time with candidates to improve both the accuracy of the hires and the experience of the job seekers.

There’s no question that humans are subject to unconscious bias in the hiring process. In fact, there are more than 180 different cognitive biases that people are susceptible to, according to the Cognitive Bias Codex created by John Manoogian and Buster Benson. Training to root out these biases helps address this problem but, as other research indicates, doesn’t entirely resolve it.

Software, on the other hand, is not subject to the same kind of cognitive biases that plague humans. Using an AI algorithm trained on a machine learning (ML) data set is a major advance — as long as the system does not introduce its own biases and is used to make insights-driven decisions.

Whether it’s reviewing a job applicant’s information, selecting candidates to interview, making assessments, or conducting interviews, every step of the traditional hiring process is vulnerable to unconscious human biases. By applying appropriate, ethical, and responsible AI to those different stages, you can reduce and even eliminate the unconscious bias inherent in those processes. This is an important goal because companies are increasingly seeking fairer and more inclusive outcomes in terms of who’s being selected and who’s being hired so that, in turn, they have much more representative workforces. Research shows companies that are more diverse and inclusive perform better than others and have high levels of employee engagement as well.

In addition to reducing bias, AI-driven systems for hiring can improve job applicants’ experience by giving them more visibility into the candidate selection process. Historically, each job opening might receive hundreds of applicants; the vast majority would hear nothing back until the position was closed (and often not even then). AI can augment human capability to interact with and even provide assessments for these individuals and automate notifications to potential candidates, including the news that they don’t meet the required minimum qualifications for the role. This is a dramatic improvement in the job-seeker experience.

AI can also highlight internal candidates who are good fits for open positions but who might otherwise have been overlooked. This capability also helps increase employee satisfaction by enabling mobility and supporting their career paths, which will lead to higher engagement and retention.

For HR managers, the benefits are significant: AI enables them to sort dozens, if not hundreds, of resumes rapidly, pinpointing the candidates who best meet the job requirements and therefore have the highest probability of being the right match. Rather than simply presenting resumes in the order they arrived, the system can boost recruiter productivity as well as fairness and inclusivity by pushing qualifying resumes that came in later to the top of the pack. The result: a better fit for both the candidate and the company doing the hiring.

Avoiding Unintentional Bias

We all know humans have preexisting unconscious biases that affect hiring. The initial hope was that AI would remove 100% of the bias in recruiting and hiring. But in the past few years, people quickly realized that while machines are not subject to unconscious bias, they can suffer from algorithmic bias. And while some may say that the human brain is the ultimate black box, early research with deep learning systems showed algorithms to be equally opaque in their processing — that is, it was almost impossible to figure out how the algorithms arrived at the end result.

If bias exists in an AI-driven recruiting system, the potential impact on inclusivity could be significant. For instance, if there were an inherent system bias against any specific group of people, that bias would be consistent across all of the system’s decisions — not variable, as is the case with human recruiters. There is potential for damage in terms of lawsuits, regulatory fines (notably under Europe’s General Data Protection Regulation, or GDPR), shareholder and employee concerns, and reputational harm.

Transparent or “explainable” AI systems, by contrast, make it possible to understand how they came to a particular result. (See “Explainable AI Is Ethical AI.”) Key to this concept in terms of recruiting and hiring is being able to explain why Candidate A was selected, while Candidate B was not. The stakes for getting it right are high.

Using Best Practices for Fairness and Inclusion

The fundamental principle is to uphold equality of opportunity for all. That means if two people are equally qualified, they should have an equal opportunity for an outcome (such as being selected for an interview or, eventually, being hired). Organizations should set goals for diversity, equity, and inclusion, striving to make the demographics of the company’s workforce align with those of its constituents, then measuring how well they are meeting those goals. Doing that, of course, requires collecting key demographic data such as gender, age, race, ethnicity, educational credentials, and so on. The data is necessary because, without demographic information, organizations can’t test and measure whether their efforts result in fair and inclusive outcomes.
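
One common way to test whether outcomes are fair, once demographic data is available, is to compare selection rates across groups. The sketch below is a simplified illustration, not any specific vendor’s method; the record format and group labels are hypothetical. It applies the widely used “four-fifths” benchmark, which flags any group whose selection rate falls below 80% of the highest group’s rate:

```python
from collections import Counter

def selection_rates(candidates):
    """Selection rate (selected / applicants) per demographic group.

    `candidates` is a list of (group, was_selected) pairs -- a
    hypothetical, simplified record format for illustration.
    """
    applied = Counter(group for group, _ in candidates)
    selected = Counter(group for group, chosen in candidates if chosen)
    return {group: selected[group] / applied[group] for group in applied}

def adverse_impact_ratios(candidates):
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 fails the common "four-fifths" benchmark and
    flags the process for closer review.
    """
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical baseline data from a manual screening process:
# group A: 40 of 100 selected; group B: 20 of 100 selected.
data = ([("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 20 + [("B", False)] * 80)
print(adverse_impact_ratios(data))  # → {'A': 1.0, 'B': 0.5}, so B is flagged
```

The same measurement can be run on the manual process first, establishing the baseline the article recommends, and then re-run on the AI system’s output for comparison.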

There are two ways to collect demographic data. The first is to ask job seekers for their demographics and gain their permission to use this information within acceptable guidelines. The second is to use an anonymized data set provided by another source or via a separate AI system that can infer demographic characteristics with a high degree of accuracy.

Many companies tend to skip this step, but it can be extremely helpful to perform baseline measurements of the accuracy and fairness of manual hiring processes before moving to an AI system.

Having a human in the loop for actual decision-making is another best practice — one that is required by GDPR, for example, which places additional requirements on automated decision-making. This requires caution, though, as inserting a person into the process risks adding the problem of unconscious bias back into the equation. It can be useful to measure the fairness and inclusivity of the predictions made by the AI solution against eventual decisions made by humans.

As a safeguard, it’s important to identify the stages where the potential for human bias is highest, then seek well-designed solutions that aim to systematically minimize that bias. For instance, one area that’s especially susceptible to human bias is the creation of job descriptions, which can result in discouraging historically underrepresented groups from applying. An AI-powered solution that streamlines keywords and qualification requirements can help create more equitable and inclusive job descriptions.

Interviewing is another area where unconscious bias can enter. Traditionally, many hiring decisions are made simply based on whether the interviewer liked the candidate — it can be challenging to avoid having unconscious bias influence otherwise seemingly “fair” decisions. AI systems can’t like or dislike people, of course, and thus can add value as a more neutral source working in tandem with humans.

The search-and-match stage can also be problematic, depending on the keywords used to find potential candidates. People can unconsciously exclude candidates solely based on the terms that they decide to search for, which can sometimes be as simple as a title search. A basic search-and-match solution can only return exact keyword matches, so unless recruiters can think of and search for all of the equivalent titles and keywords for a given job, they will unknowingly exclude qualified candidates who simply use titles and terms in their resumes other than the ones the recruiter searched for.

AI-powered search-and-match solutions can help prevent such unconscious exclusion by expanding queries to include equivalent and relevant terms to maximize inclusion. Solutions that anonymize individual names and demographic features such as age, gender, race, ethnicity, and national origin can prevent human bias in deciding who gets selected for an interview. These solutions can also be set to “blind” recruiters from viewing a candidate’s specific educational institution, as recruiters may otherwise favor graduates of elite universities and colleges. Organizations can tune these solutions to blind these and other attributes according to their diversity and inclusion goals.
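
These two ideas, query expansion and blinding, can be sketched in a few lines of code. The synonym table, field names, and record format below are illustrative assumptions only; production systems typically derive title equivalences from a job-title taxonomy or learned embeddings rather than a hand-built table:

```python
# Hypothetical synonym table; a real system would use a taxonomy or
# learned embeddings to find equivalent titles.
TITLE_SYNONYMS = {
    "software engineer": {"software developer", "programmer", "swe"},
    "recruiter": {"talent acquisition specialist", "sourcer"},
}

# Fields masked before a recruiter sees the record (configurable).
BLINDED_FIELDS = {"name", "age", "gender", "ethnicity", "school"}

def expand_query(title):
    """Expand a title search to include equivalent titles."""
    title = title.lower()
    return {title} | TITLE_SYNONYMS.get(title, set())

def blind(candidate, blinded_fields=BLINDED_FIELDS):
    """Return a copy of a candidate record with demographic and
    pedigree fields masked, per the organization's blinding policy."""
    return {key: ("[REDACTED]" if key in blinded_fields else value)
            for key, value in candidate.items()}

def search(candidates, title):
    """Match candidates on any equivalent title, then blind results."""
    wanted = expand_query(title)
    return [blind(c) for c in candidates
            if c.get("title", "").lower() in wanted]
```

With this sketch, a search for “Software Engineer” also surfaces candidates whose resumes say “Programmer,” and the records shown to the recruiter carry no name, demographic, or school information.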

Apply Tried-and-True Techniques to Mitigate Bias

Because it’s highly unlikely that any AI solution is bias free “out of the box,” it’s necessary to use bias-mitigation techniques. For instance:

  • Preprocessing aims to remove biased features from a data set before processing begins. Ensuring that the data you use to train your AI system is a fair and representative sample is one method of preprocessing. If you’re going to use historical data for training, for instance, you should ensure that, for at least 80% of the roles or job categories for which you hire, you collect demographic information for a statistically significant number of people within each role/category. This will allow you to measure for and mitigate bias during testing and before use in production.
  • Adversarial debiasing involves using one algorithm to help mitigate bias present in another. For example, one algorithm selects a candidate to recommend for a job within a certain data set. Then a second algorithm tries to “guess” the underlying sensitive attributes (gender, age, race, and so on) of the person recommended. If the second algorithm can successfully classify that person’s sensitive attributes, you can then tune the first algorithm until it produces candidate recommendations from which the second algorithm can predict those sensitive attributes no better than chance. This process can be used to tune out an algorithm’s ability to systematically favor or disfavor candidates based on gender, race, or another demographic factor.
  • Post-processing is designed to take inherently biased results and recast them in a fair and representative way. For example, if the applicants to a software engineer position are 70% male and 30% female, there is a danger, given differences in how men and women tend to represent their experience in resumes, that the female candidates might be ranked far down the list of matching candidates and never be reviewed or considered. The remedy is to apply a post-processing rule ensuring that every block of results (for example, 10) reviewed by a recruiter contains 70% male applicants and 30% female applicants.
  • Combining behavior and decision sciences can help mitigate unconscious bias in decision-making. This includes the use of color theory, speech processing, and physiopsychological parameters as permitted by government and industry regulations. These emerging techniques challenge conventional methods and help the decision maker with insights that are designed to be ethical and free of any unconscious bias.
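
As a concrete illustration of the post-processing rule above, using the hypothetical 70/30 software engineer example, the sketch below re-orders a ranked candidate list so that each block a recruiter reviews mirrors the applicant pool’s proportions. The function names and record format are assumptions for illustration, not a production implementation:

```python
from collections import deque

def reblock(ranked, group_of, quotas):
    """Re-order a ranked candidate list so each reviewed block mirrors
    the applicant pool's group proportions (a post-processing rule).

    ranked   -- candidates, best match first
    group_of -- function mapping a candidate to a group label
    quotas   -- per-block counts, e.g. {"M": 7, "F": 3} for blocks of 10;
                every label returned by group_of must appear in quotas
    """
    # One queue per group, preserving each group's internal ranking.
    queues = {group: deque() for group in quotas}
    for candidate in ranked:
        queues[group_of(candidate)].append(candidate)

    # Emit block after block, drawing each group's quota in turn until
    # every queue is drained (short groups simply yield fewer).
    out = []
    while any(queues.values()):
        for group, count in quotas.items():
            for _ in range(count):
                if queues[group]:
                    out.append(queues[group].popleft())
    return out
```

For instance, with 14 male and 6 female candidates ranked so that every female candidate sits below every male one, `reblock(ranked, lambda c: c[0], {"M": 7, "F": 3})` still places three female candidates in the first block of 10.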

Reaping the Benefits of Automated Hiring Processes

AI can also be used in conjunction with other systems to automate many recruiting and hiring processes. If candidates apply for jobs outside normal business hours, for example, an automated system could assess their applications and flag them for interviews if they meet the minimum job criteria. Via chatbot, the system could even schedule an interview with a good prospect on the spot. That, in turn, represents a great increase in efficiency (no more back-and-forth phone calls and emails) as well as a more pleasant, empowering experience for top job candidates. Research indicates that millennial job candidates, in particular, appreciate the convenience and instant gratification provided by chatbot service.

At the same time, automated systems can also improve interactions with candidates who do not meet the minimum job requirements or whose preferences do not match with the position. Candidates appreciate not wasting their time when a particular job is not right for them. An AI-powered system can immediately notify these applicants and recommend other open jobs for which they are qualified. The addition of natural language processing allows candidates to speak their replies to a system rather than struggle with a form on a mobile phone, for example. There is huge potential to provide a smoother, more efficient, and enjoyable experience for all candidates while increasing recruiter efficiency and job satisfaction.

In this way, AI can help create a consumer-grade experience, both for employees within the enterprise and for job candidates across their entire tenure with an employer. The vision is for a unified experience that offers multiple channels of interaction and lots of self-service options. With well-designed user personas and journeys, AI can also help provide just-in-time nudges to guide users to the next step, adding up to a more fulfilling experience. For example, if promising candidates have uploaded their resumes but still need to provide additional required documentation, the system can prompt them to follow up so their applications don’t fall out of consideration.

AI can also enable an organization to create talent pools that treat recruitment and hiring as a continuous process, as opposed to discrete, unconnected events. When an organization plans for its hiring needs over a period of years, for example, it can use AI to create curated groups of talent that it can source from over time, as opposed to all at once. This enables recruiters to build relationships that support more productive recruiting as job opportunities arise. AI can, therefore, enable an organization to take a much more strategic and holistic view of its hiring needs.

Building an AI Team for Staffing and Recruiting

In our experience, it is important to have a cross-functional team involved in your AI-enabled staffing and recruiting efforts, including:

  • AI experts. Depending on an organization’s size, scale, and resources, it may not need (or be able to find or afford) its own AI experts. But in an ideal scenario, having in-house AI domain experts is certainly an advantage, as these highly trained technologists will be able to understand the outcomes to be delivered — and whether the solution is capable of delivering them. AI experts, who are more familiar with relevant tools and how they work, can confirm whether a proposed AI solution is transparent, ethical, and in compliance with equal opportunity laws and other regulations. Toward that end, the largest organizations should have dedicated ethical AI experts on the team and may want to consider upskilling from within as well as searching outside.
  • Data scientists and data engineers. As with AI experts, it can be difficult to find and hire specialists in these roles, but it’s worth the effort to try (at least in larger organizations). People in these roles are experts in all things data. They’ll be able to say whether your data is representative — a necessary task — and will help identify what data resides in the organization, whether it’s of high-enough quality to use, and which outside data sources to tap, if necessary. They also ask tough questions, such as how a vendor trains its algorithm and how it ensures the AI solution is ethical and transparent. (For more on working with external partners, see “Fair and Inclusive AI: 10 Questions to Ask Vendors.”) This role may include responsibility for data modeling, another important function.
  • Lawyers. It’s critical to involve legal experts to ensure anything you do with regard to AI and automation is in compliance with all laws and regulations in each location where the solutions are proposed to be utilized.
  • Human resources. When considering the use of AI in recruitment, HR professionals will be able to provide important guidance with regard to various aspects of employment, including compliance with labor laws, employment standards, diversity, equity, and inclusion.
  • External auditor. This role is key for organizations developing their own AI solutions. Third-party algorithmic auditing for compliance with ethics and explainability is essential. If you’re using a vendor’s solution, you will need to ask about its auditing framework, including auditing technology and processes.
  • Organizational psychologists. With emerging focus on human behavior, it’s important to consider people dynamics right from the start. Organizational psychologists can help give an outside-in perspective during recruitment.

As with any technology project, your AI team should also include a project manager, business analysts, information security specialists, an executive sponsor, and user champions.

Leveraging AI to Benefit Talent, Clients, and Employees

Randstad looks for opportunities to automate routine tasks where the human element doesn’t add incremental value; interview scheduling is one great example. This type of automation enables our employees to spend more quality time engaging with and serving talent and clients. We aim to personally connect with our stakeholders, especially when there is added value to such interactions; otherwise, intelligent automation can be leveraged to perform routine functions.

We also seek to address traditional pain points within stakeholder experience, such as the application process. For instance, we know that many people apply to jobs outside of business hours, when our recruiters aren’t in the office. To address this, we have used several technology-enabled solutions to provide an interactive applicant experience, accessible 24/7, to more quickly connect qualified applicants with our recruiters. Our own chatbot has conducted nearly 1.4 million conversations with applicants, has scheduled more than 480,000 interviews, and has facilitated more than 135,000 hires in just one year.

At this significant scale, it’s worth noting that the average talent satisfaction rating of the chat experience has been 4.6 out of 5, representing a nearly 20% improvement over the legacy process. Furthermore, 76% of all interviews scheduled by the chatbot occur within 72 hours of a completed job application, with 22% scheduled the very same day of the application — effectively accelerating the connection of job seekers with job opportunities. Perhaps most interestingly, people who are hired after using the chatbot-powered application process work an average of 22% longer on assignments than people who don’t.

Randstad is committed to the ethical and responsible use of AI. We have been cautious and prudent with its application to ensure it serves the best interests of our stakeholders and mitigates possible pitfalls associated with the technology. To that end, we have developed our own AI principles, which include: “human forward,” (that is, using AI to benefit society as a whole), human oversight, transparency and explainability, fairness and inclusivity by design, privacy and security, and, of course, accountability. These principles guide us in the fair use of AI in support of our customers.

Reimagining a Talent Ecosystem With Intelligent Automation

With more than 500,000 employees worldwide, TCS understands the talent acquisition, management, engagement, reskilling, and retention space. Since the onset of the Covid-19 pandemic, TCS has revisited and reimagined its entire HR value chain ranging from virtual hiring to onboarding new employees remotely to engaging with associates for all activities. Today, 98% of TCS employees in 46 countries work remotely. Last year, TCS processed millions of job applications, hiring via video interviews and gamification-based hiring contests. At this scale, TCS has adopted intelligent automation in a big way with the Machine First Delivery Model (MFDM) and AI. In late 2019, TCS held India’s first-ever AI talent contest: HumAIn, with more than 30,000 student participants from 1,000 institutions.

TCS Workforce Analytics, a recent offering, is a combination of TCS intellectual property (IP), partner IP, and domain and technology accelerators. It complements human resource management systems with data and analytics solutions covering the talent experience, including analytics on the employee life cycle beginning even before recruitment. In addition, this offering provides solutions related to productivity, compliance, and well-being.

The future of work and the required skill sets are being shaped by profound changes in social, geopolitical, and technological arenas, driving staffing and recruiting companies toward lower cost-to-serve agile operating models, technology-led business transformation, and innovative workforce solutions.

The staffing and recruiting industry will be the key driver of growth and prosperity in this new era. With client and candidate experience being paramount, TCS believes that data-driven staffing strategies, leveraging AI/ML, and cognitive automation will shape the new world of work.

With more than a decade of extensive experience and seasoned recruitment domain consultants, TCS has been the leading business and technology partner for the staffing industry, helping some of the largest global staffing companies navigate this time of rapid change and transformation. TCS’s HiTech next-generation solution accelerators and frameworks across front-office, middle-office, and back-office functions, combined with deep expertise in market-leading recruitment technologies, ensure that TCS can fast-track growth and transformation journeys for its staffing clients.

Leveraging its deep contextual knowledge, TCS collaborates with leading industry analysts to define and deliver future-ready data-driven staffing solutions that form the basis for AI/ML-based intelligent hiring to enhance recruitment effectiveness and elevate business performance.

With its Business 4.0 framework, TCS continues to help customers transform into purpose-driven, resilient, and adaptable organizations. This capability was reflected in a recent Everest Group report that cites TCS’s credible investments in building a comprehensive portfolio of AI services, underpinned by strategic investments in developing domain-specific platforms and solutions, as a key strength.
