Like Birkin bags for the rich and famous, everyone wants artificial intelligence. From chatbots to autonomous vehicles, AI has taken the world by storm. In the strategic HR market, the applications of artificial intelligence in recruiting technology and talent management are still somewhat new, but demand keeps growing. Companies and vendors work every day to better understand how AI will affect their strategies, how it can boost them and how it can help their clients.

During a recent conversation, our resident expert in everything AI, Rabih Zbib, spoke about this emerging world and what it brings to everything we do here at Avature. With a Master of Science and Ph.D. from MIT, Rabih is the Director of Natural Language Processing & Machine Learning at Avature, where he applies these branches of artificial intelligence to solve challenges in talent acquisition and talent management. Naturally, we wanted to pick his brain to understand a bit more about AI. So what are the most critical elements to talk about when it comes to AI strategies?

Setting the Right Expectations in Your AI-powered Strategy

One of the main issues when it comes to AI is how to implement it. Both companies and vendors struggle with concerns when looking to deploy AI-based technology, caught in a constant “expectation vs. reality” dilemma.

“There is an expectation that AI is the magic bullet, but when it comes to the nitty-gritty, setting the expectations is the challenge, and it’s the challenge for the sector in general.”

-Rabih Zbib, Director of Natural Language Processing & Machine Learning at Avature

What Can AI Do For You?

Picking up on this, Rabih describes these expectations as two-fold.

The Can Do. What is realistic in terms of the current capability of the technology? What can it do? What can’t it do? It may seem that we’re in a moment in time where anything is possible.
But it’s essential to keep realistic expectations when developing strategies involving technology: it enhances processes, but it won’t solve any issues by itself. Write it down: ground yourself with actionable plans rather than just big ideas.

When it comes to Avature, Rabih highlights the importance of developing a partnership and feedback loop with the customer, noting that it’s not just about what the technology can do. He and his team are dedicated to listening to the customer’s real problems and taking them into account, so that all this information can be incorporated into the technical approach. What makes this aspect even more complicated is that the capabilities of artificial intelligence in recruiting and TM are advancing and changing all the time. This evolution is happening even as you’re reading this!

What Should AI Do For You?

The Should Do. What should we allow the technology to do? What shouldn’t it do, depending on the application sector?

“We may be comfortable letting it recommend which brand of sneakers we should be buying, but are we comfortable with the same technology recommending who we should hire? There’s a lot of nuance there.”

-Rabih Zbib, Director of Natural Language Processing & Machine Learning at Avature

In other words, he highlights the importance of an ethical application of AI. You’re probably starting to think about bias here, a critical consideration that we’ll explore in detail below. But there’s also an emphasis on the value of maintaining the human aspect.

Let’s take hiring as an example. With today’s technology, you could completely eliminate interaction between applicants and the recruiter, leveraging automated screening, knock-out questions and AI-powered scoring to identify the best candidate based on your pre-defined criteria. With everything set up correctly, you could even automate the offer process and hire someone without so much as a conversation.
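To make that end-to-end automation concrete, here is a deliberately minimal sketch of such a flow. It is purely our illustration, not Avature functionality: the criteria, field names and weights are all invented for the example.

```python
# Hypothetical sketch of a fully automated screening flow:
# knock-out questions first, then a weighted score over pre-defined criteria.

def knock_out(candidate, requirements):
    """Reject immediately if any hard requirement is not met."""
    return all(candidate.get(req) == expected for req, expected in requirements.items())

def score(candidate, weights):
    """Weighted sum over pre-defined, quantifiable criteria."""
    return sum(weights[c] * candidate.get(c, 0) for c in weights)

candidates = [
    {"name": "A", "work_permit": True, "years_experience": 5, "skill_match": 0.8},
    {"name": "B", "work_permit": False, "years_experience": 9, "skill_match": 0.9},
]
requirements = {"work_permit": True}                      # knock-out question
weights = {"years_experience": 0.1, "skill_match": 1.0}   # invented weights

eligible = [c for c in candidates if knock_out(c, requirements)]
best = max(eligible, key=lambda c: score(c, weights))
print(best["name"])  # candidate B is knocked out despite higher raw numbers
```

Everything in this flow is mechanical: no recruiter ever sees the candidates who were filtered out, or why.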
Efficient, but in practice, what kind of impression does this dehumanized approach give to candidates? And just as importantly, what kind of hard-to-quantify human insight is lost from the hiring process? An ethical application of artificial intelligence in recruiting and talent management should take people into account as well as data. Otherwise, your talent process is likely to become dehumanized and clinical, damaging the candidate or employee experience by centering on efficiency alone.

Key Considerations When Applying Artificial Intelligence in Recruiting and TM

Data and How to Use It

“Your technology is only as good as your data” is a critical mantra in the fields of artificial intelligence and machine learning. Data is the key component in building your machine learning strategy; it can make or break it. Not every vendor can offer experience, a wealth of data, or transparency about how that data was gathered, and this should be one of your key concerns when evaluating who to work with.

Many start-ups and vendors use AI as a selling point to address niche ideas and point challenges, offering a single solution to a single problem and failing to deliver beyond it. When the broader challenge is to optimize workflows, outcomes and user experience at the platform level, it isn’t clear how that can be achieved with outside, inorganic data.

When choosing an “off-the-shelf” solution, Rabih points out, you don’t know where the data behind the models comes from. Though many vendors in the market offer AI functionality that solves complex problems related to hiring and managing people, there’s a lot of doubt surrounding where and how the data was obtained, and therefore whether the resulting AI systems are suitable for solving your problems.
When it comes to Avature’s machine learning and AI deployment strategy, Rabih considers data a main pillar and a competitive differentiator, because we build it in-house. How are we able to do this?

“We have access to real data that spans multiple industries, multiple sectors and a long historical period. So that data is an opportunity but also a responsibility for all of the obvious concerns about data privacy and security.”

-Rabih Zbib, Director of Natural Language Processing & Machine Learning at Avature

Black-Box and White-Box Approaches to AI

Here we come to another issue in many AI deployment strategies: black box versus white box. Where black box refers to an opaque approach to AI, in which a user has limited control over the decisions being made by the algorithms, a white-box approach offers users the last word, giving them complete visibility and transparency into what data the algorithm is using and how suggestions are being made.

When it comes to Avature, we offer a platform with AI embedded in its architecture. Our experts are just a phone call away, available to explain in detail how everything works. They are the ones building the AI engine at the core and applying AI-powered tools across the platform. It’s essential to have a clear message and strategy, understanding where AI is being applied, what’s being tracked and what you’re able to do; without that clarity, the technology won’t be suitable for global organizations searching for the ideal AI to implement in their strategies.

A white-box approach, on the other hand, is part of Avature’s arsenal platform-wide. Taking a decision-support-system approach, Rabih and his team keep the customer informed not only about what the system is recommending, but also about how it forms its recommendations.
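What “showing how a recommendation is formed” can mean in practice is easiest to see with a toy example. The sketch below is our own illustration of the general white-box idea, not Avature’s implementation; the features and weights are invented. The point is that the score is returned together with the per-feature contributions that produced it, so a user can inspect, and overrule, the suggestion.

```python
# Illustrative "white-box" recommendation: the score comes with the
# per-feature contributions that produced it (invented features and weights).

def explainable_score(features, weights):
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

# Invented example features for one candidate/job pair.
features = {"skill_overlap": 0.7, "seniority_match": 1.0, "location_match": 0.0}
weights = {"skill_overlap": 2.0, "seniority_match": 0.5, "location_match": 0.3}

total, why = explainable_score(features, weights)
for name, part in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {part:+.2f}")   # the user sees exactly what drove the score
print(f"total: {total:.2f}")
```

A black-box system would return only the total; the decision-support framing requires surfacing the breakdown as well.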
Transparency Is Key

Transparency is one of the main factors here: letting the end user know what information the system is using. That, in turn, has implications for the technical approach employed. Take, for example, semantic suggestions while searching a talent pool. As the sourcer starts looking for candidates with a specific skill, the engine prompts the user with term recommendations that expand the search to candidates with relevant neighboring skills. This way, the reason why a certain candidate was retrieved can always be explained, and the user can dismiss recommendations at any time.

“How do you build that, respecting the privacy of the users, and do so in a way that avoids bias?” This should be a main point to keep in mind during strategy deployment.

If the vendor you choose to work with is building its AI capabilities organically through a white-box approach, Rabih notes, it’s likely that they’re using relevant data that respects privacy and security. It also means that bias mitigation can start in the early development stages and, thanks to transparency, that bias is easier to trace throughout the process. An opaque black-box approach, on the contrary, risks not only replicating errors and bias systemically, but also making it tremendously difficult to track and fix problematic inputs and calculations. Data privacy and bias are both real concerns that are always present when formalizing an AI deployment strategy.

Avoiding Bias

We can’t talk about artificial intelligence in recruiting and TM without touching on bias. This has been a main concern for companies in any AI strategy they decide to deploy; just look at what happened with GPT-3. So what does Rabih think about the issue of bias in artificial intelligence? On the technical side, he points out the importance of keeping an eye on the kind of information and data being used, as well as on the algorithms.
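Before looking at concrete cases, it helps to pin down the earlier talent-pool search example. One common way to implement semantic term suggestions, and this is our assumption for illustration, not a description of Avature’s engine, is nearest-neighbor lookup over skill embeddings. The toy three-dimensional vectors below are invented; in a real system they would be learned from data.

```python
from math import sqrt

# Rough sketch of semantic term suggestions via skill embeddings
# (toy, hand-made vectors; a real system would learn these from data).

SKILL_VECTORS = {
    "java":       (0.9, 0.1, 0.0),
    "kotlin":     (0.8, 0.2, 0.1),
    "spring":     (0.7, 0.3, 0.0),
    "accounting": (0.0, 0.1, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def suggest_neighbors(query, k=2):
    """Return the k skills most similar to the query, by cosine similarity."""
    sims = {s: cosine(SKILL_VECTORS[query], v)
            for s, v in SKILL_VECTORS.items() if s != query}
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(suggest_neighbors("java"))  # neighboring skills the sourcer can accept or dismiss
```

Because each expansion is an explicit, named term the user accepted, every retrieved candidate can be traced back to a term the user chose, which is what makes the behavior explainable.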
A typical use case he mentions as an example of where bias can occur is recommendations, whether in matching or filtering. So why does this happen, and how can we work to avoid it?

Explicit Bias

In building these models, there are steps that can be taken to avoid unwanted and unintentional bias. For cases of explicit bias, Rabih mentions a series of more “obvious” actions he and his team take to ward it off, such as not using gender, race or other personal information directly in the data being collected. They don’t use personal traits either, such as voice, speech quality or facial recognition.

Implicit Bias

When it comes to implicit bias, Rabih suggests building a semantic representation of candidates and jobs based on the skills required, rather than using historical data (that is, human-made decisions from a specific moment in time). Why? When historical decisions are used, there’s a chance they contain bias, which then trickles into your current model. With semantic representations, in contrast, similarity can be measured between what you’re looking for and what your candidate or job database contains, allowing recommendations to be made without the implicit bias that the historical data might carry.

The Avature Approach to Artificial Intelligence in Recruiting & Talent Management

The development of the Avature AI roadmap reflects a pipeline of information, with the technology implemented at several levels.

Level One

The first level deals with task automation. We introduced a resume parser to the platform eight years ago, built to understand several languages and automatically extract relevant personal information to populate a Person Record within the platform. The system is also capable of detecting similar records and combining them to avoid duplicates. This saves time for the user, who no longer has to perform the task manually.
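The duplicate-detection step can be pictured with a minimal sketch. This is our own illustration only: the field names and the matching rule are invented, and a production system would use far more robust fuzzy matching.

```python
# Minimal sketch of duplicate-record detection over parsed resume fields
# (invented fields and rule; illustration only).

def normalize(record):
    return {k: v.strip().lower() for k, v in record.items()}

def likely_duplicates(a, b):
    """Flag two parsed records as probable duplicates if the email matches,
    or if both the full name and the phone number match."""
    a, b = normalize(a), normalize(b)
    if a["email"] and a["email"] == b["email"]:
        return True
    return a["name"] == b["name"] and a["phone"] == b["phone"]

rec1 = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100"}
rec2 = {"name": "JANE DOE ", "email": "jane@example.com", "phone": "555-0199"}
rec3 = {"name": "John Roe", "email": "john@example.com", "phone": "555-0101"}

print(likely_duplicates(rec1, rec2))  # same email despite formatting differences
print(likely_duplicates(rec1, rec3))
```

Matching on normalized fields rather than raw strings is what lets the system catch records that differ only in capitalization or stray whitespace.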
We are continuously improving the resume parser. During the second quarter, for instance, we expanded our ability to parse resumes and extract information in languages including Bulgarian, Croatian, Estonian, Finnish, Latvian, Lithuanian, Polish, Romanian, Slovak and Slovenian. These join the 17 languages already supported by our resume parser engine.

Level Two

The second level is intended to deal with information at scale, helping users by augmenting their knowledge. Here we find semantics and matching. These capabilities can process huge quantities of data and surface what is relevant much faster than a human could, as well as notice patterns that humans may not. It’s all about delivering the right information at the right time.

Skills taxonomy and semantics also come into play at this level. This capability is being developed as the topic of skills gains momentum in the TA community. Such a task is not scalable for humans to perform but is perfect for AI. Rabih points out that through machine learning, we’re able to take the data we have and learn how skills relate to each other and to jobs, as well as infer jobs from skills and vice versa.

Level Three

The third level is related to predictive analytics and intelligence in the platform itself. Agility and flexibility are the main properties of this level. As Rabih notes of the Avature platform: “There are no two instances that are the same.” So the next frontier in our AI deployment strategy is improving the user experience for each individual, making sure outcomes are positive for them and for the process as a whole. One example Rabih mentions is helping the user optimize the use of workflows and how to put them together.
This, he says, can be done with techniques like reinforcement learning, where the user is modeled as an agent in a specific environment: the system takes note of their interactions with the platform and uses them to guide the user through it. The limits of the technology are constantly expanding, and outcomes can improve, or diminish, how a particular process is perceived. Rabih predicts that a big focus moving forward will be on this evolution, and on figuring out how technology can be made to work with us.

“It’s exciting to be taking part in that and have the opportunity to help shape the future.”

-Rabih Zbib, Director of Natural Language Processing & Machine Learning at Avature
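As a closing illustration of the reinforcement-learning idea mentioned above, here is a deliberately toy sketch, entirely our own invention rather than anything Avature has described: tabular Q-learning over a made-up three-step workflow, where the agent learns which action to suggest at each step.

```python
import random

# Toy tabular Q-learning over an invented 3-step workflow (illustration only).
# States are workflow steps; one invented action per state is "correct" and
# moves the user forward, anything else keeps them where they are.

STEPS = ["draft", "review", "publish"]
ACTIONS = ["save", "submit", "approve"]
CORRECT = {"draft": "save", "review": "submit"}   # invented ground truth

def step(state, action):
    """Environment: reward +1 and advance on the correct action, else 0."""
    if CORRECT.get(state) == action:
        return STEPS[STEPS.index(state) + 1], 1.0
    return state, 0.0

random.seed(0)
q = {(s, a): 0.0 for s in STEPS for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                       # training episodes
    state = "draft"
    while state != "publish":
        if random.random() < epsilon:      # explore
            action = random.choice(ACTIONS)
        else:                              # exploit current estimate
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STEPS[:-1]}
print(policy)  # the learned suggestion for each workflow step
```

In a real platform, the “environment” would be the user’s actual interactions and the reward would reflect successful outcomes, but the shape of the learning loop is the same.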