S4 E71 | eXpert Tease: How tech is being used to level the playing field and mitigate bias with Frida Polli

CXR Announcer 0:00
Welcome to the CXR channel, our premier podcast for talent acquisition and talent management. Listen in as the CXR community discusses a wide range of topics focused on attracting, engaging and retaining the best talent. We’re glad you’re here.

Chris Hoyt, CXR 0:17
Hello, I’m Chris Hoyt, president of CareerXroads. You’ve joined us for another eXpert Tease segment of our CXR podcast, which can be found and subscribed to on Spotify, iTunes, Amazon Music, iHeartRadio, and all the other usual places. And while we do a few different specials, like our Uncorked broadcast, which is sort of like drunk history for HR, and our new Moments That Matter series, where we talk about tough topics within the realm of prejudice and racism, this is a weekly quick-hit conversation with industry leaders and practitioners who are sharing a life or career lesson, sort of a hands-on how-to, if you will, with us in just 10 to 15 minutes. Now, the topics were decided on in advance by hundreds of talent acquisition leaders from around the world, covering a range of subjects such as innovation, leadership, DE&I, employee wellness, or, like today, ethics in talent acquisition and hiring. So if you’ve joined us live, which anyone can do, you can both listen and participate via the chat window on your screen, and we’re hoping you’ll add a question or two. If we have time, we’ll try to get them answered here in the broadcast. If we run out of time, we’ll address them in our free and open-to-the-public exchange hosted by CareerXroads at CXR.works/talenttalks. Now, today I’ve got with me Frida Polli, who has a PhD in neuropsychology and an MBA from Harvard, and is the co-founder and CEO of Pymetrics, where their entire mission is to make hiring and internal mobility more predictive and less biased through advanced technology. Frida, I have been a fan of your work for years. As you know, we work hard to try to stay in touch and keep up with you and the difference that you and your team at Pymetrics are making in your space. Welcome to the broadcast.

Frida Polli, Pymetrics 1:54
Thanks, Chris. Happy to be here.

Chris Hoyt, CXR 1:56
So the topic we want to take a stab at today is something that, given our current societal challenges, not just in the United States but around the world, has really had a bright spotlight put on it within the realm of what we do in talent, and that is ethics and bias in hiring. So I’d really like to get your take and ask you to share one thing that you believe our listeners should be aware of in regards to how tech is helping to level the playing field for talent and HR leaders today.

Frida Polli, Pymetrics 2:21
Sure. It’s hard to condense it into one thing, Chris, so I’ll try. Bottom line, I would say this: technology can be designed to level the playing field and mitigate bias. And at Pymetrics, we do that, and we’ve been doing that since the beginning. We’ve built the platform from the ground up to be an ethical AI platform. We selected unbiased data, we pre-test all of our algorithms for disparate impact before we release them, we don’t release algorithms if they have disparate impact, and we test them after they get deployed. We’ve had a third-party audit by Christo Wilson at Northeastern. So we’ve done pretty much as much as you could do in order to build an unbiased platform, and therefore we feel very confident that we level the playing field in many ways for all of our clients that use us, and that’s what the results show. What I will say is that it’s not as bright and optimistic a picture as all that, because unfortunately, and it always boggles my mind when this happens, there is just a lot of false advertising out there. Meaning, there are a lot of tech platforms, I just came across one yesterday, that claim these outlandish things. This one was claiming they’re the first independently audited AI platform, and I’m like, well, you must be the second, because Pymetrics is the first. But regardless, the point is there are so many claims out there that lack any substantiation. And I think that’s what concerns me the most: while the possibility for leveling the playing field absolutely exists, the possibility for exacerbating inequality also exists.
And so my whole thing, and I don’t know if we’ve spoken about this, but we’re actually supportive of certain policy and legislative efforts at the local level and at the federal level that essentially would require more transparency around what technology providers are actually doing, potentially mandated disparate impact reporting, things of that nature. Because I think, absolutely, technology can be a massively important factor in leveling the playing field. However, without any knowledge or transparency on the part of vendors, Pymetrics and others, it’s very hard, I think, for the public to trust that that’s what the technology is doing. And personally, I wouldn’t trust it either. So that’s my, hopefully somewhat balanced, response to how I see technology leveling the playing field: it’s a million percent possible. And yet I really call upon companies, vendors, and so on to be far more transparent. If they’re making certain claims, you need to be able to back it up. And I just don’t see a lot of that.

Chris Hoyt, CXR 5:23
Well, Frida, you know, I remember when AI hit the talent scene, and all of a sudden, the next year at HR Tech, everybody had AI. Everything was AI, whatever it was, whether it was completion or whether it was predictions or whatever. What would you tell a talent leader who’s literally standing there and has five vendors or five solution providers to pick from, and they all claim that they are the first at X, Y, or Z? How would they navigate through the smoke and mirrors?

Frida Polli, Pymetrics 5:53
Yeah, I mean, I think it’s tough, I’m not gonna lie. So I sit on a working group for the IEEE, I sit on another working group for the World Economic Forum on AI and HR, I sit on a bunch of these working groups, and part of what folks are trying to do is literally develop a procurement guideline for how to answer this exact question. And quite frankly, having sat on a lot of these groups, I think it’s a struggle, because, essentially, at the end of the day, an artificial intelligence system is very complicated. It takes folks with machine learning experience to really understand some of the issues, and quite frankly, even people with machine learning experience may then have opinions about how something should be developed that are not consistent with employment law. The point I’m trying to make is that using a technology like AI in hiring just begs a lot of questions. So I guess the only thing I can offer is that in your procurement efforts, when you’re vetting vendors, I would try to establish how much external proof they have of what they’re saying. And that can come in a variety of different ways, right? If they’re saying they’ve been audited, have them produce whatever documentation they have from a third party saying, yes, indeed, we’ve been audited. If they claim that they don’t have bias in their algorithms, then have them produce as much documentary evidence as they can that they do this stuff. So basically, whatever claims people are making, I would expect that a good vendor, and we hold ourselves to that standard, would have a lot of documentation they’d be able to provide a team with in order to validate any of those claims.
So I think that’s one option. Another option, I think, is definitely to utilize independent experts in the field to get their broad view on certain technologies, and so on and so forth. And in that case, I would just make sure that those folks don’t have any pre-existing relationships with certain companies or vendors. But I do think that there are consulting firms that can act as independent experts that can really validate these things, and HRPA and things like that. And you guys as well, I think, provide a layer of validation for different things that companies might be claiming.

Chris Hoyt, CXR 8:20
Yeah. So, Frida, I want to come back to the ethics piece, the leveling of the playing field within that world of hiring and selection. And I’m not asking for a product pitch from you, though it’s gonna feel like it, but what is the approach that Pymetrics has taken to level the playing field in selection?

Frida Polli, Pymetrics 8:45
Well, so it’s not just unconscious bias, it’s all bias, right? Only people can have unconscious bias; machines can’t. So if I’m using an algorithm, by definition, I’m removing unconscious bias. But I may still have lots of bias in general, right? Algorithms may have bias. And so the way that we do that, and again, other companies do this too, so it’s not a product pitch for Pymetrics, it’s sort of like understanding how you can arrive at unbiased, or lacking-in-disparate-impact, outcomes. So what we do is severalfold. First, we use soft skill data. That’s one massive advantage we have: we look at cognitive, social, and emotional aptitudes. Soft skill data overall is far more equally distributed in the population than hard skills. If you take engineering degrees, or some hard skill, generally speaking, they have demographic proxy variables associated with them, meaning they’re just not as evenly distributed. Soft skills, for a variety of reasons, are far more evenly distributed. So you can find the same soft skill profile in men and women, people of different ethnic backgrounds, people of different ages. That’s one starting-point advantage that we have. Then we deliberately selected soft skills that we knew didn’t have additional gender, ethnic, or, for that matter, age biases. So that’s point number one: the data really matters. If you’re trying to work with resume data, it can really present a lot more challenges. So that’s one thing. The second thing is that every time we build an algorithm, we do what we call pre-testing. We’ll test it on a representative sample of thousands of people, tens of thousands of people, actually, to say, okay, is it producing disparate impact with regards to gender and ethnicity? And we can test that ahead of time.
And if you see that it is, we can actually select an algorithm that has just as much validity but is less problematic from the perspective of disparate impact. So essentially, you’re testing for the least biased alternative in a very iterative, automated way, which, by the way, is what Title VII and federal statutes around hiring say you should do: you should always be using the least biased alternative. And so that’s what we do. And then obviously, once we’ve deployed an algorithm, we test it again on real candidates, on real applicants, and ensure that it’s not having any disparate impact there as well. So really, it’s a multi-step process that we do. And again, we’re familiar with other vendors that do something similar. They may use different data, but they are pre-testing and post-testing their algorithms, which we just think is critical to ensuring fair outcomes. So that was a mouthful, I know. I think you might be on mute.
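[Editor’s note: the pre- and post-deployment disparate impact testing Frida describes is commonly operationalized in US employment law via the EEOC’s “four-fifths rule” (29 CFR 1607.4): a selection procedure is flagged if any group’s selection rate falls below 80% of the highest group’s rate. The sketch below illustrates that check only; the function names and sample numbers are hypothetical, not Pymetrics’ actual implementation.]

```python
# Minimal sketch of a "four-fifths rule" disparate impact check.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were selected (True)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratios(groups):
    """groups: dict mapping group name -> list of selection outcomes.
    Returns each group's selection rate divided by the highest rate."""
    rates = {g: selection_rate(o) for g, o in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def passes_four_fifths(groups, threshold=0.8):
    """True if every group's ratio meets the 80% threshold."""
    return all(r >= threshold for r in adverse_impact_ratios(groups).values())

# Hypothetical pre-test on a representative sample:
groups = {
    "group_a": [True] * 50 + [False] * 50,   # 50% selection rate
    "group_b": [True] * 35 + [False] * 65,   # 35% selection rate
}
# group_b's ratio is 0.35 / 0.50 = 0.70, below the 0.8 threshold,
# so this algorithm would be flagged rather than released.
print(passes_four_fifths(groups))  # False
```

In the iterative workflow Frida outlines, a check like this would run before release on a representative sample, and again after deployment on real applicants, with a flagged model swapped for an equally valid, less biased alternative.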

Chris Hoyt, CXR 11:33
There we go. Sorry, I was on mute. I couldn’t unmute. Sorry about that.

Frida Polli, Pymetrics 11:36
I was like, your mouth is moving and I didn’t hear anything.

Chris Hoyt, CXR 11:39
Which is really odd, if you know me. Yeah. So, when this is done well and executed well, whose job do you think this impacts the most within the hiring workflow? Is it the recruiter? Or is it the hiring manager? Is it within HR? What’s the most positive impact, and where does it sit, do you think?

Frida Polli, Pymetrics 11:58
Um, I think it has positive impact for everyone, really. I think it has tremendous positive impact for obviously both the candidates and the employers, because, I mean, if Pymetrics was mandated tomorrow, everyone would be better fit and more fairly fit to jobs everywhere, right? There’s nobody that really benefits from a bad match being made. The person usually quits or gets fired, the company is looking for someone new, and the candidate or the applicant is like, oh, that was a bad idea. So I think definitely employers and candidates benefit. And then in terms of the recruiting team, they benefit as well, because at the end of the day, I don’t know anyone that loves to go through stacks of resumes; it’s a pretty mindless job, right? So if you can have something helping you with that, in a way that you can feel confident is going to produce not only more diverse people but also people that are going to perform better, it’s a win for everybody. I really, truly believe that. And again, we’re not suggesting that automation should replace human recruiters; that’s not what we see when we deploy our system. It’s a decision-making tool that someone in the organization can use. It’s like a doctor and an algorithm: you’re not going to have a robot telling someone they have cancer, you’re going to have the doctor and the algorithm combined giving that patient an outcome. And that should be the same thing here in recruiting.

Chris Hoyt, CXR 13:23
Nice, I love it. Look, we’ve got two questions that have come in, but we only have time for one, so here it is: In your efforts to develop an unbiased platform, has it been difficult to account for learning disabilities, or other differences in ways of learning, that might impact an individual’s ability to perform on any particular type of assessment?

Frida Polli, Pymetrics 13:43
Yeah, so that’s a great question. I’m pretty open about my family history of dyslexia, including one of my daughters, so I’m very mindful of that; she’s been dealing with it now for over a decade. And so what we did was, essentially, we developed accommodations for a number of disabilities, certainly not all, and we’re continually working on them, but certainly for ADHD, dyslexia, and other more traditional learning disabilities, we’ve implemented accommodations. And then what we do to ensure that they’re not having a negative impact is we actually look for differences in match rates between people who select accommodations and those who don’t. So it’s kind of similar to disparate impact testing. And to date, we haven’t seen any, which makes us feel very comfortable that, with the accommodations being provided, people with disabilities, or at least the ones that we can accommodate, are not being put at a disadvantage. And then we’re very clear that if we don’t have accommodations for the disability that you have, the employer should be providing you an alternate path, and you shouldn’t be going through Pymetrics, because we would never want to have a negative impact on folks with any kind of disability. We’re about inclusion of all types.
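[Editor’s note: the match-rate comparison Frida describes, checking whether candidates who request accommodations match at a different rate than those who don’t, can be framed as a comparison of two proportions. A two-proportion z-test is one standard way to assess whether an observed gap is statistically meaningful. This sketch uses made-up numbers and is not Pymetrics’ actual methodology.]

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two proportions,
    using the pooled estimate under the null of no difference."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: 120 of 300 accommodated candidates matched (40%),
# vs. 410 of 1000 non-accommodated candidates (41%).
z = two_proportion_z(120, 300, 410, 1000)
# |z| < 1.96 here, so at the 5% level there is no evidence that the
# accommodations group is being put at a disadvantage.
print(round(z, 2))
```

A persistent, significant gap in the accommodated group’s favor or against it would be the signal to revisit the accommodation design, mirroring the post-deployment disparate impact testing described earlier.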

Chris Hoyt, CXR 14:55
Nice. Thanks, Frida. It’s fun to connect and get a chance to hear your thoughts and what you’re working on. I really appreciate you joining us. Thank you.

Frida Polli, Pymetrics 15:03
It was great to be here. Thanks for the conversation.

Chris Hoyt, CXR 15:06
You bet. Hey, everybody, that’s gonna wrap up this segment and this year’s delivery of our eXpert Tease series. So be sure to join us when we return from the holiday break, when we’ll share our new lineup of leaders and practitioners that will continue to impress, I promise. So until then, we hope we’ll see everybody online at www.cxr.works/talenttalks

CXR Announcer 15:23
Thanks for listening to the CXR channel. Please subscribe to CXR on your favorite podcast resource and leave us a review while you’re at it. Learn more about CXR at our website, CXR.works, at facebook.com and twitter.com/careerxroads, and on Instagram @careerxroads. We’ll catch you next time.