How Can Employer Brand Stay Human in the Age of AI?
Can AI scale your employer brand without killing what makes it real? Alicia O’Brien of Wilson shares how to stay distinctive — and why governance is the word of 2026.
Featured Guests:
Ritu Mohanka, CEO, VONQ
Hosts:
Chris Hoyt, President, CareerXroads
Episode Overview
Chris Hoyt speaks with Ritu Mohanka, CEO of VONQ, about the principles and practices of responsible AI screening at the top of the recruiting funnel. The conversation covers the shift from candidate scarcity to application abundance, how AI-generated content is eroding recruiter trust, the distinction between screening in versus screening out, and what separates successful AI implementations from those that quietly stall. Mohanka also addresses what employers owe candidates in terms of transparency and why treating responsible AI as a compliance exercise misses the point.
Key Topics
Screening in vs. screening out: VONQ’s foundational design principle — using AI to surface signal-rich candidates rather than filter down to a shortlist as quickly as possible
The application abundance problem: Organizations receiving 800–2,000 applications for a single role, with recruiters unable to trust the signal in what they’re seeing
AI-generated applications and recruiter trust: How polished, AI-assisted CVs are creating a “two artificial systems interacting” problem before any human steps in
Transparency and the candidate experience: What employers owe candidates in terms of communicating how AI is used in screening, and the cost of opacity
Surfacing AI use as a signal, not a flag: Flagging potential AI use in the candidate dossier so recruiters can make more informed decisions, rather than auto-rejecting
Redesigning early touchpoints: Using structured, role-specific interactions — conversational prompts, short skills tasks — to create authentic engagement that AI-polished CVs can’t replicate
Pilot design and organizational adoption: Why most AI pilots test technology rather than workflow change, and why the successful ones are designed to build organizational trust
Human-in-the-loop as a non-negotiable: The principle that AI builds signal, humans make decisions — and that line should never move
The CXR Recruiting Awards: Open to TA practitioners using AI to measurably improve any part of the recruiting workflow; submissions due April 24th at cxrrecruitingawards.com
Notable Quotes
“Responsible AI isn’t a cautious path. It’s the high-performance path.” — Ritu Mohanka
“AI should never be making the hiring call. It should be informing the recruiter’s judgment. The moment you remove human accountability from a hiring decision, you’ve created a fairness risk and a legal risk that no efficiency gain is worth.” — Ritu Mohanka
“The CV was always an incomplete document. AI has just made that incompleteness impossible to ignore.” — Ritu Mohanka
“Most pilots are designed to test the technology. The successful ones are designed to build organizational trust. Those are two very different experiments.” — Ritu Mohanka
“The first authentic moment shouldn’t be waiting until the interview — that’s already too late.” — Ritu Mohanka
“TA leaders: if you’re still trying to bust and ban AI use, that is the wrong approach. It’s time to rethink it.” — Chris Hoyt
Takeaways
Responsible AI at the top of the recruiting funnel is less a compliance question than a leadership decision — one with real consequences for candidate trust, recruiter effectiveness, and hiring outcomes. Organizations that are succeeding are redesigning early screening touchpoints to create authentic, role-specific interactions rather than trying to detect or penalize AI use in applications. Adoption stalls not when the technology fails, but when organizational commitment does; the teams scaling successfully treat AI as core infrastructure, keep humans accountable for every hiring decision, and measure recruiter confidence alongside efficiency.
Want more conversations like this? Subscribe to the CXR podcast and explore how top talent leaders are shaping the future of recruiting. Learn more about the CareerXroads community at cxr.works.
**Chris Hoyt:** Welcome, everybody, to the Recruiting Community Podcast. My name is Chris Hoyt. I’m the president of CXR, and I’m flying solo today with our guest. Gerry couldn’t join us — I think he’s hat shopping somewhere — but we are typically your hosts for a podcast we like to think brings you industry insights in the form of a fun conversation, brought to you by the CXR Recruiting Community at cxr.works.
Today we’re going to be talking about using AI responsibly at the top of the funnel within recruiting. We’re joined by Ritu Mohanka, who is the CEO of VONQ, and we’re going to talk about TA leaders using AI screening responsibly — and believe it or not, effectively. We’ll be covering the idea of balancing efficiency with fairness, transparency, and candidate trust. If you’re a leader navigating AI, working on scaling this kind of work, or dealing with the risks and expectations that come with implementing AI in recruiting, this conversation is for you.
A couple of things up front: we’re streaming on the socials — YouTube, Facebook, LinkedIn. You can check out cxr.works/podcast for past and future episodes. We have a new design on the site this year, and we’re coming up on about 600 interviews with really cool folks like our guest today. You’ll also find easy ways to like, subscribe, and let us know if you’d like to join the conversation.
One more thing — I’d be doing a disservice if I didn’t mention this. We have launched the CXR Recruiting Awards. You can check it out at cxrrecruitingawards.com. If your TA team has been building and experimenting with AI, now is the time to show off that work. We’re not talking about vendor partnerships or implemented AI solutions — we’re talking about practitioners who have used AI to measurably improve something within the recruiting workflow or function. Any form, any scale. It could be a killer prompt or a complete workflow change. Nothing is too big or too small.
The submission deadline is April 24th. We’re looking for a quick demo — up to 15 minutes. The only real requirements: any candidate or company data must be fake or anonymized. We’ll pick three finalists and host them at the Marketplace Live event in June in Louisville, Kentucky, with a VIP dinner and award presentation. If you haven’t checked it out yet, head to cxrrecruitingawards.com.
—
**Announcer:** Welcome to the Recruiting Community Podcast — the go-to channel for talent acquisition leaders and practitioners. This show is brought to you by CXR, a trusted community connecting the best minds in the industry to explore topics like attracting, engaging, and retaining top talent. Hosted by Chris Hoyt and Gerry Crispin. We are thrilled to have you join the conversation.
—
**Chris Hoyt:** Ritu, welcome. Happy to have you here today.
**Ritu Mohanka:** Happy to be here, Chris.
**Chris Hoyt:** Before we get into it, I want to dig into who you are and the work you’ve done β including what brought you to VONQ. But first, what do you think about the CXR Recruiting Awards? Are we a little off our rocker, or are we going to find that nobody’s actually using AI in talent acquisition?
**Ritu Mohanka:** Oh my gosh, that won’t be a problem at all. You’ll have too many applying, I hope. It’s a great step forward. In fact, when we received the link, someone suggested VONQ should enter — and I’ll tell you why we’re not going to, and also why I think every vendor should.
Every vendor building tools around responsible AI to improve the candidate experience, the recruiter experience, or recruiter enablement should enter and ask, “Can we be customer zero?” We use our own technology, so being customer zero means learning from and understanding the experience before we ask our customers to go through it. So I did have a moment where I thought, yes, we should enter. But we already have plenty of customers in the wild, and I think there are many others who are more deserving of this recognition than we are.
**Chris Hoyt:** You raise a really good point. We had originally said no vendors, thinking about market-ready products. But what hadn’t occurred to me — and I feel a little silly about it now — is that there may be AI usage that isn’t a polished product at all. It could be a master prompt or a really smart efficiency win that has nothing to do with anything being sold.
**Ritu Mohanka:** Absolutely.
**Chris Hoyt:** My favorite example so far: we had an organization getting ready for a big vendor implementation, and in order to prepare for it, they used AI to organize, categorize, and pull together all the materials they’d need for a successful project. I’ve been through a few of those massive implementations where two months in you’re thinking, “I wish we’d been better organized.” It’s really impressive.
**Ritu Mohanka:** Very cool.
**Chris Hoyt:** Alright, cxrrecruitingawards.com — enough of that for now. We’re excited to see submissions come in. We’ve already got a handful covering everything from screening to notifications to interview prompts to benchmarking salaries for competitive conversations. Really cool stuff. But Ritu, let’s talk about you. You’re the CEO of VONQ, you’re here to talk about AI at the top of the funnel, and we’re going to get into standards — even without Gerry. Can you give us an elevator pitch on who you are, what you do, and how long you’ve been doing this?
**Ritu Mohanka:** I’ve spent most of my career building and scaling B2B work tech businesses across Europe, often in transformation situations where you’re shifting the model while still running the engine — and we’ll talk a lot about that in the context of VONQ. I’ve led commercial, product, and general management roles, and I’ve always gravitated toward spaces where the market is changing fast and customer pain is becoming more real.
What pulled me to VONQ — and this is almost two years ago now — was a combination of timing and mission. Hiring is getting harder, noisier, and more expensive, yet businesses still treat it like a marketing spend line. I wanted to lead a company willing to rewire how hiring decisions get made — not just add another tool to an already messy stack. That’s what drew me to VONQ.
**Chris Hoyt:** I love it. And look, you’re going to be humble about this, but you’ve got an impressive 25-year background across some wildly successful organizations. When we start talking about this topic, it’s important that people know you have the receipts. You’ve done the work.
**Ritu Mohanka:** Thank you. That’s very kind.
**Chris Hoyt:** Of course. Okay, let’s dive in. The first thing I wanted to get into is responsible AI — a phrase that’s getting thrown around everywhere right now. You’re actually building a product around this thesis. When you talk about using AI responsibly at the top of the funnel, what does that mean to you in practice, versus how most other vendors are using that language in our space?
**Ritu Mohanka:** When we were designing top-of-funnel screening, we were dealing with an enormous amount of noise — probably more noise than anywhere else in the hiring process. Our fundamental design principle was screening in versus screening out. Traditional hiring has always been about sifting out as quickly as possible to get to a shortlist. Our philosophy is the opposite: how can we responsibly screen in as many candidates as possible with relevance and the right fit, rather than screening out? How do we close the gap between what a CV says and what a candidate actually has to offer?
When I think about responsible AI, it is not a compliance exercise — it’s a leadership decision. Leaders who treat it as anything less are taking on risks they haven’t fully priced. We are at an inflection point right now. AI is embedded in how the majority of organizations screen, score, and assess candidates. I’m sure you’ve seen the World Economic Forum stat putting that figure at close to 90% of employers using some form of AI in hiring. That’s no longer a trend — it’s become the baseline. And yet the governance infrastructure around those tools is nowhere near as mature as the adoption curve.
For me, responsible AI is about closing that gap between adoption and accountability. The risks are real — regulatory, reputational, and human. Organizations that close it proactively, building fairness and explainability in from the start rather than retrofitting under pressure, will have a structural advantage in the next phase. Not just because they’ll avoid regulatory action — though they will — but because candidates will trust them more, recruiters will use the tools more effectively, and hiring outcomes will be genuinely better. Responsible AI isn’t a cautious path. It’s the high-performance path.
**Chris Hoyt:** I love that you bring up the regulatory piece. We had a really interesting discussion with an organization building custom GPTs and custom gems for their teams, and a question came up around governance: who’s watching what recruiters are building? That quickly escalated to conversations about government regulations and some of the interesting — and occasionally off-the-mark — guidelines coming out that are adding significant delays to hiring timelines. Historically, policy tends to be reactive. Politicians move when constituents are impacted. I was talking to another executive recently who reminded me that we rarely see proactive change in this area. I do think there will be a windfall of policy over the next year or so. It’s going to be a fascinating time to be in this space.
**Ritu Mohanka:** Indeed. Absolutely.
**Chris Hoyt:** So VONQ has talked publicly about moving from 200 random applicants to 20 of what you’d call signal-rich candidates — which sounds like a huge win for TA leaders. But I suspect some of them will immediately think about who gets filtered out before a human ever sees them. How do you make sure that efficiency gain doesn’t become an equity problem?
**Ritu Mohanka:** Let’s go back to the fundamental problem at the top of the funnel, because it has fundamentally changed. Five or ten years ago, recruiters were dealing with scarcity — not enough applicants, struggling to build pipeline. That problem still exists in some pockets, but the dominant problem today is the opposite. It’s abundance — more specifically, abundance without confidence.
We have customers hiring for a single software engineer role in India who receive 1,000 applications. Others receive 800, 1,000, 2,000 applications for a single position. The uncomfortable truth is that recruiters don’t trust most of these applications — not because the candidates are bad, but because the signal is so weak. AI has accelerated that problem significantly.
When we introduced our model last year at VONQ, it was entirely about removing noise at the point of advertising to give recruiters some signal. Where we do that initial sifting, AI is not making any decisions. We still give recruiters the full application pool — all 2,000 — but we surface a recommendation: these 20 candidates, for example, have the right to work, can commute to the location, have the required skills, and are available for the shift pattern if relevant. That gives recruiters what they were actually hired for: the ability to be more strategic and build relationships, rather than spending their time sifting through thousands of applications. It’s not about getting to a shortlist quickly — it’s about removing noise and giving recruiters more signal.
**Chris Hoyt:** The term “signal” is really having its moment. We’ve always talked about signal-to-noise and how the funnel works, but with AI, I find myself reaching for it constantly. I want to call out something specific here: the idea of treating AI detection as signal rather than a flag. Surfacing indicators of AI use or fraud to recruiters, rather than auto-rejecting candidates. Can you share more on why that distinction matters — and how we prevent it from becoming a loophole that teams just ignore?
**Ritu Mohanka:** It really comes down to what I’d call the authenticity paradox. A lot of organizations know that candidates use AI to optimize their applications — and why shouldn’t they? But then the employer uses AI to screen, and you get this moment where two artificial systems are interacting before a human has stepped in. The issue isn’t that AI is involved. The issue is how it’s being used. If AI is simply evaluating an AI-polished CV, you’re assessing a layer of optimization, not the person — and that’s where trust breaks down.
The more progressive organizations are using AI not to filter faster, but to create earlier moments of real signal. Instead of screening out based on keywords, they’re designing short, role-specific structured interactions β conversational prompts where candidates have to demonstrate how they think, how they approach a problem, and how they communicate. This isn’t about catching candidates for using AI. Candidates absolutely should use AI. I actually asked a candidate during an interview just this afternoon to share their screen and show me how they’d prompt AI to build a compelling outbound approach to a prospect.
The responsibility is on employers to be clear about when and how AI should be used. And exactly to your point about flags versus signals — we do surface this in our screening. When we send the completed candidate dossier to the recruiter, we signal where a candidate may have used AI during the process. That helps the recruiter see the fuller picture: yes, the CV is polished, but the candidate also went through skills testing, behavioral assessment, and a conversation. It helps the recruiter make a better-informed decision about how AI was used in that early phase.
**Chris Hoyt:** That’s an exceptional point, and one I think gets missed by TA leaders who are still struggling in this area: the importance of transparency around how your organization uses AI — not just as a company, but as a function. How much is being used in the interview process? Organizations like ThoughtWorks and CarMax do a really good job of this — openly talking about how they’ve changed the way they recruit. To your point: I don’t care that you used Claude to get to that response. Show me how you did it, and walk me through the rationale. That changes how we interview — and it helps establish or maintain trust upfront.
**Ritu Mohanka:** Couldn’t agree more. It comes down to trust — two-way trust. From the candidate’s perspective, the recruiter’s perspective, and the employer’s perspective on how AI is used, where it’s used, and why.
**Chris Hoyt:** Let’s talk about the candidate piece for a minute. Candidates are overwhelmingly aware that they’re being screened by AI before a human ever sees them. To your point, trust is eroding in some segments of the market. What do you think a TA leader actually owes a candidate at the top of the funnel in terms of transparency? And do you think we’re anywhere close to a standard on that as an industry?
**Ritu Mohanka:** We are getting there. I don’t want to be brutal about it, but we are getting there. Candidates are using AI tools to optimize their applications at scale, and I want to be clear: that’s completely understandable. Candidates are navigating a brutal market. They’re applying to hundreds of roles and using every tool available to them. Why shouldn’t they?
But what’s created on the recruiter side is a really disorienting experience. You’re looking at a stack of applications that have been polished in the same way — different names, different people, but language crafted identically. We had a customer in retail hiring warehouse associates who noticed that hundreds of CVs had almost identical structure and tone. Recruiters started asking a very simple question: “Am I evaluating a candidate, or am I evaluating ChatGPT?”
That’s a real trust problem — and it’s a profound one, because hiring is fundamentally about trust. You’re making a decision about a human being. You’re making a promise to them. And if the information you’re basing that on feels unreliable, the whole thing starts to shake.
Here’s what I think gets missed in this conversation: the answer isn’t trying to catch AI-assisted applications. That’s a long, losing game — and frankly, it’s the wrong frame. The responsibility sits with employers and TA leaders to design processes that validate capability, rather than continuing to rely on self-reported information that was already a weak signal before AI. The CV was always an incomplete document. AI has just made that incompleteness impossible to ignore.
Organizations that are responding well aren’t panicking about AI-assisted applications. They’re redesigning early touchpoints to create moments of authentic engagement that are harder to replicate by running something through ChatGPT — short, structured, role-specific questions that require candidates to demonstrate actual thinking; conversational screening that asks someone to explain their reasoning, not just describe their experience; small skills-based tasks directly relevant to the role. The goal isn’t to trick candidates — it’s to give them a better opportunity to show up as who they actually are.
And the final point: one of the things that most surprises people when we share our data is that candidates are far more accepting of AI in the hiring experience than most organizations assume — but with one critical condition. They need to feel it’s fair, and they need to understand what’s happening. What destroys trust immediately is opacity: applying, hearing nothing, being assessed by something you can’t see, receiving a rejection with no explanation. That experience is what creates negative sentiment around AI in hiring — not AI itself.
**Chris Hoyt:** Yes to everything you just said. I had a conversation at a dinner not two weeks ago where a TA leader was asking how to create some kind of blanket ban on candidates caught using AI — overly assisted resumes, overly assisted interviews, that kind of thing. My argument was: maybe you’re looking at the wrong problem. Maybe what we really need to be digging into are processes that are already long outdated — the way we interview, the way we post jobs, the skills we do or don’t ask for, and how we look for those skills. That’s the opportunity a lot of TA leaders are still missing.
**Ritu Mohanka:** Absolutely. The first authentic moment shouldn’t be waiting until the interview — that’s already too late. If we can demonstrate through AI screening that we’re trying to close the gap between what the CV says and what a candidate actually has to offer — less about experience and logos, more about fit to this specific role — we’ve already won the candidate’s trust. We’re already doing a better job at the top of the funnel than traditional hiring ever did.
**Chris Hoyt:** TA leaders: if you’re still trying to bust and ban AI use, that is the wrong approach. It’s time to rethink it.
**Ritu Mohanka:** Absolutely.
**Chris Hoyt:** Ritu, I want to talk about your experience in this space. You’ve seen AI screening implementations up close across organizations globally, and I’d guess you’ve watched a fair number of them stall out. What actually separates the teams where this is working from the ones that get shelved after 90 days?
**Ritu Mohanka:** I’m going to be very direct here, and there’s an intent behind that. Most organizations, when they think about pilots, are testing the technology — but they don’t pilot the workflow change. They treat AI as a side experiment. A small group of recruiters tries it out, but performance metrics don’t change, habits don’t change, incentives don’t change. So when pressure builds — a hiring spike, a difficult requisition, a skeptical hiring manager — they revert to what feels safe, and the pilot quietly dies. Not with a bang, just a gradual fade. And it’s never about technology failure.
We had an enterprise customer who ran a six-month proof of concept that was technically performing really well. Screening quality was demonstrably better. Time savings were real. Candidate feedback was great, with far fewer drop-offs at the top of the funnel. But adoption completely stalled. When we looked at what was actually happening, recruiters were manually double-checking everything the AI produced. They didn’t fully trust the system yet, so they were doing the AI’s work and their own work simultaneously — which obviously defeats the entire purpose.
Once leadership stepped in and made AI screening part of the standard workflow — not optional, not parallel, but simply the way things were done — adoption accelerated dramatically within weeks. The technology hadn’t changed. The organizational commitment had.
The organizations that are scaling successfully have figured that out. Leadership treats AI as infrastructure — as core to how the business hires — not as an experiment with an exit option. That changes everything about how teams behave around it. And I’m not saying pilots are a waste of time. They absolutely aren’t. But only if they’re designed to scale. Most pilots are designed to test the technology. The successful ones are designed to build organizational trust. Those are two very different experiments, and the governance of a pilot asking that second question looks completely different. You’re measuring recruiter confidence, hiring manager satisfaction, adoption quality — not just efficiency metrics. If you’re only measuring speed, that’s a vanity metric.
**Chris Hoyt:** A little provocative — I love it. Pilots have long been the secret weapon for TA leaders bringing in new solutions. What you said resonates deeply from an adoption and change management standpoint. I’ve said it a thousand times: poorly orchestrated change management is the Achilles’ heel of an otherwise successful TA leader. Great call out.
**Ritu Mohanka:** And Chris, if you don’t mind, I want to add one more thing. I feel very strongly about the human-in-the-loop design — overused term, I know, but I’ll say it again. AI should never be making the hiring call. It should be informing the recruiter’s judgment. The moment you remove human accountability from a hiring decision, you’ve created a fairness risk and a legal risk that no efficiency gain is worth.
Our principle at VONQ is very clear: AI builds signal, humans make decisions, and that line should never move. Having a human in the loop as you design to scale AI screening within your organization is critical.
**Chris Hoyt:** I love it. And I’ll be honest — I struggle a lot with organizations that are suddenly positioning themselves as “AI forward” or “AI first.” Why aren’t we still customer first? Why aren’t we still leading with transparency and trust? We can be AI-assisted — I understand we all have board members and stakeholders we’re accountable to. But “AI first” feels like the wrong message. It really should be about trust, relationships, and the longevity of the work we’re doing.
**Ritu Mohanka:** Absolutely. It’s about elevating people by removing the mundane, so they can do what they were actually hired to do. I don’t know if you’ve seen the recent Anthropic study — about 81,000 people on AI usage — and this fascinating dichotomy of fear and optimism. I live in that space. Who knows, in two years there might not be a role for a CEO — and yet I use AI constantly to learn, listen, understand, and stay curious, so I can lead the organization with a clear view of where we should use AI responsibly and where we shouldn’t. We shouldn’t use it just because it creates speed and efficiency. We should use it when it makes our jobs genuinely better and helps us do what we were hired to do — not the mundane tasks.
**Chris Hoyt:** I love the human-in-the-loop point. And you mention Claude — there’s so much in the news this week. The source code has now been downloaded on GitHub faster than anything else; it’s incredible. You know, when I think about human in the loop, it reminds me of that flag inside Claude Code. There’s a setting — I’m blanking on the exact name, something like “dangerously skip permissions” — where instead of Claude asking “I want to look at this file, is that okay?” before every action, you can flip a flag and it just runs. Sometimes it goes off the rails a bit. I feel like there’s a t-shirt — or at least a plaque — somewhere that says don’t use that flag. At least not in TA. At least not in HR.
Well, Ritu, this has been a lovely conversation, and we’re going to have to have you back. Before I let you off the hook — we ask everyone this — if you were going to write a book about what’s happening in this space, what do you think you’d title it?
**Ritu Mohanka:** You know, Chris, I’m not always great with words, but I’ll say this: I haven’t read many books written about the experience of being a first-time CEO. And as a CEO, the one thing I spend the most time on is talent acquisition. So if I could converge those two things β how do you become a truly great first-time CEO, and what does it mean to treat talent acquisition as your first and most important job in that role β I think the book lives somewhere in there. I don’t have the title yet, but when I figure it out, I’ll let you know.
**Chris Hoyt:** I love that. Okay, follow-up question: present company excluded, who gets the first signed copy?
**Ritu Mohanka:** My boys — my 16-year-old and my 14-year-old.
**Chris Hoyt:** I love that.
**Ritu Mohanka:** They are my biggest rock.
**Chris Hoyt:** That is wonderful. Thank you so much for joining us. I know you’re incredibly busy, and you dialed in all the way from London — we really appreciate you taking the time. And hopefully we haven’t pushed you too far past happy hour.
**Ritu Mohanka:** Not at all. This was an absolute pleasure. I look forward to coming back soon.
**Chris Hoyt:** Lovely. We’ll be happy to have you, Ritu. For those sticking with us — cxr.works/podcast for past and future episodes. Let us know if you have a guest suggestion or would like to be on the show yourself. And I have to say it one more time: we’ve only got a couple of weeks left for submissions at cxrrecruitingawards.com. It does not have to be a massive overhaul or major AI implementation to qualify. Reimagined a complete workflow with AI? Perfect. Have a single prompt that cuts 30 minutes off a screening process? Chef’s kiss — we want to see that too. So get your submissions in, go back and listen to this one again — we covered a lot of ground fast — and we’ll see everyone next week.
—
**Announcer:** Thanks for listening to the Recruiting Community Podcast, where talent acquisition leaders connect, learn, and grow together. Be sure to visit cxr.works/podcast to explore past episodes, see what’s coming up next, and find out how you can join the conversation. Whether you’ve got insights to share or want to be a guest on the show, we’d love to hear from you. If you’re interested in learning more about becoming a member of the CXR community, visit us at cxr.works. We’ll catch you in the next episode.
Tagged as: AI screening, CXR Recruiting Awards, CarMax, Marketplace Live, ThoughtWorks, AI, AI transparency, ChatGPT, CareerXroads, applicant volume, Claude, Anthropic, Change Management, AI governance, skills-based hiring, recruiter enablement.