While legislation often lags behind innovation, governance around AI in hiring is beginning to roll out in the United States. A recent TechRepublic article highlighted upcoming AI-recruiting regulations in New York and the steps organizations should take as they work to implement AI in their recruitment processes.
Beginning in 2023, New York City will require a bias audit for all automated employment decision tools. The law aims to ensure that no group is adversely affected by the AI software.
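To make the idea of a bias audit more concrete: such audits commonly compare selection rates across demographic groups and flag large gaps. Below is a minimal, purely illustrative sketch of that kind of impact-ratio check. The sample data, group labels, and the 0.8 threshold (the informal "four-fifths rule") are assumptions for illustration, not requirements drawn from the New York law.

```python
# Illustrative sketch only: a simplified impact-ratio check of the kind a
# bias audit might include. The data, group labels, and 0.8 threshold are
# hypothetical assumptions, not the law's actual requirements.
from collections import defaultdict

# Hypothetical screening outcomes: (demographic_group, was_selected)
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# Tally selections and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {group: sel / total for group, (sel, total) in counts.items()}
highest = max(rates.values())

# Flag any group whose selection rate falls below 80% of the highest rate.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A real audit would go well beyond this sketch, but the core idea of measuring outcomes by group and investigating disparities is the same.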
What can organizations do to prepare for potential laws on artificial intelligence? TechRepublic suggests keeping these things in mind:
- “Data is never neutral.” It’s therefore imperative to rely on experts to remediate biased data before deploying AI in your recruitment process.
- Data sets must be ample and diverse. Without this, you can expect biased outcomes and potential regulatory issues.
- While AI has its role, decisions about candidates should still be human-led.
- Repeated, ongoing testing is critical to avoiding adverse impacts.
While the benefits of implementing AI in recruiting processes are clear, it’s imperative that organizations also consider and plan for the potential pitfalls.
Check out the full source article here. For more on AI, head over to our recent podcast about The Humanity (Needed) in AI with Barb Hyman of Sapia.