RFI/RFP Questions Regarding Generative Artificial Intelligence Claims

Generative AI in the context of recruiting generally refers to the rapidly evolving capability of artificial intelligence to improve productivity and efficiency in the hiring process.

This template, developed with the help of recruiting industry leaders from the CareerXroads Talent and Service Suppliers communities,* is offered as a supportive document to help employers thoroughly evaluate the claims, quality, compliance, and potential risks associated with generative AI solutions in recruiting.

Permission to use or modify this document as needed is granted. However, it is expected and understood that this template will not be sold and that, if shared with others, credit will be given.

We agree with the AI principles espoused and recently published by Google, namely that AI should:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.

If you, as an employer or service provider, wish to add your support, contribute additional insights and improvements to this document, or are interested in being involved in future collective action to promote a baseline of policies and practices for our industry, please contact CXR.

Suggested RFI/RFP Questions Regarding Generative AI Claims


  1. What is your definition of AI with regard to the solution your product(s)/service(s) offer?

Quality Control [Internal/External]:

  1. How do you ensure the quality and relevance of the data used to train your generative AI model for talent acquisition and hiring?

  2. Do you contract with an independent technology service to verify your AI claims? If you do, please supply their reports, including a description of their credentials, criteria used, methodology, and assessment.

Operational Considerations [Scalability, Security, Privacy, Compliance, Training, Documentation]:

  1. Describe your methodology for collecting, processing, and verifying data from resumes, job descriptions, feedback, interviews, or other data sources relevant to your solution.

  2. What are the sources of data on which your AI is trained, and to what degree do you have the rights to this data?

  3. Describe how your software handles large data sets and high volumes of requests.

  4. Describe how your solution handles and protects sensitive personal data to safeguard end users’ privacy. Provide the specific methods used to anonymize data, describe how data retention policies are administered (including the right to be forgotten), and list any other measures taken to protect users’ privacy.

  5. Describe your approach to complying with relevant laws and regulations in multiple jurisdictions, such as GDPR, CCPA, or EEOC guidelines.

  6. Are your algorithms, as used in your solution, static? If not, do they continue to learn within our use or more broadly? How do you support your solution over time?

  7. What Information Security policies and procedures do you have in place? At a minimum, please share your policies for data protection, encryption, access control, audit logging, and incident response.

  8. How do you support the integration and customization of the generative AI solution for talent acquisition and hiring?

  9. Can you adjust the creativity, level of detail, and style of the generated outputs?

  10. Is your solution transparent and explainable?

  11. What level of ongoing support and maintenance is included? How are updates and bug fixes handled?

  12. How do you provide documentation, training, and technical support for users?

Bias Mitigation/Misuse:

  1. How do you address the issues of data diversity and bias in talent acquisition and hiring?

  2. How is human input used to guide and refine the generative process, and how is it integrated into recruiter workflows? Is there a “verify & override” step built into the design?

  3. How do you prevent or mitigate potential harm or abuse of generative AI, such as plagiarism, inaccurate or misleading information (i.e., hallucinations), or manipulation in talent acquisition and hiring? Describe the guardrails you have put in place.

  4. How will you support us if we are audited?

  5. How do you address the ethical and social implications of using generative AI for talent acquisition and hiring?

Measurement & Evaluation:

  1. How do you measure and evaluate the effectiveness, accuracy, and reliability of the generative AI model for talent acquisition and hiring?

  2. What are the metrics and benchmarks used to assess the quality and suitability of the generated content, such as candidates, profiles, messages, and reports?

Unintended Consequences:

  1. How do you handle errors, anomalies, or inconsistencies in the generated content or the hiring process?

  2. Are there any judgments pending against you, or against current or previous clients, with respect to your product/service?

*These questions were curated, corrected, and expanded in the Fall of 2023 by more than 100 employer members of CareerXroads’ Talent Acquisition Leaders Community (TLC) and by current and former members of the CareerXroads Talent Solutions Community (TSC).

The TSC includes: AppCast, Aspen Technology, Cielo Talent, Compa, DHI Group, Findem, Hackajob, iCIMS, JobSync, Paradox, Ph.Creative, Phenom, Plum, Shaker Recruitment Marketing, Social Talent, SparcStart, Tatio, and Wilson HCG.

Additional input from Aspen Analytics, FairNow, HiredScore, Oracle, and Rocket-Hire is also noted and appreciated.
