There are generally two types of compliance concerns when it comes to the use of AI:
- General concerns that apply to all AI products:
  - Where do you source the data for your AI features and models?
  - Will customer data be used to train the model?
  - Who made the model, and using what data?
  - Is this an Automated Employment Decision Tool (AEDT)?
- For our tools that are AEDTs:
  - Which laws for AEDTs apply to our customers?
  - How can customers stay in compliance?
  - What has been done to ensure the tool does not discriminate?
  - Have you audited this tool?
For all Gem AI features, no customer data is used to train the LLM models. We use off-the-shelf models provided by Azure OpenAI. For additional information, you can direct customers to our Gem AI Legal, Privacy, & Compliance FAQs, which also include links to the relevant Azure OpenAI help center articles.
For Sequence Template Generation, Custom Token in Sequences, Skill Suggestion for Application Review, and Prospect Search assistant:
None of these tools is directly involved in the decisional element of hiring. They simply enable you to populate fields more quickly, generate relevant text, and craft job descriptions, so we believe they would not fall under any of the new AI-bias laws that require an audit.
Following is text you can share with customers or prospective customers:
In our view, the Gem AI-enabled Sequence Template Generation, Custom Token in Sequences, Skill Suggestion for Application Review, and Prospect Search assistant do not screen or rank candidates for users as defined under the NY AI Bias laws, and therefore we do not believe they fall under the type of functionality covered by the NY AI Bias laws or any other current regulations. This Gem AI Features Overview covers what each tool does and the data fields that are shared with Azure OpenAI as part of providing the services.
For AI Sourcing:
The NYC Bias Audit Law is the regulation that most customers will use as a reference when evaluating our AI features. The NYC regulation covers the use of Automated Employment Decision Tools (AEDTs). An AEDT is a computer-based tool that (1) uses machine learning, statistical modeling, data analytics, or artificial intelligence; (2) helps employers and employment agencies make employment decisions; and (3) substantially assists or replaces discretionary decision-making. The NYC regulation also makes clear that its requirements apply to “candidates” only, not to the prospects that are the focus of Gem’s AI Sourcing tool. For this reason, we do not believe that our AI Sourcing tool falls within the scope of the NYC Bias Audit Law.
Following is text you can share with customers or prospective customers:
In our view, the Gem AI Sourcing tool does not qualify as an AEDT under the NY AI Bias laws and therefore does not require Gem to undergo a bias audit. However, Gem is still mindful of compliance and will continue to apply high standards of review via our internal governance group.
For AI Ranking (AI Application Review):
Our AI Ranking tool will most likely fit within the definition of an AEDT under the NYC Bias Audit Law. We worked with a leading audit firm (BABL) to complete our first audit in November 2024. The audit report is available to customers and prospects under NDA.
Following is text you can share with customers or prospective customers:
While we are still developing our AI Ranking feature, we believe the functionality will most likely fit within the definition of an AEDT. We have conducted a bias audit with the leading audit firm BABL, and the report is available to customers and prospective customers under NDA.
For customers or prospective customers that would like to offer candidates the ability to opt-out of having their data processed by AI Ranking:
We do not currently offer an opt-out option for candidates submitting applications to Gem customers utilizing our AI Ranking feature. Because the Gem AI features do not make any automated hiring decisions, and because no candidate data is retained by our subprocessors or used to train the LLM models, there should be minimal privacy concerns from applicants around the use of the Gem AI tools.
For customers with additional concerns regarding upcoming or potential AI regulations that may impact their use of Gem’s AI tools:
Thank you for sharing your legal team’s concerns about Gem’s AI-powered features. While the regulatory landscape for AI in hiring is evolving, Gem has taken a proactive approach to compliance that aligns with the core principles underlying these emerging regulations.
Our current compliance framework addresses the fundamental requirements across these jurisdictions: (1) we’ve completed third-party bias audits with BABL for our AI-powered App Review, (2) our platform deliberately maintains human oversight by allowing recruiters to control inputs and edit criteria, (3) we implement PII redaction before processing, and (4) we maintain enterprise-grade security protocols including SOC2 compliance.
We’re actively working to implement any required compliance measures ahead of regulatory deadlines: the Colorado AI Act provisions will be phased in through 2026, EU AI Act compliance deadlines extend into 2026, and the California CPPA regulations on automated decision-making are still being finalized (expected to take effect in 2025).
We would welcome the opportunity to discuss specific concerns with your legal team to demonstrate how our current approach, combined with our compliance roadmap, addresses their requirements.
For customers requesting changes to limitations of liability or indemnification in order to use AI tools:
Thank you for sharing your concerns about Gem’s AI-powered features. While the regulatory landscape for AI in hiring is evolving, Gem has taken a proactive approach to compliance that aligns with the core principles underlying these emerging regulations.
Gem has made substantial investments to build a tool designed to enable compliance. We established an AI Governance Group, conducted comprehensive risk assessments following NIST frameworks, designed product features specifically to mitigate bias risks, and engaged BABL Inc. to conduct independent bias audits. We make these audit results available to customers and provide extensive documentation of our governance processes.

However, we do not assert that our audit definitively satisfies your specific regulatory obligations, because that determination depends on your implementation approach, jurisdictional requirements, and how you integrate the tool into your broader hiring practices: variables you control and we cannot observe.

This liability allocation is standard across regulated software. Healthcare record systems do not guarantee HIPAA compliance; financial trading platforms do not warrant securities regulation compliance. In each case, the provider builds compliance-enabling technology while the customer bears responsibility for lawful implementation and use. Our contract language must reflect this reality: we provide a tool designed to enable compliance and support your efforts with audits and documentation, but you remain responsible for ensuring your specific use complies with applicable law.
For customers who have questions about how we test our AI features:
How accurate is Gem’s AI, and how do you test it? Our goal is that the AI insights Gem surfaces look like what a seasoned recruiter would choose for a given role. We verify this in two ways:
- Offline benchmarking: We keep a representative set of profiles and role criteria and have experienced recruiters label the “right” answers. We then benchmark the AI against those expert labels and require it to pass a series of quality and safety checks before release; a simplified sketch of this kind of check appears after this list.
- Human-in-the-loop in production: In your workflow, you stay in control—recruiters review Gem’s AI scores before taking any action on candidates. We sample results, watch for edge cases, and use customer feedback to keep improving.
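For readers who want a concrete picture of what “benchmarking against expert labels” can look like, here is a minimal sketch. It is illustrative only: the names (`LabeledExample`, `ai_score`) and the 90% release gate are assumptions for the example, not Gem’s actual evaluation pipeline.

```python
# Minimal sketch: measure agreement between AI match scores and recruiter
# labels, and fail the release gate if agreement is too low. All names and
# thresholds here are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class LabeledExample:
    profile_id: str
    role: str
    expert_match: bool  # a seasoned recruiter's "right" answer for this profile/role


def benchmark(
    examples: Sequence[LabeledExample],
    ai_score: Callable[[str, str], float],  # returns a 0..1 match score
    threshold: float = 0.5,
    release_gate: float = 0.90,
) -> float:
    """Compare AI scores against expert labels; raise if below the release gate."""
    agree = sum(
        (ai_score(ex.profile_id, ex.role) >= threshold) == ex.expert_match
        for ex in examples
    )
    agreement = agree / len(examples)
    if agreement < release_gate:
        raise AssertionError(
            f"Agreement {agreement:.1%} is below the {release_gate:.0%} release gate"
        )
    return agreement
```

In practice, a release gate like this would sit alongside the safety checks mentioned above, so that a model change cannot ship if it drifts away from expert judgment.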
What models do you use? We use standard Microsoft Azure OpenAI models. We don’t train the foundation models on your data, and we configure Azure so your prompts and responses aren’t used to train those models, operating within enterprise security controls.

What should you expect? Accuracy varies by role and inputs, but our process aims to align closely with how your team would judge candidates, while ensuring nothing bypasses human review.
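As context for the models question above, the sketch below shows how an application typically calls an Azure OpenAI chat deployment using the official `openai` Python package. The endpoint, key, and deployment names are placeholders, and this is not Gem’s production code; note that Azure OpenAI’s commitment not to train foundation models on customer prompts is a property of the Azure service itself, not a flag set in client code.

```python
# Minimal sketch of calling an Azure OpenAI chat deployment with the official
# `openai` Python package (v1+). All resource and deployment names are
# placeholders for illustration.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

# Azure OpenAI does not use customer prompts/completions to train the
# foundation models; that guarantee comes from the Azure platform and its
# enterprise data-privacy terms, not from anything configured here.
response = client.chat.completions.create(
    model="<your-deployment-name>",  # the Azure deployment name, not a base model id
    messages=[
        {"role": "user", "content": "Draft a short outreach email for a backend engineer role."},
    ],
)
print(response.choices[0].message.content)
```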