AI TRANSPARENCY

How our AI works — and what it does not decide.

HireSift is a decision-support system, not an automated hiring machine. Here is a transparent explanation of what role the AI plays, what it does not do, how we actively work to mitigate bias, and how we rule out automated individual decisions.

Last updated: April 2026

What the AI does — and what it does not

Our AI analyzes application materials against the job criteria defined by the employer. The output is a structured summary plus a match score — never a decision.

What the AI does

The AI extracts structured data from CV PDFs (education, work experience, skills, languages), matches it against the configured job criteria, and generates a numeric match score (0–100) together with a textual rationale. All of this output is transparent and visible to the recruiter.
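
To make the shape of this output concrete, here is a minimal sketch in Python of what the extracted profile, the per-criterion matching, and the score could look like. The class names, fields, and the compute_score helper are illustrative assumptions, not HireSift's actual schema or scoring formula.

```python
from dataclasses import dataclass


@dataclass
class CandidateProfile:
    # Structured data extracted from the CV PDF (illustrative fields only)
    education: list[str]
    work_experience: list[str]
    skills: list[str]
    languages: list[str]


@dataclass
class Criterion:
    # One job criterion as configured by the employer
    name: str
    weight: float  # set by the recruiter, never by the AI
    met: bool      # whether the AI judged the criterion as satisfied


@dataclass
class MatchResult:
    score: int      # 0-100, a recommendation, never a decision
    rationale: str  # textual explanation shown next to the score
    criteria: list[Criterion]


def compute_score(criteria: list[Criterion]) -> int:
    """Weighted share of met criteria, scaled to 0-100 (illustrative)."""
    total = sum(c.weight for c in criteria)
    met = sum(c.weight for c in criteria if c.met)
    return round(100 * met / total) if total else 0


criteria = [
    Criterion("5+ years backend experience", weight=2.0, met=True),
    Criterion("Python", weight=1.0, met=True),
    Criterion("SQL", weight=1.0, met=True),
    Criterion("German C1", weight=1.0, met=True),
    Criterion("Team lead experience", weight=2.0, met=False),
]
result = MatchResult(
    score=compute_score(criteria),  # 71 with these illustrative weights
    rationale="Matches 4 of 5 criteria, missing team lead experience.",
    criteria=criteria,
)
```

In this illustration, four of five met criteria do not automatically yield 80: because the unmet criterion carries a higher weight, the score lands near the 72% shown in the recruiter example further down.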

What the AI does NOT do

No automated rejections, no automated hiring, no rejection emails without a human trigger, and no algorithmic filtering below a score threshold. Every candidate remains visible in the recruiter dashboard regardless of score. All decisions stay with a human.

What the recruiter sees

The recruiter sees both the match score and the AI rationale — not just "Candidate X: 72%" but "Candidate X: 72% — matches 4 of 5 criteria, missing experience in area Y". This lets the recruiter interpret the score and override it when needed.

GDPR Art. 22

GDPR Art. 22: No automated individual decisions

GDPR Art. 22 gives individuals the right not to be subject to decisions "based solely on automated processing" that produce legal or similarly significant effects. HireSift deliberately does not make such decisions. The AI score is a recommendation, not a decision. The platform has no function that rejects candidates without human review.

Right to human review

Every candidate has the right to have their application reviewed by a human. HireSift technically ensures that every candidate remains visible in the recruiter dashboard and is not algorithmically filtered out.

Right to explanation

The textual AI rationale next to the score documents which criteria were met or not met. On request, the customer (employer) can share this rationale with a candidate.

Right to object to AI processing

Candidates can request via the employer that their application be processed without AI pre-screening. HireSift supports this workflow — the recruiter can manually review every candidate.

Bias management

Handling bias and discrimination

Large language models are trained on internet text and can reproduce societal biases. HireSift implements several concrete measures to minimize this risk:

1. The AI evaluates exclusively along the criteria configured by the employer (e.g., work experience, skills, languages). Demographic attributes like gender, age, or nationality do not factor into the scoring logic.
2. The system prompt sent to the AI is documented and can be reviewed by enterprise customers as part of an audit.
3. Blind recruiting mode (planned Q3 2026): recruiters can hide demographic fields in the dashboard.
4. Periodic bias testing: HireSift runs structurally identical CVs with varied demographic attributes against the same criteria to detect systematic bias early (a minimal sketch of such a probe follows this list).
5. The recruiter configures the weight of each criterion; the AI applies the weights defined by the customer, not its own judgment.
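
To illustrate how such a counterfactual bias probe (point 4 above) can be built, here is a minimal sketch. The score_cv stub, the sample CV, and the demographic variants are illustrative assumptions, not HireSift's actual test harness; in practice the stub would call the real scoring service.

```python
import itertools
import statistics


def score_cv(cv: dict, criteria: dict) -> int:
    """Placeholder scorer: replace with a call to the real scoring service.
    It deliberately ignores demographic fields, mirroring measure 1 above."""
    wanted = criteria.get("skills", [])
    met = sum(1 for skill in wanted if skill in cv.get("skills", []))
    return round(100 * met / max(len(wanted), 1))


# A single, fixed qualification profile used for every probe run.
BASE_CV = {
    "education": ["BSc Computer Science"],
    "work_experience": ["5 years backend development"],
    "skills": ["Python", "SQL"],
    "languages": ["German", "English"],
}

# Demographic attributes that are varied while everything else stays identical.
VARIANTS = {
    "gender": ["female", "male", "non-binary"],
    "age": [25, 45, 60],
    "nationality": ["German", "Turkish", "Nigerian"],
}


def run_bias_probe(criteria: dict, tolerance: int = 2) -> bool:
    """Score structurally identical CVs that differ only in demographic
    attributes and flag the run if the score spread exceeds the tolerance."""
    scores = []
    for combo in itertools.product(*VARIANTS.values()):
        cv = dict(BASE_CV, **dict(zip(VARIANTS.keys(), combo)))
        scores.append(score_cv(cv, criteria))
    spread = max(scores) - min(scores)
    print(f"mean score {statistics.mean(scores):.1f}, spread {spread}")
    return spread <= tolerance


if __name__ == "__main__":
    ok = run_bias_probe(criteria={"skills": ["Python", "SQL", "Docker"]})
    print("bias probe passed" if ok else "bias probe flagged for review")
```
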
Infrastructure

Technical facts

Full transparency about our infrastructure. For enterprise audits we additionally provide the specific model details and the system prompt on request.

AI provider (default): Mistral AI (Paris, France)
AI provider (alternative): Vertex AI (europe-west4, Belgium)
Candidate data leaves the EU: No
Model training on your data: No (contractually excluded)
Use of your data for other customers: No
Automated individual decisions (Art. 22): No
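
As an illustration of how region pinning for the alternative provider can keep candidate data inside the EU, here is a minimal sketch using the Vertex AI Python SDK with the europe-west4 (Belgium) region. The project ID, model name, and prompt are placeholders; this shows the general mechanism, not HireSift's actual integration.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Pin all Vertex AI calls to the Belgian region so requests and responses
# are processed inside the EU (project ID and model name are placeholders).
vertexai.init(project="example-project", location="europe-west4")

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Summarize this anonymized CV against the configured job criteria: ..."
)
print(response.text)
```
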
EU AI Act

EU AI Act Compliance

The EU AI Act (Regulation 2024/1689) classifies AI systems used to select natural persons in HR as high-risk systems (Annex III point 4a). HireSift acknowledges this classification and is progressively implementing the resulting requirements: documented dataset governance, human oversight (already in place), technical robustness, transparency towards data subjects (this document), as well as risk and quality management. A formal Fundamental Rights Impact Assessment (FRIA) is documented as part of our Data Protection Impact Assessment.

Questions about our AI processing?

We transparently answer how our AI works in your specific use case, how criteria are weighted, and what data flows where. Enterprise buyers additionally receive the system prompt and detailed model information as part of the security questionnaire.