HireSift is a decision-support system, not an automated hiring machine. Here is a transparent explanation of what the AI does, what it does not do, and how we actively guard against bias and automated individual decisions.
Last updated: April 2026
Our AI analyzes application materials against the job criteria defined by the employer. The output is a structured summary plus a match score — never a decision.
The AI performs three steps: extraction of structured data from CV PDFs (education, work experience, skills, languages); matching against the configured job criteria; and generation of a numeric match score (0–100) together with a textual rationale. Everything is transparent and visible to the recruiter.
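To make the shape of that output concrete, here is a minimal sketch in Python. The field names, types, and the proportional scoring rule are illustrative assumptions for this document, not HireSift's actual schema or scoring formula.

```python
from dataclasses import dataclass, field

@dataclass
class CriterionResult:
    criterion: str   # e.g. "3+ years of backend experience" (hypothetical)
    met: bool
    evidence: str = ""   # CV excerpt supporting the judgment

@dataclass
class MatchResult:
    candidate_id: str
    score: int                                   # 0-100, a recommendation only
    criteria: list[CriterionResult] = field(default_factory=list)
    rationale: str = ""                          # human-readable explanation

def compute_score(criteria: list[CriterionResult]) -> int:
    """Illustrative scoring: share of criteria met, scaled to 0-100.
    The real weighting may differ per job configuration."""
    if not criteria:
        return 0
    met = sum(1 for c in criteria if c.met)
    return round(100 * met / len(criteria))
```

The key design point is that the score is always accompanied by the per-criterion results it was derived from, so the recruiter can inspect and override it.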
HireSift performs no automated rejection and no automated hiring, sends no rejection emails without a human trigger, and applies no algorithmic filtering below a score threshold. Every candidate remains visible in the recruiter dashboard regardless of score. All decisions stay with the human.
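The "no filtering" guarantee can be expressed in one line: the dashboard may order candidates by score, but it never drops anyone. This is an illustrative sketch of that policy, not HireSift's production code.

```python
def dashboard_view(candidates: list[dict]) -> list[dict]:
    """Sort candidates by AI score for the recruiter's convenience,
    but always return every applicant, whatever their score.
    There is deliberately no threshold and no filter step."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)
```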
The recruiter sees both the match score and the AI rationale — not just "Candidate X: 72%" but "Candidate X: 72% — matches 4 of 5 criteria, missing experience in area Y". This lets the recruiter interpret the score and override it when needed.
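The explanation format from the example above could be rendered roughly like this. The exact wording HireSift generates may differ; this sketch only mirrors the "Candidate X: 72%" example in the text, with hypothetical criterion names.

```python
def render_rationale(candidate: str, score: int, results: dict[str, bool]) -> str:
    """Combine the score with its per-criterion explanation so the
    recruiter never sees a bare percentage."""
    met = [name for name, ok in results.items() if ok]
    missing = [name for name, ok in results.items() if not ok]
    text = f"{candidate}: {score}% — matches {len(met)} of {len(results)} criteria"
    if missing:
        text += f", missing {', '.join(missing)}"
    return text
```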
GDPR Art. 22 gives data subjects the right not to be subject to decisions "based solely on automated processing" that produce legal or similarly significant effects. HireSift deliberately does not make such decisions. The AI score is a recommendation, not a decision. The platform has no function that rejects candidates without human review.
Every candidate has the right to have their application reviewed by a human. HireSift technically ensures that every candidate remains visible in the recruiter dashboard and is not algorithmically filtered out.
The textual AI rationale next to the score documents which criteria were met or not met. On request, the customer (employer) can share this rationale with a candidate.
Candidates can request via the employer that their application be processed without AI pre-screening. HireSift supports this workflow — the recruiter can manually review every candidate.
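The opt-out workflow can be sketched as a simple routing step. The status values and field names here are hypothetical illustrations, not HireSift's actual data model.

```python
def route_application(application: dict, skip_ai: bool) -> dict:
    """Route an application either through AI pre-screening or straight
    to manual review when the candidate has opted out via the employer."""
    if skip_ai:
        application["status"] = "manual_review"
        application["ai_score"] = None  # no AI pre-screening performed
    else:
        application["status"] = "ai_prescreened"  # score shown as recommendation
    return application
```

Either way the application lands in the recruiter dashboard; the flag only controls whether an AI score is attached.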
Large language models are trained on internet text and can reproduce societal biases. HireSift implements several concrete measures to minimize this risk. One of them is full transparency about our model infrastructure: for enterprise audits, we additionally provide the specific model details and the system prompt on request.
The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems used to select natural persons in HR as high-risk systems (Annex III, point 4(a)). HireSift acknowledges this classification and is progressively implementing the resulting requirements: documented dataset governance, human oversight (already in place), technical robustness, transparency towards data subjects (this document), as well as risk and quality management. A formal Fundamental Rights Impact Assessment (FRIA) is documented as part of our Data Protection Impact Assessment.
We transparently answer how our AI works in your specific use case, how criteria are weighted, and what data flows where. Enterprise buyers additionally receive the system prompt and detailed model information as part of the security questionnaire.