Legal & Compliance

Works Council Agreements for AI in Recruiting: What HR Teams Should Define

HireSift · May 7, 2026 · 8 min read

AI in recruitment is not only an HR technology project. Once software reads CVs, compares profiles or calculates match scores, it touches sensitive candidate data. It can also change how recruitment teams work.

That is why employee representation matters early. In Germany and Austria, a works council may have participation rights. In the UK and other markets, similar conversations can happen with employee forums, unions or internal governance bodies.

A good agreement does not block AI. It makes the process clear. It defines what the system may do, who remains accountable and how risks are checked. That clarity helps HR move faster without losing trust.

This guide explains what to prepare before rolling out AI-assisted screening. It is not legal advice. It is a practical checklist for a structured conversation.

Why AI Recruiting Needs Written Rules

Most teams start with a real operational problem. Applications arrive faster than recruiters can review them. Hiring managers wait too long for shortlists. Good candidates drop out because the process is slow.

AI can help with that. It can parse CVs, summarise profiles, compare skills with job criteria and highlight likely matches. Used well, it reduces manual work and improves consistency.

The risk is that AI can look more objective than it really is. A score may feel precise. A ranking may appear neutral. But the result still depends on data quality, criteria design and human judgement.

Written rules keep the system in the right role. The software should support recruiters. It should not silently replace human decision-making.

For organisations with works councils, that boundary is especially important. The tool may affect candidates. It may also affect recruiters, because their workflow becomes measurable and system-guided.

Start With the Exact Use Case

Avoid broad statements such as “we will use AI in recruitment”. They are too vague. A useful agreement starts with specific use cases.

List what the system will actually do. Common examples include CV parsing, criteria matching, duplicate detection, profile summaries, shortlist support and interview preparation.

Separate those activities. Parsing a CV is not the same as rejecting a candidate. A summary is not the same as a final suitability decision. Each step has a different risk profile.

A simple table helps. Include the function, purpose, data source, output and human control point. This makes the discussion concrete for HR, legal, data protection and employee representatives.
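For example, one row of such a table might read like this (the entries are illustrative, not a template):

Function: CV parsing
Purpose: extract structured profile data from application documents
Data source: uploaded CV
Output: structured candidate profile
Human control point: recruiter verifies the extracted fields before any scoring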

If you use HireSift, this part can be described clearly. HireSift extracts candidate profiles, compares them with role-specific criteria and shows transparent scores. The final hiring decision stays with your team.

Define Which Data May Be Used

The agreement should state which candidate data can enter the system. This prevents scope creep later.

Typical data includes contact details, work history, education, skills, certificates, languages and salary expectations. Documents may include CVs, cover letters and references.

Not everything that is technically visible should be assessed. Photos, age, family status and private hobbies are rarely relevant. Special category data requires extra care and should not be actively evaluated unless there is a clear legal basis.

Also consider proxy data. A school name, graduation date or postcode can indirectly suggest protected characteristics. The system should not rank candidates on features that are not job-related.

Retention matters as well. Candidate data should not remain in the system forever. The agreement should describe deletion rules, exceptions for talent pools and responsibility for clean-up.

Keep Human Decision-Making Explicit

A strong rule is simple: AI provides input, not final decisions.

That principle must be built into the workflow. Recruiters should be able to review scores, inspect criteria and override recommendations. They should also understand why a candidate was ranked in a particular way.

Automatic rejection based only on a score is risky. It can be unfair, hard to explain and damaging to candidate trust. A better workflow uses low scores as a signal for review, not as a final outcome.

High scores also need human judgement. A strong CV match does not prove motivation, communication style or team fit. AI can help order the work. It cannot run the hiring process alone.

Define who makes each decision. Make it clear when a recruiter or hiring manager may disagree with the system. That disagreement should be allowed and documented where useful.

Be Transparent With Candidates

Candidates should know when technology supports the hiring process. This is both a legal and trust issue.

Your privacy notice should explain the use of AI in plain language. It should cover the purpose, data categories, legal basis, retention period and contact routes for questions.

Avoid vague wording such as “automated analytical methods may be used”. It sounds defensive and tells candidates very little.

A clearer version is: “We use software to help structure and compare applications against job-related criteria. A human reviews the information and makes the decision.”

The agreement can define which candidate information is required. It can also describe how access requests, correction requests and complaints are handled.

Consistency matters. Your careers site, privacy notice, internal process and recruiter scripts should say the same thing. Inconsistent messages create suspicion.

Set Criteria Before Screening Starts

AI-assisted screening becomes risky when criteria are vague or changed after candidates have applied. The process then becomes difficult to defend.

Define criteria before screening starts. They should come from the job requirements. Examples include relevant experience, required certifications, language level, location constraints or specific technical skills.

Weight criteria intentionally. A must-have requirement should not carry the same weight as a nice-to-have skill. The logic should be visible enough for review.

Also define who creates criteria. HR and the hiring team should usually work together. In many setups, the works council does not need to approve every vacancy, but it should understand and agree the procedure.

HireSift supports this approach because criteria are visible per role. That makes later review easier. It also reduces the risk of a score becoming an unexplained black box.

Build Bias Checks Into Operations

An agreement is only useful if it affects daily operations. Bias control should not be a one-off slide in the implementation deck.

Schedule regular sample reviews. Check whether certain groups are consistently ranked lower. Review whether criteria remain job-related. Look for patterns that may need investigation.

Not every difference proves discrimination. But unexplained patterns should be examined. Someone needs responsibility for that monitoring.
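A sample review can be very simple in practice. The sketch below compares average scores across anonymised audit groups; the group labels and scores are made-up review data, not output from any real system.

```python
# Illustrative sketch only: comparing average screening scores by group
# in an anonymised audit sample. All data here is invented.
from collections import defaultdict
from statistics import mean

# (anonymised group label from the audit sample, screening score)
sample = [
    ("group_a", 78), ("group_a", 84), ("group_a", 71),
    ("group_b", 52), ("group_b", 58), ("group_b", 49),
]

by_group = defaultdict(list)
for group, score in sample:
    by_group[group].append(score)

for group, scores in sorted(by_group.items()):
    print(group, round(mean(scores), 1))
# A consistent gap between groups does not prove discrimination,
# but it should trigger a closer look at the criteria behind the scores.
```

Even a spreadsheet version of this check, run quarterly on a random sample, gives the monitoring responsibility something concrete to act on.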

Create a feedback loop. Recruiters should report wrong extractions, missing skills or confusing scores. Hiring managers should report when shortlists do not match role reality.

Vendor changes also matter. If a provider changes a model, adds a feature or modifies data flows, you may need a fresh review. The agreement should define when that review is triggered.

Limit Access and Logging

Not everyone needs access to all candidate data. AI tools can make profiles easier to search and share, so role control is important.

Define user roles. Who can create jobs? Who can edit criteria? Who can view CVs? Who can export data? Who can delete records?

Logging also needs balance. Security logs can help detect misuse and prove compliance. But logs should not become hidden performance monitoring of recruiters unless that is clearly agreed.

The agreement should separate security from performance control. It should describe what is logged, who can see logs and how long logs are kept.

Supplier governance belongs here too. Check data processing agreements, subprocessors and hosting regions. If data may leave the EU or UK, document the safeguards.

A Practical Structure for the Agreement

The document does not need to be huge. It needs to be specific enough to guide real work.

A pragmatic structure includes:

  • purpose and scope
  • description of AI functions
  • permitted and excluded data
  • roles and access rights
  • human review and decision rules
  • candidate transparency
  • retention and deletion rules
  • monitoring and correction processes
  • vendor changes and review triggers
  • training for users

Add a review date. Three to six months after rollout is often useful. By then, the team has real cases and practical feedback.

The review should not be treated as failure. It is how you improve the system responsibly.

How to Prepare the Conversation

Do not approach employee representatives with a completed purchase and a request for quick approval. That creates resistance.

Start with the process. Explain the recruitment problem, the current workload and the candidate impact. Then show where AI support would fit.

Use examples. An anonymised CV, a sample criteria list and a sample score are more useful than abstract slides. They show what the system does and where humans stay in control.

Be honest about limitations. AI can extract data incorrectly. It can overvalue certain wording. It cannot fix poor job criteria. That honesty makes the governance discussion stronger.

Training is also important. Recruiters need to know how to read scores, check outputs and handle exceptions. A tool is safer when users understand its limits.

Conclusion: Clear Governance Makes AI Usable

AI in recruitment needs trust. Trust does not come from vendor promises. It comes from clear rules, visible criteria and human accountability.

A works council agreement, or a similar internal governance document, helps create that structure. It forces the organisation to define purpose, data, access, control and review.

Start with the exact use case. Keep the final decision with people. Check the system regularly. Make candidates aware of how technology is used.

If you want AI screening to be practical and explainable, HireSift can support that workflow. You define criteria, review transparent scores and keep decisions inside the hiring team. That turns AI from a black box into a controlled recruiting tool.

Less screening. More hiring.

HireSift analyzes 100 CVs in minutes — with two transparent scores, EU AI Act compliant, no credit card required.

Try free for 7 days
