
Pre-Employment Testing: A Hiring Manager's Guide to Assessments That Predict Performance

Types, legal requirements, and how to build a testing process that improves quality of hire

Most hiring decisions rely heavily on interviews. Interviews feel reliable, but decades of research say otherwise. Pre-employment testing, when done right, adds a layer of objectivity that interviews cannot match. The challenge is doing it right.

A 2023 SHRM analysis found that organizations using structured pre-employment assessments made faster hiring decisions and reported higher new-hire retention at the 12-month mark. The mechanism is simple: a test measures something specific and consistent. An interview, even a structured one, introduces subjectivity that varies by interviewer, day, and mood.

That does not mean testing is a silver bullet. A poorly designed assessment wastes everyone's time, introduces legal exposure, and damages your employer brand. The wrong test at the wrong stage loses good candidates who won't sit through a 90-minute cognitive battery before they've spoken to a human. Good pre-employment testing is about fit, timing, and legal validity. Not slapping a quiz in front of applicants and hoping for insight.

This guide focuses on employers and hiring teams. If you're already investing in structured interviews and skills-based hiring, pre-employment testing is the logical next layer. It gives you data points that complement what your interviewers observe. Combined with a good interview scorecard, assessments turn hiring from gut-driven to evidence-driven.

The EEOC guidelines on employment tests and selection procedures are clear: tests must be job-related and must not cause adverse impact without strong business justification. That is not a reason to avoid testing. It's a reason to be thoughtful about which tests you choose and how you apply them.

Assessment Types

Six types of pre-employment tests and when to use each

The taxonomy of pre-employment testing is sprawling. Vendors have incentives to make it feel complicated. It isn't. At the functional level, there are six types worth knowing, and each has a specific use case.

The most commonly misused is the personality assessment. DISC, Big Five, and similar instruments have moderate predictive validity in isolation, but employers often over-weight them. My view is that personality tests are most useful as a conversation-starter with the hiring manager, not as a screen. Use them to inform your interview questions, not to filter candidates out.

Six assessment types, matched to hiring stage and predictive strength

Cognitive Ability
Predictive validity: High. Examples: verbal reasoning, numerical reasoning, logical thinking. Best used: early funnel, after resume screen.

Work Sample / Skills Test
Predictive validity: Very High. Examples: coding challenge, writing sample, spreadsheet task. Best used: mid-funnel, before final interviews.

Personality Assessment
Predictive validity: Moderate. Examples: DISC, Big Five, Hogan, Predictive Index. Best used: after first interview, for team fit.

Job Knowledge Test
Predictive validity: High. Examples: accounting principles, regulatory knowledge, tool-specific tests. Best used: early to mid funnel.

Situational Judgment
Predictive validity: High. Examples: scenario-based decisions, prioritization exercises. Best used: mid-funnel, especially for leadership roles.

Integrity / Honesty Test
Predictive validity: Moderate-High. Examples: overt integrity scales, reliability assessments. Best used: early funnel for trust-sensitive roles.

The Research

What actually predicts job performance

The seminal meta-analysis by Schmidt and Hunter (1998), with over 85 years of research behind it, ranked selection methods by predictive validity, meaning how well each method predicts actual job performance. Work sample tests and cognitive ability tests topped the list. Unstructured interviews and reference checks ranked near the bottom.

The honest answer about why so many teams still lean on unstructured interviews is that they feel richer. A conversation feels like it reveals something deeper than a score on a screen. Sometimes it does. But that feeling often reflects the interviewer's biases more than the candidate's actual capability.

The chart below gives a rough sense of comparative validity. The big takeaway: combining a structured interview with a cognitive ability or work sample test gives you substantially better signal than either method alone.

Predictive validity of selection methods, higher is better (meta-analysis composite)

Cognitive Ability + Structured Interview: .63
Work Sample Tests: .54
Structured Interview: .51
Job Knowledge Test: .48
Unstructured Interview: .38
Personality (Big Five): .31
Reference Check (unstructured): .26
Years of Experience: .18

Based on Schmidt & Hunter (1998) validity coefficients, adjusted for range restriction and measurement error.

Implementation

How to build a pre-employment testing process that holds up

Most teams that try pre-employment testing fail not because the tests are bad, but because the process around them is weak. There is no scoring rubric. Tests go out to some candidates but not others. Nobody knows what a passing score means. Results sit in a folder nobody looks at before the panel interview.

The fix is treating the testing process as a formal hiring stage, not an optional add-on. That means six specific steps, done in order.

Six-step pre-employment testing process

1. Define role competencies: identify the 3-5 skills that actually predict success.
2. Select assessment type: match the test to the competency and the funnel stage.
3. Set scoring criteria: define clear pass/review/fail thresholds before candidates start.
4. Deploy consistently: every candidate for the role gets the same test.
5. Score and calibrate: use two reviewers for borderline results.
6. Track and validate: compare test scores to 90-day performance quarterly.
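Step three is worth making concrete before your pipeline goes live. A minimal sketch of predefined decision bands, where the cutoff values are illustrative assumptions, not recommendations:

```python
# Illustrative pass/review/fail thresholds, fixed before the first candidate
# tests. The cutoff values here are assumptions, not recommendations.
THRESHOLDS = {"pass": 70, "review": 55}  # below "review" is a fail

def classify(score: float) -> str:
    """Map a raw assessment score to a predefined decision band."""
    if score >= THRESHOLDS["pass"]:
        return "pass"
    if score >= THRESHOLDS["review"]:
        return "review"  # borderline: route to two reviewers (step five)
    return "fail"

print(classify(72))  # pass
print(classify(60))  # review
```

The point is not the code but the ordering: the thresholds exist, in writing, before anyone sees a score.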

Step six, tracking test scores against later performance, is the one most teams skip. It is also the most important. Without it, you have no idea whether your assessment is actually predicting anything. Run a simple correlation quarterly: pull test scores for hires from six months ago and compare them to manager performance ratings. If there is no correlation, the test is not working.

This is also how you defend the process if it ever gets challenged. Documented validation beats anecdote every time.

Vendor Selection

How to choose a pre-employment assessment vendor

The market for pre-employment testing tools is large and noisy. Vendors include Criteria Corp, HireVue assessments, TestGorilla, Codility (for technical roles), Pymetrics, and dozens of smaller players. Each has strengths. None is right for every context.

Four questions to ask every vendor before signing:

1. What is the validated predictive validity for this test, for roles like mine? Ask for published data, not a testimonial. A reputable vendor has this.
2. Has the test been validated for adverse impact? Get the adverse impact data by race, gender, and age. If they won't share it, walk away.
3. How is the test scored: by algorithm, by your team, or both? AI-scored tests need human calibration checks, especially for subjective items.
4. Can we run a pilot before full rollout? Pilot on 20-30 candidates in the role, compare scores to interview outcomes, and validate before scaling.

For technical roles, consider building your own work sample tests. A take-home coding task or a short data analysis exercise is often more predictive than a standardized cognitive test, and cheaper too. The investment is in designing a clear rubric and calibrating it with your senior engineers. That process also clarifies what you actually want in a hire, which is valuable on its own.
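A rubric for a take-home task can be as simple as weighted criteria agreed with your senior engineers up front. A sketch, where the criteria names, weights, and the 1-5 scale are all illustrative assumptions:

```python
# Illustrative work-sample rubric: weighted criteria agreed before reviewing.
# Criteria, weights, and the 1-5 scale are assumptions, not a standard.
RUBRIC = {"correctness": 0.4, "code_clarity": 0.3, "tests": 0.2, "communication": 0.1}

def rubric_score(ratings: dict) -> float:
    """Weighted average of a reviewer's per-criterion ratings (1-5 scale)."""
    return sum(RUBRIC[c] * ratings[c] for c in RUBRIC)

score = rubric_score({"correctness": 4, "code_clarity": 3, "tests": 5, "communication": 4})
print(round(score, 2))  # 3.9
```

Calibrate by having two engineers score the same submission independently; if their rubric scores diverge widely, the rubric's criteria need sharper definitions.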

Common Mistakes

Four testing mistakes that cost you good candidates

Pre-employment testing has a bad reputation in some circles. Most of that reputation is deserved. Not because testing doesn't work, but because it gets implemented badly. These are the four mistakes I see most often.

1. Testing every candidate before any screen. Instead: test after the initial screen to protect both sides' time.
2. Using the same generic test for all roles. Instead: map each assessment to role-specific competencies.
3. No defined scoring threshold before launch. Instead: set pass/fail criteria before the first candidate starts.
4. Treating test results as the sole hiring decision. Instead: use test scores as one data point, not the verdict.

The third mistake, no defined threshold before launch, is the sneakiest. Teams launch a test, collect scores, and then debate what the scores mean after they've seen them. That is not a hiring process. That is rationalization dressed up as rigor. A score of 72 looks different when you decide beforehand that 70 is your threshold versus when you see it next to a candidate you already like.

Set your criteria before your pipeline is live. It takes ten minutes, saves hours of internal debate, and gives you defensible documentation if a rejected candidate ever questions your process.

Legal & Compliance

Legal requirements for pre-employment testing in the US

The legal framework for pre-employment testing in the US sits under Title VII, the ADA, the ADEA, and EEOC guidance. The short version: any selection procedure that has a discriminatory effect on a protected class must be demonstrably job-related and consistent with business necessity.

In practice, the highest-risk tests are those that produce statistically different pass rates across racial or gender groups without corresponding evidence of job relevance. Cognitive ability tests, for instance, often show adverse impact on race, but they also show strong predictive validity for most roles. Courts have generally allowed them when employers can demonstrate the validity link.

Three things you should always do:

Use the same test, in the same way, for every candidate applying to the same role

Document the job analysis that connects the test to specific role requirements

Monitor pass rates by demographic group at least annually

For companies with 100+ employees, the EEOC's Uniform Guidelines on Employee Selection Procedures (1978) remain the authoritative reference. They are old but still binding. If you are scaling your hiring significantly, a one-time review with an employment attorney is worth the cost.

One area getting increased scrutiny: AI-based assessments that screen candidates based on video, facial analysis, or voice. Illinois, Maryland, and other states have passed specific laws around AI in hiring. If you are using any AI-driven assessment, check the laws in your hiring locations before you roll it out. The EEOC Q&A on uniform guidelines is a useful starting point.

Workflow Integration

Integrating pre-employment tests into your hiring workflow

A test that lives outside your hiring system creates friction. Candidates get an email with a third-party link, forget to complete it, and disappear. Recruiters manually track who finished what. Scores live in a spreadsheet nobody updates consistently.

The better model is triggering assessments automatically when a candidate reaches a specific stage in your pipeline. Candidate passes your initial resume screen? Assessment link fires automatically. Score comes back? It populates a field in the candidate profile. Recruiter sees score before the phone screen. Hiring manager sees it before the panel debrief.
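The trigger logic itself is small. A minimal sketch, where the event payload shape, the stage name, and the send_assessment_link helper are all hypothetical stand-ins for your ATS's webhook format and your assessment vendor's API:

```python
# Sketch of stage-triggered assessment dispatch. The payload fields, stage
# name, and send_assessment_link helper are hypothetical; adapt them to
# your ATS's webhook events and your assessment vendor's API.
TRIGGER_STAGE = "passed_resume_screen"

def send_assessment_link(email: str, role: str) -> None:
    # Placeholder: call your assessment vendor's API here.
    print(f"Assessment for {role} sent to {email}")

def on_stage_change(event: dict) -> bool:
    """Fire the assessment automatically when a candidate hits the trigger stage."""
    if event.get("new_stage") != TRIGGER_STAGE:
        return False
    send_assessment_link(event["candidate_email"], event["role"])
    return True

on_stage_change({"new_stage": "passed_resume_screen",
                 "candidate_email": "jane@example.com", "role": "Data Analyst"})
```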

This matters because the data is only useful if it is in front of decision-makers at the right time. A test score that nobody looks at is a test score that changes nothing. Systems like Prepzo's AI Screening module can automate the trigger and surface results directly in the candidate pipeline, so the data actually influences decisions instead of collecting dust.

Pay attention to completion rates by stage. If you send a 45-minute assessment after a 10-minute screen and half your candidates drop off, the test is too heavy for that stage. Move it forward in the process, shorten it, or rethink whether it belongs there at all. A test that drives away 50% of your qualified pool is not a filter. It is a liability.

Performance Validation

How to know if your assessment is actually working

A pre-employment test earns its place in your process by predicting something real. The metric is predictive validity: does a higher test score correlate with better job performance at 90 days, six months, one year?

You do not need a statistician to run this check. Pull a list of all hires from the last six to twelve months who completed your assessment. Pull their manager performance ratings from the same period. Run a Pearson correlation in Excel or Google Sheets. A correlation above 0.3 is modest but meaningful. Below 0.2, the test is probably not adding signal, and you should either recalibrate the scoring or replace the tool.

The Harvard Business Review's analysis of personality testing found that many employers use assessments without ever checking whether scores predict anything. That is common, expensive, and unnecessary to fix. The quarterly validation check takes two hours and tells you more about your hiring process than almost anything else.

Building this feedback loop also improves your quality of hire measurement. When you can show that candidates who scored above threshold in your assessment retained at higher rates and ramped faster, the business case for a rigorous hiring process writes itself.

Frequently Asked Questions

Is pre-employment testing legal?

Yes, but with conditions. Tests must be job-related, consistently applied to all candidates in the same role, and validated to predict actual job performance. The EEOC requires that any selection procedure that causes adverse impact on a protected group be validated as a business necessity. Using a reputable, validated test and applying it uniformly substantially reduces, though does not eliminate, legal risk.

What types of pre-employment tests are most predictive of job performance?

Work sample tests and cognitive ability tests show the strongest predictive validity in research. Structured situational judgment tests and job knowledge tests also perform well. Personality tests have weaker individual predictive power but improve predictions when combined with other methods. Unstructured interviews alone are among the least predictive selection tools, which is why adding a structured assessment usually improves hiring outcomes.

When in the hiring process should pre-employment tests happen?

Most employers run lightweight assessments after the initial screen, before investing time in full interviews. Skills tests or cognitive assessments filter the pool at the resume-to-phone-screen stage. Personality or culture-fit assessments typically come later, after a first interview, to inform panel discussions rather than act as gatekeepers.

How long should a pre-employment assessment take?

Keep it under 45 minutes for entry-level and individual contributor roles. More than an hour at the early funnel stage noticeably increases candidate drop-off, particularly for passive candidates. Executive or senior technical roles can justify longer assessments, say 60 to 90 minutes, because the stakes and compensation justify the ask. Always communicate expected time upfront.

Can I use the same assessment for all roles?

No. Each assessment should be mapped to the specific competencies required for the role. Using a general cognitive test for every hire makes sense as a baseline but misses role-specific signals. A sales role needs different competency validation than a data scientist role. Build a short list of 3-5 core competencies per role, then choose or design assessments that directly measure those.

What is adverse impact, and how do I avoid it?

Adverse impact occurs when a selection practice disproportionately screens out candidates in a protected class (race, gender, age, disability, among others). The EEOC's four-fifths rule flags this when the pass rate of one group is less than 80% of the highest-passing group. To mitigate risk: use validated tests with published adverse impact data, avoid tests with known disparate impact unless they're strongly job-related, and track pass rates by demographic segment during your own hiring.
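The four-fifths check itself is simple arithmetic. A sketch with made-up pass counts by group (the group labels and counts are illustrative, not real data):

```python
# Four-fifths (80%) rule check with made-up pass counts by group.
pass_rates = {"group_a": 45 / 60, "group_b": 30 / 55}  # passed / tested

highest = max(pass_rates.values())
for group, rate in pass_rates.items():
    ratio = rate / highest  # impact ratio vs. the highest-passing group
    flag = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: pass rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b's pass rate is below 80% of group_a's, so it would be flagged for review; a flag is a prompt to examine job-relatedness evidence, not an automatic finding of discrimination.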

Build a hiring process that's faster and more defensible

Prepzo automates candidate screening, structures your interview process, and surfaces assessment data where your team actually makes decisions.

Abhishek Singla

Founder, Prepzo & Ziel Lab

RevOps and GTM leader turned founder, building the future of hiring and talent acquisition. 10 years of experience in revenue operations, go-to-market strategy, and recruitment technology. Based in Berlin, Germany.