One-Way Video Interviews: How to Run Them Fairly and at Scale
Setup, question design, scoring, and the bias problem most teams ignore
One-way video interviews save hours per role when designed well. They fail when teams treat them as a box to check, write lazy questions, skip scoring rubrics, and then wonder why completion rates are low and the signal is weak. This guide covers how to build an async video screen that actually works.
A one-way video interview, also called an asynchronous video interview or async screen, replaces the live phone screen with a recorded format. Candidates receive a set of questions and record video responses on their own time, typically within a three-to-five-day window. Hiring teams review recordings when convenient, score responses against a rubric, and advance or decline without ever scheduling a call.
The efficiency case is straightforward. A phone screen for one candidate takes thirty to forty-five minutes when you include scheduling overhead, prep time, and the call itself. With a one-way video, that same screen takes eight to twelve minutes to review. For roles receiving fifty or more applications, the math becomes hard to argue with. According to SHRM research on screening practices, companies using structured async video screens report 50% to 65% reductions in time-to-screen compared to individual phone screens.
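A quick back-of-the-envelope check of that math, using the midpoints of the ranges above (the 50-application figure is the article's own threshold; taking midpoints is an illustrative assumption):

```python
# Midpoints of the per-candidate time ranges cited above, in minutes.
phone_screen = 37.5   # midpoint of 30-45 min incl. scheduling, prep, and the call
async_review = 10.0   # midpoint of 8-12 min to review a recording
candidates = 50       # the volume threshold used in this guide

phone_total = phone_screen * candidates / 60   # total screening hours, live calls
async_total = async_review * candidates / 60   # total screening hours, async review

print(f"{phone_total:.1f}h of phone screens vs {async_total:.1f}h of async review")
```

For a 50-candidate pipeline, that is roughly a full workweek of phone calls versus about a day of recorded-response review.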
The problem is that most teams set these up poorly. They use generic questions copied from interview guides, skip scoring rubrics entirely, and give reviewers no guidance on what good looks like. The result: reviewers fall back on gut feel, and gut feel in video screening is demonstrably biased toward appearance, accent, perceived confidence, and other signals that correlate weakly with job performance. The evidence from Google re:Work's structured hiring research is clear: structured evaluation criteria, applied consistently, dramatically outperform unstructured reviewer impressions.
This post builds on what we covered in our guides on structured interviews and phone screen question design. The same principles apply to async video, with a few extra considerations specific to the format.
Before getting into setup, one piece of context: U.S. Bureau of Labor Statistics data shows HR specialist roles growing faster than average, and recruiting volume is increasing with them. Async video screening is becoming a standard tool not because it is trendy, but because the alternative, scheduling individual calls with every screened candidate, does not scale.
By the Numbers
What the data says about one-way video screening
60% — reduction in time-to-screen vs. scheduling individual phone screens for each candidate
40–70% — typical completion rate; higher for strong employer brands with clear instructions
3–5 — questions per interview; the range that balances signal with candidate experience
2 — reviewers minimum; independent scoring before discussion cuts bias significantly
When to Use Them
One-way video interviews work in some situations. They are the wrong tool in others.
Async video works best when you have volume, clearly defined screening criteria, and a role where verbal communication is something you actually need to assess before investing in a live conversation. It is a reasonable replacement for the phone screen, not for the hiring manager interview or the final round.
High-volume roles with clear criteria
When you receive thirty or more qualified applications per role and the screening criteria are well-defined, async video cuts screening time without sacrificing signal quality. Sales, customer success, operations, and support roles are common fits. If you can describe what a strong answer looks like before the screen starts, async video will work.
Roles where communication matters early
Client-facing roles, roles requiring regular executive communication, or any role where written screening alone misses the signal you need. Seeing how someone structures a verbal answer to a situational question gives you data that a resume and cover letter cannot. This is more useful at the screen stage than you might expect.
Distributed or global hiring
Async video eliminates timezone coordination entirely at the screening stage. Candidates in different countries can complete the screen at their convenience. Reviewers can watch responses during their normal working hours. For distributed teams, this is a practical advantage that often goes underweighted.
When not to use it:
Senior and executive roles. For VP and above, asking someone to record themselves answering pre-set questions sends the wrong signal about how you view the candidate. These roles warrant a direct recruiter conversation first.
Niche technical roles with small candidate pools. When you have five qualified candidates, the scheduling overhead of a phone screen is not the bottleneck. Async video here creates friction without meaningful time savings.
When you have not written scoring criteria yet. Running async video without a rubric means reviewers use gut feel. At that point you have added friction for candidates with no improvement in decision quality. Do the rubric first.
For a broader view of where async video fits in the hiring funnel, the post on recruitment funnel optimization covers stage design across the full process.
Process Flow
The four-step async interview process that produces signal
1. Design the screen: Write 3–5 questions tied directly to role requirements. Decide scoring rubrics before any candidate records.
2. Invite candidates: Give candidates 3–5 business days, explain the format, and include a warm note. Generic invitations have 15% lower completion rates.
3. Review independently: Two reviewers score every response against the written rubric before discussing. This is the single biggest bias-reduction lever.
4. Decide quickly: Compare scores. Discuss gaps above 2 points. Advance, decline, or flag for live interview within 2 business days of completion.
Setup
Six decisions to make before you send the first invitation
Define the screening criteria before writing a single question
Start with what you need to know, not with question templates. Write down three to five things a candidate must demonstrate to pass the screen: communication clarity, relevant domain experience, motivation for this type of work, and whatever is specific to your role. Each question should map to at least one criterion. If you write a question and cannot name which criterion it addresses, cut the question.
Set question count and time limits appropriately
Three to five questions with two to three minutes per response keeps total candidate time under fifteen to twenty minutes. Give two to three retake attempts per question. A single take is too high-pressure and rewards polish over substance. Unlimited retakes lets candidates script and rehearse until the response feels artificial. Most platforms let you set these parameters per question.
Write the invitation like it matters
The invitation email is the first touchpoint after the application. A generic “please complete this video screen” email reads as impersonal and drops completion rates measurably. Include: why you are interested in the candidate, what the format is (question count, time per question, deadline), and what the next step looks like if they advance. Candidates who understand the context complete screens at rates 20% to 30% higher than those who receive templated invitations, based on LinkedIn Talent Solutions research on candidate communication.
Give candidates three to five business days
Forty-eight hours is too short for candidates who are currently employed and cannot record at work. Beyond five business days, your pipeline slows without meaningful benefit. State the deadline explicitly in the body of the invitation email, not buried in the platform interface where most candidates will not see it until they are already past the cutoff.
Assign two reviewers and set review timing
One reviewer is a bias risk. Two reviewers scoring independently, before comparing notes, is the minimum. Decide in advance when reviews happen: within two business days of screen completion is a reasonable target. If you let recordings sit for a week, reviewers rush, skip the rubric, and revert to impressionistic scoring. A review SLA matters.
Test the candidate experience before going live
Have a team member who is not involved in the hiring process complete the screen as a candidate. Note anything confusing, any friction in the recording interface, and whether the question wording is clear. Fix these before sending to real candidates. Poor candidate experience at the screen stage damages employer brand with people who are not even employees yet. The post on candidate experience covers why this matters for offer acceptance rates.
Question Design
Weak vs. strong prompts for async video screening
| Avoid | Use instead | Why |
|---|---|---|
| “Tell me about yourself.” | “Walk me through the project in your last role where you had the most direct ownership. What were you responsible for, and how did it go?” | Generic openers produce rehearsed non-answers. Role-specific prompts surface real experience. |
| “Why do you want to work here?” | “What specifically about this type of role interests you right now, and what are you hoping to learn in the next 12 months?” | Motivation questions about the company reward research, not fit. Questions about goals reveal genuine career intent. |
| “What is your greatest weakness?” | “Tell me about a time a project did not go as planned. What happened, and what would you do differently?” | The weakness question gets theater. A failure story gets real information about judgment and self-awareness. |
| “Are you comfortable working in a fast-paced environment?” | “Describe a situation where priorities shifted mid-project. How did you handle it?” | Yes/no prompts get yes. Behavioral prompts get evidence. |
How to write questions that produce usable signal in two minutes of video
Async video gives each candidate two to three minutes per answer. That means you need questions specific enough to constrain the answer, behavioral enough to require real examples, and clear enough to understand in one reading. Most generic interview question lists fail on all three.
Start behavioral, not biographical
“Walk me through your background” wastes two minutes on information you can read from the resume. Behavioral prompts that ask for specific past situations give you information the resume does not contain: how candidates handle ambiguity, conflict, failure, and complexity. Start with “Tell me about a time” or “Walk me through the last time you” for at least two of your questions.
Make one question role-specific
Include at least one question tied directly to a scenario the person will face in this role. For a customer success role: “Tell me about a time a customer was dissatisfied with a product feature and you had to manage that relationship without being able to fix the issue.” This tests role-specific judgment, not just general communication skill, and tells candidates you have thought about what the role actually requires.
Ask one motivation question, phrased carefully
“Why do you want to work here?” rewards research, not genuine fit. A better framing: “What kind of problems do you most want to work on in the next twelve months, and why?” This surfaces authentic motivation without prompting candidates to recite your company's mission statement back to you.
Test the question before using it
Before sending a question to candidates, answer it yourself in two minutes. If you cannot answer it well in two minutes, neither can they. Questions that require extensive context-setting or that have many valid interpretations produce answers that are hard to score consistently. Simplify any question you cannot answer cleanly in the time limit.
For a deeper set of behavioral question examples organized by competency, the post on behavioral interview questions covers the STAR framework and provides category-specific examples you can adapt for async format.
Scoring Framework
Weighted rubric for async video review
| Criterion | Weight | What to assess | Observable signal |
|---|---|---|---|
| Clarity of communication | 25% | Can you follow their reasoning without effort? | Structure, precision, absence of filler |
| Relevance of examples | 30% | Do examples match the role's actual demands? | Domain fit, complexity level, outcomes cited |
| Self-awareness | 20% | Do they own their role in outcomes, good and bad? | 'I' vs 'we', candor about failures |
| Role motivation | 15% | Is their interest in this type of work genuine? | Specificity about the role, not the company |
| Follow-through signals | 10% | Do they finish thoughts and meet the time limit? | Completion quality, not just word count |
Score each criterion 1–5 independently per reviewer. Compare before discussion. Discrepancies of 2+ points require evidence-based justification.
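The rubric above is easy to operationalize in a spreadsheet or a few lines of code. Here is a minimal sketch using the weights from the table and the 2-point discrepancy threshold; the function and field names are illustrative, not from any particular platform:

```python
# Weighted rubric scoring for async video review.
# Weights mirror the rubric table above; each criterion is scored 1-5.
WEIGHTS = {
    "clarity": 0.25,
    "relevance": 0.30,
    "self_awareness": 0.20,
    "motivation": 0.15,
    "follow_through": 0.10,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Collapse one reviewer's per-criterion 1-5 scores into a single weighted number."""
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)

def discrepancies(reviewer_a: dict[str, int], reviewer_b: dict[str, int],
                  threshold: int = 2) -> list[str]:
    """Criteria where the two independent reviewers differ by 2+ points,
    which per the process above require evidence-based discussion."""
    return [c for c in WEIGHTS if abs(reviewer_a[c] - reviewer_b[c]) >= threshold]

# Hypothetical scores from two independent reviewers for one candidate.
a = {"clarity": 4, "relevance": 5, "self_awareness": 3, "motivation": 4, "follow_through": 4}
b = {"clarity": 4, "relevance": 2, "self_awareness": 3, "motivation": 4, "follow_through": 5}

print(weighted_score(a))    # reviewer A's overall weighted score
print(discrepancies(a, b))  # criteria the two reviewers must discuss
```

In this example the reviewers disagree by three points on relevance of examples, so that criterion gets flagged for discussion while the one-point gap on follow-through does not.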
Bias and Scoring
The bias problem in video screening is real. Here is how to reduce it.
My view is that one-way video interviews are no more biased than phone screens by default, and are substantially less biased than live interviews when designed correctly. The problem is that most teams do not design them correctly. Research from the Harvard Business Review on video interview bias found that unstructured video reviews are particularly susceptible to appearance bias, accent bias, and background environment bias because reviewers have more visual cues to work with than in a phone call, and no structured rubric to anchor their judgment.
Three practices reduce video screening bias more than any platform feature or AI scoring tool:
Write scoring criteria before watching any video
The rubric has to exist before the first recording is reviewed. If reviewers define what “good” looks like after watching five candidates, they are defining it based on the first five candidates they saw, not based on the role requirements. The rubric is your anchor. Without it, first impressions run the review process.
Score independently before comparing
Both reviewers should complete their scoring before seeing the other's scores. In a study published in the Journal of Applied Psychology, evaluators who saw a peer's score before rating anchored to that score in roughly 70% of cases. The value of two reviewers disappears when the second reviewer knows what the first thought.
Calibrate on a sample set before full rollout
Before using async video for a live role, have two reviewers score five to ten recordings from past hires (you can use recordings from previous screens or run a calibration exercise with hypothetical examples). Compare scores. Discuss gaps. Agree on anchor examples for each score level. This calibration investment typically takes ninety minutes and significantly tightens inter-rater reliability before the process touches real candidates.
One note on AI-based video scoring tools, which some platforms offer: the EEOC's Uniform Guidelines on Employee Selection Procedures require that any selection tool be validated for the roles in which it is used and not produce adverse impact. AI scoring tools applied without validation create legal exposure. Human reviewers working from a written rubric are legally safer and, when calibrated correctly, produce more reliable decisions than most current AI scoring approaches.
Common Mistakes
What kills completion rates and wastes reviewer time
Too many questions
Fix: Five questions maximum. Seven is the threshold where completion rates drop sharply. Most screens need three.
Generic questions that reward preparation over substance
Fix: Use behavioral and situational questions that require specific examples. Generic questions produce scripted answers that score 4/5 for every candidate.
No response from the hiring team for a week after completion
Fix: Candidates who complete a screen and hear nothing for seven days withdraw from the process at a high rate. Review within two business days. Send a status update within three.
Skipping the platform test
Fix: Platform recording interfaces vary widely in quality. Test on mobile (many candidates complete screens on phones), on slow connections, and in multiple browsers before going live.
One reviewer evaluating all candidates
Fix: Single-reviewer async video is structurally equivalent to a biased phone screen. Two independent reviewers with a shared rubric is not optional for roles where you care about equitable evaluation.
Using the same question set for very different roles
Fix: A question about managing stakeholder complexity is useful for a project manager and useless for a junior analyst. Role-specific questions are worth the extra thirty minutes of setup time.
Low completion rates are usually a symptom of multiple issues compounding: too many questions, a generic invitation, a platform that is difficult on mobile, and no warmth in the communication. Fixing any one of these moves the needle, but the teams with 65%+ completion rates typically fix all of them. For more on tracking whether your screening process is actually working, see the post on recruitment metrics and KPIs.
Candidate Experience
Async video is impersonal by design. Here is how to counteract that.
Recording yourself answering questions for a hiring team you have never met is an unusual experience. It feels more formal than a phone call and less personal than a live interview. Candidates who are not actively job hunting are more likely to drop out at this stage if the process feels cold or impersonal. Here is what the high-performing teams do differently:
Include a short intro video from the hiring manager
Some platforms let you add a short video from the hiring manager at the start of the screen. A ninety-second introduction explaining who they are, what the team does, and what they are looking for in this hire significantly raises completion rates and improves candidate sentiment. It signals that a real person is on the other side, not just a tool.
Explain the format in plain language
Tell candidates exactly what to expect: how many questions, how long they have per answer, how many retakes, and when they will hear back. Candidates who understand the format complete screens at higher rates and feel better about the process regardless of whether they advance. Uncertainty about format is one of the top reasons candidates abandon async screens partway through.
Commit to a follow-up timeline
State in the invitation when candidates will hear back, and then actually follow the timeline. If you tell candidates you will respond within five business days and you respond in two, that creates a positive impression. If you tell them five days and they hear nothing in two weeks, they have already accepted another offer and told three people about the experience. LinkedIn Talent Trends research consistently shows that follow-up speed is the single highest-impact factor in candidate experience scores.
Async video done poorly creates the impression of a hiring process that is efficient for the employer and impersonal for the candidate. That is not inevitable. The same process, with a warm invitation, clear format explanation, role-specific questions, and timely follow-up, can actually improve candidate experience scores relative to rushed phone screens where the recruiter is clearly reading from a template.
Frequently Asked Questions
What is a one-way video interview?
A one-way video interview is an asynchronous screening method where candidates record video responses to pre-set questions on their own time. The hiring team watches the recordings later, rather than joining a live call. Candidates typically get a set number of attempts and a time limit per answer. Platforms like HireVue and Spark Hire are common tools, though many ATS systems now include this natively.
Are one-way video interviews fair to candidates?
They can be, if you design them well. The research on bias in video screening is real: evaluators are influenced by appearance, accent, background, and non-verbal cues in ways that do not predict job performance. The safeguards that matter are structured question sets (same questions, same order for every candidate), written scoring criteria decided before you watch any videos, and calibration sessions where two or more reviewers score independently. Without these, one-way video interviews can amplify bias rather than reduce it.
How many questions should a one-way video interview include?
Three to five questions is the right range for most roles. Under three feels perfunctory and does not give you enough signal. Over five and completion rates drop, candidates rush the later answers, and reviewers lose attention before the end. The exception is highly technical roles where you might include one problem-solving prompt alongside three to four behavioral questions. Keep total candidate time under twenty minutes.
What is the typical completion rate for one-way video interviews?
Industry benchmarks vary by role level and company brand, but completion rates of 40% to 70% are typical for sourced or inbound candidates at the screening stage. If your completion rate is below 40%, the most common causes are: too many questions, confusing instructions, technical friction in the platform, or a poor candidate experience signal (slow follow-up, no brand presence). Top employers with strong candidate experience report completion rates above 65%.
How long should candidates have to complete a one-way video interview?
Three to five business days covers the realistic range. Under 48 hours is too short for candidates who are currently employed and cannot record at work. Over a week extends your time-to-fill unnecessarily. For senior roles where candidates are less likely to be actively job hunting, giving five business days is reasonable. Always state the deadline clearly in the invitation email, not buried in the platform instructions.
Can one-way video interviews replace phone screens entirely?
For many roles, yes. For senior or executive roles, usually not. Phone screens are valuable when you need to gauge a candidate's genuine interest, do quick compensation alignment, or assess conversational communication in real time. For high-volume roles where the screening criteria are well-defined, a well-designed one-way video interview consistently outperforms a phone screen for efficiency. The honest answer is that most teams keep phone screens out of habit, not because they produce better signal.
Resources & Further Reading
Related Prepzo Guides
The principles behind structured evaluation that async video depends on
Question frameworks that translate directly to async video format
Scoring rubric design for consistent async video review
The full arc of candidate experience beyond the screen stage
External Resources
Research-backed case for structured criteria over gut-feel evaluation
Legal and best-practice guidance applicable to video screening
Research on video interview bias and how to counteract it
Legal framework for defensible video screening practices
Run structured async video screens without the setup headache
Prepzo includes built-in async video screening with rubric scoring, dual-reviewer workflows, and automatic candidate communication. No separate tool required.
Try Prepzo free