What To Do When GhostScreen Flags a Candidate
A practical guide for recruiters. These techniques help you verify whether you're talking to a real person — without disrupting the interview.
Understanding the Score
GhostScreen assigns a real-time authenticity score from 0 to 100. Here's what each range means and how you should respond.
Likely Real
No action needed. Continue your interview normally. The candidate's video shows strong indicators of being a real, unaltered feed.
Suspicious
Worth investigating. Use 1–2 of the subtle verification techniques below to gather more signal before making a judgment.
Likely Synthetic
High confidence of AI generation. Use multiple verification techniques. Consider ending the interview early if confirmed.
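Taken together, the three bands amount to a simple score-to-action mapping. The sketch below illustrates that decision flow in Python; the cutoff values (70 and 40) are placeholders for illustration, not GhostScreen's published thresholds.

```python
def recommended_action(score: int) -> str:
    """Map a GhostScreen authenticity score (0-100) to a recommended response.

    The band cutoffs below are illustrative placeholders, not official thresholds.
    """
    if not 0 <= score <= 100:
        raise ValueError("GhostScreen scores are expected to be between 0 and 100")
    if score >= 70:   # "Likely Real" band (placeholder cutoff)
        return "Continue the interview normally."
    if score >= 40:   # "Suspicious" band (placeholder cutoff)
        return "Use 1-2 subtle verification techniques to gather more signal."
    # "Likely Synthetic" band
    return "Use multiple verification techniques; consider ending the interview if confirmed."


print(recommended_action(55))  # suggests gathering more signal before judging
```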
Verification Techniques
Six proven methods to confirm whether a flagged candidate is real. Each technique exploits a specific weakness in current deepfake technology.
Ask Them to Wave Their Hand Across Their Face
Deepfakes struggle with occlusion. When a real hand passes in front of a generated face, the face reconstruction often glitches, flickers, or distorts. This is the single most effective test.
How to Ask Naturally
“Could you wave at me? I want to make sure our video connection is stable.”
What to Watch For
- Face distortion as the hand passes
- Flickering or visual glitches
- Reconstruction artifacts around the face
Ask Them to Hold Up an Object
AI face generators don't handle foreign objects near the face well. Holding up a pen, phone, or coffee mug near their face forces the model to render something it wasn't trained on.
How to Ask Naturally
- “Can you hold up your ID so I can verify?”
- “Cheers!” (hold up your own coffee mug first)
What to Watch For
- Object disappearing or becoming transparent
- Face warping around the object
- Object looking “painted on” or flat
Request a Camera Angle Change
Many deepfake systems are optimized for a frontal view. Asking someone to show their profile or look to the side may break the generation model.
How to Ask Naturally
“Could you look to your left for a second? I'm seeing some video artifacts on my end.”
What to Watch For
- Face reconstruction failures at extreme angles
- Neck or jawline distortion
- Sudden quality drops when turning
Ask an Unscripted, Unexpected Question
An AI avatar puppeted live by another person still has a real human behind it who can answer, but pre-recorded or looped deepfakes can't respond to unexpected questions. An unscripted reply also stresses real-time lip-sync.
How to Ask
- “Before we continue — random question — what's the weather like where you are right now?”
- “Can you spell your last name backwards for me?”
What to Watch For
- Lip movements that don't match audio
- Unusually delayed responses
- Inability to follow unexpected instructions
Watch for Physiological Tells
Real humans have micro-movements that AI can't fully replicate: natural blink patterns, subtle head movements, and skin color changes. No need to ask anything — just observe.
Blinking
Do they blink naturally (10–30 times per minute) or not at all? Deepfakes often blink too rarely or at unnervingly regular intervals. A quick blinks-per-minute calculation is sketched after this list.
Micro-Expressions
Do they react naturally to things you say? Genuine surprise, amusement, or confusion is hard for AI to convincingly produce on cue.
Skin Texture
Does their skin look unnaturally smooth or plasticky? AI-generated faces often lack pores, subtle blemishes, and natural skin variation.
Hair
Does hair move naturally or look painted on? AI struggles with individual hair strands, especially at the hairline and around the ears.
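To turn the blink guideline above into a quick sanity check, the sketch below estimates blinks per minute from timestamps you noted (or that any logging tool you already use recorded) during a stretch of the call, and flags rates outside the 10–30 range. The helper and the example timestamps are hypothetical; this only illustrates the arithmetic and is not part of GhostScreen.

```python
def blinks_per_minute(blink_times_s: list[float], window_s: float) -> float:
    """Estimate blink rate from blink timestamps (in seconds) over an observation window."""
    if window_s <= 0:
        raise ValueError("Observation window must be positive")
    return len(blink_times_s) * 60.0 / window_s


# Hypothetical example: 6 blinks noted over a 90-second stretch of video.
rate = blinks_per_minute([3.1, 15.4, 28.0, 47.2, 61.9, 83.5], window_s=90.0)
if not 10 <= rate <= 30:
    print(f"Blink rate of {rate:.1f}/min is outside the typical 10-30 range")
else:
    print(f"Blink rate of {rate:.1f}/min looks normal")
```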
Use Two-Factor Video Verification
For high-stakes placements, ask the candidate to briefly join from their phone camera simultaneously, or send a selfie holding today's newspaper or a specific object you name on the spot (a simple way to pick such a challenge is sketched after this technique).
When to Use
Reserve this for final-round interviews where the stakes are high — senior roles, client-facing positions, or any placement where the cost of a fraudulent hire would be significant.
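One way to make “a specific object you name on the spot” impossible to pre-record is to pick the challenge randomly at the moment you ask. The object and position lists below are made-up examples; any specific, unpredictable request works.

```python
import random

# Hypothetical challenge pool; any specific, unpredictable requests work.
OBJECTS = ["a spoon", "a set of keys", "a shoe", "a roll of tape", "a phone charger"]
POSITIONS = ["next to your left ear", "under your chin", "above your head"]


def pick_challenge() -> str:
    """Pick a random object-plus-position challenge at the moment of asking,
    so a pre-recorded or looped video cannot anticipate it."""
    return f"Please hold {random.choice(OBJECTS)} {random.choice(POSITIONS)} on your phone camera."


print(pick_challenge())
```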
What NOT To Do
Avoiding these common mistakes is just as important as the verification techniques above.
Don't accuse the candidate directly
Never ask “Are you a deepfake?” or “Is this really you?” Use the subtle techniques above instead. A wrong accusation damages your reputation and the candidate's experience.
Don't rely solely on GhostScreen's score
Use the score as one signal alongside your judgment. Detection technology is a tool, not a verdict. Always combine it with your own observations.
Don't panic if the score is orange
Many legitimate factors can cause medium scores — poor lighting, low camera quality, virtual backgrounds, or heavy video compression. Orange means investigate, not reject.
Don't skip your normal vetting process because the score is green
A high authenticity score confirms the video feed is likely real — it doesn't confirm the candidate's qualifications, identity, or honesty. Continue your standard interview process.
Reporting & Documentation
If you confirm a synthetic candidate, follow these steps to protect yourself and your client.
End the Interview Professionally
Thank the candidate for their time and end the call. You don't need to explain why. A simple “We have what we need for now, we'll be in touch” is sufficient.
Document the Detection
Screenshot the GhostScreen score panel, note the timestamp, and record which verification techniques you used and what you observed. This creates an evidence trail.
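If you want a consistent format for that evidence trail, a minimal record might look like the sketch below. The field names, example values, and structure are assumptions for illustration, not a GhostScreen export schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class DetectionRecord:
    """Minimal evidence-trail entry; fields are assumptions, not a GhostScreen schema."""
    candidate_name: str
    interview_timestamp: str
    ghostscreen_score: int
    screenshot_path: str                      # where you saved the score-panel screenshot
    techniques_used: list[str] = field(default_factory=list)
    observations: str = ""


# Placeholder values for illustration only.
record = DetectionRecord(
    candidate_name="Jane Doe",
    interview_timestamp=datetime.now(timezone.utc).isoformat(),
    ghostscreen_score=22,
    screenshot_path="evidence/ghostscreen_panel.png",
    techniques_used=["hand wave across face", "unscripted question"],
    observations="Face flickered during hand wave; answer lagged audio by ~2 seconds.",
)

print(json.dumps(asdict(record), indent=2))
```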
Report to Your Client Immediately
Inform your client about the flagged candidate. Share your documentation. This protects both your agency and your client from a potentially fraudulent hire.
Consider Reporting to Authorities
If the candidate provided fake credentials, consider reporting to relevant authorities. Identity fraud in the hiring process is a serious offense in most jurisdictions.
Need More Help?
Whether you need help interpreting a specific detection or want to understand how our technology works under the hood, we're here.