TL;DR
AI use across HR tasks climbed to 43 percent in 2026, yet the question of whether candidates should use AI during tech interviews divides companies down the middle. Canva now expects candidates to use AI tools like Copilot and Cursor during technical interviews, with almost half of their engineers being daily active users.
Meanwhile, Amazon has issued guidelines that say job applicants can be disqualified if they're found to have used an AI tool during interviews. The stakes are high: get your policy wrong, and you'll either hire people who can't actually code or miss talented engineers who know how to work smarter.
This challenge is what platforms like Utkrusht AI are addressing by embracing transparency over restriction, allowing candidates to use AI while providing hiring managers with exact insights into how it was leveraged. Here's what you need to know before your next technical interview.
Key Takeaways:
Design your AI policy around your actual workflow, not around what interviews used to look like. Test for skills your team genuinely needs.
Communicate your policy explicitly before interviews start. Ambiguity punishes honest candidates and wastes time on both sides.
Invest in training your interviewers on what signals actually matter, whether you allow AI or ban it. Consistency is more important than which approach you choose.
Monitor your outcomes over time and adjust as needed. Your hiring metrics will tell you if your policy is working or just creating theater.
Stay realistic about enforcement limitations. Only 19% of managers are extremely confident their process would catch fraud, so build systems that assume some candidates will cheat.
The Reality Check: AI Is Already in Your Interview Room

You can ban AI from your interviews all you want, but that doesn't mean candidates aren't using it.
One tech leader said that 80% of candidates used a large language model to complete their top-of-the-funnel code test, even though they were explicitly told not to. The number got so high that the company basically gave up and decided to just move top performers forward anyway.
This creates a strange situation. When most candidates ignore your rules, are you actually testing their honesty or just their ability to hide things from you? And if the honest candidates are the only ones following your no-AI rule, you're accidentally punishing the people you'd probably want to hire.
Candidates are increasingly using AI assistance during technical interviews, sometimes covertly through tools specifically designed to avoid detection. Companies now face a choice: keep fighting a losing battle against AI detection, or change what you're testing for.
Why the Old Interview Format Is Breaking Down
Traditional coding interviews were built for a different world. You'd ask someone to reverse a linked list on a whiteboard, and if they could do it, you figured they'd be fine writing actual code at work.
But that was before ChatGPT could explain data structures better than most textbooks. Engineers leverage AI to prototype ideas, understand large codebases, and generate code, which means your interview needs to match that reality.
The disconnect creates problems for everyone involved. Your interview asks candidates to work without the tools they'll use every single day on the job, which is like testing a chef's knife skills but not letting them touch a knife.
When asked whether AI assistance in coding interviews lets weaker candidates pass interviews they would otherwise have failed, 75% of interviewers said yes. That fear is understandable, but it might be asking the wrong question entirely.
The Trust Crisis Goes Both Ways
Here's where things get messy. Trust is at an all-time low for both job seekers and recruiters, and AI is making it worse.
More than half of managers agreed or strongly agreed that AI has made it harder to trust what they see and hear during virtual interviews. At the same time, candidates feel like they're sending resumes into a black hole where AI filters them out before any human even looks.
72% of hiring professionals have encountered AI-generated resumes during the application process, and 15% have seen face-swapping used in video interviews. Some of this is genuinely concerning, like deepfakes and identity fraud. But some of it is just candidates using the same tools companies use to screen them.
The result is an arms race where nobody wins. Candidates use AI to beat your AI screening, so you add more AI to detect their AI, which pushes them to find better AI tools that can avoid detection. It's exhausting, expensive, and doesn't actually tell you if someone can do the job.
The Case for Allowing AI (With Smart Guardrails)
Some forward-thinking companies have stopped fighting AI and started embracing it. Their logic is simple: if your engineers use AI at work, why wouldn't you let candidates use it during interviews?
What Companies Like Canva Are Learning
Canva piloted a new competency called "AI-Assisted Coding" to replace its traditional Computer Science Fundamentals screening; candidates are expected to use their preferred AI tools to solve realistic product challenges.
This isn't about making interviews easier. It's about making them different.
They're evaluating candidates on five fronts: whether they understand when and how to leverage AI effectively, how well they break down complex requirements, whether they make sound technical decisions while using AI, whether they can identify and fix issues in AI-generated code, and whether they can ensure AI-generated solutions meet production standards.
The questions got harder, not easier. AI-era questions present 1,000-2,000 line codebases where candidates must add a feature in a short amount of time, which makes the interviews both more demanding and nearly impossible to complete without AI.
Think of it like the difference between a closed-book test and an open-book test. Open-book tests are fundamentally different because they're no longer asking you to regurgitate knowledge you've memorized but to synthesize information you should have grokked already and use it to solve difficult, novel problems.
The Real Skills You Should Be Testing
When you allow AI in interviews, you can finally test what actually matters for your team.
At Rippling, coding rounds explicitly state that candidates can use AI tooling including GitHub Copilot and ChatGPT, with prompts being minified versions of actual problems Rippling engineers face. This reveals how someone would actually work on your team, not just whether they memorized algorithm patterns.
Similar to how Utkrusht AI approaches technical assessments, the focus shifts from blocking AI use to understanding how candidates leverage it.
Utkrusht AI's platform video-records assessment sessions and provides exact insights on how candidates used AI, where they used it, how much they relied on it, and whether it actually helped or they just copy-pasted without understanding. This transparency-first approach reflects the reality that real-world engineering work always includes access to such resources.
The key skills you're looking for include:
Problem decomposition: Can they break down a complex task into manageable pieces, even with AI helping them?
Code review ability: When AI generates something, can they spot the problems and fix them?
Technical judgment: Do they know when AI is helping versus when it's leading them down the wrong path?
Communication skills: Can they explain their decisions and walk through their thought process?
These skills matter infinitely more than whether someone can implement quicksort from memory.
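One way to make the code-review signal concrete is to hand candidates a clean-looking snippet with planted flaws and ask them to critique it. Here's a hypothetical exercise (the function and its bugs are invented for illustration, not drawn from any company's question bank): the code produces correct output on the happy path, but a strong reviewer should notice that it silently mutates the caller's input and assumes the intervals are mutable lists.

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals (typical AI-generated style)."""
    intervals.sort()  # flaw 1: sorts the caller's list in place (hidden side effect)
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # flaw 2: rewrites the last interval in place, so passing
            # tuples instead of lists raises TypeError
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# The happy path looks fine...
print(merge_intervals([[1, 3], [2, 6], [8, 10]]))  # [[1, 6], [8, 10]]

# ...but the input is quietly reordered, which a careful reviewer should flag.
data = [[8, 10], [1, 3]]
merge_intervals(data)
print(data)  # [[1, 3], [8, 10]]
```

Whether a candidate catches the in-place mutation is exactly the kind of signal that separates someone who reads AI output critically from someone who pastes it and moves on.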
When AI Usage Actually Reveals Weakness
Candidates report that defending decisions to use or not use AI-generated code during interviews is much more difficult than anticipated. One engineer who landed an offer after 30 interviews said he rejected AI suggestions that were bloated or didn't align with best practices, and explaining why became the real test.
Hiring managers can often spot AI-generated answers because they appear too clean and too perfect, lacking the typical clarifications and problem-solving process of a human coder.
If someone just copies whatever ChatGPT spits out without understanding it, that becomes obvious pretty quickly when you ask follow-up questions. But that's actually good information for you as a hiring manager. You want to know if someone blindly trusts AI output or if they're capable of critical thinking.
The Case for Restricting AI (And How to Actually Enforce It)
Not every company is ready to embrace AI in interviews. Some want to see raw problem-solving skills without any assistance. That's a valid choice, but you need to be realistic about enforcement.
Why Some Companies Are Drawing a Hard Line
There is a clear divide between large firms and startups: large firms remain conservative, continue to rely heavily on data structures and algorithms (DSA) interviews, and use strict guardrails to prevent cheating.
The reasoning makes sense for certain roles. If you're hiring for a position where someone needs to debug production issues at 2 AM without AI assistance, you might want to see if they can actually code under pressure.
Some companies also worry about liability and fairness. If you allow AI for some candidates but not others, or if some candidates have access to better AI tools than others, does that create an uneven playing field?
Whether you decide to allow candidates to use AI or not, it's essential to enforce the rules, creating a level playing field; otherwise, the most ethical candidates are at a disadvantage.
Detection Methods (And Their Limitations)
If you're going to ban AI, you need ways to actually catch people using it.
Engineers don't write code in a linear top-down manner; they move around, rename variables, and test edge cases, and these inherently human behaviors make it much easier to identify AI usage in a live interview setting.
Some companies are shifting back to on-site interviews specifically because they're easier to monitor, and because interviewers can engage candidates in discussion and probe their understanding face to face.
But detection isn't foolproof. Only 19% of managers are extremely confident their current hiring process would catch a fraudulent applicant. Even experienced interviewers struggle to consistently identify AI usage, especially as the tools get better.
There's also the deepfake problem. The U.S. Department of Justice and cybersecurity agencies have warned that operatives are using AI-generated resumes, deepfake profiles and videos, and live video filters to deceive hiring managers, particularly in remote hiring contexts. That's a legitimate security concern that goes beyond just testing coding skills.
The Risk of False Positives
Here's the scariest part about strict AI bans: what if you're wrong?
The stakes are high; disqualifying the wrong candidate could mean losing a talented employee, while failing to catch actual cheating risks bringing in someone who doesn't meet your standards.
Half of recruiters view AI-enhanced resumes as fraudulent and have rejected candidates on suspicion alone, while another 40% have turned down candidates due to suspected identity manipulation. That's a lot of people getting rejected based on hunches rather than proof.
If you're going to enforce an AI ban, make absolutely sure your detection methods are accurate. Otherwise, you're just randomly filtering out candidates and calling it a hiring process.
Finding Your Company's AI Policy: A Practical Framework
So what should you actually do? The answer depends on your specific situation, but here's a framework to help you decide.
Match Your Interview to Your Job Requirements
Start by asking what your engineers actually do all day. Interviews should reflect the competencies required for the job, and as AI's role in software development grows, interviews will need to evolve.
If your team uses AI tools constantly, your interview should probably allow them too. If your team works in environments where AI isn't available (maybe you're dealing with classified systems or air-gapped networks), then an AI-free interview makes more sense.
Many companies prefer to allow some level of AI use for three reasons: they want to see how developers solve realistic problems in realistic situations, developers already use many external tools while coding, and how a developer leverages tools such as AI is an important part of the skill set that ultimately affects their productivity.
Three Practical Approaches You Can Implement Tomorrow
Here are three specific policies you could adopt, depending on where you land on this debate.
Option 1: Full AI Access with Transparency
Allow any AI tools candidates want to use, but make it obvious that you're watching how they use them. Record the session (with permission), ask them to explain their prompts, and probe their understanding of the code that gets generated.
This works best for teams where AI is already part of the daily workflow and you're mainly trying to see if someone can be productive with modern tools. Just as Utkrusht AI demonstrates, transparency over restriction provides more accurate signals of candidate capability than traditional AI bans, as hiring managers can watch exactly how candidates leverage AI tools in realistic job simulations.
Option 2: Selective AI Permission
Allow AI for some tasks but not others. For example, you might let candidates use AI to understand an unfamiliar codebase or generate boilerplate, but not to solve the core algorithmic problem.
Some companies are allowing, even encouraging, candidates to use the same tools they would at work, but they structure the interview so AI only helps with the parts where it would help at work too.
Option 3: Traditional No-AI with Better Enforcement
Ban AI entirely, but invest in proper enforcement methods. This means live monitoring, asking probing questions to verify understanding, and having clear consequences for violations.
We're increasingly seeing companies include clauses in offer letters that allow them to rescind an offer or terminate employment if they later discover a candidate cheated during the interview process. If you go this route, make sure your policy is clearly communicated and consistently enforced.
Building Your Decision Matrix
Here's a comparison of the three approaches based on what matters most to your hiring process.
| Factor | Full AI Access | Selective AI | No AI Policy |
|---|---|---|---|
| Matches Real Work | ✓ High alignment if team uses AI daily | ✓ Moderate alignment for mixed workflows | ✗ Low alignment unless role genuinely requires no AI |
| Ease of Enforcement | ✓ Easy (nothing to police) | ✗ Moderate (requires clear boundaries) | ✗ Difficult (high cheating risk) |
| Tests Fundamentals | ✗ Harder to isolate raw skills | ✓ Can test both AI fluency and fundamentals | ✓ Direct assessment of core skills |
| Candidate Experience | ✓ Reduces stress, feels realistic | ✓ Clear expectations but more complex | ✗ May feel outdated or unfair |
| Question Complexity | ✓ Can ask much harder problems | ✓ Moderate complexity possible | ✗ Limited to memorizable patterns |
| False Positive Risk | ✓ None (AI is allowed) | ✗ Some risk of unclear boundaries | ✗ High risk of wrongly accusing candidates |
Implementation Checklist
Whichever approach you choose, make sure you:
Communicate clearly: Tell candidates your policy before the interview starts. Don't leave them guessing.
Train your interviewers: Make sure everyone on your hiring team understands what's allowed and what signals to look for.
Document everything: Write down your policy and your reasoning so you can apply it consistently.
Monitor outcomes: Track whether your policy is actually helping you find better candidates or just making hiring harder.
Stay flexible: The rise of generative AI tools has raised new questions about remote assessments, candidate authenticity, and the skills companies are really testing for. Your policy should evolve as the technology does.
What This Means for Different Company Sizes
Your company size dramatically affects which approach makes sense.
Startups and Small Teams
Small teams often benefit from allowing AI because it lets you test for practical problem-solving without getting bogged down in algorithm trivia.
Startups, by contrast, are experimenting, with some dropping LeetCode-style screens altogether. When you only hire a few people per year, you can afford to do more customized assessments that reflect your actual codebase.
For custom software development companies and small engineering teams, platforms like Utkrusht AI provide quick, high-completion assessments that cut through traditional screening. Their 20-minute real-job simulation assessments let candidates use whatever tools they want while giving you clear insights into how they actually work, complete with a ranked Top-10 shortlist of candidates within approximately 48 hours. This approach allows engineering leaders to cut their time-to-hire by an estimated 50% and reclaim dozens of hours typically lost to unproductive interviews.
Mid-Size Companies
As you grow, consistency becomes more important. You need a policy that works across multiple interviewers and teams.
This is where selective AI permission often works well. You can create standardized rubrics that specify exactly when AI is allowed and what you're evaluating at each stage.
The key is documentation. Make sure every interviewer knows the rules and applies them the same way.
Enterprise Organizations
Large companies face unique challenges. You're dealing with compliance requirements, legal considerations, and the need to interview at massive scale.
Large firms remain conservative, continue to rely heavily on DSA interviews, use strict guardrails to prevent cheating, and are slow to allow AI tools during interviews.
That conservatism makes sense when you're hiring thousands of people and need to defend your process in court if someone claims discrimination. But it also means you might be slower to adapt to how engineering work is actually changing.
The Hidden Costs of Getting This Wrong
Your AI policy isn't just about fairness. It has real business consequences.
When You're Too Restrictive
If you ban AI completely while your competitors allow it, you're fishing in a smaller talent pool. The best engineers, who are comfortable with modern tools and want to work somewhere that embraces them, might skip your company entirely.
You're also spending more time in interviews. Without AI, problems need to be simpler and shorter, which means you need more interview rounds to get the same signal about someone's abilities.
Almost eight in ten teams regularly encounter AI-generated or AI-assisted applications, and almost half of respondents say they've updated their interview techniques to focus on deeper probing in response. Fighting AI takes effort that could go toward actually evaluating talent.
When You're Too Permissive
On the flip side, if you allow AI without proper evaluation, you risk hiring people who can't actually code.
65% of U.S. hiring managers have caught applicants using AI deceptively through practices like reading from AI-generated scripts, hiding prompts in resumes to bypass initial screening, or showing up as deepfakes. Some of this crosses the line into fraud.
The cost of a bad hire is enormous. By some estimates, it's 1.5 to 2 times the person's annual salary when you factor in recruiting, onboarding, and the opportunity cost of projects that don't get done.
The Compliance Angle You Can't Ignore
Here's something most companies don't think about: your AI policy might have legal implications.
EU AI Act obligations for general purpose AI began in August 2026, and New York City's Local Law 144 still requires an annual bias audit and candidate notices before using automated employment decision tools in hiring.
If you use AI to screen candidates but don't allow candidates to use AI themselves, does that create a fairness issue? If your AI tools are accidentally biased but you're not testing whether candidates can identify and correct biased AI output, are you missing a critical job skill?
These questions don't have clear answers yet, but they're worth thinking about before someone files a lawsuit.
Testing Your Policy: Questions to Ask Before You Commit
Before you finalize your AI policy, run through these questions with your hiring team.
Core Questions
Can your team explain why your policy matters? If your interviewers can't articulate why you allow or ban AI, your policy probably isn't well thought out.
Does your policy match your company values? If your company claims to embrace innovation but bans modern tools in interviews, that's a disconnect candidates will notice.
Have you tested your policy on actual candidates? Don't just theorize. Run some pilot interviews and see what signals you get.
Can you enforce your policy consistently? If some interviewers ignore the rules or apply them differently, you're creating an unfair process.
Would you be comfortable defending your policy publicly? Eight in ten hiring professionals insist final hiring decisions must remain human-led, which suggests broad agreement that humans should stay in the loop. Beyond that, opinions vary widely.
Red Flags That Suggest Your Policy Needs Work
Watch out for these warning signs:
Your pass rates change dramatically when you implement the policy. That might mean you're testing something different than you think.
Candidates are confused about what's allowed. If they're asking lots of clarifying questions, your communication isn't clear enough.
Your interviewers disagree about what they saw. This suggests your evaluation criteria are too subjective.
You're rejecting candidates you can't prove used AI. False positives are worse than false negatives in hiring.
Your time-to-hire is increasing significantly. If your new policy makes hiring take way longer, the cost might not be worth it.
Comparison: Allowing vs. Restricting vs. Selective AI Use
Here's how the three main approaches stack up across different dimensions that matter for hiring decisions.
| Dimension | Allow AI Freely | Restrict AI Completely | Selective AI Permission |
|---|---|---|---|
| Question Difficulty | ✓ Can ask production-level problems | ✗ Limited to textbook problems | ✓ Moderate complexity possible |
| Candidate Pool | ✓ Appeals to modern engineers | ✗ May discourage top talent | ✓ Reasonable compromise |
| Interview Time | ✓ Shorter (solve harder problems faster) | ✗ Longer (multiple simpler rounds) | ✓ Moderate duration |
| Evaluator Training Needed | ✓ Moderate (learn new signals) | ✗ High (detection training required) | ✗ High (clear boundaries needed) |
| Risk of Bad Hires | ✗ Risk if evaluation is weak | ✗ Risk if enforcement fails | ✓ Balanced risk profile |
| Cost to Implement | ✓ Low (minimal changes) | ✗ High (monitoring tools needed) | ✓ Moderate (documentation required) |
| Reflects Real Work | ✓ High match for most roles | ✗ Low match for modern teams | ✓ Can match specific workflows |
| Legal Risk | ✓ Low (clear, consistent) | ✗ Moderate (false accusations) | ✗ Moderate (boundary disputes) |
Expert Perspectives: What Engineering Leaders Are Saying
The debate around AI in interviews is shifting fast. Here's what people who are actually making hiring decisions have to say.
The Progressive Camp
Companies like Canva not only encourage but expect their engineers to use AI tools as part of their daily workflow. Their take is that AI tools are essential for staying productive and competitive in modern software development.
Meta has confirmed they're testing how to provide AI tools to applicants during interviews, saying that it's more representative of the developer environment that future employees will work in and also makes LLM-based cheating less effective.
That last point is clever. If everyone has AI, then having AI isn't an unfair advantage. It just becomes part of the baseline, and you're testing something else entirely.
The Skeptical Camp
Many companies explicitly ban candidates from using AI tools during the interview, and candidates who are caught risk damaging their reputation and landing on a permanent do-not-hire list.
Companies taking this stance argue that they need to see raw problem-solving ability without any assistance. They worry that AI is creating a generation of engineers who can't actually code.
The Middle Ground
There's a growing number of companies that are embracing the use of AI in interviews because they want to hire engineers who leverage new tools to enhance their skills.
Companies should focus on evaluating candidates' ability to work with AI tools rather than prohibiting their use entirely, including assessing problem-solving skills, code review capabilities, and understanding of AI-generated solutions.
This perspective acknowledges that AI is here to stay and focuses on figuring out how to evaluate people in this new reality.
Frequently Asked Questions
How can I tell if a candidate is using AI during a remote coding interview?
Engineers don't write code in a linear top-down manner; they move around, rename variables, and test edge cases, and these inherently human behaviors make it much easier to identify AI usage in a live interview setting. Look for code that appears fully formed without the usual trial and error, or responses that come too quickly for the complexity of the problem. However, keep in mind that detection isn't foolproof, and false accusations can damage your reputation with candidates.
If I allow AI in interviews, won't that just let weak candidates pass?
Not if you design your interview correctly. Open-book tests don't lower the bar; they evaluate something different and arguably something harder. When you allow AI, you can ask much more complex problems that test judgment and code comprehension rather than memorization. The weak candidates will reveal themselves when they can't explain the code AI generated for them.
What's the difference between using AI to prepare for interviews versus using it during the interview?
Using AI for preparation (practicing common questions, getting feedback on your approach) is widely accepted and similar to studying with any other resource. Using it during the interview crosses into cheating territory if the company has explicitly banned it. The ethical line is whether you're honest about your methods and following the stated rules, not whether AI was involved at some point.
Should different roles have different AI policies?
Absolutely. Interviews should reflect the competencies required for the job. A frontend developer working with AI-assisted design tools might need AI during their interview, while someone working on embedded systems in secure environments might need to demonstrate they can code without assistance. Tailor your policy to the actual job requirements.
How do I prevent deepfakes and identity fraud without banning AI entirely?
Focus on identity verification rather than AI detection. Best practices include requiring real-time verification steps during video interviews, using multi-factor identity verification before granting system access, and training hiring teams to recognize signs of AI-manipulated visuals or speech latency. These methods address fraud without preventing legitimate AI use for coding assistance.
What legal requirements do I need to consider with AI in hiring?
EU AI Act obligations for general purpose AI began in August 2026, and New York City's Local Law 144 requires an annual bias audit and candidate notices before using automated employment decision tools in hiring. Even as the law lags behind technology, existing employment discrimination principles still apply, employers remain responsible for bias regardless of whether it stems from an algorithm or a third-party vendor, and liability cannot be outsourced. Consult with legal counsel about your specific situation.
How often should I update my AI interview policy?
The tech job interview process is shifting under our feet, with the rise of generative AI tools raising new questions about remote assessments, candidate authenticity, and the skills companies are really testing for. Review your policy at least quarterly, and be ready to adjust as new AI tools emerge and your team's workflows evolve. What works in 2026 might be completely outdated by 2027.
Conclusion
The question isn't really whether to allow AI in tech interviews. That ship has sailed. AI use across HR tasks climbed to 43 percent in 2026, and candidates are already using it whether you want them to or not.
The real question is whether you'll acknowledge that reality and design interviews that work with it, or keep pretending you can ban your way back to 2020. Companies like Canva and Meta are embracing AI and asking harder questions as a result. Others are doubling down on restrictions and spending resources on detection instead of evaluation.
Neither approach is automatically wrong. What matters is that your policy matches your team's actual work, you can enforce it fairly, and you're honest with yourself about what you're really testing. If your engineers use AI all day, test candidates on their ability to use AI well. If your team genuinely works without AI assistance, then evaluate raw coding skills in an AI-free environment.
But whatever you choose, make it intentional. The worst option is having no clear policy and letting each interviewer decide on the fly, creating an inconsistent mess that's unfair to everyone.
As Utkrusht AI exemplifies, the future of technical assessments lies in transparency and real-job simulations rather than artificial restrictions. By allowing candidates to use AI while providing hiring managers with detailed insights into how it's leveraged, companies can assess what truly matters: whether someone can solve actual engineering problems using the tools they'll have on the job.
Start by evaluating your current interview process against these principles. Ask your engineering team how they actually use AI in their daily work, then design an interview that tests those skills.
Your next great hire might be someone who knows exactly when to trust AI and when to override it.
Founder, Utkrusht AI
Ex-Euler Motors, Oracle, and Microsoft. 12+ years as an engineering leader; 500+ interviews conducted across the US, Europe, and India.