
Clevrr AI needed assessment-first filtering to handle massive application volume and stop wasting founder and engineering time on unqualified candidates.
What they used before
A mix of online job boards, first-round phone screens run by the team, and manual filtering/shortlisting against a pre-defined set of questions.
Key Challenges
Clevrr AI faced 800 applications per job post with no effective filter—half were completely unrelated, making manual screening impossible. The founder was spending a third of his time on hiring activities while 70% of candidates failed basic technical rounds. Hiring cycles averaged 30 days in constant "rush rush" mode with zero bandwidth for proper candidate vetting.
Key Outcomes
Assessment-first filtering eliminated 70% of screening and shortlisting time, allowing the team to focus only on pre-qualified candidates. Clevrr AI hired 2 high-performing full-stack developers from 350-400 assessments who are delivering exceptional work. Founder time was freed from hiring hell to focus on product development, customer acquisition, and critical business growth activities.
What They Were Doing Before
Clevrr AI's founder, Yuvraj Dagur, was doing what most startup founders do when they need to hire: posting on job boards and manually screening everyone who applied.
Job Board Postings: Posted a full-stack developer role on LinkedIn. The response? 800 applications. Sounds like a good problem to have, right? Wrong.
Half the applications were completely unrelated. People from Dubai applying for a role in India. Candidates with zero relevant experience. Applications that looked like they'd been mass-submitted to 100 companies without reading the job description.
The other half? Slightly better, but still mostly noise. Resumes padded with buzzwords. Candidates who listed every framework under the sun but couldn't explain how any of it worked.
Manual Screening: Yuvraj and his team had to manually review all 800 applications. Read resumes. Try to figure out who was legit. Schedule phone screens. Conduct technical interviews.
The result? A third of Yuvraj's time was spent on hiring activities. Not building product. Not closing customers. Not scaling the business. Just endless resume screening and failed interviews.
And the hiring cycles? 30 days on average—always in "rush rush" mode, always behind, always firefighting to fill roles before the team burned out.
It wasn't sustainable. But it was the only option they knew.
Challenges & Pain Points
Clevrr AI's hiring nightmare came down to three brutal problems:
1. Massive Application Volume with Zero Quality Filter
When you post a startup role on LinkedIn, you don't get 800 qualified candidates. You get:
400 completely unrelated applications (wrong location, wrong skills, wrong experience)
300 somewhat related but unqualified (padded resumes, no real depth)
100 maybe qualified (and even these, 70% will fail technical rounds)
Screening 800 applications manually is soul-crushing work. You're reading resume after resume, trying to separate signal from noise, knowing that 90% of what you're looking at is garbage.
And here's the kicker: there's no good way to filter upfront. Resumes all look the same. Everyone claims to know React, Node, Python, AWS. You can't tell who's real until you interview them—and by then you've already wasted time.
2. 70% of Candidates Failed Technical Rounds
The funnel was brutal:
5-10 technical interviews per week
Of every 10 candidates, 7 (70%) failed basic technical questions
Only 3 (30%) passed and moved forward
At that rate, that's up to 7 wasted interviews per week, or 28 per month, or ~340 per year.
At 1-2 hours per interview, that's 500+ hours per year of founder and engineering time burned on people who couldn't code.
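The funnel math above can be sanity-checked with a quick back-of-the-envelope calculation. This is illustrative only: the 10-interviews-per-week figure takes the top of the quoted 5-10 range, and 1.5 hours is an assumed midpoint of the quoted 1-2 hours.

```python
# Back-of-the-envelope cost of failed technical interviews,
# using the figures quoted in the case study.
interviews_per_week = 10      # top of the quoted 5-10 range (assumption)
fail_rate = 0.70              # 70% failed basic technical rounds
hours_per_interview = 1.5     # assumed midpoint of the quoted 1-2 hours

failed_per_week = interviews_per_week * fail_rate     # 7.0 per week
failed_per_year = failed_per_week * 4 * 12            # 336.0, i.e. ~340
hours_burned = failed_per_year * hours_per_interview  # 504.0, i.e. 500+

print(failed_per_week, failed_per_year, hours_burned)
```

Even with conservative inputs, the failed interviews alone account for the 500+ hours per year of founder and engineering time cited above.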
And the failures weren't on hard problems. They were on fundamentals:
"Build a simple CRUD API" → Can't do it
"Debug this React component" → No idea where to start
"Explain how async/await works" → Blank stares
These candidates looked good on paper. Some even sounded smart in phone screens. But they couldn't execute.
3. Founder Time Trapped in Hiring Hell
For a startup founder, time is the most valuable resource. Every hour spent on unproductive hiring is an hour not spent on:
Building product
Closing customers
Raising capital
Scaling the business
Yuvraj was spending a third of his time on hiring activities—and most of that time was wasted on candidates who couldn't pass basic technical screens.
The opportunity cost was staggering. Instead of focusing on critical business activities, he was stuck in an endless loop:
Screen resumes → Phone screens → Technical interviews → Watch them fail → Start over
And because hiring cycles took 30 days (always in "rush rush" mode), there was never breathing room. Roles were always open. Hiring was always urgent. The work never stopped.
How Utkrusht's Assessment Platform Helped
Clevrr AI's breakthrough came when they stopped manually screening resumes and started using assessment-first filtering. Instead of guessing who could code based on resumes, they got objective proof upfront.
The Utkrusht Difference:
1. Assessment-First Filtering for Massive Application Volume
When you're dealing with 800 applications, manual screening is impossible. You need a systematic filter that separates qualified candidates from noise—before any human time is invested.
Utkrusht provided that filter. The new process:
Candidates apply through job boards (same as before)
Qualified candidates are sent to Utkrusht for technical assessment (new step)
Only candidates who pass assessments move to interviews (new step)
Founder and team interview pre-qualified candidates (same step, but way better candidates)
Results: Clevrr AI ran 350-400 assessments and identified the top performers—candidates who'd proven their technical depth before the first interview.
2. Open-Ended Scenario Questions That Reveal Thought Process
Most technical assessments are broken. They ask multiple-choice questions or right/wrong coding challenges that candidates can memorize or Google.
Utkrusht's assessments are different: open-ended scenario questions that watch how candidates approach problems.
Instead of asking: "What's the difference between == and ===?" (who cares?)
Utkrusht asks: "Here's a buggy React component. Debug it, explain your thought process, and show us how you'd prevent this issue in the future."
This reveals:
✅ How they think (not just what they know)
✅ What questions they ask (do they clarify requirements or just start coding?)
✅ How they approach debugging (systematic or random?)
✅ Whether they understand fundamentals (or just memorized frameworks)
For Clevrr AI, this was critical. They didn't need people who could answer trivia—they needed people who could think like engineers and solve real problems.
3. Questions Engineering Leaders Would Ask (But Don't Have Time to Create)
Here's what Yuvraj realized when he saw Utkrusht's questions: "These are the questions I would ask if I had time to create them."
But he didn't have time. He was too busy:
Building product
Managing the team
Closing customers
Firefighting urgent issues
Creating and maintaining a library of high-quality technical questions is a full-time job. And even if you build them once, you need to update them constantly (candidates share answers, frameworks change, new patterns emerge).
Utkrusht solved this by:
Building questions that engineering leaders would actually ask
Creating new questions every week (unlimited library, no answer sharing)
Designing assessments that test depth, not memorization
This meant Clevrr AI could assess candidates at the same level Yuvraj would—without Yuvraj having to spend time creating or updating tests.
4. 70% Time Saved on Screening and Shortlisting
Before Utkrusht, screening 800 applications and shortlisting qualified candidates took:
Hours of manual resume review
Dozens of phone screens
5-10 technical interviews per week (70% failed)
With Utkrusht's assessment-first approach:
Assessments filtered candidates automatically
Only top performers moved to interviews
70% of screening/shortlisting time was eliminated
That's not just efficiency—it's founder time and engineering bandwidth reclaimed from hiring hell.
"We posted a job on LinkedIn and got 800 applications. Half of them were completely unrelated—people from Dubai, candidates with zero relevant experience. I was spending a third of my time just trying to figure out who could actually code. We needed a filter that worked before we wasted interview time, not after."
— Yuvraj Dagur, Founder & CEO, Clevrr AI

Key Outcomes and Results
Clevrr AI didn't just save time—they fundamentally changed how they hire. Instead of drowning in 800 unqualified applications, they now filter for technical depth before investing any human time.
Screening Time: 70% Reduction
Before Utkrusht:
Manual resume screening for 800 applications
Phone screens for 50-100 candidates
5-10 technical interviews per week, 70% failed
With Utkrusht:
Assessments filter 350-400 candidates automatically
Only top performers move to interviews
70% of screening/shortlisting time eliminated
That's hundreds of hours per year reclaimed from hiring grunt work.
Hire Quality: 2 High-Performing Full-Stack Developers
Out of 350-400 assessments, Clevrr AI identified and hired 2 full-stack developers who are performing at "really high quality."
These weren't just adequate hires—they were exceptional ones. The kind of engineers who:
Ship fast and clean
Think independently
Raise the bar for the rest of the team
And critically, they were validated upfront through assessments. No surprises. No "we thought they could do it, but..." situations. Just strong engineers who'd proven their depth before day one.
Founder Time: Freed to Focus on Critical Activities
Yuvraj is no longer spending a third of his time on hiring. He's spending it on:
Product development
Customer acquisition
Strategic planning
Scaling the business
That shift is impossible to quantify, but it's the difference between a founder trapped in operational work and a founder driving the company forward.
Ongoing Success: Still Using Utkrusht, Still Hiring High-Quality Candidates
Clevrr AI isn't just a one-time success story. They're still actively using Utkrusht and regularly hiring high-quality candidates through the platform.
They ran 100-150 assessments during the trial, hired 2 developers, saw they were excellent, and kept going. They've since run 250+ more assessments and are hiring 2 more people.
That's validation: when something works, you keep using it.
What Stood Out Most
When we asked Yuvraj what made the biggest difference, he pointed to the quality of questions and the unlimited library that engineering leaders would actually use.
Questions Engineering Leaders Would Actually Ask
Most technical assessments feel like they were designed by HR, not engineers. They test trivia, syntax, or abstract puzzles that have nothing to do with real work.
Utkrusht's questions are different because they're designed by engineering leaders for engineering leaders.
The questions test:
Real-world problem-solving (not abstract algorithms)
Thought process and approach (not just final answers)
Depth of understanding (not memorized frameworks)
Execution ability (can they actually build, or just talk?)
When Yuvraj saw the questions, his reaction was immediate: "These are the questions I would ask."
That's the validation that matters. Not "these questions are hard" or "these questions are comprehensive." But "these are the right questions."
Unlimited Library with New Questions Every Week
Creating technical assessments is hard. Maintaining them is even harder.
Even if you build a great set of questions today:
Candidates will share answers tomorrow
Frameworks will change next month
New patterns will emerge next quarter
You need to constantly update, refresh, and create new questions. That's a full-time job—and most startups don't have bandwidth for it.
Utkrusht solves this by creating new questions every week. The library is unlimited, constantly refreshed, and always up-to-date.
For Clevrr AI, this meant they could assess candidates at the highest level without investing any time in test creation or maintenance. The questions just worked—and kept working.
Open-Ended Scenarios, Not Right/Wrong Questions
The third thing that stood out was how the questions were structured.
Utkrusht doesn't ask: "What's the correct way to handle async operations in JavaScript?" (right/wrong, can be Googled)
Utkrusht asks: "Here's an async operation that's causing race conditions. Debug it, explain what's happening, and show us how you'd architect this to be production-safe."
This isn't testing memorization—it's testing how candidates think.
The assessment watches:
What questions do they ask before starting?
How do they approach debugging?
What's their thought process for solving the problem?
Can they explain their reasoning clearly?
For engineering roles, this matters more than getting the "right answer." You need people who can think, adapt, and solve novel problems—not just regurgitate what they memorized.
Why Clevrr AI Chose Utkrusht's Platform Over Others
Clevrr AI could've kept manually screening. They could've built their own tests. They could've used generic assessment platforms. So why Utkrusht?
1. Assessment-First Filtering at Scale
When you're dealing with 800 applications, manual screening doesn't work. You need systematic filtering that separates qualified candidates from noise—automatically.
Utkrusht provided that filter without requiring any setup or maintenance from Clevrr AI's team.
2. Questions Built for Engineering Leaders
Generic platforms ask generic questions. Utkrusht asks questions that engineering leaders would actually ask—because they were built by engineering leaders.
This meant Clevrr AI could assess at the level Yuvraj would, without Yuvraj having to design the assessments.
3. Unlimited Library That Stays Fresh
Building tests once is hard. Maintaining them forever is impossible.
Utkrusht's model—new questions every week, unlimited library—meant Clevrr AI never had to worry about test maintenance. The assessments just worked.
4. 70% Time Savings on Screening
The ROI was obvious: 70% less time on screening/shortlisting meant hundreds of hours reclaimed from hiring grunt work.
For a founder spending a third of his time on hiring, that's transformational.
"Utkrusht's questions are exactly what an engineering leader would ask—but we don't have the time or bandwidth to create them ourselves. And they're constantly creating new questions every week, so it's an unlimited library that actually tests depth and fundamentals, not memorization. That's impossible to build in-house when you're trying to scale a startup."
— Yuvraj Dagur, Founder & CEO, Clevrr AI

What's Next
Clevrr AI isn't going back to manual screening. They've seen what assessment-first hiring looks like, and there's no unseeing it.
They're continuing to use Utkrusht for all technical hires, running assessments at scale, and focusing their time on candidates who've proven they can code—not hoping candidates can code.
For growth-stage startups where founder time is precious and hiring can't be a bottleneck, assessment-first filtering isn't optional—it's the only approach that works.
"The two developers we hired through Utkrusht are doing really high-quality work. They didn't just pass our technical bar—they raised it. And we didn't have to waste weeks interviewing 50 people to find them. The assessments filtered for depth upfront, so we only talked to candidates who could actually execute."
— Yuvraj Dagur, Founder & CEO, Clevrr AI

Want to hire the best talent with proof of skill?
Shortlist candidates with strong proof of skill in just 48 hours.
