We've been in tech long enough to watch hiring trends come and go like fashion seasons. First, everyone was obsessed with ATS filters. Then came the rise of multiple-choice quizzes and screening questions. Then it was all about whiteboard coding. After that came algorithm tests, and then take-home assignments.
The methods keep changing, but the goal stays the same: find tech people who can actually ship quality code and build things that work.
Here's the thing—there's always been a gap between what we test for in interviews and what people actually do on the job.
Think of it like testing someone's ability to juggle while standing still, when the real job requires juggling while riding a bicycle. Sometimes the gap is small, sometimes it's huge, but it's always there.
Right now? That gap is bigger than ever.
AI tools like ChatGPT and Claude have completely changed how we work, but most companies are still interviewing like it's 2019. It's like training for a horse race when everyone else has moved on to cars.
And if you're running a small engineering team or a custom development shop? This problem is 10x worse.

Let me tell you about some interview practices that made sense once but are now like asking someone to use a typewriter when everyone has laptops. And for small teams? These practices are not just outdated—they're actively harmful.
Memorizing Algorithms
Picture this: You're interviewing someone and ask them to write quicksort from memory. They nail it perfectly on the whiteboard. Great, right? Wrong.
Here's why—when they're actually on the job, they'll have ChatGPT or Claude sitting right there, ready to write any sorting algorithm in seconds. It's like testing if someone can do long division by hand when they'll always have a calculator in their pocket.
For small dev shops, this is extra problematic because:
You don't have time to conduct multiple rounds of algorithm tests
You need people who can jump into real projects immediately, not solve theoretical puzzles
One person who "aces the interview but can't code in the real world" can sink your entire project
I once saw a developer ace the whiteboard quicksort test. Hired them. Then watched them use that same quicksort in production on data that was already mostly sorted. The result? Their code was slower than just using the built-in sort function, because a textbook quicksort with a naive pivot choice hits its O(n²) worst case on already-sorted input, while built-in sorts exploit the existing order. They had memorized the "how" but didn't understand the "when."
The small dev shop that hired them? They lost a client because the application was too slow. The client didn't care that the developer knew quicksort. They cared that their app didn't work.
Think of it this way: knowing how to build a hammer doesn't help if you can't recognize when you need a screwdriver instead.
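You can see this failure mode in a few lines. Here's a minimal sketch in TypeScript; the first-element-pivot quicksort is an illustrative stand-in (we don't know exactly what that developer wrote), but it shows how the whiteboard-perfect version falls apart on sorted input:

```typescript
// Textbook quicksort with a first-element pivot: the version people
// memorize for whiteboards. On sorted input every partition is maximally
// lopsided, so the runtime degrades from O(n log n) to O(n^2).
function naiveQuicksort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const smaller = rest.filter((x) => x < pivot);
  const larger = rest.filter((x) => x >= pivot);
  return [...naiveQuicksort(smaller), pivot, ...naiveQuicksort(larger)];
}

// Already-sorted data, like the production dataset in the story above.
// Kept small enough that the O(n) recursion depth doesn't blow the stack.
const sorted = Array.from({ length: 5_000 }, (_, i) => i);

let t = performance.now();
naiveQuicksort(sorted);
console.log(`naive quicksort: ${(performance.now() - t).toFixed(1)} ms`);

t = performance.now();
[...sorted].sort((a, b) => a - b); // the engine's built-in sort
console.log(`built-in sort:   ${(performance.now() - t).toFixed(1)} ms`);
```

Run that and the "correct" whiteboard answer loses to one line of standard library. That's the gap between knowing an algorithm and knowing when to use it.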
Language-Specific Trivia
Remember when knowing every quirk of Python or JavaScript syntax was impressive? Now AI can fill in those gaps instantly.
It's like the difference between someone who's memorized the entire menu at one restaurant versus someone who understands what makes good food good. The first person is lost when the restaurant closes. The second person can walk into any kitchen and figure things out.
For custom development companies, this matters even more:
Your client needs a React project done. You hire someone who's a "React expert." Three weeks later, the client wants to switch to Vue. The "expert" is now useless because they memorized React syntax instead of understanding JavaScript fundamentals.
Meanwhile, the person with solid fundamentals but less deep expertise picks up Vue in a week—with AI's help.
When you're a small team working on diverse client projects, you can't afford specialists who are useless the moment the tech stack changes. You need people who can adapt.
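What do "solid fundamentals" actually look like in code? One hypothetical sketch: the pricing logic below is plain TypeScript with no React or Vue imports, so it survives a stack switch untouched. The domain and shapes are made up for illustration.

```typescript
// Plain-TypeScript business logic: no React hooks, no Vue reactivity.
// A candidate with solid fundamentals writes code like this by default,
// and it ports unchanged when the client switches frameworks.
interface LineItem {
  name: string;
  unitPrice: number; // in cents, to avoid floating-point money bugs
  quantity: number;
}

function cartTotal(items: LineItem[], discountPercent = 0): number {
  const subtotal = items.reduce(
    (sum, item) => sum + item.unitPrice * item.quantity,
    0,
  );
  const clamped = Math.min(Math.max(discountPercent, 0), 100);
  return Math.round(subtotal * (1 - clamped / 100));
}

// Either framework just calls it:
// React: const total = useMemo(() => cartTotal(items, 10), [items]);
// Vue:   const total = computed(() => cartTotal(items.value, 10));
console.log(cartTotal([{ name: "widget", unitPrice: 1999, quantity: 3 }], 10));
// -> 5397 (three widgets at $19.99, 10% off)
```

The framework-specific wiring is the part AI fills in instantly. The judgment about where logic lives is the part it can't.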
Coding Without Any Tools
Some companies still make candidates code on whiteboards or in plain text editors—no internet, no autocomplete, no AI assistance. It's like asking a surgeon to operate without modern equipment to prove they're "really" skilled.
For small engineering teams, this is doubly wasteful:
You spend 2 hours watching someone struggle without tools they'll always have on the job
You miss great candidates who are amazing with real tools but average on whiteboards
You select for people who are good at interviews, not people who are good at building
Real work doesn't happen that way. Testing someone without tools doesn't make the test more pure—it just makes it test something completely different from the actual job.
Someone who's amazing at whiteboard coding might struggle debugging a real distributed system. Meanwhile, someone who's average at whiteboard coding might be incredible at using AI tools to solve complex problems.
When you're a 15-person team and every person needs to be productive from day one, you can't afford to miss good people because of artificial test conditions.
Real example: Shopify and GitLab have both shifted to more practical, tool-inclusive interview processes, and they've found better candidates this way.
System Design Trivia
"How many requests per second can Redis handle?" "What are Postgres's isolation levels?"
These used to be great questions. Now? You can Google them in 5 seconds. Or ask ChatGPT.
But here's the deeper problem: people who've memorized these numbers often memorize the wrong lessons too.
For dev shops, this creates real problems:
I remember a senior engineer who insisted we needed to shard our database across multiple servers because "databases are slow." Turned out a single modern Postgres instance handled everything we needed—faster and simpler.
When you're a small team with limited resources, you can't afford to hire someone who over-engineers solutions based on outdated assumptions. You need people who understand tradeoffs and can build appropriately for the scale you're actually dealing with.
That over-engineered solution? It added 3 weeks to the project timeline and $30K to the budget. All because someone memorized "facts" from 2015 that are no longer true.
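The antidote to stale "facts" is cheap measurement. Here's a minimal sketch, assuming a Node environment with the node-postgres (pg) client and a hypothetical orders table: before anyone proposes sharding, spend five minutes asking the actual database how the actual query performs.

```typescript
import { Client } from "pg";

// Measure before you architect. The table and query are hypothetical
// stand-ins; point this at whatever someone claims is "too slow for
// one instance."
async function main(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // The "trivia" answers are one query away, no memorization needed.
  const iso = await client.query("SHOW default_transaction_isolation");
  console.log("isolation level:", iso.rows[0].default_transaction_isolation);

  // EXPLAIN ANALYZE runs the query and reports real timings, so the
  // sharding debate starts from numbers instead of 2015 folklore.
  const plan = await client.query(
    "EXPLAIN ANALYZE SELECT * FROM orders WHERE created_at > now() - interval '1 day'",
  );
  for (const row of plan.rows) console.log(row["QUERY PLAN"]);

  await client.end();
}

main().catch(console.error);
```

A candidate who reaches for this kind of measurement by reflex is worth more than one who can recite benchmark numbers from memory.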
Our platform was developed only after we'd personally experienced these tech hiring pain points ourselves, across roughly 500 interviews.
So it's not some random AI tool built in two weeks. We've built substantial infrastructure to run assessments as actual job simulations, not AI video interviews.
We intentionally made this for tech hiring, primarily engineering and developer roles, not hiring across all functions.
Our assessments are built for small and mid-sized businesses (SMEs), custom software development companies, and growth-stage startups.
Unlike some of the other players, we're not a big brand with huge funding and deep resources, so if that's your main criterion, you should not choose our product.
But if your main criteria are the strongest evaluation of technical skills, never doing “guesswork” hiring, and a high degree of confidence (~95%) that the candidates you interview will be the best fit for your position and requirements, then our product is right for you.
We're a lean team that obsesses over this daily, and we've done rigorous research and simulations in our models to get the exact hiring outcomes you want.