---
title: "The coding interview in 2025"
date: 2025-09-05T08:31:38+02:00
tags: ["management", "hiring", "ai"]
images:
- "/images/undraw_interview_yz52.png"
---

Over my career, I've conducted several hundred interviews, written coding exercises, and even designed entire hiring
pipelines. I've also screwed up, many, many times.

On the other side of the fence, I use AI coding tools frequently but not daily. I'm very aware of when they're a
productivity boost and when they're a waste of time, money, and resources. Coding assistants are evolving incredibly
fast, and I know first-hand how staying current with them takes time and commitment. I'm also noticing that, as I use
them to write more and more of my code, I tend to forget the finer details of languages and libraries. While I think
it's unacceptable in 2025 for a software engineer to be unable to use generative AI tools on the job,
incorporating this aspect into the hiring process is tricky.

From what I see and am told, companies today use variations of two approaches for technical screening interviews:
some prohibit AI completely, doing their best to prevent candidates from using it by asking them to share their full
screen and narrate their thought process. Others explicitly permit AI, sometimes using interview tools with integrated
assistants so the interviewer can watch the interaction unfold, or simply asking candidates to be transparent about how
they use these tools.

However, I believe that prohibiting AI in interviews is a flawed approach. First, getting used to AI tools can actually
[make you slower](https://arxiv.org/abs/2506.08872) when they are taken away, which means you might lose strong
candidates who simply lack practice coding without an assistant. Second, you really don't want to hire someone who has
no idea how to use coding assistants; a good candidate should at least have an opinion on the matter. And finally,
let's be honest, cheaters are always going to find a way to cheat.

All things considered, I think the best approach is the opposite: **go all in on generative AI and see how the candidate navigates its quirks.**

## The interview blueprint

The best problems for this kind of exercise are those with an easy but sub-optimal solution that can be
refined into a less obvious, optimal one.

To make this post more concrete, I'm going to show an exercise we used during phone screenings when I worked at
[Datadog](https://www.datadoghq.com/). But if you decide to use this blueprint for your own interviews, I highly recommend
you create an original problem. For the record, we stopped using this particular problem around 2016 after it leaked on
Glassdoor, so I'm comfortable sharing it.

This blueprint should take no more than 30 minutes.

### Step 1: illustrate the problem

Datadog has its own query language and grammar, and supports templated variables in monitor descriptions, for instance.
We'd like to write a generic `is_balanced` function that tells whether a given string is balanced. We only want to
support `(` and `)` for now. Parentheses can be nested.

```python
def is_balanced(word):
    pass

# Test cases:

print is_balanced('Warning: load is high on (host.ip)'), True
print is_balanced('((hello)(world))'), True
print is_balanced('my (monitor))(message'), False
```

Start by asking if the candidate needs any clarification before moving to the next step.

### Step 2: get to a working solution

Any good coding assistant should be able to provide a working solution very quickly. What you're looking for here is
how the candidate interacts with the tools. The problem is simple enough that some candidates may opt to code it
manually, which is a perfectly good sign. If they do, you can shift the AI-coding evaluation to later steps.

Chances are that the keywords "balanced parentheses" will skew both the human and the coding agent towards a stack-based
solution:

```python
def is_balanced(word: str) -> bool:
    stack = []
    for char in word:
        if char == '(':
            stack.append(char)
        elif char == ')':
            if not stack:
                return False
            stack.pop()
    return not stack
```

What to look for:
- **Evaluate the prompt they're using**. Many will just paste the entire problem into the assistant's prompt, but pay attention to any original or creative approaches.
- **Is the candidate correcting the code provided?** The problem description intentionally lacks type hints. Assistants usually understand typed code better, so fixing this upfront is good practice. Note whether the candidate thinks to do it.
- **How does the candidate handle the test cases?** The tests are in pseudo-code and not ready to be run with a test runner (the `print` statement is there only to celebrate the good old times :). Check if the candidate uses them and how. Translating them into working Python might be a more effective strategy.

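To illustrate that last point, here is one way a candidate might translate the pseudo-code tests into runnable Python 3, paired with the stack-based solution from above so the snippet runs on its own. The `assert` form is just one reasonable choice, not part of the original exercise:

```python
# The stack-based solution, so the snippet is self-contained.
def is_balanced(word: str) -> bool:
    stack = []
    for char in word:
        if char == '(':
            stack.append(char)
        elif char == ')':
            if not stack:
                return False
            stack.pop()
    return not stack

# The pseudo-code tests from the prompt, rewritten as assertions:
assert is_balanced('Warning: load is high on (host.ip)') is True
assert is_balanced('((hello)(world))') is True
assert is_balanced('my (monitor))(message') is False
```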
### Step 3: discuss the first solution

This step is all about seeing whether the candidate can understand the code produced by the assistant and whether they
are able to question its decisions. Start with these follow-up questions:
- Ask them about the time and space complexity of the solution. This will force them to walk through the generated code to understand it.
- If the code produced is stack-based, ask them if there's a better solution with lower space complexity.
- If the code doesn't use a stack, ask them if a stack would be better or worse.

What to look for:
- **Can the candidate refine the assistant's output?** Look for examples like adding or improving typing, or making variable names clearer.
- **Is the candidate able to iterate on the output?** See if the candidate can optimize the code, and how.

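As a concrete illustration of the kind of refinement to look for, here is one way the stack-based output might be polished with a docstring, clearer names, and inline comments. The exact edits will vary by candidate; this is a sketch, not the expected answer:

```python
def is_balanced(text: str) -> bool:
    """Return True if every '(' in `text` is closed by a matching ')'."""
    open_parens: list[str] = []
    for char in text:
        if char == '(':
            open_parens.append(char)
        elif char == ')':
            if not open_parens:
                return False  # a ')' with no matching '('
            open_parens.pop()
    return not open_parens  # every '(' must have been closed
```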
### Step 4: produce the optimal solution

The final step is to arrive at the ideal, optimal solution. Since we want a more responsible use of memory, we
can replace the stack with a simple counter. A coding agent will generate this code if explicitly asked to refine the
stack-based solution to consume less memory. Obviously, the same happens if the candidate comes up with this
idea on their own and prompts the agent with a simple _"rewrite the function replacing the stack with a counter"_:

```python
def is_balanced(word: str) -> bool:
    count = 0
    for char in word:
        if char == '(':
            count += 1
        elif char == ')':
            if count == 0:
                return False
            count -= 1
    return count == 0
```

What to look for:
- **Does the candidate know what to optimize next?** Or are they just going to rely on a zero-shot prompt to the assistant?
- **Does the candidate own the generated output?** See if the candidate can discuss trade-offs between performance and future-proofing.

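To make that future-proofing trade-off concrete: the counter wins on memory, but if the grammar ever grows beyond `(` and `)`, say to `[` and `{`, a stack becomes necessary again, because a single counter cannot remember *which* bracket was opened. A sketch of that hypothetical extension, which is not part of the original exercise:

```python
# Hypothetical extension: multiple bracket kinds force us back to a stack.
PAIRS = {')': '(', ']': '[', '}': '{'}

def is_balanced(word: str) -> bool:
    stack = []
    for char in word:
        if char in PAIRS.values():  # opening bracket: remember it
            stack.append(char)
        elif char in PAIRS:         # closing bracket: must match the last opener
            if not stack or stack.pop() != PAIRS[char]:
                return False
    return not stack
```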
## Conclusion

Embracing generative coding in the hiring process isn't just about passively adopting a new technology; it's about
finding out how a candidate has adapted to these new tools, shifting the focus from memorization to guided collaboration
with an AI assistant, and getting a clearer picture of their real-world skills. You'll see whether they can ask the right
questions, critically evaluate a generated solution, and refine it into an elegant, optimal product, which is something
coding assistants have yet to prove they can do.