Anyone who’s ever been involved in hiring knows it’s no easy feat, particularly at a growing company. Getting hiring practices right takes iteration based on feedback, both on the internal processes within your company and on the external process a candidate experiences. Continuously improving hiring is important for a host of reasons, chief among them the high cost of hiring the wrong person, or missing out on the right one.

The initial state

At Slack, we put a lot of care into hiring, and Engineering hiring is certainly no exception. When it comes to hiring backend engineers, we’ve always given a take-home exercise, preferring that candidates complete their programming practicum in an environment cozier for them: on their own machine and in their own time, within reason. Once candidates have passed our phone screen and this take-home exercise, we invite them onsite for two technical interviews and two non-technical interviews.

The take-home exercise and the technical design onsite interview look for a number of meaningful attributes in candidates. We look for candidates who display a high degree of craftsmanship, who are security-minded, and who are concerned with system performance and reliability. For a long time, both assessments were giving us a good signal on these attributes and others.

The take-home exercise had a number of factors in its favor:

  • clear requirements
  • a scope broad enough for candidates to display their creativity
  • a grading process that could be standardized via a defined set of metrics
  • easy anonymization to reduce unconscious bias

From a candidate’s point of view, there were no surprises. There were no “gotcha” questions, and expectations were clear. In some cases, we would ask candidates questions about their solutions when they came onsite, particularly if they had taken a novel approach. Investigating the thought process behind these solutions was often illuminating.

With coding well-covered in the take-home exercise, the onsite technical design interview was modeled on how engineers approached and analyzed problems in their daily work. There was no whiteboard coding, though candidates were free to use a whiteboard to sketch out their ideas. Candidates were given a technical problem and asked to design a solution, and both the candidate and interviewer spent time digging into the various aspects of the problem and the solution together.

A need for change

Slack was growing, and growing quickly. We needed to hire engineers, but we soon realized that our growth was outpacing the rate at which we could fill open roles. Our take-home exercise, while loved by many, was also time-consuming. Its open-ended qualities meant that candidates, wanting to show off their best work, could end up spending many more hours of their own time to complete it than we had expected. This was often a barrier for candidates who couldn’t afford to invest the time needed to complete the exercise to our desired level of quality.

The end result was that, by our estimates, it would have taken a year to fill our existing open headcount, future growth aside. This timeframe clearly would not allow us to grow at the speed we needed. However, we were also unwilling to sacrifice quality. We needed an approach that would give us good signal and help us hire great engineers, but at a reduced time cost to the candidate and to us.

To satisfy these needs, we decided to create two new take-home exercises: an API design exercise and a code review exercise. In designing them, we sought problems that would not demand an onerous time investment from the candidate. We wanted exercises that would give us good signal on the attributes we cared about while taking at most two hours to complete.

We started by defining those attributes. Craftsmanship involves code correctness and code style, an attention to detail and design, and an understanding of the importance of testing. We also wanted candidates who could keep an eye out for security concerns and performance issues. On top of this, we wanted to know that these folks were good teammates — that they were concerned with maintainability and documentation, and that they could express themselves with empathy and a mindset towards collective learning.

Both of the new exercises also preserved what we liked about the existing take-home, such as anonymized grading and clear requirements, while improving on its major weakness — the amount of time required in order to complete the exercise. After running both exercises side-by-side for some time, we measured how well the exercises performed at achieving our goals of substantial signal and reduced time. In the end, we chose to move entirely to the code review exercise as it better fulfilled both of these goals.

Scaling up

While the content of the take-home exercise is incredibly important, we also had to think about how it would be administered and graded. We as engineers were heavily involved in the pilot for the new code review exercise, but to scale it to all of engineering recruiting, we needed to ensure that the exercise could be sent to new candidates and completed with minimal intervention from an engineer.

We started by evaluating tools that could be used to take the exercise. Our first internal pilot used docs containing markdown code blocks that were shared with candidates, who were asked to leave comments for any code issues they found. This approach had a number of issues:

  • creating and sending docs to candidates was a tedious, manual process
  • tweaking the code content of the exercise required manually updating the template doc
  • all the comments contained the name of the writer, removing anonymity
  • the tooling was unintuitive for code review

With that in mind, we decided to move the exercise to GitHub. We created a new organization to hold repositories for each candidate exercise along with a repository to hold the exercise template itself. For each new exercise, we would copy the contents of the template repo and create two commits on the candidate’s repo — one initial commit against the main branch, and one in a separate dev branch with the content they were meant to review. Then, we would create a Pull Request (PR) from dev to main so that we could use the code review tools built into GitHub. We’d invite the candidate to the repository and ask them to review the PR. Problem solved!

Well, not quite. The approach was sound, but not scalable. To streamline exercise creation and assignment, we started by writing a Python script that invoked git on the command line to perform the necessary git operations, then hit the GitHub API to open the PR and assign access. We also wrote a script that would download the PR content and convert it into a markdown file for grading, so that graders wouldn’t see the identity of the candidate in GitHub.
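Those scripts aren’t included here, and they were eventually replaced, but the grading-export step can be sketched roughly. The example below is hypothetical: it’s written in TypeScript with the @octokit/rest client (for consistency with the tooling described below, rather than the original Python), and it simply fetches a pull request’s inline review comments and writes them to a markdown file with the reviewer’s username omitted, which is the anonymization described above.

```typescript
import { writeFileSync } from "fs";
import { Octokit } from "@octokit/rest";

// Hypothetical sketch: names, repos, and the output format are placeholders.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function exportReviewForGrading(
  owner: string,
  repo: string,
  pullNumber: number,
  outPath: string
): Promise<void> {
  // The candidate's review lives as inline comments on the pull request.
  const comments = await octokit.paginate(octokit.rest.pulls.listReviewComments, {
    owner,
    repo,
    pull_number: pullNumber,
  });

  // Render each comment with its file, line, and diff hunk, deliberately
  // omitting the comment author's username so grading stays anonymous.
  const sections = comments.map(
    (c) =>
      `### ${c.path} (line ${c.line ?? c.original_line})\n\n` +
      c.diff_hunk
        .split("\n")
        .map((line) => "    " + line) // indent so the hunk renders as code
        .join("\n") +
      `\n\n${c.body}\n`
  );

  writeFileSync(outPath, sections.join("\n---\n\n"));
}
```

In practice an export like this would also want the overall review summary (from the pull request reviews endpoint) and some error handling, but the core idea is simply to pull the comments and drop the author.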

The scripts, however, had their own pitfalls. Python and the script dependencies needed to be installed, git needed to be installed and authenticated, and any failure in the flow produced cryptic errors on the command line. This may have been fine for engineers, but not for recruiters. They needed a robust and easy-to-use way to administer the exercise, and the CLI script approach definitely was not that.

To address this, we built a new Electron-based app for recruiters to use: Slack Engineering Recruiting Tools. We wrote new logic in TypeScript that no longer relied on command-line tools for git operations. Instead, it used the Git Database API to manipulate git objects directly in GitHub — creating files, refs, trees, and commits as needed in order to construct the content of the exercise. It continued to use the GitHub API to create the repositories and pull requests and to assign access as needed. We wrote a frontend for this new logic in React, allowing recruiters to easily create exercises, assign access, and download completed exercise content for grading.
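As a rough illustration of that flow, here is a minimal sketch using the @octokit/rest client. The organization name, branch names, file contents, and function names are placeholders rather than Slack’s actual tooling, and candidate invitations, retries, and error handling are omitted: it creates a repository, commits the exercise scaffolding to main, puts the code under review on a dev branch, and opens the PR.

```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical names: org, repos, branches, and file contents are placeholders.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const org = "example-exercise-org";

// Create a tree and commit containing `files` on top of `parentSha`,
// returning the new commit's SHA.
async function commitFiles(
  repo: string,
  parentSha: string,
  files: Record<string, string>,
  message: string
): Promise<string> {
  const parent = await octokit.rest.git.getCommit({ owner: org, repo, commit_sha: parentSha });
  const tree = await octokit.rest.git.createTree({
    owner: org,
    repo,
    base_tree: parent.data.tree.sha,
    tree: Object.entries(files).map(([path, content]) => ({
      path,
      mode: "100644" as const,
      type: "blob" as const,
      content,
    })),
  });
  const commit = await octokit.rest.git.createCommit({
    owner: org,
    repo,
    message,
    tree: tree.data.sha,
    parents: [parentSha],
  });
  return commit.data.sha;
}

async function createExercise(repo: string): Promise<void> {
  // A private repo with an auto-generated initial commit gives the
  // Git Database API a root commit to build on. (In practice you may
  // need a short wait or retry while GitHub finishes initializing it.)
  await octokit.rest.repos.createInOrg({ org, name: repo, private: true, auto_init: true });

  // Tip of the default branch, assumed here to be "main".
  const main = await octokit.rest.git.getRef({ owner: org, repo, ref: "heads/main" });

  // Commit the exercise scaffolding to main.
  const baseSha = await commitFiles(
    repo,
    main.data.object.sha,
    { "README.md": "Exercise instructions go here." },
    "Add exercise scaffolding"
  );
  await octokit.rest.git.updateRef({ owner: org, repo, ref: "heads/main", sha: baseSha });

  // Put the code the candidate will review on a dev branch.
  const reviewSha = await commitFiles(
    repo,
    baseSha,
    { "src/app.js": "// code under review goes here\n" },
    "Add code for review"
  );
  await octokit.rest.git.createRef({ owner: org, repo, ref: "refs/heads/dev", sha: reviewSha });

  // Open the pull request the candidate will be asked to review.
  await octokit.rest.pulls.create({
    owner: org,
    repo,
    title: "Code review exercise",
    head: "dev",
    base: "main",
  });
}
```

Working through the Git Database API like this means the app needs only a GitHub token, with no local git install, checkout, or credential setup on the recruiter’s machine (the main pain point of the CLI scripts).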

With this new app, we finally had a well-performing exercise, the right platform for taking the exercise, and the right tools for administering it efficiently at scale.

Expanding the effort

Having refactored the take-home into a code review exercise, we no longer asked candidates to write code before coming onsite, so we needed another concrete way to measure programming ability. In parallel with the work on the take-home exercise, we reworked the onsite technical assessment into a coding session.

There were a number of factors that were important to us in an onsite coding session. First, we wanted to retain the “no whiteboard coding” precedent, as we did not and still do not believe that whiteboard coding is beneficial in assessing a candidate’s practical technical skillset.

Second, we strove for the problem itself to be realistic. We wanted candidates to implement a basic version of a real feature, so that the program they built was loosely related to the kinds of work you could expect to do if you were hired.

Lastly, we wanted the experience to be as realistic and close to everyday work as possible. We wanted the candidate to have the comfort of working on their own machine along with access to reference materials such as Google, Stack Overflow, and any other sources a programmer might use in the course of their normal day-to-day work. Interviewers and candidates would work together through the problem — no gotchas, no purely algorithmic assessments. As technologists, we’re deeply pragmatic. While we know that programming is often a solo activity, building software is a team sport. We believe our technical assessment should reflect that.

Finally, with the exercises themselves reworked, we had to train our engineers on them. We developed grading rubrics for both and set up training sessions where we walked through examples. New interviewers and graders shadowed our initial team to learn the new content. Along the way, they also provided invaluable feedback that allowed us to keep improving the exercises.

A new state

In the end, we saw tangible improvements against our goals. Our time-to-hire (the time from when a recruiter first reaches out to the candidate’s first day in the office) dropped from an average of 200 days to below 83 days, and it continues to fall. We’ve also seen positive feedback from candidates and employees in all parts of the process:

I really liked the take home code review exercise. It covers a practical part of the job that normal interview processes don’t usually focus on, and for an interviewee it has clearer expectations (especially around time) than unbounded take-home exercises usually do.
— Candidate

The interview process was very thoughtful. [It] tested my knowledge and skills I had gained during my years of work experience vs. testing textbook knowledge or concepts.
— Candidate

Our original take-home was too burdensome a time commitment. Our revamp required less time from our candidates as well as from our internal graders. Moreover, streamlining our process has made us more competitive with other technology firms: it enables us to attract more kinds of candidates, both active candidates who are searching for jobs and passive candidates who weren’t previously looking to move to a new company.

Throughout this refactoring process, we tried to consistently balance trade-offs by viewing the interview process holistically. We shortened the amount of time needed for the take-home exercise by reworking a coding exercise into a code review exercise. As a result, we needed to lengthen the onsite technical sessions to accommodate a programming evaluation. We also had to let go of some of the open-ended creative aspects of the previous take-home exercise. In return, the code review exercise gave us a clearer sense of what it would be like to work with candidates on a daily basis.

After all, much of software development is working with each other — sharing our experiences in order to help each other improve, as well as listening and learning in order to improve ourselves. Much like the process of self-improvement we apply to ourselves, we continue to listen, to learn, and to improve the processes we use at Slack.

Many thanks to the rest of the folks who also worked on this refactoring: Andy King, Bianca Saldana, Jina Yoo, Maude Lemaire, Ryan Greenberg, Ryan Morris, Saurabh Sahni, Scott Sandler, Stacy Kerkela, and Steven Chen.

Interested in joining our team? Check out our engineering jobs and apply today!