
Introducing Highlights

Since launching in 2013, Slack has helped millions of users across hundreds of thousands of teams communicate more efficiently, effectively, and transparently. But as Slack lowers the barriers to communicating internally, the volume of communication that results can be overwhelming. McKinsey estimates that knowledge workers spend 28% of their time managing digital information; accordingly, proactively helping users combat information overload is one of the top priorities of Slack’s Search, Learning, and Intelligence (SLI) group in New York City.

As a first step towards that goal, Slack is launching Highlights, a set of new Slack features designed to help you focus on what’s important first and catch up quickly when you fall behind. At the top of All Unreads, Slack will surface a selection of the most important messages, powered by how you uniquely work in Slack. Slack will also highlight important unread messages within channels, so you can skim and get up to speed in the channels you haven’t checked in a while.

Mining the Work Graph

Predicting which messages are most likely to be important is a challenging technical problem, since the messages that matter most to you may be different from the ones your coworkers need to see. Slack solves this problem with a personalized engagement model that uses machine intelligence to predict the messages you’re most likely to click on, share, reply to, or react to.

To make these predictions, SLI leverages the work graph: the highly-structured network of communication that takes place inside Slack. This graph captures the important connections between the people you message, the channels you interact in, and the kinds of files and apps you use most. Understanding these relationships is central to SLI’s larger mission: in the same way that Google captures the web graph to power search and Facebook leverages the social graph to power its News Feed, Slack is building a comprehensive log of the knowledge network within teams to give companies superpowers. Unlike other companies, Slack uses what it learns from this graph only to improve the quality of service it provides, not to target ads or sell user data to third parties.

Predicting Engagements

In these early days of machine learning at Slack, we have a strong bias towards simple models that produce good results and are easy to understand and reason about. We formulate the engagement problem as a logistic regression over user-message-engagement tuples, where the features include more than a hundred summary statistics capturing information about the message’s author, content, channel, reactions, and interactions. Training the regression produces a vector of feature weights; feeding a message’s feature values through the logistic function with those weights yields a value between 0 and 1, which we can interpret as the probability of that message triggering a particular engagement from a particular user.
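To make this concrete, here is a minimal sketch of the formulation using scikit-learn. The feature columns and synthetic data below are illustrative stand-ins, not Slack's actual pipeline, which uses over a hundred summary statistics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in for the real feature pipeline: each row summarizes
# one (user, message) pair with a few statistics about the author, channel,
# reactions, and content.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))  # e.g. [author_affinity, channel_priority, n_reactions, msg_length]

# Synthetic labels: engagement is more likely when the first two features are high.
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression()
model.fit(X, y)

# The learned weight vector, fed through the logistic function, yields a
# value in (0, 1) interpretable as P(engagement | user, message).
probs = model.predict_proba(X)[:, 1]
```

The appeal of this setup is exactly the simplicity the post describes: the model is a single weight per feature, so it is cheap to score at message-delivery time and easy to reason about when a prediction looks wrong.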

One of the Airflow DAGs used to train and evaluate our engagements model, showing the data dependencies between Spark jobs.

This problem is complicated by the tight time window in which our regression must operate: our predictions are only useful if they can reliably be made before a user encounters a message and decides whether or not to engage. As time passes and we gather more signal about a message, the data on which to base a prediction increases, but the number of users who can benefit from that prediction dwindles.

Class Imbalances

Since most users engage with only a tiny fraction of the messages they see in Slack, we’re faced with a class imbalance when training our regression model: we have many more negative examples than positive ones, and a trivial model that predicts “no” for every input message will have high accuracy when evaluated against real-world data. To combat this problem, we employ a number of proven techniques. We use stratified sampling to create training sets, picking an engaged message from a channel and then selecting a fixed number of unengaged ones from the surrounding conversation to preserve a fixed ratio of positive to negative examples. We evaluate changes to our models by computing the area under the precision-recall curve, which penalizes models with low sensitivity. We also employ prior correction, adjusting the maximum-likelihood estimate produced by the regression using statistics about the underlying distribution of the sample population.
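Two of these techniques are compact enough to sketch. The prior-correction step below follows the standard log-odds adjustment for models trained on rebalanced samples (in the style of King and Zeng); the exact correction Slack applies is not specified in the post, so treat this as an illustration:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def prior_correct(p, sample_pos_rate, true_pos_rate):
    """Map a probability from a model trained on a rebalanced sample
    (positive rate `sample_pos_rate`) back to the population, whose
    true positive rate is `true_pos_rate`."""
    logit = np.log(p / (1.0 - p))
    logit -= np.log((sample_pos_rate * (1.0 - true_pos_rate)) /
                    (true_pos_rate * (1.0 - sample_pos_rate)))
    return 1.0 / (1.0 + np.exp(-logit))

# A model trained on a 50/50 sample says p = 0.5, but suppose only 2% of
# messages are engaged with in the wild: the corrected estimate is 0.02.
corrected = prior_correct(0.5, 0.5, 0.02)

# Average precision (area under the precision-recall curve) rewards models
# that rank the rare positives highly; a model that scores every message
# near zero gains nothing here, unlike with plain accuracy.
y_true = np.array([0, 0, 0, 0, 1, 0, 0, 1])
scores = np.array([0.1, 0.2, 0.1, 0.3, 0.9, 0.2, 0.1, 0.8])
pr_auc = average_precision_score(y_true, scores)
```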

Personalization

In theory, we would like to learn a personalized engagement model for every user, selecting different feature weights for each individual on a team. In practice, we have too little data to successfully train so many models, so we group user behaviors and train one aggregate model per team. To personalize these models, we incorporate a set of user-specific features learned from the work graph.

To predict the likelihood that a user engages with a message, we employ a measure of the user-to-author affinity (defined roughly as the general propensity of the user to read the author’s messages) as well as a measure of the user’s priority for the channel in which the message was posted. In addition, we create personalized features for each class of message engagements, weighted by the affinities of the users who engaged. These features allow the model to capture predictive signals like people being more likely to reply to a message that several close colleagues have already replied to. Altogether, online testing has validated that our learned regression is 20–50x better at predicting engagements than a naive model.
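The affinity-weighted engagement features can be sketched as follows. The data structures and function name here are hypothetical, chosen only to illustrate the idea of weighting each engagement by the viewing user's affinity to the engaging user:

```python
def affinity_weighted_replies(user, message, affinity):
    """Sum of the viewing user's affinity to each person who already replied
    to the message. The feature is large when several close colleagues have
    replied, which is exactly the signal described in the post."""
    return sum(affinity.get((user, replier), 0.0)
               for replier in message["repliers"])

# Toy work-graph affinities for one user (illustrative values).
affinity = {
    ("alice", "bob"): 0.9,    # close collaborator
    ("alice", "carol"): 0.7,  # frequent channel-mate
    ("alice", "dave"): 0.05,  # rarely interacts
}
msg = {"repliers": ["bob", "carol"]}
score = affinity_weighted_replies("alice", msg, affinity)
print(round(score, 2))  # 1.6
```

Because the same aggregate model is shared across a team, features like this one are what carry the personalization: two users see different scores for the same message purely because their work-graph affinities differ.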

Justifications

Because our regression model is interpretable, we can use it to power a justification layer for the messages Slack highlights. By computing the weight-multiplied value for each feature and sorting the results, we can obtain an ordered list of the features that most contributed to the prediction. Pairing this list with a set of contextual annotations allows us to surface human-readable justifications for each message — for instance, leveraging our user-to-user priority score to identify that a message has reactions from people you care about — so users can understand and trust Slack’s predictions. We also use these justifications to diversify the Highlights shown in All Unreads, so that users see a representative cross-section of the different kinds of content that are important to them whenever they’re catching up.
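This is one of the payoffs of choosing a linear model: each feature's contribution to the log-odds is simply its value times its learned weight. A minimal sketch, with illustrative feature names rather than Slack's actual ones:

```python
import numpy as np

# Feature names, learned weights, and one (user, message) pair's feature
# values -- all illustrative.
feature_names = ["author_affinity", "channel_priority",
                 "replies_from_close_colleagues", "msg_length"]
weights = np.array([1.8, 0.9, 2.4, -0.1])  # from the trained regression
values = np.array([0.7, 0.2, 0.5, 1.3])    # for this user and message

# Each feature's contribution to the log-odds is weight * value; sorting
# gives the ordered list of features that most drove the prediction.
contributions = weights * values
order = np.argsort(contributions)[::-1]
top_justifications = [feature_names[i] for i in order[:2]]
print(top_justifications)  # ['author_affinity', 'replies_from_close_colleagues']
```

Each of the top-contributing features can then be mapped to a contextual annotation ("people you work with reacted to this") to produce the human-readable justification shown alongside the highlight.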

An in-channel Highlight, with a justification derived from our engagements model.

Helping Slack Help You

Highlights represents one of SLI’s first steps towards transforming Slack into an always-on Chief of Staff. It’s also Slack’s first attempt to deliver on a core SLI value: helping us help you. The more you interact with messages and content within Slack in the course of your daily work, the better Slack will understand what matters to you and help you focus your attention in the right places. Every kind of engagement helps: starring channels, reacting to messages with emoji, clicking on links, and leaving replies. Highlighted messages also come with opportunities to give explicit feedback, which we use to refine our models and train better ranking algorithms.

Soon, Slack will be rolling out a host of new data-driven features to help you surface important messages from channels you don’t regularly read, summarize lengthy conversations, curate and share content, and more. The future of communication is bright! ✨

If you want to help build data-driven features like Highlights and make the working lives of millions of people simpler, more pleasant, and more productive, check out our job openings in SLI and apply today.
