We’ve reached a few big milestones for the Slack Bug Bounty program: it’s our three-year anniversary, and we’ve paid out more than $210,000 in bounties! We want to give a big thank you to all the security researchers who have helped make Slack more secure. In this post we’ll offer a retrospective on our bug bounty program, discuss lessons learned, and offer guidance for researchers. We also hope this information will be useful for anyone else running a bug bounty program, or considering starting one. For more details about our program, you can see our HackerOne profile at https://hackerone.com/slack.
Our Program So Far
We launched our bug bounty program in February of 2014. Slack was a much smaller product (and team) back then, but we were committed to the security of our users. Bug bounties allow organizations of all sizes to incentivize security researchers to report vulnerabilities, and ours has formed an integral part of Slack’s security processes. Our current bug bounty process works as follows:
- A researcher submits a vulnerability report
- We evaluate the report and determine if the bug is valid (we use a third-party triage service to evaluate and reproduce incoming reports)
- If the bug is valid we file an internal ticket to track and fix the issue
- We fix the issue
- We reach out to the researcher for verification of the fix
- We reward the researcher for their finding
We’ve collected data over the last three years which has informed the evolution of our program into what it is today. It’s our hope that you find these measurements and observations interesting and useful!
As is the case with many new bug bounty programs, we received a massive influx of reports at our launch. In fact, in the first four months alone we received nearly 1,000 new reports, with over half of those occurring in the second month. Fortunately, we were prepared for these reports, and our response and resolution times did not suffer as a result.
Since launch, we’ve seen a fairly consistent rate of about 100 inbound reports per month.
Another metric we’re interested in is our response time, as it provides a way for us to judge the quality of our communication with researchers. At our program’s launch, despite the small size of our team and the large number of reports, we were able to respond to reports very quickly. Gradually, our mean response time slowed a bit, though averages like this can be heavily skewed by outliers: our median response time was consistently better than the mean, because the mean was inflated by a small number of reports that took a long time to receive a response. We first began using a third-party triage service in April of 2015, which helped to decrease and stabilize our mean response time shortly after. We’ve also implemented better tracking of our response times, and our current mean response time is less than 24 hours.
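To illustrate why we watch the median alongside the mean, here is a quick sketch (with made-up response times, not our actual data) showing how a single slow report drags the mean up while barely moving the median:

```python
from statistics import mean, median

# Hypothetical first-response times in hours; the numbers are
# illustrative only, not real program data.
response_times = [2, 3, 4, 4, 5, 6, 8, 9, 12, 240]  # one very stale report

print(f"mean:   {mean(response_times):.1f} hours")    # 29.3 hours
print(f"median: {median(response_times):.1f} hours")  # 5.5 hours
```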
We haven’t seen as clear a pattern in the value of rewards given out per month. At Slack we reward bugs once they are resolved, so the difficulty and prioritization of a fix can factor into how long a reward takes. We fixed and rewarded a large number of bugs shortly after the launch of our program, roughly proportional to the quantity of reports received. It is worth noting, though, that a single severe bug receiving a large reward can also make this metric unpredictable.
What We’ve Learned
When we first launched the bug bounty, Slack had fewer than 20 employees and didn’t yet have a dedicated security team (though we did have engineers well-versed in security). Our engineering and security teams have grown significantly in the last three years, and we’ve been able to deliver consistently improving performance as we’ve grown and matured.
Over the course of the bounty we’ve also improved our own processes, which has in turn improved our responsiveness and ability to scale. We have allocated more resources to the triage process, and have streamlined our internal bug bounty workflow.
Every new bug provides us with an opportunity to improve our efficiency, resulting in faster response and resolution times. Three years have taught us a lot, and we look forward to growing our program!
One particular area of focus for us has been decreasing both our mean response and resolution times.
As we expected, we received a massive number of reports at the beginning of our program; the larger and more popular your product, the more interest and reports you can expect to receive. For those considering launching a bug bounty program of your own, it is worth preparing for this influx so you don’t find yourself struggling to keep up with new reports. Ensure that you have the resources to handle large numbers of both valid and invalid reports, for the health of your program and the health of your engineers.

There are many approaches to handling a high volume of reports; using a third-party triage service works particularly well for us. We’ve added individuals from our triage provider to our program on HackerOne, which gives them access to incoming reports. From there, they evaluate the reports and reach out to us if more information is needed. They inform us when a bug is valid and provide a description of the issue, at which point we file an internal item. Once a bug is filed internally, our engineering team can work on the fix.
Nearly half of the reports we have received are “Not Applicable”, a status covering reports that are not valid issues, are not security issues, or fall outside the scope of our program. “Informative” reports may contain some useful information, but do not impact the security of our customers. “Duplicate” reports contain previously reported issues, and “Resolved” reports contain issues which we fixed as a result of that report. More information on the different report statuses can be found in HackerOne’s documentation.
A triage service has been an invaluable asset in the success of our program. It enables us to handle several hundred reports per year while still responding, reproducing, and fixing bugs in a timely manner. Our security team is able to focus on seeing bugs through to resolution without becoming overwhelmed by the total volume of submissions.
A slightly less quantitative, yet still important, consideration is the overall happiness of both researchers and internal developers. As a security team, we must maintain a balance among different stakeholders in order to keep our product secure.
Keeping researchers happy provides an incentive, beyond the monetary rewards, for researchers to inform us of vulnerabilities that we can fix. The mean response time metric acts as a surrogate for researcher satisfaction: in general, maintaining rapid communication with researchers lets them know that the security team cares about their reports and takes security seriously. We are generally timely in getting fixes out, but when resolution takes a bit longer than expected, we’ve found that keeping researchers informed of the progress goes a long way toward maintaining their patience.
Keeping our engineers happy helps foster a good work environment wherein we all work together to resolve bugs. We communicate quickly and concisely with our engineering team when we receive a vulnerability report in order to prevent teams from being overwhelmed. We group bugs into one of four severity levels when filing internal issues. Using discrete bug severities ensures that all vulnerabilities are resolved in a timely fashion, as dictated by their severity, while allowing engineers to efficiently allocate their time.
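To make the idea of discrete severities concrete, here is a minimal sketch of what such a scheme might look like. The level names, examples, and resolution targets below are hypothetical illustrations, not Slack’s actual internal policy:

```python
from enum import Enum

class Severity(Enum):
    """Hypothetical severity levels; internal names and criteria may differ."""
    CRITICAL = 1  # e.g. remote code execution, authentication bypass
    HIGH = 2      # e.g. stored XSS, exposure of private messages
    MEDIUM = 3    # e.g. CSRF on a low-impact endpoint
    LOW = 4       # e.g. minor information leakage

# Illustrative resolution targets, in days, keyed by severity.
# Tighter deadlines for more severe bugs let engineers allocate
# their time while keeping every fix on a predictable schedule.
RESOLUTION_TARGET_DAYS = {
    Severity.CRITICAL: 1,
    Severity.HIGH: 7,
    Severity.MEDIUM: 30,
    Severity.LOW: 90,
}
```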
Having empathy for both external researchers and internal employees goes a long way towards building long-term goodwill for the security team, which is vital for a successful bug bounty program.
Tips and Tricks for Researchers
With all that said, we’d like to offer some guidance to researchers (both old and new) looking to find vulnerabilities in Slack. These are suggestions for where to search and what types of bugs may be most interesting, all within the scope of our bug bounty program.
Slack comprises many parts, interacting in complex ways. The list that follows highlights several of them, explaining what they are, what technologies they use, and what types of vulnerabilities are most relevant for each of them. We hope that this will provide more insight into what to look for, as well as which areas will be most lucrative for bugs. Happy hunting!
Slack API — api.slack.com
What it does: The Slack applications (web, desktop, mobile) all communicate with this endpoint, as do apps and integrations built using the Slack API. The API contains the core functionality of managing and administering a Slack team, communicating on that team, and more.
What to look for: Authentication bypasses, bypasses of permissions, information leakage
What it runs on: PHP, MySQL, Apache, HHVM, CloudFront
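For example, a quick way to start exploring is the `auth.test` Web API method, which echoes back the identity tied to a token. Here is a minimal sketch in Python; the token is a placeholder, and you should only test against a workspace you are authorized to probe (the Web API methods documented at api.slack.com are served from https://slack.com/api/):

```python
import requests

# Placeholder token; substitute a token from your own test workspace.
TOKEN = "xoxp-your-test-token"

# auth.test reports which user and team a token belongs to, making it
# a quick way to confirm exactly what a scoped or leaked token can see.
resp = requests.post("https://slack.com/api/auth.test",
                     data={"token": TOKEN})
print(resp.json())
```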
Slack web app — my.slack.com
What it does: This is what you interact with when using Slack in the browser. Many of the requests and features interact with the API and WebSocket endpoints, but there are also features unique to the web experience.
What to look for: Web application flaws, XSS, CSRF, auth bypasses
What it runs on: PHP, MySQL, Apache, HHVM, CloudFront, HTML/jQuery/Handlebars
Slack Desktop
What it does: This is what you interact with when using the Slack app on your desktop (Windows/Mac/Linux). Many of the requests and features interact with the API and WebSocket endpoints, but there are also features unique to the desktop experience.
What to look for: Authentication bypasses, bypasses of permissions, information leakage
What it runs on: Electron frontend, which communicates with the API/web backend
Slack mobile apps
What they do: This is the suite of mobile apps for iOS, Android, and Windows Phone. They run native code on each mobile OS, and they connect to both the API endpoint and the WebSocket endpoints.
What to look for: Mobile app issues; insecure storage of sensitive information, such as hardcoded secrets or tokens
What they run on: Native frameworks for each OS (Objective-C, Java, and C#, respectively)
WebSocket endpoints
What they do: Slack passes messages through its Real-Time Messaging servers (the RTM API). These real-time communications are conveyed over WebSockets to our backend.
What to look for: Authentication bypasses, bypasses of permissions, information leakage, messaging arbitrary teams or users
What they run on: Java, Go, WebSockets
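As a starting point for exploring the RTM API, here is a minimal sketch in Python, assuming the `requests` and `websocket-client` packages. The `rtm.start` Web API method hands out a single-use WebSocket URL; the token is a placeholder, and again, only test against a workspace you are authorized to probe:

```python
import requests
import websocket  # pip install websocket-client

# Placeholder token; substitute a token from your own test workspace.
TOKEN = "xoxp-your-test-token"

# rtm.start returns, among other things, a single-use WebSocket URL.
start = requests.post("https://slack.com/api/rtm.start",
                      data={"token": TOKEN}).json()
if not start.get("ok"):
    raise SystemExit(f"rtm.start failed: {start.get('error')}")

ws = websocket.create_connection(start["url"])
print(ws.recv())  # the first frame should be the {"type": "hello"} event
ws.close()
```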
Other Things We Care About
In all product areas we are always very interested in bugs which:
- Bypass private message security
- Bypass team privacy
- Generally bypass access control
We hope you’ve found this information useful. If you’re a researcher we welcome your vulnerability reports at https://hackerone.com/slack. For general information about Security at Slack, please visit https://slack.com/security.