
The hours-long outage that kicked off the 2021 working year for Slack customers was the result of a cascading series of problems initially caused by network scaling issues at AWS, Protocol has learned.

According to a root-cause analysis that Slack distributed to customers last week, "around 6:00 a.m. PST we began to experience packet loss between servers caused by a routing problem between network boundaries on the network of our cloud provider." A source familiar with the issue confirmed that AWS Transit Gateway did not scale fast enough to accommodate the spike in demand for Slack's service the morning of Jan. 4. Slack declined to comment beyond confirming the authenticity of the report.

Over the next hour, packet loss caused by the networking problems led Slack's servers to report an increasing number of errors. That forced healthy servers to handle an increasing amount of demand as more and more servers were tagged as "unhealthy" due to their lack of responsiveness, thanks to the networking issues. Slack engineers were not alerted to the problems until around 6:45 a.m.
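
That tagging spiral, in which lossy links cause hosts to fail health checks and traffic piles onto the shrinking pool that still responds, can be sketched in a few lines. This is a generic illustration with assumed names and thresholds (Host, probe, FAILURE_THRESHOLD), not Slack's actual health-checking code.

```python
import socket
from dataclasses import dataclass

FAILURE_THRESHOLD = 3  # consecutive failed probes before a host is pulled (assumed value)

@dataclass
class Host:
    address: str
    port: int
    failures: int = 0
    healthy: bool = True

def probe(host: Host, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; sustained packet loss shows up here as timeouts."""
    try:
        with socket.create_connection((host.address, host.port), timeout=timeout):
            return True
    except OSError:
        return False

def run_health_checks(hosts: list[Host]) -> list[Host]:
    """Tag unresponsive hosts as unhealthy and return the ones still in rotation."""
    for host in hosts:
        if probe(host):
            host.failures = 0
            host.healthy = True
        else:
            host.failures += 1
            if host.failures >= FAILURE_THRESHOLD:
                host.healthy = False  # pulled from rotation
    # The fewer hosts stay in rotation, the more demand each survivor absorbs.
    return [h for h in hosts if h.healthy]
```
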

Slack had a backup reserve of servers ready to go, but began to discover problems with the provisioning service it used to spin up and verify those backup servers, which was not designed to handle the task of trying to get Slack up and running on more than 1,000 servers in a short period of time. "By 7:00am PST there were an insufficient number of backend servers to meet our capacity needs," according to the report, and Slack went down hard across the world.
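
Slack hasn't published how that provisioning service works, but the bottleneck it describes, a pipeline sized for routine turnover being asked to bring up more than 1,000 servers at once, behaves roughly like the sketch below. The concurrency cap, retry policy and provision_one stub are all assumptions made for illustration.

```python
import asyncio
import random

PROVISION_CONCURRENCY = 50  # assumed ceiling the pipeline was built around
MAX_ATTEMPTS = 3

async def provision_one(server_id: str) -> bool:
    """Stand-in for booting a server, applying config and verifying health."""
    await asyncio.sleep(random.uniform(0.5, 2.0))  # simulated provisioning time
    return random.random() > 0.1                   # occasional verification failure

async def provision_fleet(server_ids: list[str]) -> list[str]:
    """Provision servers with bounded concurrency and simple retries.

    With 1,000+ machines queued behind a small concurrency budget, the tail of
    the queue can take far longer to drain than an incident allows.
    """
    sem = asyncio.Semaphore(PROVISION_CONCURRENCY)
    ready: list[str] = []

    async def worker(server_id: str) -> None:
        async with sem:
            for attempt in range(1, MAX_ATTEMPTS + 1):
                if await provision_one(server_id):
                    ready.append(server_id)
                    return
                await asyncio.sleep(2 ** attempt)  # back off before retrying

    await asyncio.gather(*(worker(s) for s in server_ids))
    return ready

if __name__ == "__main__":
    fleet = [f"backend-{i}" for i in range(1200)]
    print(f"{len(asyncio.run(provision_fleet(fleet)))} of {len(fleet)} servers ready")
```
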

Slack was also unable to debug the issues properly because its observability service was affected by the same networking issues, according to the report. Between 7 a.m. and 8 a.m. PT, AWS increased the capacity of AWS Transit Gateway and moved Slack from a shared system to a dedicated system, Slack told customers. Once Slack's problems with its provisioning system were fixed, the new servers found they had stable network connections, and service began to come back to normal over the next hour.

In its report, Slack promised customers it would improve several aspects of its architecture over the next few months, starting with a better alert system for packet loss and closer ties between its observability system and its provisioning service. It will also redesign the server-provisioning service to handle a similar type of event and set new rules around how its servers automatically scale in response to demand.
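
The report doesn't spell out what those new rules will look like, but the interaction it hints at, packet-loss alerting paired with autoscaling that doesn't mistake a network failure for a demand spike, can be sketched as follows. Every field name and threshold here is a placeholder, not a value from Slack's analysis.

```python
from dataclasses import dataclass
from math import ceil

# Illustrative thresholds only; Slack has not published its actual rules.
PACKET_LOSS_ALERT_PCT = 1.0  # page operators above 1% inter-server packet loss
TARGET_UTILIZATION = 0.6     # keep headroom so demand spikes are absorbable
MIN_HEALTHY_FRACTION = 0.8   # mass "unhealthy" tags are a network symptom, not demand

@dataclass
class FleetMetrics:
    requests_per_sec: float
    capacity_per_server: float  # requests/sec one server can sustain
    healthy_servers: int
    total_servers: int
    packet_loss_pct: float

def should_alert(m: FleetMetrics) -> bool:
    """Fire a packet-loss alert before error rates and health checks degrade."""
    return m.packet_loss_pct >= PACKET_LOSS_ALERT_PCT

def desired_servers(m: FleetMetrics) -> int:
    """Compute the target fleet size from demand, not from health-check churn."""
    demand_based = ceil(m.requests_per_sec / (m.capacity_per_server * TARGET_UTILIZATION))
    if should_alert(m) and m.healthy_servers < MIN_HEALTHY_FRACTION * m.total_servers:
        # Widespread packet loss: adding servers scales into the failure rather
        # than around it, so hold the fleet steady and page a human instead.
        return m.total_servers
    return demand_based
```
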
One thing that isn't yet clear is how folks at AWS coordinated their response to the outage: AWS, after all, is actually a Slack customer, since the two companies signed a sweeping partnership deal last June. For its part, Slack signed a five-year deal with AWS in 2018 that appears to cover the majority of its cloud computing needs through 2023.

Slack has run into problems in the past when a disproportionately large number of people try to log into its service all at once. A similar outage occurred on Halloween in 2017, when a coding error kicked Slack users offline and everybody tried to log back in at the same time. "It's similar to DDoSing yourself," former Slack director of infrastructure Julia Grace told me at the time.
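
That self-inflicted stampede, every disconnected client retrying at the same moment, is the classic thundering-herd problem, and the standard mitigation is exponential backoff with jitter on reconnects. The sketch below shows the generic client-side pattern; it is not Slack's client code, and the connect callable is hypothetical.

```python
import random
import time

def reconnect_with_backoff(connect, max_attempts: int = 8,
                           base_delay: float = 1.0, max_delay: float = 60.0):
    """Retry a connection with exponential backoff and full jitter.

    Spreading retries over a randomized, growing window keeps a mass
    disconnect from turning into a synchronized reconnect storm against
    whatever servers are still healthy.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except OSError:
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))  # full jitter: sleep somewhere in [0, cap]
    raise ConnectionError(f"gave up after {max_attempts} attempts")
```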
