A 2014 breach at Twitch “was so bad that Twitch essentially had to rebuild much of its code infrastructure because the company eventually decided to assume most of its servers were compromised,” reports Vice. “They figured it would be easier to just label them ‘dirty,’ and slowly migrate them to new servers, according to three former employees who saw and worked with these servers.”

Slashdot reader em1ly shares Vice’s report (which Vice based on interviews with seven former Twitch employees who’d worked there when the breach happened):
The discovery of the suspicious logs kicked off an intense investigation that pulled nearly all Twitch employees on deck. One former employee said they worked 20 hours a day for two months, another said he worked “three weeks straight.” Other employees said they worked long hours for weeks on end; some who lived far from the office slept in hotel rooms booked by the company. At the time, Twitch had few, if any, dedicated cybersecurity engineers, so developers and engineers from other teams were pulled into the effort, working together in meeting rooms with glass windows covered, frantically trying to figure out just how bad the hack was, according to five former Twitch employees who were at the company at the time…

Twitch’s users would only find out about the breach six months after its discovery, on March 23, 2015, when the company published a short blog post that explained “there may have been unauthorized access to some Twitch user account information,” but did not let on just how damaging the hack had been to Twitch internally…. When Twitch finally disclosed the hack in March of 2015, security engineers at Twitch and Amazon, who had come to help with the incident response, concluded that the hack had started at least eight months before the discovery in October of 2014, though they had no idea if the hackers had actually broken in even earlier than that, according to the former employee. “That was long enough for them to learn entirely how our whole system worked and the attacks they launched demonstrated that knowledge,” the former employee said…

For months after the discovery and public announcement, several servers and services were internally labeled as “dirty,” as a way to tell all developers and engineers to be careful when interacting with them, and to make sure they’d get cleaned up eventually. This meant that they were still live and in use, but engineers had put restrictions on them in the event that they were still compromised, according to three former employees. “The plan apparently was just to rebuild the entire infra[structure] from known-good code and deprecate the old ‘dirty’ environment. We still, years later, had a split between ‘dirty’ services (servers or other things that were running when the hack took place) and ‘clean’ services, which were fired up after,” one of the former employees said. “We celebrated office-wide the day we took down the last dirty service!”

Another former employee tells Vice that the breach came as…

Source: Slashdot