What Equifax did wrong

Much has been written about Equifax’s poor response to their data breach. But how did they get breached in the first place, and what would have limited the damage?

On the 6th of March 2017 Apache announced a bug in Struts (CVE-2017-5638), a framework widely used to build web applications. The following day, a proof of concept showing how the bug could be exploited was publicly released on the Internet. Three days later (the 10th) Equifax was hacked via the Struts bug. The hackers then installed code on a number of servers, which they could use to transfer data out of the company.

A patch for the bug had been issued within days of its discovery. Equifax patched their vulnerable servers on the 30th of June (including the ones that had been compromised) but did not notice anything wrong.

On the 29th of July Equifax discovered they had been hacked, and on the 30th of July they booted the hackers out of their servers. They did not disclose the breach until the 7th of September, with senior executives selling large quantities of shares in the intervening period (something the SEC is investigating). When they did disclose the breach, they bungled the communication and tried to get affected customers to sign away their right to sue.

But back to the time of the hack. I’m sure that if you are responsible for IT infrastructure you will know how difficult it can be to patch servers. And four days is not a lot of time to do so. What else would have helped prevent the hack, or limited the damage?

  • Security Information and Event Management (SIEM) – these logging and alerting solutions look for suspicious events in log files and alert the IT team that a server may have been compromised (a minimal log-scanning sketch follows this list)
  • File Integrity Monitoring – often integrated with SIEM solutions, this would have raised an alert if a critical system or web configuration file had been changed (see the file-hashing sketch below)
  • Network monitoring – the hackers moved a large amount of data from Equifax systems to the Internet. This would have shown up in the firewall or server logs if the traffic had been monitored and alerting configured (see the egress-monitoring sketch below)
  • Asset management – some of the information the hackers took came from legacy databases that had not been decommissioned. These should have been identified and removed as part of a regular review
  • Access restrictions – the compromised web servers were sitting in a DMZ. Good access restrictions may have reduced the hackers’ ability to use the compromised web servers to take information from internal databases
  • Honeytokens – these decoys can be planted as files or as fake records in databases. When a hacker opens the file or queries the record, an alert is sent to the IT team (see the honeytoken sketch below)
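
To make the SIEM point concrete, here is a minimal sketch in Python of the kind of log scanning such a tool automates. It assumes a web server access log at a hypothetical path and that the request data (including headers) appears in that log; the path, patterns, and alerting method are illustrative, not a description of any particular product.

```python
# Minimal sketch of SIEM-style log alerting, assuming an access log at the
# hypothetical path below. Real SIEM products do far more, but the core idea
# is the same: scan logs for suspicious patterns and raise an alert.
import re

LOG_FILE = "/var/log/httpd/access_log"   # hypothetical path

# The Struts exploit (CVE-2017-5638) was triggered via OGNL expressions
# smuggled into HTTP request data; these are crude but useful indicators,
# assuming that data is captured in the log.
SUSPICIOUS = [
    re.compile(r"%\{.*\}", re.IGNORECASE),       # OGNL expression syntax
    re.compile(r"ognl", re.IGNORECASE),
    re.compile(r"/bin/(ba)?sh", re.IGNORECASE),  # shell command in a request
]

def scan(path: str) -> None:
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if any(p.search(line) for p in SUSPICIOUS):
                # In a real deployment this would page the security team
                # rather than print to stdout.
                print(f"ALERT: suspicious request on line {lineno}: {line.strip()}")

if __name__ == "__main__":
    scan(LOG_FILE)
```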
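
File integrity monitoring reduces to hashing critical files and comparing the hashes against a trusted baseline. The sketch below uses illustrative file paths to show the idea; real FIM tools typically also watch permissions and ownership and feed their alerts into the SIEM.

```python
# Minimal sketch of file integrity monitoring: record SHA-256 hashes of
# critical files in a baseline, then re-check them and alert on any change.
# The monitored paths are illustrative, not Equifax's actual configuration.
import hashlib
import json
import os

WATCHED = ["/etc/passwd", "/etc/httpd/conf/httpd.conf"]  # illustrative paths
BASELINE = "fim_baseline.json"

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline() -> None:
    hashes = {p: digest(p) for p in WATCHED if os.path.exists(p)}
    with open(BASELINE, "w") as f:
        json.dump(hashes, f, indent=2)

def check() -> None:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        if not os.path.exists(path):
            print(f"ALERT: watched file missing: {path}")
        elif digest(path) != expected:
            print(f"ALERT: watched file changed: {path}")

if __name__ == "__main__":
    # First run records the baseline; later runs compare against it.
    if not os.path.exists(BASELINE):
        build_baseline()
    else:
        check()
```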
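
For the network monitoring point, the core check is simply totalling how much data each internal host sends to external addresses and alerting when a host exceeds a threshold. The sketch below assumes a hypothetical CSV export of firewall flow records (source, destination, bytes); real firewall and flow-log formats differ by vendor, and a sensible threshold depends on the environment.

```python
# Minimal sketch of egress monitoring: total the bytes each internal host
# sends to external addresses and alert when a host exceeds a threshold.
# The flow log format (src,dst,bytes per line, no header) is an assumption.
import csv
import ipaddress
from collections import defaultdict

THRESHOLD_BYTES = 500 * 1024 * 1024   # 500 MB per reporting period (arbitrary)
FLOW_LOG = "flows.csv"                # hypothetical export: src,dst,bytes

def is_internal(addr: str) -> bool:
    return ipaddress.ip_address(addr).is_private

def check_egress(path: str) -> None:
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 3:
                continue
            src, dst, nbytes = row
            if is_internal(src) and not is_internal(dst):
                totals[src] += int(nbytes)
    for host, total in totals.items():
        if total > THRESHOLD_BYTES:
            print(f"ALERT: {host} sent {total / 1e6:.0f} MB to external addresses")

if __name__ == "__main__":
    check_egress(FLOW_LOG)
```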
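
Finally, a honeytoken can be as simple as a decoy record whose value should never appear in legitimate traffic. The sketch below plants a fake customer row in a SQLite database and scans a hypothetical query log for the decoy value; the table, decoy value, and log path are all assumptions for illustration.

```python
# Minimal sketch of a database honeytoken: plant a decoy record, then watch
# the database query log for its distinctive value.
import os
import sqlite3

DECOY_SSN = "900-00-0001"     # decoy value that no real customer record uses
QUERY_LOG = "db_query.log"    # hypothetical database query log

def plant(db_path: str) -> None:
    # Add a decoy row alongside real data; legitimate applications never look it up.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT, ssn TEXT)")
    con.execute("INSERT INTO customers VALUES (?, ?)", ("Decoy Person", DECOY_SSN))
    con.commit()
    con.close()

def watch(log_path: str) -> None:
    # Any query or result containing the decoy value means someone touched
    # data that no legitimate process should ever read.
    if not os.path.exists(log_path):
        return
    with open(log_path, errors="replace") as log:
        for line in log:
            if DECOY_SSN in line:
                print(f"ALERT: honeytoken accessed: {line.strip()}")

if __name__ == "__main__":
    plant("customers.db")
    watch(QUERY_LOG)
```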

If your organisation has data that it needs to protect, we would suggest that you start by understanding your assets (where they are, how they can be accessed) and your risks (how valuable they are to others, and what the impact would be if they were stolen). With this understanding, you can use a framework such as the CIS Controls to make sure you are putting the right measures in place and that any investment you make, whether in time or money, delivers the greatest reduction in risk.
