For several years, we’ve had data showing the typical behavior of a data breach victim’s stock when the company discloses an incident: the stock drops at first, recovers in about a month, and then rebounds to outperform over the six months following the breach relative to the six months prior. This predictability creates an opportunity to game the system for anyone who knows a company is about to disclose a data incident. We’ve also known, at least since the 2018 indictment of Equifax employees on insider trading charges, that insiders sometimes take advantage of that opportunity. What we didn’t know, until now, is whether such incidents were common or mere aberrations. Now, thanks to researchers at the Universities of Florida and New Hampshire, we seem to have an answer: statistically, companies that disclose data breaches show measurable increases in pre-disclosure trading patterns “consistent with”* insider trading.
This is more bad news for data breach victims, since revelations about insider trading often lead to further reputational damage, investigations, and litigation. So what’s to be done? Many companies already have a suite of tools to prevent insider trading, including training programs, pre-clearance requirements for trades, periodic broker statements submitted by employees, insider surveillance, employee attestations of compliance, close records of when insiders came to know non-public information, and blackout periods. All of those techniques remain good practice in the data breach context.
However, companies may need to take steps to sync up their cyber incident response plans with those overall governance and compliance structures.
For example, do the company’s business-as-usual compliance controls and training programs extend to tech and security staff? If a company treats its tech and security folks as purely back office, it might not be giving them enough credit. A similar point applies to external vendors, consultants, advisors, or other partners. It’s important that a company’s contracts with these external parties obligate them to strict confidentiality. This is true whether the company deals with these external parties in the ordinary course (since vendors themselves may be involved in a data incident) or in response to a suspected incident.
On the flip side, it’s important to build insider trading compliance into a company’s cyber incident response. Some compliance tools, such as insider knowledge records and blackout periods, work only once the company activates them. A well-designed cyber incident response plan should already include standards for classifying the severity of an incident and escalating it accordingly: escalate to a security line manager when a DDoS attack is detected; escalate to the CISO if a phishing attack compromises any email accounts; escalate to the CEO if… you get the idea. Whatever classification scheme a company chooses, the incident response plan should also define the point at which an incident becomes severe enough to trigger insider-trading controls. And because companies often have to upgrade an incident’s severity as the investigation unfolds, it’s worth erring on the side of caution in defining that point.
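The escalation-plus-trigger idea above can be sketched in code. This is a minimal, purely illustrative sketch, not any real company’s plan: the severity tiers, escalation targets, and the `respond` function are all hypothetical names, and the choice of `MODERATE` as the trading-controls threshold simply reflects the “err on the side of caution” point.

```python
from enum import IntEnum

# Hypothetical severity tiers; a real incident response plan
# would define its own classification standards.
class Severity(IntEnum):
    LOW = 1        # e.g., DDoS attack detected
    MODERATE = 2   # e.g., phishing compromises email accounts
    HIGH = 3
    CRITICAL = 4

# Illustrative escalation targets per tier, mirroring the examples in the text.
ESCALATION = {
    Severity.LOW: "security line manager",
    Severity.MODERATE: "CISO",
    Severity.HIGH: "CEO",
    Severity.CRITICAL: "CEO",
}

# The plan should fix the tier at which insider-trading controls
# (knowledge records, blackout periods) activate. Since severity is
# often upgraded mid-investigation, this sketch sets the threshold
# cautiously low, at MODERATE.
TRADING_CONTROLS_THRESHOLD = Severity.MODERATE

def respond(severity: Severity) -> dict:
    """Return the actions an incident at this severity should trigger."""
    return {
        "escalate_to": ESCALATION[severity],
        "start_insider_knowledge_log": severity >= TRADING_CONTROLS_THRESHOLD,
        "impose_blackout_period": severity >= TRADING_CONTROLS_THRESHOLD,
    }

# Upgrading an incident's severity simply re-runs the same policy,
# so the controls switch on as soon as the threshold is crossed.
print(respond(Severity.MODERATE))
```

The design point is that the trigger lives in the plan itself (the threshold constant), not in anyone’s ad hoc judgment during a crisis.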
* The study’s authors carefully state that the trading patterns are “consistent with” insider trading, since they don’t have visibility into who is trading and why. Of course, except for purely accidental data breaches, there will necessarily be at least one non-insider who has inside information about the breach: the hacker. And then there are some hacks, such as last week’s hack of Intel, where obtaining inside information seems to be the hacker’s very purpose.