In recent months, Americans have become aware that Hillary Clinton, throughout her tenure as Secretary of State, conducted all her official email communications via a personal, dedicated email server, and that in late 2014 Russian hackers allegedly penetrated the White House email system, gaining access to the President’s travel schedule and other unclassified information. A theme common to both stories is the potential compromise of “sensitive, but unclassified” (SBU) information residing on these unclassified email systems.
The public should be extremely skeptical of assurances from the various government officials, pundits, lawyers, and spinmeisters who assert that we need not be unduly concerned because no classified systems were penetrated, and because “only” unclassified information on unclassified email systems may have been compromised.
Well, I’m concerned. Very concerned. Because I know from firsthand experience that unclassified does not equal unimportant. Not by a long shot.
Some years ago, well before the introduction and proliferation of social media platforms like Facebook and Twitter, I managed a multidisciplinary effort to test the security of a particular organization, using a methodology known in government and industry as “Red Teaming”. Our customer needed to know what terrorists or other nefarious individuals could potentially discover about the organization’s personnel, facilities, and activities – and how that knowledge could enable them to identify exploitable vulnerabilities and carry out attacks. My assignment was to pretend I was a hypothetical “terrorist” and deploy my team to gather whatever information we could from the public domain regarding the organization’s facilities, personnel, and activities.
Unlike the hypothetical terrorists we were directed to emulate, we were issued strict rules of engagement (ROE): We were not permitted to access classified systems. We could not violate the law. However, if information was accessible online, contained in publicly available documents, overheard in conversation, or acquired via “social engineering” activities conducted by my team and me – that was all fair game. If my team or I had any doubts as to whether a planned technique complied with our ROE, we had to (and frequently did) request a legal determination before proceeding.
Over the course of the next few weeks, our team amassed a considerable volume of exploitable information. A partial list of information we identified includes:
- Key facilities associated with the organization
- Physical security and other vulnerabilities associated with those facilities
- Leadership and key employees of the organization
- Details about some of the organization’s recently concluded activities
- Planned future activities of the organization – to include the dates, times, and locations of those activities, and the key individuals involved
To the chagrin of our customer, our team’s efforts were wildly successful – despite our restrictive ROE. Our customer was not only shocked by the volume of information we were able to acquire, but alarmed by how our team planners – simulating terrorists – subsequently conceptualized and briefed ways in which terrorists could hypothetically exploit that information to conduct terror attacks. One morning, our team was hastily assembled and ordered to cease operations immediately, pending a new assignment. The date was 11 September 2001; hours earlier, ACTUAL terrorists had hijacked four passenger aircraft and flown three of them into the Twin Towers and the Pentagon. We came to learn that part of the 9/11 hijackers’ pre-mission activities involved gathering publicly available information, using some of the same methodologies employed by my team – minus the rules of engagement, the prohibitions against violating US law, and the legal determinations, of course.
In the Information Age, we have grown accustomed to having all sorts of information instantly available to us via websites, social media, mobile phones, email and other digital means. But in our relentless pursuit of communication, convergence, and convenience, we cannot become complacent about the value of our information – not only its value to us, but also its value to those who may wish us harm. The importance of information is not necessarily determined by its security classification, or by the security classification of the system on which it is housed.
Advance knowledge of “sensitive, but unclassified” information – such as a key individual’s schedule – is highly desired by terrorists. On 30 November 1989, three weeks after the fall of the Berlin Wall, Deutsche Bank President Alfred Herrhausen was headed home, relaxing in the back seat of his armored Mercedes-Benz limousine, the middle car in a three-vehicle convoy. Turning down a side street en route to Herrhausen’s residence in the sleepy Rhineland town of Bad Homburg, the first convoy vehicle passed a bicycle chained to a lamp post and tripped an infrared beam across the road that terrorists, posing as workers, had set up earlier. Hidden in the saddle bag on the bicycle was an explosively formed projectile, or EFP, precisely timed to detonate and penetrate the side doors of Herrhausen’s vehicle. The explosion severed Herrhausen’s legs, causing him to bleed to death.
No one has ever been charged in Herrhausen’s murder; German authorities believe the Red Army Faction terror group carried out the attack. Absolutely critical to the success of this sophisticated attack was advance knowledge of the route and timing of Herrhausen’s planned travel home. In other words: his travel schedule. The kind of information that in 2015 hackers might surreptitiously acquire from an SBU government email system – or perhaps from a privately owned server tucked away in the basement of an upstate New York residence.