As a member of GreatHorn’s Customer Success team, I have daily insight into threat patterns as they emerge across our customer base. While we always see a variety of threats (and some more than others), occasionally we see volumetric phishing patterns that result in temporary spikes in one particular type of threat.
Over the past several weeks, we’ve seen a huge spike in service impersonation attacks. In this blog, I’ll explain what these are, how they work, what to look for, and what you can do to prevent them.
The ALL CAPS warning from the Democratic National Committee about attempts to hack into its voter database turned out to be a false alarm.
Sometimes good security can generate bad vibes. That was the case last month, after the Democratic National Committee contacted the FBI and alerted the media over what it believed was a campaign of phishing attacks designed to gain access to its voter database.
According to published reports, the DNC and its security vendors had detected a website that presented a convincing likeness of the DNC’s Votebuilder hosted voter database, with the apparent aim of luring DNC staffers to enter their credentials into the site.
“This attempt is further proof that there are constant threats as we head into midterm elections and we must remain vigilant in order to prevent future attacks,” DNC Chief Security Officer Bob Lord said in a statement released to the media shortly after the site was detected.
In today’s heightened environment, amid warnings of pre-midterm election hacking by both Russian and Iranian actors, the news of the DNC voter hack rocketed around the globe.
Alas, the emails were a false alarm: the byproduct of an unauthorized penetration test (or “pen test”) conducted by a contractor hired by the Michigan Democratic Party. Neither the Michigan Democrats nor, it appears, the contracted pen testing firm told the national party about the tests. Within hours of alerting the FBI and (likely) the media, the DNC had to make an embarrassing about-face and walk back its warning.
Nobody doubts that user education is an important element of any internal security program. But the DNC “phishing” incident highlights the fact that tools like penetration tests and phishing awareness campaigns are double-edged swords. In the wrong hands – or just in incompetent hands – they can be counterproductive to an organization’s security. Rather than promoting good (secure) behavior, they can sow disruption or – worse – encourage complacency. (“What, another ‘spot the phishing email’ test?”)
At their best – that is: authorized, consensual, announced – phishing simulation tests train users to recognize the characteristics of “phishy” emails and respond to them accordingly (report them, don’t click on anything). But, as this blog post notes, penetration testing outfits, which typically license third-party phishing simulation tools, are often the last groups you want conducting phishing simulations. That’s because they are more interested in exploiting the gullibility of users than in using the tool for its intended purpose: promoting secure user behavior.
So what is an organization supposed to do? The fact is that every organization that employs homo sapiens is highly vulnerable to phishing attacks, because those attacks play on our weaknesses as humans. User education can help lessen the likelihood of any single user falling for a phishing email, but it will never prevent attacks outright.
Security gateways, desktop antivirus, and anti-spam tools are well entrenched in corporate networks and still take point in the battle against phishing and other email-borne threats. But the truth is that better tools and technologies are needed that can identify and block phishing emails before they reach a user’s inbox. And, because no detection technology is perfect, these tools also need to flag merely suspicious or high-risk messages in a way that nudges employees to do the right thing.
Fortunately, technology like machine learning is making it possible to identify suspicious and malicious email messages. Coupled with user training, this technology can reduce the likelihood that any user will encounter a phishing email, while also helping to ensure that, should they encounter one, they know what to do with it.
Paul F. Roberts is the Editor in Chief at The Security Ledger. Check out more of Paul’s writing at SecurityLedger.com.
Today, after four decades in existence, and more than 25 years’ worth of consistent, daily use, email remains the most reliable, ubiquitous, and constant communication platform for both personal and professional interaction. As users, we may grumble about its ubiquity or its misuse, but we have an inherent trust in email bred from familiarity and functionality.
So it’s of little surprise that email has also become the single largest platform for Internet Crime, at least as reported by the FBI in its annual Internet Crime Report. Business email compromise alone represents 48% of the reported $1.4B financial losses from Internet crime in 2017. That’s 10x more than the reported losses from identity theft, and 3x more than the second most lucrative Internet crime technique (confidence fraud / romance).
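A quick back-of-the-envelope check makes those ratios concrete. The dollar figures below are approximations drawn from the FBI’s 2017 Internet Crime Report (BEC roughly $676M, confidence fraud/romance roughly $211M, identity theft roughly $67M, rounded to the nearest million); treat them as illustrative rather than exact:

```python
# Approximate 2017 FBI Internet Crime Report figures (rounded).
TOTAL_LOSSES = 1_400_000_000           # ~$1.4B total reported losses
BEC_LOSSES = 676_000_000               # business email compromise
ROMANCE_LOSSES = 211_000_000           # confidence fraud / romance
IDENTITY_THEFT_LOSSES = 67_000_000     # identity theft

# BEC's share of all reported losses, and how it compares to the
# other two categories cited above.
print(f"BEC share of total losses: {BEC_LOSSES / TOTAL_LOSSES:.0%}")           # 48%
print(f"BEC vs. identity theft:    {BEC_LOSSES / IDENTITY_THEFT_LOSSES:.0f}x") # 10x
print(f"BEC vs. confidence fraud:  {BEC_LOSSES / ROMANCE_LOSSES:.1f}x")        # 3.2x
```

In other words, a single scam category accounts for nearly half of all reported losses, which is why BEC dominates the rest of this discussion.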
Defined by the FBI as “sophisticated scams [that] are carried out by fraudsters compromising email accounts through social engineering or computer intrusion techniques to conduct unauthorized transfer of funds,” business email compromise is just one of many email-based threats facing organizations today.
So why are such scams so successful? In June, GreatHorn conducted a survey of 300 business professionals – most of whom were involved in email security in some way – to understand the current email security environment. We benchmarked threat frequency, prevalence, types, defenses, and remediation requirements to see what kind of patterns we could find.
As you see in this infographic, we found a number of clues that pointed to the root cause behind the success of social engineering scams such as business email compromise and other spear phishing techniques.
For example, we learned that the “average” user either doesn’t recognize email threats for what they are or dismisses them under the rather innocuous heading of “spam.” We know this because two-thirds (66%) of average users could not recall seeing any of the following email threats in their inboxes:
- Executive or internal impersonations
- External impersonations (e.g. customers, vendors, partners)
- Wire transfer requests
- W2 requests
- Payload / malware attacks
- Business services spoofing (e.g. ADP, Docusign, UPS)
- Credential theft
And yet when asked the same question (explicitly about what reaches inboxes, not a quarantine folder), 85% of respondents that had some involvement in email security indicated that one or more of those threats was hitting inboxes.
That discrepancy demonstrates a dangerous perception gap within organizations – the exact perception gap that criminals exploit. We’ve moved beyond the easy-to-spot Nigerian prince schemes of yesteryear. Sure, there are still easy-to-spot mass phishing attacks, but in some ways those attacks increase the danger precisely because they are so easy to see. The user quickly identifies them as a danger, dismisses them as obvious, and pats themselves on the back for being perceptive enough to catch them.
That self-congratulatory complacency may lead to an inability to recognize the real threats – the highly targeted, sophisticated, and well-planned attacks that use social engineering and research to replicate, impersonate, and redirect “real” communication. Our research indicates that most existing email security solutions are failing to catch impersonations (nearly half of our respondents – 46% – report impersonations, including 64% of email security professionals). Such emails often arrive without obvious triggers such as an attachment or even a link – they use urgency (5pm on a Friday), conciseness (typically just a couple of sentences), seniority (often impersonating a superior), and fear to drive the desired outcome. That’s why it makes sense that impersonations are the email threat email security pros worry about most.
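To make the pattern concrete, here is a deliberately naive sketch of the kind of signals described above – a display name that matches a known executive paired with a non-matching address, urgent language, and a terse, payload-free body. The directory entry, cue list, and thresholds are all hypothetical, and real products (including GreatHorn’s) rely on far richer signals such as sending history, authentication results, and machine learning models:

```python
# Hypothetical internal directory mapping executive display names
# to their legitimate addresses.
EXECUTIVES = {
    "Jane Doe": "jane.doe@example.com",
}

# Illustrative urgency cues; a real system would use much richer features.
URGENCY_CUES = ("urgent", "asap", "right away", "before 5pm", "wire")

def impersonation_score(display_name: str, address: str, body: str) -> int:
    """Score a message for hallmarks of executive impersonation."""
    score = 0
    expected = EXECUTIVES.get(display_name.strip())
    if expected and address.strip().lower() != expected:
        score += 2  # display name matches an executive, address does not
    if any(cue in body.lower() for cue in URGENCY_CUES):
        score += 1  # urgency language in the body
    if len(body.split()) < 40:
        score += 1  # terse, payload-free message
    return score

# A short, urgent "CEO" request from a lookalike domain scores high:
msg_score = impersonation_score(
    "Jane Doe", "jane.doe@examp1e.com",
    "Are you at your desk? I need a wire sent before 5pm today.")
print(msg_score)  # 4
```

Note what this toy scorer never looks at: attachments or links. That is exactly why payload-free impersonations slip past filters tuned to scan for malicious content.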
More concerning, our study indicates that one in five organizations has to take some kind of significant remediation action (e.g. suspending compromised accounts, running PowerShell scripts, resetting compromised third-party accounts) on a weekly basis as a result of email threats that bypassed their security defenses. And on average, our panel deployed approximately three separate security tools to protect their environments from email threats.
Given those remediation requirements, it’s no wonder that 56% reported major technical issues with their current email security solution, including:
- “Doesn’t stop internal threats (e.g. if a user account is compromised)” – 35%
- “Missing payload attacks” – 16%
- “Missing payload-free attacks (e.g. impersonations, social engineering)” – 20%
- “Weak or no remediation capabilities” – 19%
- “Negative impact on business operations (e.g. too many false positives)” – 21%
We’ll dive more into the challenges with today’s common email security platforms and our results in upcoming blogs. In the meantime, however, we’d love to hear what you think. What do these numbers mean to you?
Want to download the full report? You can do so here.