December is an active month for cybercriminals – the uptick in holiday shopping, end-of-year budgets and contracts, preparations for tax season, end-of-year surveys, and a generally frenetic pace create a ripe environment for phishing scams. It’s typical this time of year to see a surge in phishing attempts that rely on old standby techniques, such as DHL or FedEx impersonations or fake invoices sent to accounting departments.
Last week, however, came with a twist – a number of high-profile ransom-driven phishing scams that prey on fear.
In a typical phishing scam, you can usually find three key characteristics: a trusted sender or brand, urgent language, and some kind of required response.
In these bitcoin-driven scams, attackers substitute fear for the trusted-sender component. Last week saw a dramatic rise in bomb threats demanding bitcoin payment (which so far appear to be hoaxes rather than genuine threats), and since this past summer there has been a burst of sextortion scams.
Ultimately, the pattern is the same – threaten personal damage (physical or otherwise) unless the recipient transfers a certain amount of bitcoin to one of several circulating accounts. These scams often include some level of personalization to give the threat greater credibility.
In the business world, the same pattern appears in other financially motivated phishing attacks: the target receives a plain-text, often personalized email with no links or attachments that requests a wire transfer for a late invoice, or W-2 information for a former employee. Such requests are rarely legitimate but contain enough detail to encourage action.
From an email security perspective, such emails either bypass traditional email security tools entirely – because they are “payload-free,” with no attachments or associated links – or they’re quarantined, which can be problematic if they are legitimate. This binary approach to email security (everything is either good or bad) belies the reality of today’s threat landscape, which exploits the dangerous gray area of everyday communication. The challenge, of course, is that a percentage of legitimate email follows this same pattern, so the good/bad approach can result in either exposure for the company or delayed business operations due to blocked or quarantined emails.
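The gray-area problem lends itself to scoring rather than a binary verdict. As a rough illustration – a hypothetical sketch, not GreatHorn’s actual detection logic, with made-up keyword lists and weights – a payload-free email can be assigned a risk score from simple signals like urgency language and financial-request keywords:

```python
# Hypothetical risk signals for "payload-free" phishing; the keyword
# lists and weights below are illustrative, not a production ruleset.
URGENCY = ["urgent", "immediately", "today", "overdue", "asap"]
REQUESTS = ["wire transfer", "w-2", "w2", "gift card", "invoice", "payment"]

def risk_score(subject: str, body: str, has_links: bool, has_attachments: bool) -> int:
    """Score a message 0-100; higher means more scrutiny is warranted."""
    text = f"{subject} {body}".lower()
    score = 0
    if not has_links and not has_attachments:
        score += 20  # payload-free messages evade link/attachment scanners
    score += 15 * sum(1 for w in URGENCY if w in text)
    score += 25 * sum(1 for w in REQUESTS if w in text)
    return min(score, 100)

def verdict(score: int) -> str:
    # A three-way outcome instead of a binary allow/block decision.
    if score >= 60:
        return "quarantine"
    if score >= 30:
        return "deliver-with-banner"
    return "deliver"
```

The point of the middle “deliver-with-banner” tier is exactly the gray area described above: the message reaches the recipient, but with added context instead of a silent pass or a hard block.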
Security teams should use the current ransom scams as an impetus to reconsider how such emails should be handled not just from a technology perspective, but also from a business process and user education mindset. For example – what’s the process for authorizing wire transfers or transmitting confidential information? How should physical security threats be handled and to whom should they be reported? How is that information being communicated and reinforced to employees?
Once such decisions are made, technology can not only detect the threats but also be a powerful enabler and reinforcement of that process. For many of GreatHorn’s customers, for example, such emails arrive with a warning banner that reminds the recipient of the established business process and indicates whether the email deserves extra scrutiny.
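Mechanically, a banner of this kind can be as simple as prepending policy text to the message body before delivery. A minimal sketch using Python’s standard email library – illustrative only, with invented banner wording, and not how any particular product implements it:

```python
from email.message import EmailMessage

# Hypothetical banner text; in practice this would restate the
# organization's own verification policy for financial requests.
BANNER = ("[CAUTION] This message requests a financial transaction. "
          "Per company policy, verify wire transfers by phone before acting.")

def add_banner(msg: EmailMessage) -> EmailMessage:
    """Prepend a plain-text warning banner to an email body."""
    body = msg.get_content()
    msg.set_content(f"{BANNER}\n\n{body}")
    return msg
```

Keeping the banner inline with the body (rather than in a separate notification) means the reminder is visible at the exact moment the recipient decides whether to act on the request.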
In 2019, we’ll be writing more about the Email Security Lifecycle – and GreatHorn’s unique ability to support all aspects of an organization’s comprehensive email security strategy. Stay tuned!
As a member of GreatHorn’s Customer Success team, I have daily insight into threat patterns as they emerge across our customer base. While we always see a variety of threats (and some more than others), occasionally we see volumetric phishing patterns that result in temporary spikes in one particular type of threat.
Over the past several weeks, we’ve seen a huge spike in service impersonation attacks. In this blog, I’ll explain what these are, how they work, what to look for, and what you can do to prevent them.
The ALL CAPS warning from the Democratic National Committee about attempts to hack into its voter database turned out to be a false alarm.
Sometimes good security can generate bad vibes. That was the case last month, after the Democratic National Committee contacted the FBI and alerted the media over what it believed was a campaign of phishing attacks designed to gain access to its voter database.
According to published reports, the DNC and its security vendors had detected a website that presented a convincing likeness of the DNC’s VoteBuilder-hosted voter database, with the apparent aim of luring DNC staffers into entering their credentials.
“This attempt is further proof that there are constant threats as we head into midterm elections and we must remain vigilant in order to prevent future attacks,” DNC Chief Security Officer Bob Lord said in a statement released to the media shortly after the site was detected.
In today’s heightened environment, amid warnings of pre-midterm election hacking by both Russian and Iranian actors, the news of the DNC voter hack rocketed around the globe.
Alas, the alarm was false: the phishing site was the byproduct of an unauthorized penetration test (or “pen test”) conducted by a contractor hired by the Michigan Democratic Party. Neither the Michigan Democrats nor, it appears, the contracted pen-testing firm told the national party about the tests. Within hours of alerting the FBI and (likely) the media, the DNC had to make an embarrassing about-face and walk back its warning.
Nobody doubts that user education is an important element of any internal security program. But the DNC “phishing” incident highlights the fact that tools like penetration tests and phishing-awareness exercises are double-edged swords. In the wrong hands – or simply incompetent hands – they can be counterproductive to an organization’s security. Rather than promoting good (secure) behavior, they can sow disruption or – worse – encourage complacency. (“What, another ‘spot the phishing email’ test?”)
At their best – that is, authorized, consensual, and announced – phishing simulation tests train users to recognize the characteristics of “phishy” emails and respond to them accordingly (report them, and don’t click on anything). But, as this blog post notes, penetration-testing outfits, which typically license third-party phishing simulation tools, are often the last groups you want conducting phishing simulations. They are more interested in capitalizing on the security weaknesses of gullible users than in using the tool for its intended purpose: promoting secure user behavior.
So what is an organization supposed to do? The fact is that every organization that employs homo sapiens is highly vulnerable to phishing attacks, because those attacks play on our weaknesses as humans. User education can help lessen the likelihood of any single user falling for a phishing email, but it will never prevent attacks outright.
Security gateways, desktop antivirus, and anti-spam tools are well entrenched in corporate networks and still take point in the battle against phishing and other email-borne threats. But the truth is that better tools and technologies are needed – ones that can identify and block phishing emails before they reach a user’s inbox. And, because no detection technology is perfect, these tools also need to flag merely suspicious or high-risk messages in a way that nudges employees to do the right thing.
Fortunately, technology like machine learning is making it possible to identify suspicious and malicious email messages. Coupled with user training, it can reduce the likelihood that any user will encounter a phishing email, while also helping to ensure that, should they encounter one, they will know what to do with it.
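To make the machine-learning idea concrete, here is a toy word-frequency Naive Bayes classifier – far simpler than any commercial detection engine, with invented labels and training phrases, shown only to illustrate how a model can learn to separate phishing language from normal business email:

```python
import math
from collections import Counter

class NaiveBayes:
    """Tiny bag-of-words Naive Bayes text classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text: str) -> str:
        words = text.lower().split()
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in ("phish", "ham"):
            # log prior + sum of smoothed log likelihoods per word
            score = math.log(self.doc_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

A real system would train on millions of messages and combine many more signals (sender reputation, authentication results, relationship history), but the principle is the same: let labeled examples, not hand-written rules, determine what “suspicious” looks like.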
Paul F. Roberts is the Editor in Chief at The Security Ledger. Check out more of Paul’s writing at SecurityLedger.com or click here to subscribe.