By: Mark Hatfield, GreatHorn Customer Success Manager

This blog outlines the anatomy of phishing attacks and GreatHorn’s adaptive, anomaly-based detection capabilities. By moving beyond static good/bad analysis, GreatHorn customers can better detect and stop sophisticated phishing attacks that include both social engineering tactics and payloads.

Identifying phishing attacks is a complex and nuanced endeavor. Attackers use a variety of techniques to disguise their intent or coerce compliance, but every well-crafted phishing attempt includes two key components: social engineering and the payload. Think of social engineering tactics and the payload as the head and tail of a fish: the head (representing the social engineering component) goes first, followed by the tail (representing the payload). By scoping policies that identify both elements, we can greatly improve our confidence about whether a given email is suspicious.

Designed to encourage compliance with the attacker’s request and fool the recipient, the social engineering component of an attack typically falls in one of two categories: coercion or impersonation.

Coercion is typical of the bitcoin extortion attacks that we’ve seen often in recent months. In these scenarios, the attacker makes no attempt to convince the victim that the sender is someone they know. Rather, the attacker simply obfuscates their identity and attempts to threaten or blackmail the victim.

A common coercion technique is to say, “I have compromising information about you, and I will send it to your friends, family, and colleagues unless you comply.” Coercion is often combined with a direct spoof of the recipient’s email address in the From: field to encourage the impression that the attacker has control of the victim’s accounts.
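This kind of self-spoof is directly detectable. The sketch below (illustrative only, not GreatHorn's implementation) flags a message whose From: address matches the recipient's own address; in practice such a check would be combined with authentication results, since a genuine note-to-self passes SPF/DKIM while a spoof typically does not.

```python
from email.utils import parseaddr

def is_self_spoof(from_header: str, recipient: str) -> bool:
    """Flag mail whose From: address matches the recipient's own address.

    A legitimate self-addressed message normally originates from the
    recipient's own mail server and passes SPF/DKIM; a coercion attack
    that spoofs the From: field typically fails those checks, so this
    signal should be weighed alongside authentication results.
    """
    _, from_addr = parseaddr(from_header)
    return from_addr.lower() == recipient.lower()
```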

Impersonation techniques, however, are more variable. One of the more effective impersonation techniques is a display name spoof. Display name spoofing is effective because distracted, busy email users rarely look past the display name to verify the sender’s actual address. And since most email systems allow users to specify how their name is displayed, creating an account with the desired display name is trivial in most cases.
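The core of a display name spoof check can be sketched in a few lines. This is a simplified illustration (the `KNOWN_SENDERS` mapping is hypothetical): it flags a message that borrows a known person's display name but arrives from an address never before associated with that person.

```python
from email.utils import parseaddr

# Hypothetical mapping of display names to addresses previously seen for them.
KNOWN_SENDERS = {
    "Pat Smith": {"pat.smith@example.com"},
}

def display_name_spoof(from_header: str) -> bool:
    """Return True when a message uses a known person's display name
    but comes from an address we have never associated with them."""
    name, addr = parseaddr(from_header)
    known = KNOWN_SENDERS.get(name)
    return known is not None and addr.lower() not in known
```

An unknown display name returns False here; only the combination of a familiar name and an unfamiliar address is treated as suspicious.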

Other spoofing techniques are more difficult to execute and detect. Commonly spoofed email metadata includes the From: address, the Return-Path, and the sending IP address. These come under the headings of direct spoof or sender anomaly, indicating that some of the metadata is inconsistent with known sending patterns.
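One simple sender-anomaly signal of this kind is a mismatch between the From: domain and the Return-Path domain. The sketch below is illustrative only; such mismatches also occur in legitimate bulk mail, so this is a contributing signal rather than a verdict on its own.

```python
from email.utils import parseaddr

def return_path_mismatch(from_header: str, return_path: str) -> bool:
    """Flag a mismatch between the From: domain and the Return-Path domain.

    Legitimate mailing-list and marketing mail often triggers this too,
    so the result should be weighed against known sending patterns.
    """
    def domain(header: str) -> str:
        _, addr = parseaddr(header)
        return addr.rsplit("@", 1)[-1].lower()

    return domain(from_header) != domain(return_path)
```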

The goal of an impersonation attack is to convince the recipient that a relationship exists between the sender and recipient when it does not. Direct measurement of relationships, such as GreatHorn’s Relationship Analysis Factors, is a powerful tool for surfacing this kind of deception.

The payload, or weaponized component of an email attack, is usually a malicious file, a malicious URL, or a direct call to action. These categories are complicated by the fact that a URL can be hidden inside a file, a URL can point to a file on a sharing site that delivers malware, and any of these payloads can be weaponized after arriving in the user’s inbox.

When an attachment or URL is identified as a known threat, a traditional, binary block/allow process makes sense. A known attack vector can simply be excluded from the environment. But when the payload is a zero-day attack, is weaponized post-delivery, or is simply a direct call to action, protecting the end user requires a more nuanced approach.

By crafting policies that identify a clear social engineering component combined with a possible threat vector (such as commonly exploited filetypes or suspicious URLs), GreatHorn can proactively flag messages for the end user, along with the specific concerns raised by analysis.

For example, if we spot an email that uses the CEO’s display name, but comes from an unknown email address, we could conclude that this is just an unknown private address. However, once we detect that the email contains a suspicious URL, “urgency” language (phrases like “ASAP,” “urgent task,” etc.), and the CEO has never emailed the company using this address, we now have more than enough information to apply a warning banner to the email. Depending on how risk-averse the organization is, we may even quarantine it.
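The scenario above amounts to combining independent signals into a single action. The sketch below is a minimal illustration of that idea (the signal names, weights, and thresholds are hypothetical, not GreatHorn's actual policy engine): impersonation, payload, and urgency signals accumulate into a score that maps to allow, banner, or quarantine.

```python
import re

# Hypothetical urgency phrases; a real policy would use a richer lexicon.
URGENCY = re.compile(r"\b(asap|urgent|immediately|right away)\b", re.IGNORECASE)

def policy_action(display_name_matches_ceo: bool,
                  address_known_for_ceo: bool,
                  has_suspicious_url: bool,
                  body: str,
                  risk_averse: bool = False) -> str:
    """Combine independent signals into an action: allow, banner, or quarantine."""
    score = 0
    if display_name_matches_ceo and not address_known_for_ceo:
        score += 2  # impersonation signal: familiar name, unknown address
    if has_suspicious_url:
        score += 2  # possible payload
    if URGENCY.search(body):
        score += 1  # social-engineering language
    if score >= 4:
        return "quarantine" if risk_averse else "banner"
    if score >= 2:
        return "banner"
    return "allow"
```

A display name match from an unknown address is enough for a banner on its own; only the full combination of signals, in a risk-averse configuration, escalates to quarantine.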

Of course, not all policies require this level of polish and detail. If an email contains a known malicious link, that is plenty of reason to quarantine the email or block the link. But for most threats, surfacing the elements of the attacker’s strategy will provide more robust and proactive policy options.

GreatHorn’s cloud-native email security platform protects Microsoft Office 365 and Google G Suite customers from both malware threats and sophisticated social engineering attempts. In one Fortune 500 company, we identified more than 50,000 threats (business email compromise, credential theft, malicious links, malicious URLs, and more) that were missed by both a traditional secure email gateway and Microsoft ATP.