Email security requires a more holistic approach to guard against business email compromise, impersonations, and credential theft attacks. In this webinar, security expert Paul Roberts of Security Ledger and GreatHorn CEO Kevin O’Brien discuss the limits of awareness training, how to make it more effective, the importance of integrated incident response, and why “100% prevention” should be a red flag.
Ultimately, we’ll discuss how email protection requires more than a “preventative” mindset and how the above elements contribute to a holistic, full lifecycle approach to email security that is more efficient and effective than perimeter-based approaches.
LORITA BA: Hello, everyone, and thank you for joining us today for today’s webinar. My name is Lorita Ba. I’m the vice president of marketing at GreatHorn. And I’m joined today by Paul Roberts of the Security Ledger. Hi, Paul.
PAUL ROBERTS: Hey there, how are you doing, Lorita?
LB: Good, thanks. And Kevin O’Brien, CEO of GreatHorn. Hi, Kevin.
KEVIN O’BRIEN: Hi, there.
LB: We’ll be talking today about how to spot and stop business email compromise attacks, specifically talking about some of the challenges with some of today’s email security products, as well as the processes and other things that you need to think about beyond technology to really create a robust email security strategy for your organization. I’m going to go ahead and turn it over to Paul, who’s going to walk us through the agenda. Paul?
PR: Hey, thanks Lorita. And thank you to everybody who joined in today, who’s joining in. My name is Paul Roberts and I am the editor in chief at the Security Ledger. We are a cybersecurity news website and podcast. We’ve got email newsletters all focused around cybersecurity and the internet of things, so if you haven’t checked us out, please do. Today, we’ve got a pretty tight agenda. We’re talking, as you know, about business email compromise or CEO impersonation attacks, however you want to phrase them. We’re going to start off just sort of talking about what these attacks are, what their characteristics are. We’re going to talk about some of the challenges that organizations such as yours face in defending against these attacks, and there are many challenges to that beyond the challenges of traditional email security, and how to leverage threat intelligence and create a full lifecycle approach to identifying, spotting, blocking, and denying business email compromise attacks. And then we’re going to talk really about operationalizing that, so getting from wherever you are now and the processes you use now to having a robust and pointed response to this threat, which is not a new threat but it is a very fast-evolving threat and a very potent one. And helping me to do that is going to be my co-presenter, Kevin O’Brien, from GreatHorn. And then, at the end, we’re going to be doing Q&A. There is a Q&A feature here on the GoToMeeting interface, and if you have questions, or Kevin or I say things that pique your interest, do us a favor and just type in a question and we will get to it at the end of our conversation. And next slide, Kevin. Great. OK, so I’m going to start off and we’re going to talk just generally at a high level about understanding business email compromise attacks and what they are. So that’s going to be our first topic. And Kevin, go ahead and advance us to the next slide. OK.
So business email compromise, otherwise known as CEO impersonation attacks; sometimes you hear them referred to as payroll compromise attacks because many of them target payroll, individuals involved in managing payroll at organizations, and it’s a way to get money out. They go by many different names, but they have some common characteristics. These are generally targeted email attacks, so it’s not just like the undifferentiated spam that you get in your inbox. These are the product of a fair amount of research, open-source or otherwise, on the target organizations or individuals within the target organization, and they have a wide range of targets. You will read about impersonation attacks targeting small businesses, midsize and large firms, so this is something that is not limited to either a particular industry or type of company. Often, in my experience as a reporter writing about this, you do see them targeting firms that have multiple operations, say different offices in different countries, and gaming the breakdown in communication between different parts of an organization to arrange wire transfers or other types of shenanigans. Ninety percent of these start with some kind of targeted email attack, so email is not the only avenue of attack, but it is by far the largest. Some of them might start by phone or outreach that way, or some other communications medium, but at some point, most of these involve email exchanges of one kind or another. And they often involve other types of compromises as well: that could be the placement of malicious software, the transfer of sensitive account information, or the theft of that information that could allow the attackers to take control of an account within an organization, and also the transfer or theft of IP.
So they’re not all payroll compromise; they’re not all aimed at getting banks to move money to an attacker’s account, although very often that is the case. They may have other objectives as well, whether that’s just gaining access to the network for some kind of corporate espionage or nation-state espionage, or actually the theft and transfer of data. And in terms of examples of recent business email compromise attacks, honestly, you can set up a Google alert on them and you’re going to get pretty regular hits in your inbox. I read about one recently, I think Naked Security wrote about it a couple days ago, about a woman in the United States who was trying to sell property she had inherited in Australia. The fraudsters interjected themselves in the email thread between the woman, the agent in Australia, and the attorneys, and were able to get this woman to transfer to them her banking account information and other sensitive credentials, and then also wire 150 thousand dollars that sort of disappeared into the ether. She thought she was facilitating the sale of this property in Australia and in fact she was just giving the attackers money. As a recent corporate example, the French firm Pathé was the victim of a fairly large business email compromise attack of some millions of dollars, 10, 15, 20 million dollars. That was a situation, again, where the attackers were able to play a branch office of Pathé off against the main office, pose as a senior executive at headquarters, and arrange for money transfers supposedly to purchase a company overseas, and those were assumed to be legitimate correspondence by the folks in the branch office, and they allowed the transfers to go through.
That’s a pretty common scenario, but there are a lot of these, and to talk a little bit more about how these play out in real time, Kevin, I’m going to let you take that and talk about why business email compromise attacks in this year, 2019, are such a potent threat, whereas most people kind of think of email as almost like legacy technology, you know?
KO: Yeah, no, it’s a great point, Paul. And let me broaden that before we go down into business email compromise specifically, which will be momentarily, and talk for a minute about why we’re thinking about email security in 2019. And the way that this typically is posed as a question to me is “Hey, wait a minute. I thought we solved this thing.” Because if you look at the market for email security technology, it’s one of the older parts of the overall cybersecurity landscape, and we have been doing work in this space for coming on two decades. Email security is an evolutionary topic, though, and the things that worked in 2014 might not work in 2019, and the technologies that we built in 1997 probably don’t have as much applicability today to the types of problems and threats that we see. And that dynamic threat landscape is compounded, with respect to why this is so important, by the fact that we have seen huge infrastructure change with respect to email overall in the last four years. There’s an increasing number of organizations that have migrated from a legacy on-premise server, Exchange or, in some cases, other technologies, to running their email through a cloud provider like Microsoft or Google. And in doing that, they’ve increased some of the foundational security of those platforms. It’s much harder for somebody to get access directly to the box when it’s sitting in a data center that Microsoft manages, but they’ve also changed the nature of how that email system works, and they have less control in some places than they did when they could set up a perimeter security tool and just block things from getting into the mail environment at all.
And that idea, that we’re in this evolutionary moment from a perimeter security model to a cloud security model, echoes what you see in the industry at large, where we’ve had cloud adoption drive entire new markets. Things like cloud access security broker technology circa 2014 were a reaction to the firewalls that used to be how we protected file servers, and suddenly that paradigm shifted when the entire nature of the underlying infrastructure moved into this transformation phase and we started seeing things go into the cloud. Well, that’s happening with email. And email is more important, I would argue, than some of these on-prem to cloud migration stories for things like files, because email is the largest threat surface we all have. And what I mean by that: well, there’s an analyst firm that did some research last year, and I was chatting with one of the analysts at an event that he was speaking at, and he told me that the results of that survey showed that white-collar professionals will open and read 100% of the email they receive. They might not act on all of it, but they will always go into their inbox and at least see what’s there, click through it briefly. And that means that if you look at how we have historically measured the efficacy of a security product, “Oh, it’s 95% effective; it’s 98% effective,” those one, two, or three percent of attacks that get through represent a vast threat surface. And so if you put all of this together, you start to see why we’re talking about the email security market today and still talking about it 20 years after some of the original industry players released the first versions of their email security products.
And more importantly, the proof is in what we find from the other side of the equation, from the blue teams that are working to identify and respond to these threats, because one in five security professionals that we interviewed over the course of 2018 reported they were, on a weekly basis, taking direct remediation action, meaning that something got around all the technologies these companies had invested in, reached their user base and constituencies, and was significant enough that they had to go, often very laboriously, and pull things out, do remediation work. So we have a moment where a very small percentage of threats are able to bypass broad-based industry solutions, and in doing so are causing significant amounts of financial risk, and in many cases, like the example that you gave just a moment ago, Paul, causing real damage to organizations that are going to have to go and remediate something that manages to get around these perimeter security tools. I want to get really practical about this and look at some real-world examples. Paul, why don’t you take us through something that I think you highlighted to me.
PR: Yes, and to anyone who is joining us from the US, this email here that we’ve excerpted is probably familiar; if you haven’t seen it before, you’ve probably read about it. This is of course the famous email from the Fancy Bear group to John Podesta, who was working within the Hillary Clinton presidential campaign, and this was the content, basically, minus the images, of the email that he received, supposedly from Google account security, saying “Someone just used your password to try and sign into your Google account, [email protected],” and the details of that, the location, “the Ukraine, we stopped this attempt but you should change your password,” and here is the password link. So this is what he received. Again, it was dressed up to look like a message that you would receive from Google account security, so it kind of passed the sniff test in the look and feel; obviously there are no obvious spelling errors or language problems that might have tipped him off. But when you look closely, of course, there are things that should have tipped him off. First of all, the link to change the password is a shortened link, and it’s not even shortened using Google’s URL shortener, right, which would be the “goo.gl”; it’s Bit.ly. That’s fishy, and obviously if he were to have moused over that, it would show him linking out to a site that was not a Google site, let alone taking him into his Google security settings, which is generally what Google would ask you to do, review your security settings. Now in this case, John Podesta, we actually know, did reach out to the IT lead for the campaign and said, “What should I do about this?” And although it’s a matter of some dispute, he was told that it was a legitimate email and to go ahead and follow its instructions, and of course the rest is history. But this gives you a sense of the type of preparation that goes into these.
Obviously knowing his email, first of all, replicating pretty closely the look and feel of a real account warning email from Google, and then obviously not screwing up on the language, the changes — or the tells in this — are pretty subtle. If you really dug in on both the reset link and also if you look up there, the “accounts.googlemail”, the deprecated domain, that might be a giveaway as well. But otherwise, pretty hard to spot this one out of the box, which is why they’re so effective. And we’re going to take another look at an even more subtle, even more stealthy attack with our next slide. And I’m going to hand the mic back over to Kevin.
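The tells Paul walks through, a shortened link and a destination that doesn’t belong to the claimed sender, can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual detection logic; the shortener list and the domain-matching rule are deliberate simplifications.

```python
import re
from urllib.parse import urlparse

# Domains commonly used by URL shorteners. A real product would use a much
# larger, curated list; these entries are illustrative only.
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co", "ow.ly"}

def suspicious_links(email_body, claimed_domain):
    """Return links that are shortened or don't belong to the claimed sender.

    A message claiming to come from Google account security should link to
    google.com infrastructure; a bit.ly link is an immediate red flag.
    """
    flagged = []
    for url in re.findall(r"https?://[^\s<>\"]+", email_body):
        host = (urlparse(url).hostname or "").lower()
        if host in SHORTENER_DOMAINS:
            flagged.append(url)  # shortener hides the true destination
        elif host != claimed_domain and not host.endswith("." + claimed_domain):
            flagged.append(url)  # link points somewhere other than the sender
    return flagged
```

Note the suffix check requires a leading dot, so a lookalike registration such as "evilgoogle.com" would still be flagged; real platforms layer far more signals on top of checks like this.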
KO: Yeah, so 2016 sees something like what we get from the John Podesta email. This is an attack that we would see in 2019, and this is an example, but it’s one that our threat research team found last week, and we published some research about this and it made its way around various news sites. This hit a wide range of different kinds of customers, so what I’m showing you is a real version of a business email compromise attempt. Now, what the fraudsters are doing here is that they are sending a message to senior executives at companies, both public and private. We found examples both in the US and abroad. And it looks like — it might be a little hard to see on the screen, and so I’ll describe it — an email that is related to a board meeting being rescheduled, and it’s asking for some scheduling assistance, and it looks like it’s coming from a poll from Doodle, which is a polling application for doing exactly this kind of thing. Note a few things about this, right? This is not using a deprecated domain. In this case, this is a rewritten spoof, a business email compromise spoof, that was delivered to a user saying, “You’ve got a new message; look at the days for availability; expand the choices.” There’s a real set of links to the Doodle application for iOS and Android. That’s really how Doodle sends mail out, and everything about it looks like a completely legitimate message. And notably, the attackers here are leveraging the fact that the mobile client experience — the bottom right of what’s on the slide — is not what you would see if you were looking at it on the desktop. So on the desktop — the larger Outlook interface that we have up here — it looks like it comes from “meetings”; they’ve rewritten the display name, and that will bypass many of the legacy tools that are trying to do something about display-name spoofing. Why does it bypass them?
Because it doesn’t say “Kevin O’Brien,” it says “meetings.” OK, fine. The mobile environment is actually a little bit different. The mobile client says, “If this mail comes from your address and it’s to you, we’re going to write that out as ‘note to self’ because that’s helpful.” It’s actually not very helpful, because when I see “note to self,” there’s no indicator for me that I didn’t send this to myself, and at least it’s an odd enough experience that I’m going to look at that message. Remember, 100% of all white-collar professionals read 100% of all of their email for at least a second. So when I glance at this, it looks plausible, and it looks like a Doodle poll. If I’m a senior executive at a large organization, 10, 20, 30 thousand people, maybe someone is trying to schedule an important meeting. And it’s sent in January, so a meeting sent in January about a February board meeting: “Yeah, I might click on that, participate in that.” And here’s what happens when I do: well, it looks like I’m going to be logging into an Office 365 login screen, and the Office 365 login screen that I’m getting is not the impersonation website of days gone by. There aren’t ten different cloud service logos that are fuzzy and a little off-kilter that I’m supposed to believe are asking me for Google, or Dropbox, or Box, or whatever. No, this looks really close to what the legitimate Office 365 login experience looks like. In fact, they have even prepopulated the login experience with the sender’s name — we’ve blurred it here, but the “[email protected]”, and all I have to do is put my password in. In fact, the only mistake that they made is the copyright date. It’s 2018 and not 2019. Everything else about this is pixel perfect. So this concept of sending a business email compromise attack is not different from what happened to John Podesta in 2016, but it’s now a multistep attack with far more variability in how someone will interact with it.
And I don’t reject it; I think, “That’s how I use Doodle. That looks like Doodle. The links are right. Oh, OK, yeah, I use Office 365.” They click on it; it knows my name; it’s not asking for that. Maybe I do put my password in, at which point I’ve given it away to the bad guys. So Paul, why don’t you–
PR: Most people–
KO: —a little.
PR: Yeah, most people, Kevin, wouldn’t even — that 2018 copyright date, that might not be a red flag for people. I mean, if you’re a sophisticated user, you might say, “Hey, there’s no way they’re running an outdated version of Windows 10” or whatever it is here. But others might see that and not think it was particularly amiss, being 2018 versus 2019.
KO: That’s right.
PR: Right, so you know, the question is what do you do against stealthy, targeted attacks like this? Obviously platforms like GreatHorn’s and others increasingly are relying on what’s termed “threat intelligence” to provide sharper email security. “Threat intelligence” is a broad term and it encompasses a lot of things. I mean, legacy threat signatures are a form of threat intelligence, but these days, it’s much broader, and many different types of threat indicators are being collected, analyzed, and then provided to end users as a way to identify and call out what’s good from what’s bad. Behind a platform like this, to identify business email compromise attacks, you need a lot of information. You’re not going to identify the attack wholly based on the content of the email or a malicious attachment like you might have in the old days; “This just sounds like spam.” You need more information. What types of information? Knowledge about, first of all, the types of malicious actors that are out there and the particular modus operandi that they use: tools, techniques, and processes or procedures. The types of malicious infrastructure, command and control infrastructure they might use, you know, spoofed web domains, bulletproof hosting services that host their attack sites. And also knowledge, historical knowledge, of the types of suspicious and malicious content they send out, what their campaigns tend to look like, who they tend to target, what types of hooks or wording or attachments they might use to get somebody to click on that link, surrender their credentials, or open a malicious attachment if that’s what the scam is. That’s all part of threat intelligence, and that’s all what platforms like GreatHorn rely on behind the scenes to allow them to have greater insight into patterns of attacks. But threat intelligence alone is inadequate to the task of identifying these things, right?
So simply knowing what command and control infrastructure a particular cybercriminal group or nation-state actor might use is very useful, but it’s not sufficient. These are social engineering attacks targeting individuals. They might even be coming from within your organization via a compromised insider. And so you do need to go beyond mere threat intelligence to actually stop these attacks. You need to address some of the other levers that these attacks pull, these social engineering levers, the point at which they actually fool your employee, or your customer, or both into falling for the attack, entering their credentials, clicking on the link, following the link, what have you; to understand what the objectives of these campaigns are, what types of systems they are interested in, what types of data they want from your organization, what types of IT assets they are going to be going after; to understand what types of accounts within your organization are valuable and what they’re going to look to take over or leverage, whether that’s payroll, whether it’s the C-suite, what have you. Those are the types of things that you need, more than mere threat intelligence feeds, more than just historical understanding of attacks, to really stop and thwart these very sophisticated business email compromise attacks. [silence; 24:56-25:00] OK. So we’re going to move on and we’re going to talk about kind of the response to that, I guess: if mere threat intelligence isn’t sufficient, what is? What more do you need? And that’s something that, Kevin, you could talk a little bit more about, but that’s something called “full lifecycle support”. So Kevin, just advance the slide and why don’t we talk a little bit more about what we mean by “full lifecycle support”. Obviously, historically, email security has been about, initially really, stopping spam and malicious email attachments, right?
The sort of “I love you” or Anna Kournikova viruses back in the day by and large arrived via email messages as malicious email attachments. There were very poor tools for spotting that type of stuff, so in generation one, email security was about stopping those malicious attachments from getting into your organization by doing a match on the file and determining that it was malicious. Spam became a problem some time around the turn of the century or thereafter; just junk mail started to really become a productivity issue for organizations. We developed a lot of great tools for distinguishing spammy messages from non-spammy ones, and that was just a natural response to that. Again, these were all about kind of keeping your perimeterized network secure, keeping malware off the interior of your network, keeping it out on the other side of the firewall, keeping your inbox from being overwhelmed by junk mail and allowing email to be a functional business platform for people. What’s changed really in the last 10 or 15 years is this shift to very subtle email-based threats that don’t explicitly rely on malicious attachments, don’t use spammy, easy to identify, blasted out to a thousand or 10 thousand or a million people approaches. These are targeted; they are stealthy; they are conversational and are not going to seem like they are necessarily malicious. If you were to weed out all of those conversations, unfortunately, the problem for organizations is that you’d also sweep up a tremendous amount of legitimate email correspondence, and you’d be blocking or flagging that, and that would obviously arouse the ire of your executives, of your staff, who would be missing emails that they were expecting to get. So there is this line that you walk between wanting to stop all the bad stuff and still needing to let email flow so that people can get their work done. And how to balance those two needs is really what full lifecycle email security is about.
So Kevin, why don’t we move onto the next slide, and you can just sort of talk to them about how this new paradigm really works.
KO: Yeah, of course. So the full lifecycle is really about multilayered analysis and threat response. And if we think back to the concept that I started the webinar by describing, this evolution from perimeter security models to cloud-native security models, the idea of full lifecycle email security is that we are seeing a similar transition happen within the email security space. That looks like the inclusion of threat intelligence and threat detection, because as Paul just mentioned, there’s quite a bit of value to be had from having categorical blacklist data, from knowing command and control infrastructure, from having known malicious sender and malicious actor data. [29:00] But we have to incorporate more than just a blocking model or a perimeter security modality if we want to really protect email when users can easily select their own devices, their own mail client, stand up alternative systems, all of the negatives that would come along with security blocking the business use of email. So full lifecycle email security assumes that some phishing and targeted threats will get through the perimeter security models that we built in years gone by.
This is why one in five professionals in the InfoSec space say, “Every week, I have to go in and do something about an attack that bypassed the perimeter and that reached my users.” If you take that as a given, then you don’t disregard doing time-of-delivery or border scanning and detection, but you layer in an automated threat defense model that looks for, yes, volumetric phishing and volumetric spam, but also looks at some of the more sophisticated versions of the kinds of attacks that we’ve described here: the unlikely sender, the unusual social graph or deep relationship analysis model that different vendors are talking about, identifying when it’s a deprecated domain, as well as the ability to do incident response, meaning that if those attacks come in and you’re made aware of them, even if you’re made aware of them 30 seconds or a minute or two minutes after the email containing the threat reaches the user base, you can drive down time to response. And that delta between time to detection and time to response is where the full lifecycle email security model really bears fruit, because it means you don’t have to rely only on “what we blocked, and everything else is a failure”; you can start to incorporate techniques for doing response more quickly than was historically possible and limit the exposure of the organization. If John Podesta had gotten that email, or an executive on a board received a message, and it could be flagged as a bad message and removed before they had to reach out and ask, “OK, what should I do with this?” That’s the capability that having this kind of integration really puts together and offers to people.
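The post-delivery remediation Kevin describes, pulling a message back out of inboxes once it is judged bad, can be sketched with a toy in-memory mailbox. In practice the mailbox would be a cloud mail store reached through an API such as Microsoft Graph or the Gmail API, and the verdict function would be driven by threat intelligence that arrived after delivery; the class and function names here are invented for the sketch.

```python
class Mailbox:
    """Toy in-memory mailbox standing in for a cloud mail store API."""
    def __init__(self):
        self.inbox = []
        self.quarantine = []

def sweep(mailbox, is_malicious):
    """Post-delivery remediation: quarantine already-delivered mail judged bad.

    is_malicious is any callable verdict function. Returns the total number
    of quarantined messages after this sweep.
    """
    remaining = []
    for msg in mailbox.inbox:
        if is_malicious(msg):
            mailbox.quarantine.append(msg)  # pull the threat out of the inbox
        else:
            remaining.append(msg)           # leave legitimate mail untouched
    mailbox.inbox = remaining
    return len(mailbox.quarantine)
```

The shorter the interval between delivery and a sweep like this, the smaller the window in which a user can click the bad link, which is exactly the detection-to-response delta discussed above.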
So if we think about that, you can start to mitigate things like those risk-based threats, the Doodle poll of doom that might reach a user and get them to give away their credentials, and not just block it, because most people who sit on a board, if they saw that rescheduled board meeting sitting in their spam or junk folder (and spam folder spelunking is something that those users do when they go nosing around, seeing what’s in there), they’ll start dragging those emails out. They might click on them. They might interact with them. Rather than trying to block the message, or just take something that’s unknown and say, “This is categorically bad,” we can provide them with context. And so the idea here is that we can warn the user at the moment that they receive the message, whether it’s on a desktop client or on a mobile client: “Hey, this looks a little unusual. It might be an impersonation attempt.” Moreover, for that message that you’re getting that has this link in it, if you click on it, we can interrupt the workflow before someone gives away their credentials. It’s great that we have seen a rise in training over the course of the last few years. And many organizations now invest thoroughly in saying, “You should not click a link. You should hover your mouse cursor over a link.” I was reading on Twitter this morning, unrelated to this conversation, some back and forth with some Fortune 500 security folks and end users, and they were absolutely decrying the use of this kind of campaign-based approach, a training-based approach, saying, “My company has invested in this ridiculous technology; half the time when I click on something, I’m forced into detention. I have to go sit through an hour-long training session because I fell for it. And now I don’t click on anything. I just forward everything to the InfoSec team. That’s easier, and I say, ‘hey, is this legit?
Is this legit?’ And ultimately, I don’t want to be responsible for anything.” Well, that’s an antipattern, right? That doesn’t help you. You need to let your users have confidence that you’re going to give them contextual awareness and that they can click on things, and that’s where we start saying, “OK, there’s a way to not have security block business or make IT the bottleneck for every message.” And that’s where I think we start to see this concept of building out an operational security platform, something that minimizes risk without causing users unnecessary tension or frustration, or worse, driving them to go stand up their own mail environments using private and often insecure consumer-grade technology, or [34:00] completely abandoning email and using Microsoft Teams, or Slack, or some other chat-based program to do business because they feel that it’s just too difficult to work in the system that otherwise could be made secure. And that concept really is where I think we go with the time we have left here. And it is not “You should go buy something.” Yes, there may be technological investments that you will be able to make that will help create and augment some of this, and obviously I have a perspective on that given the nature of what I do. But really, this begins at the process level: that combination of process, which is, and we’ll look at it more in depth in a moment, what should happen; alerting, which tells an end user, “Hey, this is a time when all of that process matters the most and you should pay attention to it”; and empowerment for those end users who are the front line, rather than treating them as your weakest link or as though they were a security concern, instead recognizing that they have the opportunity, given the right information, to make good choices and defend your organization. Let’s start at that concept of process. Paul, why don’t you take this and talk a little bit about the operationalization flow overall.
PR: Sure, Kevin. And before I do that, let me remind the folks on the line as well that we will be taking questions at the end of our session, so if you do have questions, use the Q&A feature here on GoToMeeting and pose the question that you have; Kevin and I will take time at the end of the conversation to try and answer them. Yeah, operationalizing email security, I love this term, really, because I think generally in technology and certainly in security technology, we tend to think in terms of or look for silver bullets, you know, the sort of press the button and it’s going to take care of your security problem and make it go away. And the reality, of course, is that good security is always a mix of technology and process, of getting people to do the right things or do things with the assistance of technology, and I think obviously that’s something that GreatHorn is talking a lot about. I mean, obviously the goal of good operational email security is to, first and foremost, identify the suspicious, malicious email-based threats that are facing your organization, right, and to prevent those from being successful in whatever it is that they’re trying to do, whether that’s compromise an account, entice somebody to make a fraudulent transfer, what have you. The converse of that is you want to keep the bad stuff out and let the good stuff in, so you don’t become burdensome to your organization or to your employees. As Kevin just related, there are gripes coming from the financial services sector and others where they’ve gone full-bore with phishing education and really trying to train up users not to fall for these attacks. The potential downside of that is that you encourage either avoidant behavior or bad behavior. Again, setting up your own email account, or starting to have people email you through a personal account because you’re tired of dealing with getting flagged, and warned, and so on, on your secure platform.
So you want it to be as seamless as possible, and certainly not to penalize people for using email or push them into even riskier, more insecure behaviors that are outside of your purview. That’s super dangerous. And yet, human nature, people do that all the time, because at the end of the day, they just want to get their job done. So the key really here is neither to block everything nor to allow everything. It is increasingly accepting, or at least coming to terms with, the fact that some email-based attacks are going to succeed. The goal is to make that window of success as narrow as possible, to shorten the amount of time or opportunity that attackers have to take advantage of your organization. And to do that, you really need a platform that is providing users with as much context as it can, one that reinforces what they know about suspicious or malicious email, about email-based attacks, but actually allows them to use that knowledge in a real-world setting. So, “Here’s an email that’s sitting in my inbox. I can’t tell by looking at it if it’s bad, but I’m getting these visual cues and process cues that help me do the right thing, which is either deleting it, quarantining it, or at least proceeding with caution, and when that screen comes up telling me to enter my credentials, being suspicious of that, whereas otherwise I might not think about it at all.” So yeah, it’s really trying to use the technology to reinforce the types of behaviors that are actually going to make a meaningful difference in your organization.
KO: I think that’s a great point, Paul. And when you talk about meaningful choices and behaviors, what I like to draw out to people is that those are going to be different for different constituencies, right? And so when we start seeing what happens here, a wire transfer request that reaches someone on a finance team may not itself be the most unusual thing they will get, but you need to have articulated a process to those individuals in that finance group far in advance of them sitting with that message from the CEO saying, “This is urgent. I need you to do it,” so that they know, “Well, this is our process.” Let me give you an example of that. We’re working with a company right now where the business nearly sent out tens of millions of dollars to a vendor, and this was a vendor with whom they do a significant amount of work every year, so this was not unusual. An internal account was compromised, and as a result of that compromise, the attacker, the fraudster, didn’t send a whole raft of emails; they waited until there was an existing thread going back and forth about an upcoming payment, 25 or 30 million dollars, something in that range. And they inserted one message into that long exchange that just said, [41:00] “We’re undergoing an audit and we’re going to have the payment go to a different account.” And it almost worked. And the reason it didn’t work was because the company had put a process in place that said when a change is made to billing details like that, there must be a certain PDF attached with that information written out, and the attacker didn’t know that. So this obviously led to panic in the organization, because they came one mid-level finance person away from wiring out over 25 million dollars, but having that process articulated is what saved them.
I think it’s really, really important to note that the very first thing you do when you start to think about how to address the business email compromise problem is define process. We look at our risk teams, and we think about this not as a compliance play. This is not about training people with fake emails exclusively and then washing our hands of it and saying, “We’re done.” We take a risk management perspective. That’s where I think this conversation goes. Paul, go ahead.
PR: Right. And Kevin, you and I have talked about the importance of incident response. So from that organization’s standpoint, the good news is they stopped the fraudulent transfer. The bad news, of course, is that they have had a malicious actor observing their sensitive email communications for weeks, or months, or whatever it is. So from that organization’s standpoint, that’s certainly not the end of the story, right? That’s just the beginning, because they’ve obviously got a bigger security problem. But at least it’s not starting out from the standpoint of “Oh yeah, and we just lost tens of millions of dollars to a fraudulent wire transfer.” And Kevin, I’d be interested in your perspective on this: one of the key things, obviously, is to have out-of-band processes as part of how you operationalize this. So think about the Pathé business email compromise attack. It’s really interesting because that came out as a result of a lawsuit, actually, a wrongful termination lawsuit; that’s the only reason we even know about it. But part of what came out was that there were these communications be– [silence; 43:18-43:22] office of Pathé, I think in the Netherlands, who were being asked to make these wire transfers, and they were saying, “This doesn’t seem right,” or “I don’t think this is what the process is,” and “We should ask for his ID. He should send us his ID.” So they kind of improvised this process, and in doing that, of course, they just responded to the criminals’ email and asked them to send the proof, and of course the criminals obliged and sent fraudulent but convincing-looking ID. But none of it was out of band. Nobody picked [44:00] up a phone and made a call, talked to a person, started up a new email thread to verify what was going on.
KO: Yeah, I think it’s interesting to start to think about how close these things come to happening on a daily basis for different companies. And you’re right; the conversation doesn’t stop there, and that’s the opportunity for technology, right? It’s great that there are people who are saying, “Hey, I’m not sure this is what the process is.” But the insidious part of a business email compromise attack is that it relies on social engineering techniques; that is, it relies on psychological pressure to convince somebody to do something that they know, somewhere, is the wrong thing to do. And that’s what we look at when we think about these kinds of attacks. So an example is when somebody gets an email that says, “I need you to do this immediately,” and it’s the boss, and it has all of the elements of an urgent request from someone in a position of power. For most of us who are just going about trying to do our jobs, that is something where we might make a decision that maybe we wouldn’t make if we had thought about it for just a second or two longer. And so many of these attacks get through. Think about an organization that gets two million messages a day, where maybe 1% of all of the phishing attacks are making their way to users. The traditional security vendors say, “That’s great! We blocked 99% of all the attacks.” But that 1% is the 1% that is going to cost you 25 million, 30 million, 100 million dollars, or lead to a huge data breach that you’re going to have to report publicly and that will kill your share price. And technology investment, when it’s made, is not a silver bullet that just makes all of the bad stuff go away. Nobody has that. It probably will never exist.
But if it reinforces business process, if it gives that user the extra second right before they fall victim, and you don’t have the sort of Goldilocks problem of too much or too little, but get it just right, that’s how you can start to reinforce the process, provided you have articulated it in the first place and said, “We don’t do wire transfers over email.” Then, when a message comes in saying you need to sign this or do this immediately, we can warn the user: “Hey, be careful. This is not the email address that your boss typically uses.” OK, maybe I still click on the signing link, but then I get another prompt that pauses me and says, “Hey, this looks like it might be fake.” Maybe I don’t click on this one. And that warning only comes up one in 10 times, one in 100 times. Now we start to see this idea of weaving together the technology and the process, and I think, Paul, you have this perspective, and I agree with you on it, so I’ll turn it over to you to comment: that’s how you bring end users into the loop.
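The contextual warning Kevin describes, flagging a message whose display name matches an executive but whose sending address differs from the one that executive normally uses, can be sketched in a few lines. This is purely an illustrative example: the directory, the addresses, and the banner wording are hypothetical, not GreatHorn's actual detection logic.

```python
# Illustrative sketch only: flag display-name impersonation of known executives.
# The executive directory, addresses, and banner text here are hypothetical.
from typing import Optional

KNOWN_EXECUTIVES = {
    "Jane Smith": "jane.smith@example.com",  # the address the CEO actually uses
}

def impersonation_warning(display_name: str, from_address: str) -> Optional[str]:
    """Return a warning banner if the message looks like executive spoofing."""
    expected = KNOWN_EXECUTIVES.get(display_name)
    if expected is None:
        return None  # sender isn't claiming to be a tracked executive
    if from_address.lower() == expected.lower():
        return None  # address matches the known-good one; no banner needed
    return (f"Be careful: this message claims to be from {display_name}, "
            f"but was sent from {from_address}, not {expected}.")

# A lookalike domain triggers the banner; the real address does not.
print(impersonation_warning("Jane Smith", "jane.smith@examp1e.com"))
print(impersonation_warning("Jane Smith", "jane.smith@example.com"))  # None
```

A real platform would build the known-good mapping from observed mail flow rather than a static table, but the pause-before-you-act effect on the user is the same.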
PR: Sure, sure. Again, providing them with the cues and the education to make the right decision. It’s like that book Nudge; you’re making small suggestions that result in positive decisions and the right behavior, versus abstracting it all away from them and saying, “Well, the technology is running transparently behind the scenes and we’re going to find the bad stuff, and therefore everything you look at is good because it made it through the filter,” which is a good way to get people to drop their guard. So yeah, I think the goal in 2019, as opposed to maybe 2004, is that you really do need to involve your users in keeping your organization secure. Training and education, of course, are a part of that, whether that’s red teaming, false-flag operations, or training exercises to keep them on their toes and aware of the fact that these attacks are real; that’s great. But then you need to fold that type of knowledge and training into your actual platform for keeping them safe, and not say, “OK, well, you’ve finished this education and now you’re done, basically; go back to your inbox and continue as you were.” On the back end, obviously, as we’ve talked about, administrators need better tools, right? They need threat intelligence, really pointed information and knowledge about what malicious actors are up to, what types of malicious domains, what types of spear-phishing attacks and techniques they’re using. And they need a way to bring those to bear in ways that are not going to alienate their users or gum up communications within their organization, and with customers and the outside world; ways that have a positive impact on the number of threats that are stopped without inciting a full-on revolt among users who will then end up doing a whole lot of dangerous things just so they can do their jobs.
And then, as Kevin, I’m sure you can talk about, sort of feeding the success of the system back into detection, right? Making it sharper and more honed over time, so kind of learning the lessons from either successful or blocked attacks and bringing that to bear on future defense as well.
KO: Yeah, that’s right. And I think that’s the lifecycle, right? The reason we describe this as a lifecycle is not because it’s good marketing, but because it describes this feedback loop very accurately, which is not “We’re going to just block stuff” or “We’re just going to train users.” It’s “We’re going to take the ability to pivot off of threat intelligence to build automated defenses, and then loop end users into a dialogue at the moment that a real threat exists.” When you do that, and you pivot off of real threat intelligence coming from a user base that knows your business process and is able to articulate it, that’s where I think we start to see something that feels like an operationalized email security lifecycle. Your end users are going to reduce risk for you, and you can diminish that exposure window. You’re never going to block everything bad that’s out there, and anyone who’s trying to tell you that is telling you something that just won’t work. And we have years, and years, and years of evidence of it. So I think that brings us to the conclusion of the prepared remarks. Go ahead, Paul. [silence; 50:48-50:54] Paul, did we lose you? [silence; 50:55-50:58] Well, we’ll get Paul back in just a moment, but I think that brings us towards the end of our prepared remarks. I know we’ve had some questions come in, so I want to make sure we leave enough time to open some of those up. Lorita, you’re still there? Do you want to maybe open up and see what we’ve got?
LB: Yeah, absolutely. Everyone, just as a reminder, we have the questions panel in the webinar control panel; it’s likely on the right side of your screen. We have a few questions that have come in. The first is, “How do you determine which messages that have made it through your initial threat protection defenses warrant that additional suspicion?”
KO: Sure, I’ll start with that one. It’s a good question, right? Because the historical way that we think about email security is as a perimeter tool, and a perimeter tool has very limited ability to go and look at things that have arisen after they’ve reached a user. And there are three categories of risk or threat to a message post-delivery. The first is that whatever platform or technology is supposedly keeping bad things out has failed. If that’s the case, there are indicators inside of the message that you need access to. You need access to them post facto, and you need some mechanism by which you can then deal with that IR workflow, that incident response workflow. You need to be able to pull those messages up regardless of whether a user has deleted them or they’ve been moved to a spam folder. You need information about what specifically within that message is anomalous or suspicious, and that’s where technology can play a really strong role in helping you see those things and calling out abnormalities or anomalies in the relationship between the sender and the recipient, or in the types of links or attachments that are there. The second bucket of threat is that something was fully innocuous at the moment it was received by a user but gets weaponized later. So someone deploys a phishing kit to a website and links that website in an email that they send out, but they don’t deploy the phishing kit until an hour, or ten minutes, or some period of time after the email campaign went around. The email campaign gets delivered, and then it gets weaponized. You have to have some mechanism for doing continuous analysis and assessment. And then the third risk is that you have something that is coming from an internally compromised sender, and this is often the hardest one to spot, because those are legitimate messages; maybe it’s an externally compromised sender as well.
Think of a message where somebody sends you a document off of their SharePoint, but it’s a real document on a real SharePoint site that was really sent by that user. The only thing that’s suspicious about it is the relationship. That context is how we start to deal with it. Giving both InfoSec teams and end users that context is how I think we start to solve for those problems.
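The second bucket Kevin mentions, mail that was clean at delivery but weaponized afterwards, implies re-scanning messages you have already delivered. A minimal sketch of that continuous-assessment loop, using a stand-in threat-intelligence lookup rather than any real vendor API, might look like:

```python
# Minimal sketch of post-delivery re-analysis. The message store and the
# threat-intelligence lookup are stand-ins, not a real product's API.

def rescan_delivered(messages, is_malicious_now):
    """Re-check links in already-delivered mail; return message IDs to pull.

    messages: iterable of dicts like {"id": ..., "urls": [...]}
    is_malicious_now: callable(url) -> bool, e.g. a threat-intel lookup
    """
    flagged = []
    for msg in messages:
        if any(is_malicious_now(url) for url in msg["urls"]):
            flagged.append(msg["id"])  # quarantine and open an IR workflow
    return flagged

# A URL that was clean at delivery turns malicious an hour later.
delivered = [
    {"id": "msg-1", "urls": ["https://payroll-update.example/login"]},
    {"id": "msg-2", "urls": ["https://intranet.example/newsletter"]},
]
now_bad = {"https://payroll-update.example/login"}  # phishing kit deployed late
print(rescan_delivered(delivered, lambda u: u in now_bad))  # ['msg-1']
```

Run on a schedule, a loop like this narrows the gap between delivery time and weaponization time that a purely perimeter scan leaves open.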
PR: And Kevin, you mentioned that you do a couple of types of analysis. One is obviously [silence; 54:04-54:08] the warnings that you might get around a spoofed sender address, but the other is the link analysis as well. Could you talk just a little about what happens if someone were to click on a link, what types of processing GreatHorn is doing behind the scenes to make sure that link is legitimate?
KO: Yeah, of course. So very briefly, and we have more information on greathorn.com about all of this, we do three different forms of link analysis. We do analysis across our global datasets: we analyze billions of messages on an annual basis, and that gives us a really wide perspective on all of the different URLs we see being sent around. Those get analyzed against both public and private threat intelligence services, as well as, from a reputational perspective, everything that’s happening within the GreatHorn client set, so we can detect those anomalous messages. We do time of [55:00] delivery analysis, so we can also look at the sender relationship. Oftentimes, when somebody sends you a nefarious link or something that’s going to steal credentials, there’s not a strong relationship between you and that particular sender. The Podesta email is a great example. That was a very unusual sender distributing that email, and it was a Bitly link. Bitly links are hard because, unless you follow them to the destination (and although we do that), it can often be tough for people to know, “Is that really the message that I think it is?” But that relationship should give you an indication. And then finally and most importantly, you have to do time-of-click analytics. You have to be able to say that at the second you click on that link, there’s a moment where the technology goes and looks and determines whether it’s malicious, and there are many techniques for it. We’ve got some pretty innovative technology around detecting credential theft, and more coming. A little bit of a preview: stay tuned as we get towards the RSA Conference in a few weeks. But there’s a lot that we’re doing around the overarching credential theft problem, and it’s something where we can do analysis at time of click, as well as time of delivery, as well as globally and reputationally.
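Time-of-click analysis is commonly implemented by rewriting links in delivered mail to pass through a checking redirector, so the destination is evaluated at the moment of the click rather than only at delivery. Here is a hypothetical sketch; the redirector URL and the malice check are stand-ins, not GreatHorn's implementation.

```python
# Hypothetical sketch of time-of-click protection: links in delivered mail are
# rewritten to point at a checking redirector, so the destination is evaluated
# when the user clicks, not only when the mail arrived. The check itself is a
# stub; real systems consult threat intelligence, sandboxing, and more.

from urllib.parse import quote, unquote

REDIRECTOR = "https://safelinks.example/check?u="  # hypothetical service

def rewrite_link(url: str) -> str:
    """Wrap an outbound URL so clicks pass through the checker first."""
    return REDIRECTOR + quote(url, safe="")

def handle_click(rewritten: str, is_malicious_now) -> str:
    """At click time, decide whether to pass the user through or block."""
    original = unquote(rewritten[len(REDIRECTOR):])
    if is_malicious_now(original):
        return "BLOCK: " + original   # show a warning page instead
    return original                   # transparent redirect to destination

wrapped = rewrite_link("https://docs.example.com/invoice.pdf")
print(handle_click(wrapped, lambda u: False))  # clean: passes through unchanged
print(handle_click(wrapped, lambda u: True))   # weaponized since delivery: blocked
```

The design point is that the verdict is rendered with the freshest available intelligence, which is what catches links that were clean when the message was scanned at the perimeter.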
PR: Kevin, before we break, I thought I’d ask, I mean, you guys are seeing these things, these types of attacks happening all the time. I mean, what are you seeing as the big meta trends in this business email compromise or CEO impersonation attack space? I know you showed that phony Doodle poll; I thought that was really interesting. What else, for the folks who are on the line, what should they be looking out for or concerned about?
KO: We are seeing multistep attacks that are increasingly common amongst our client base. Multistep attacks are messages that are often part of a campaign focused against a very specific demographic of users. They combine the techniques of the older “Hey, click this link” and [it’s bad?] style with things that include social pressure. Maybe that link isn’t nefarious initially, but it starts to compromise an account; it compromises some of the accounts inside the business, which are then used to send a business email compromise attack. So you start to weave together these many, many different little pieces of information, and they’re very, very effective at tricking people, because they get further and further from what a simulated attack looks like. If you get a message that says, “Your mailbox is full and all your mail is about to be deleted,” that’s probably a phishing attack. But if you’ve gotten something from the IT team that says, “This is part of a mailbox upgrade. We’re going to be moving you over,” and it’s really from your IT team, and you click on it, and it goes to a SharePoint site, and it has your logo, now you might actually fall for it. And those are the kinds of multivariate attacks that we’re seeing and are more and more concerned about in the market.
PR: So super stealthy, very difficult. I mean, we all have work to do that isn’t scrutinizing every single message that comes into our inbox for red flags and tells, so I guess that speaks to why we need enhanced protection and a little bit of help with this very tricky problem.
KO: That’s right. That’s exactly right.
PR: Great. Well, and I know, Lorita, you probably want to say some goodbyes too, but let me thank everybody for tuning into this presentation, and obviously thank Kevin and Lorita from GreatHorn for hosting it as well. And Lorita, if I’m right, this presentation will be available for download or offline listening if you didn’t catch it live, or if you want to revisit what we talked about.
LB: That’s right. We’ll be following up with everybody with a recording of the presentation, as well as a copy of the slides, should you wish to download them. And just as an FYI, we’ll actually be doing a webinar next week. It’s our monthly introduction-to-GreatHorn webinar, so for those of you who are intrigued about what GreatHorn can provide from an email security perspective, please consider registering and joining us next week. In the meantime, Paul, thank you so much for your time as well. Paul has written a great white paper on basically this exact topic, which we will also be sending to everybody so that you can take a look, in written form, at some of the core points that Kevin and Paul talked about today. Thank you all for your time today, and we look forward to seeing you on the next webinar.
PR: Thanks everybody!
KO: Thanks everyone.
END OF AUDIO FILE