Protecting against Advanced Email Threats: Beyond the Nigerian Prince Scam
In this webinar, GreatHorn CEO Kevin O’Brien discussed how phishing attacks have evolved – from the early days of the widespread Nigerian prince schemes to today’s sophisticated and highly targeted spear phishing threats – and how our collective response has failed to keep up. By reviewing common attack patterns, he highlighted areas outside of technology that are critical to protecting organizations from advanced email threats – from business processes to user engagement.
TERRI ROBINSON: Hello everyone, my name is Terri Robinson, and I’m the executive editor at SC Media. Welcome to our webcast, “Beyond the Nigerian Prince: Protecting Against Advanced Email Threats,” sponsored by GreatHorn. While phishing has advanced significantly beyond the prototypical email from a Nigerian prince, most email strategies, and the tools they rely on, have not. Today we’re going to discuss how phishing attacks have evolved from the early days of the widespread Nigerian prince schemes to today’s sophisticated and highly targeted spear phishing threats, and how our collective response has failed to keep up. By reviewing common attack patterns, we’ll highlight areas outside of technology that are critical to protecting organizations from advanced email threats, from business processes to user engagement. And finally, we’re going to highlight the multidimensional approach that organizations should be considering to combat phishing, and how its pieces work together. Our speaker today is Kevin O’Brien, the CEO and founder of GreatHorn. Under Kevin’s leadership, GreatHorn has become the world’s leading next generation email security company, analyzing billions of messages and stopping phishing attacks targeting a global customer base of organizations both public and private.
Prior to founding GreatHorn, Kevin was vice president of marketing at Conjur, where he built the early go-to-market team responsible for initial market positioning and growth. Previously, he led product marketing and sales engineering efforts at CloudLock, the leading cloud access security company, which now has more than 6 million enterprise users. In addition to his role at GreatHorn, Kevin serves as co-chair for the Mass Technology Leadership Council cybersecurity group. He brings deep industry experience to the table, and I’m so happy to welcome him here today. Hi Kevin.
KEVIN O’BRIEN: Hi Terri, thank you so much for the kind introduction, and hello everyone, nice to be here. So why don’t we get started, and talk a little bit about what we mean by phishing, and dive into our topic? One of the things that I think is helpful when opening this kind of a conversation up is to take a step back and look at what we mean by phishing. And I’ll share an anecdote. When I started GreatHorn nearly four years ago now, I was in New York City, and I was sitting down with a venture capitalist and having a conversation about what we were looking to do, and why we thought that there was an opportunity, and a requirement for a new kind of anti-phishing email security company to be founded and created. And one of the things that we saw in those early days was that we were running into a nomenclature problem. And this venture capitalist really put a very fine point on it for me, because I realized about 30 seconds into my elevator pitch to her, as we were looking at getting some capital for the business, that she thought we were in the maritime trade, and we were doing something about actual real-world fish in the ocean. And I don’t think we’d have that conversation today, the problem has obviously been one that’s had impact for most organizations at some level, whether it’s wire transfer fraud, or it’s an attack that resulted in somebody sharing sensitive financial information about employees, like W-2 tax forms with an attacker.
But phishing is still an overloaded term. And as the slide says, phishing is many things, not just one. What we mean by that is that when you run a survey, as we did, and you ask two different constituencies what they think about email security – one of those constituencies being email security professionals, and the other being non-email security professionals, people who work at companies and work with email, as almost all of us do, but whose job is not to secure or manage information technology – you get very different answers about what kinds of things they see in their inbox on a day-to-day basis. And notably, almost three quarters of all professionals who aren’t specifically focused on email security still think that the number one problem they get in their mailbox, on a daily basis, is spam.
If you’re lucky, you’ll find that there’s an understanding amongst nonpractitioners that there are phishing attacks, and that those phishing attacks are different from spam. So, what do we mean by those? Well, spam, briefly, is for the most part unwanted marketing email. And phishing runs the gamut from what we use as the subtitle for this presentation – the classic Nigerian phishing campaign, where the Nigerian prince says you’re going to get $10 million, and please just send me your Social Security number, your bank account, your home address, etc. – to what we see in the business world, which is a far more nuanced set of impersonation attacks: wire transfer fraud, brand impersonation, and business service spoofing. If you’ve ever received a message that looks like it’s coming from your own domain, but is one character off in the address, that’s what we mean by business service spoofing or brand impersonation.
And of course, there are then second order threats, where somebody sends you a link – maybe it says you have a voicemail, or you’ve got to go log into something. We’ll look at these in more detail, but they can be vectors for stealing your credentials so an attacker can get into your mailbox and literally send mail as you, or even distribution mechanisms for malware, viruses, ransomware, and so on. The problem we face, though, is that most organizations are staffed by people who do not think about the problem with this degree of nuance. And so, I’m going to put down the first point, one that I’m going to come back to repeatedly throughout this presentation: as an industry, we are failing our customers. Because we’re telling them that end users are the weakest link, that end users are the problem. Well, the reality is, end users – the normal people who work alongside us as security professionals – have to use email, and want to do the right thing. But we have provided them with neither the technological support nor the business and process support required for them to make better security decisions.
And so, that’s the theme that will weave its way through the presentation over the next half hour or so. And as we talk about what these attacks look like, and how we can mitigate them, there is no silver bullet. There is no single thing you can do that will solve this problem, but there are a set of relatively straightforward framework-level steps you can take which will dramatically reduce your risk of falling victim to one of these attacks, and once they’re implemented and consistently applied, will allow for a much more robust security posture when it comes to email overall.
Again though, let’s start by looking at what a phishing attack actually is. In a sense, phishing attacks are social engineering attacks. And I think back to the early 2000s, when I was part of a Boston cybersecurity group called @stake. We did all kinds of security work, but some of the things the team would do were social engineering attacks, designed in those days to get around perimeter security in the real world – things like a locked door, or a requirement that somebody swipe a badge to get into a building or a server room – and gain access to something sensitive. And companies would pay us to test their physical defenses. And oftentimes what worked most effectively for getting around those defenses was to go to the costume shop and buy a mail uniform, or a UPS or FedEx uniform, show up with some cardboard boxes – especially in the Boston area, on a cold and snowy night – and wait for someone to come outside to get to their car, or for a cigarette break, and say, hey, can you hold that door? And they would. Because human nature is to do something that seems like it’s the right thing to do, based on the social cues we have. Phishers, the people who attack via email, know that this is a common human characteristic, and they exploit it when executing a phishing attack.
Look at the examples I have on the screen here. These are all fake emails. They all have characteristics of social engineering woven into them. On the left, what we’re seeing are things like messages claiming that you have a security alert from Outlook, and that there’s a trusted source sending you something – it’s Microsoft, it’s high priority, that red box – and that you’re not getting your email. Neil Wynne of Gartner Research pointed out last June that they had run a survey of all of their customers – and Gartner, being an analyst firm, has a pretty broad reach. They found that nearly 100% of all white-collar professionals read nearly 100% of all of the mail they receive. That may not be a long read, or a careful read, but if you send a professional an email to their work address, and it hits their inbox, they’re at least going to open it and look at it for a second. So you can think about the social engineering play here: saying that you have sensitive mail that you’re not getting, it’s being blocked, it’s urgent. Someone’s most likely going to go and click on that, or do whatever it is that the phisher is asking them to do, and in this case, it’s probably credential theft.
Alternatively, on the right-hand side, drawn from my own company, we have a Kevin O’Brien urgent request saying that I need someone to respond and do something. I, of course, didn’t send this message. This is a synthetic example, but the concept is that there’s something urgent from the CEO going to someone in the finance department, claiming that I need them to respond to an email. And of course, if they do, they’re going to fall victim to a much more nuanced attack that’s probably going to result in financial fraud, or some other form of damage to the company. And it’s relying on that idea that the boss is saying I need you to do something urgently. We often see this go from a senior level, say the CFO, to someone on the finance team. And it will be an update on wire transfer information, or some other form of required action.
All of this is to say that we have a nuanced set of attacks, yet most people still think of everything unwanted in their inbox as spam. They’re not on the lookout for attacks that might be impersonating their colleagues, or services that they commonly use. And these attacks, when they come in, rely on human psychology – psychological hacks or tricks – to create a sense of urgency coming from someone whom the recipient trusts, with some action the attacker is looking to have taken.
And this is why we see that these sophisticated attacks will result in somebody falling for them, and if they’re credential theft attacks, clicking on them 1 in 25 times. And we see, given the breadth of the data that we have access to via GreatHorn, that there are hundreds of thousands of these attacks being sent on a daily basis across our customer set, and you can just imagine what that means with respect to the response rate when somebody is falling for them 1 in 25 times.
So, where does that put us, and how can we start thinking about this? Well, there are a couple of things that I would like to unpack here. The first is, we need to have a better and more nuanced understanding of the technical tactics that attackers are using. And then, let’s look at what the countermeasures in an ideal world would be. The primary counterpoint, which is the bit in red here, is that everything we’re about to describe is something that your users most likely won’t do. And so, we’ll solve for that in a moment, and talk about why the prevailing approach to solving the email security problem is still not working. But first, let’s just articulate and enumerate the attack types.
The first technical tactic for a phishing campaign is the display name spoof. That’s where somebody will register a legitimate mail account on a private but often free email service, like Gmail or Hotmail, and mirror the display name of an executive. This might be what you saw in the message from Kevin O’Brien that was coming from remotemail.com. Or it could be a more sophisticated attack, as we sometimes see happen to our larger customers, where a senior executive has a private email address, and the attacker finds it, typically through something like LinkedIn or Facebook, and then mirrors that account, but one letter off. If my personal email address were [email protected], then the attack might be [email protected] And that’s often convincing enough that somebody will open it and think, oh yeah, that’s really from the person, that looks like their private address.
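As an illustration (not drawn from the webinar itself), the core of a display name spoof check can be sketched in a few lines of Python. The directory contents, addresses, and function name here are hypothetical:

```python
from email.utils import parseaddr

# Hypothetical directory mapping known display names to the address on file.
DIRECTORY = {"Kevin O'Brien": "kobrien@corp.example"}

def is_display_name_spoof(from_header):
    """Flag mail whose display name matches a known person but whose
    sending address is not the one on file for that person."""
    name, addr = parseaddr(from_header)
    expected = DIRECTORY.get(name)
    return expected is not None and addr.lower() != expected
```

In this sketch, a message from "Kevin O'Brien &lt;kevin.obrien@remote-mail.example&gt;" would be flagged, while mail from the address on file would pass; real products also compare against each recipient's observed sending history rather than a static table.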
Email address spoofs, or direct spoofs, are the next level down. This is where an attacker will find a way, often through an open mail relay, to send mail as one of your users. These are attacks where somebody will use an SMTP header rewrite, and the message will literally come from the address of your executive, or your colleague. This is harder to do if you have mail authentication set up – SPF, DKIM, or DMARC – but not impossible, and most organizations don’t implement those perfectly, meaning that it is possible for somebody to take this more technical route and impersonate your email address directly.
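The SPF, DKIM, and DMARC verdicts for a delivered message are typically recorded by the receiving server in an Authentication-Results header, and a receiving pipeline can read them back out. A minimal sketch, assuming a simplified header with one clause per check (real headers can be multi-valued and need fuller parsing):

```python
from email import message_from_string

def auth_results(raw_message):
    """Pull SPF/DKIM/DMARC verdicts out of an Authentication-Results
    header. Simplified: assumes one such header, one clause per check."""
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for clause in header.split(";")[1:]:  # clause 0 is the authserv-id
        clause = clause.strip()
        for check in ("spf", "dkim", "dmarc"):
            if clause.startswith(check + "="):
                verdicts[check] = clause.split("=", 1)[1].split()[0]
    return verdicts
```

A direct spoof sent through an open relay would typically show up here as spf=fail and dmarc=fail, which is exactly the signal an end user will never dig out by hand.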
Branding and domain lookalike attacks come down to that breadth of accounts that don’t have SPF, DKIM, and DMARC set up. These are mail authentication standards, generally reliant on your organization making changes to its DNS records, and most organizations struggle to do this comprehensively. Branding attacks will then maybe not go directly after an executive’s name, but will come from what looks like your brand’s email account, and that can be very confusing – as a consumer you’ve probably seen mail like this, claiming that it’s coming from your mortgage company. These are attacks that can be very tough for somebody to spot if you haven’t gone through the effort of setting up mail authentication correctly. And domain lookalikes take it one step further, with substitution of a character. So if your company name had two L’s in it, the attacker might substitute a one for one of those L’s. At first glance, it’s going to look very much like it’s coming from that brand and that domain. Moreover, if the attacker then sets up SPF, DKIM, and DMARC correctly for the lookalike domain, they greatly enhance the deliverability of the fake messages, and mail clients will often – especially for a user on a mobile device – show only the first and last name, that is, the display name, and not the full email address.
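That character-substitution trick lends itself to automated detection: normalize the common visual swaps and compare against the domains you actually trust. A hedged sketch (the substitution table and domain names are illustrative; production systems use much larger homoglyph tables and confusable-character data):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Common visual substitutions; a real deployment would use a fuller table.
SUBSTITUTIONS = {"1": "l", "0": "o", "3": "e", "5": "s"}

def normalize(domain):
    d = domain.lower().replace("rn", "m")  # "rn" can mimic "m"
    return "".join(SUBSTITUTIONS.get(c, c) for c in d)

def is_lookalike(domain, trusted):
    """True when a domain is not itself trusted but normalizes to, or
    sits one edit away from, a trusted domain."""
    if domain in trusted:
        return False
    n = normalize(domain)
    return any(n == normalize(t) or edit_distance(n, normalize(t)) == 1
               for t in trusted)
```

So "examp1e.com" – a one substituted for an L, as described above – would be flagged against a trusted "example.com", while the genuine domain passes untouched.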
URL obfuscation is the final technical tactic we’ll talk about here. That’s where you’ll see someone use a URL shortener – a Bitly link, or a Google rewrite – to hide an attack URL and ensure it gets delivered. Sometimes these are also done with open redirects: it looks like you’re going to one website, but when you click there, it actually redirects you somewhere else. These are typically parts of credential theft attacks, or malware distribution.
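Two cheap signals for obfuscated links are a known-shortener host and anchor text that displays one domain while the href points to another. A sketch under those assumptions (the shortener list is a tiny illustrative sample; real systems consume a maintained feed and also resolve redirect chains at click time):

```python
from urllib.parse import urlparse

# A tiny sample; real deployments maintain a much larger list.
SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co"}

def is_suspicious_link(href, display_text):
    """Flag links that hide their destination: a known URL shortener,
    or anchor text showing one domain while the href points elsewhere."""
    host = (urlparse(href).hostname or "").lower()
    if host in SHORTENERS:
        return True
    if "//" in display_text:  # anchor text itself looks like a URL
        shown = (urlparse(display_text).hostname or "").lower()
        return bool(shown) and shown != host
    return False
```

This catches the "it looks like you're going to one website" pattern statically; click-time rewriting, mentioned later in the talk, is what handles links that only turn bad after delivery.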
These are pretty sophisticated tactics. So what do we do? Well, ultimately, it wouldn’t be that hard if we had infinite time and perfect visibility into all of the data that comes in with email. If your users – and they obviously won’t do this – could look at and contextualize every piece of mail they get, they could see if that mail has passed those fundamental mail authentication checks. They could verify whether the sending address attached to their boss’s name, or their company’s, was the one they were expecting, and the one typically used by that person. They would dig through all of those headers and try to determine if the IP address or the sending characteristics were what they would expect, given what most mail from that user looks like. And they would, of course, have some mechanism for taking every URL at the moment they clicked on it, looking it up, and figuring out if it was actually something they would expect to see at that moment. This is tough. And most users – I’d say all users, probably – won’t do this. So, how do we start to weave together a cybersecurity response to deal with it?
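One way automation substitutes for those per-message checks a user will never run by hand is to combine them into a single rough risk score. A hypothetical sketch – the signal names mirror the checks just enumerated, and the weights are invented for illustration:

```python
# Hypothetical weights; the signal names mirror the checks above.
WEIGHTS = {
    "auth_failed": 3,      # SPF/DKIM/DMARC did not pass
    "unknown_address": 2,  # address never seen for this display name
    "anomalous_ip": 2,     # sending IP differs from sender's history
    "obfuscated_url": 3,   # shortener or mismatched link text
}

def risk_score(signals):
    """Combine boolean per-message signals into one rough score that a
    policy engine can threshold on (e.g. banner vs. quarantine)."""
    return sum(WEIGHTS[name] for name, fired in signals.items()
               if fired and name in WEIGHTS)
```

The point of the sketch is the design choice, not the numbers: each check is cheap and noisy alone, but thresholding on the combination lets gray-area mail get a warning rather than a hard block.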
Well, 15 or 20 years ago, you would go out and purchase something called a secure email gateway. Secure email gateways came about when we were still primarily concerned with email being something that would potentially go offline if somebody kicked out the cable to the mail server. Those of us who’ve been doing this for long enough probably spent time in a colo somewhere, dealing with an Exchange server or a Lotus server that had gone offline, and trying to solve that problem. So one of the original use cases for the secure email gateway was mail spooling – that is, redundancy in the circumstance that your mail server went offline. And later, as email and the internet became more widely adopted in the late ’90s and early 2000s, we saw the inclusion of antispam, antivirus, and additional security. Today, there are a few of these secure email gateways that have pretty good functionality when it comes to providing perimeter tools for blocking known bad attacks. And increasingly, they’re getting better at things like basic display name spoofs, one-to-one matches for your domain, and so on.
The problem is, they’re perimeter security tools. And most businesses today are either in the process of migrating, or have already migrated, to cloud email systems like Office 365 or G Suite. That migration means those organizations don’t have an email server to protect – mail deliverability and spooling are no longer necessary. They have users to protect. And this entire legacy model is based on keeping mail away from your users. It’s analogous to what we saw in the firewall market, where the rise of the cloud access security broker (CASB) technology set came about. Blocking users from getting access to something they need to do their jobs – whether it’s third-party devices or applications in the case of CASB, or email itself in the case of the secure email gateway market – will simply drive those users, given the nature of technology today, to their own side channels, to less secure and often unmonitored alternatives. In summary, the secure email gateway is a business blocker. It quarantines and delays mail delivery, makes users unhappy with the security department, and ultimately drives them to seek alternatives that don’t subject them to those hassles.
There’s something we can do about that, though. And that is to begin to look at defense in depth, and think about all of the different things that we layer on top of a user to protect them. So now, fast forward from the late ’90s to circa 2007, 2008: we start getting better endpoint protection, we start seeing the early adoption of web app firewalls, and later things like multifactor authentication – so if your credentials are stolen, the attacker still has to have something like your phone. This is a good improvement over the idea that you just put a box in front of your email server and somehow protect yourself. But it also papers over some of the failings of this legacy market. What we now see is that by including these technologies – which we should do – we’re not necessarily thinking about what went wrong that led to the initial incursion of an attack into your environment. And so we end up with a model where the tools that reduce the impact of an effective attack are being relied on, and we’ve moved away from thinking about why those attacks are getting past so many of the market offerings for email security. That’s the biggest problem that we face, regardless of what technology you have.
And in the enterprise, that is, within the Fortune 500 to Global 2000, there is historically a near 100% adoption of secure email gateway technology. And yet when surveyed, one in five professionals are reporting that they have to go in and take remediation action at least once a week. These gateways are not stopping today’s attacks. And this is where the fundamental problem with the legacy email security market has to be addressed. Because the perimeter has become porous, and the idea that somebody will get into your email environment and wreak havoc has been proven over and over again, because of that porous perimeter, and the failings of that legacy market.
Now unfortunately – and this fast forwards from, say, 2007, 2008 to 2013, 2014 – the response from the security community was to blame the end user. And in blaming the end user, we said they are the weakest link, we need to train them, we need to keep them from getting access to bad mail, and we need to slap them on the wrist when they fall for different kinds of attacks. The security awareness training methodologies that we see deployed today, although a critical part of a compliance program, actually have a fairly low success rate when it comes to stopping attacks. They provide a short-term boost, for sure, but when you send out fake phishing emails to all of your users as a security professional, what you’re teaching them is to be wary of you, because you’re going to embarrass them, or you’re going to put them through a mandatory training that they’re not going to retain six to nine months later. What can we do? Well, this is obviously where my bias comes in – I started a company to try to solve for some of this – but I don’t believe that the sole answer to the email security problem today is to look at technology, or look at a purchase, and think that you can buy your way out of this.
Instead, we talk about the email security lifecycle. And the email security lifecycle has a couple of different meanings. The first is that it’s more than just technology: there are processes and people involved, and the technology that you purchase or implement should support those first two layers, and in fact serve as the glue between them. The second is that there is no silver bullet – stopping 100% of all bad mail is effectively impossible. So integrated incident response, and the ability to reduce time to detection and time to remediation, are critical components of a modern email security lifecycle. This isn’t a sales pitch, and I’m not going to go deeply into how big a problem this is. Suffice it to say that most security professionals list email as either their number one or their number two attack vector, and their biggest concern. So immediately, one can address that by stepping back and looking at how and where email fits into the organization. Business process is both the simplest thing to change and often the slowest. It’s simple because you can write new policy and roll it out to people. It’s slow because changing behavior is one of the hardest things to do.
If you could, what would it look like? Well, obviously, not everyone has the same level of risk or access. The finance department might be able to effect wire transfers, or change billing information, inside of your ERP system. They are more likely to be a target than the janitorial staff. Both have real ability to cause harm to the business if they’re compromised, but from an email perspective, you want to think about working with the right teams, dealing with risk, and developing process based upon role and upon what that organizational asset – the financier, or the person in HR – has access to.
Secondly, although everyone’s going to read all of their email, it doesn’t mean that they should all act on it. And so things like sharing confidential information, or executing financial transactions, shouldn’t be done over email. And it’s a requirement that a security team build a practice that says if something really is urgent, here is how we are going to address it as a business, it’s not going to be an email from the CEO or the CFO, it’s going to be a phone call. Or it’s going to be a face to face meeting. Because email is too vulnerable a system for the most urgent of requests. So when we start thinking about this, we can write policy that informs and describes what email is good for, and how it should be used.
Then, as you start to think about building out the ability to enforce that policy, technology can step into the mix. And one of the things that technology is capable of doing is reinforcing the policy that you built. An example here of adding a real-time alert right into the body of a message, reminding an end user who gets an email about something like, for example, a wire transfer or an accounting update, that this is not how this is supposed to happen, and referring them either directly in that message, or through your company’s security resources, a SharePoint site, or an intranet, to the policy itself, and how to ask if they have questions. This changes the idea. Rather than saying that your users are the problem, you say our users are our opportunity to do things the right way, and we’re going to step back and inform them, or remind them, of what they’re supposed to do in the moment.
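The in-message alert just described – a policy reminder injected into the body before delivery – can be sketched simply. This is a toy version under stated assumptions: the banner text and styling are invented, and a real system would rewrite the HTML MIME part of the message rather than a bare string:

```python
import re

BANNER = ('<div style="background:#fff3cd;border:1px solid #e0a800;'
          'padding:8px;">Reminder: per company policy, wire transfer and '
          'billing changes are never handled over email. Questions? See '
          'the security intranet page.</div>')

def add_banner(html_body):
    """Prepend a policy-reminder banner to an HTML mail body, just
    inside the <body> tag when one exists."""
    m = re.search(r"<body[^>]*>", html_body, re.IGNORECASE)
    if m:
        return html_body[:m.end()] + BANNER + html_body[m.end():]
    return BANNER + html_body
```

The design point from the talk is that the banner fires only on mail matching the policy trigger (a wire transfer request, an accounting update), so the reminder lands in the moment rather than as abstract training.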
One of the analogies that I sometimes use when discussing this with people is that business process change is akin to the fire department coming into our children’s school and telling them that if there’s a fire in the house in the middle of the night, they should leave, not hide under the bed or run into the closet. It’s good advice. Having technology to back it up is like installing smoke detectors in the hallway. When there’s a fire, the alarm goes off and alerts everyone: this is the moment when all of that training has to come to bear on this problem. And you need both. If you just have the alarm – if you just have the technology – but you don’t tell people what to do, they’ll disregard it. You have the false positive problem, or the Goldilocks problem, where it’s too much information or too little, and they don’t know what to do. And if you only have business process, it’s too abstract – that’s part of why security awareness training on its own doesn’t stick. Married together, technology can back up process.
And that leads to the second point that we want to make, which is that users are a point of risk. They’re not the weakest link. A point of risk here means that an end user is somebody who is actually going to be on the forefront of the attack. They’re going to see it before your security team does most of the time. And so, giving users the ability to balance risk and agility is critical. And that’s not specific to email security. In any security domain, having the ability to give a user the opportunity to do the right thing, and protect them in the circumstance that they make the wrong decision, is how you get users to treat security as a part of their job, rather than a blocker to it.
So, we talked a lot about this idea of reinforcing good security hygiene. And this is often a combination of investing in technology to do things like create those banners, or sandbox URLs that might lead to a credential theft attack, but also giving users the ability to say, I’m not sure what to do here, help me – without embarrassing them, without chastising them for not hovering their mouse cursor over a link, for example, a check that doesn’t even make sense in the modern technology landscape. How do I hover my mouse cursor when I’m looking at mail in the airport, running to a meeting, on my mobile phone? There’s no mouse cursor to hover. So, you need to have some mechanism by which you can integrate user feedback, and some way for users to report that they’re not sure what to do.
Technology here should provide users with context in the moment to remind them: hey, this might be a little bit suspicious, or here’s someone who’s claiming to be your boss, but you don’t actually know this particular sender. Or there are links or attachments in this message that aren’t known bad – if it’s just bad, if it’s malware and you can identify it, sure, strip it out. But what if you don’t know? What if it’s statistically anomalous? What if it’s suspicious? Getting a user to slow down by giving them more information, but not getting in their way and blocking them, is a fundamental security technique. In the CASB market, we used to talk about a traffic circle in Denmark that was the subject of a huge study. Previously, before anyone had tried to mitigate the problem, it was one of the most dangerous traffic circles in the world, with one of the highest mortality rates. And so the initial response was to put up impediments to driving: speed bumps, rumble strips, stoplights. What happened was that this traffic circle, which was a required part of most people’s journeys to and from Copenhagen, would be blown through. People accelerated because they didn’t want to be stuck in traffic; they were annoyed by the time they got there. The problem got worse. What happened after that is where it has implications for security.
A study was conducted, and all of those control devices were removed. In fact, not only were the rumble strips and speed bumps and traffic lights removed, the lines dividing the road were painted over, everything was taken out. It was such a change, it was such a strange experience for those users that they slowed down. Mortality rate dropped to near zero. People fundamentally want to do the right thing, but you have to meet them where they are, and getting in their way, or slowing them down, often has a contradictory effect from the one that you actually intend. Here, having the ability for a user to get a message that might be dangerous, and giving them the ability to see what’s going on, to get them to open their eyes, and slow down, and look, is the same idea that we heard and were talking about six or seven years ago, when people-centric security was one of the watchwords for the CASB market. And it’s our belief that you can continue to do this inside of email security if you commit to giving users that training, and then backing it up with technology that gives them context around what might be going on.
And ultimately, it is that concept that technology plays a dual role. It is an enablement function that lets users work with their email – one that doesn’t drive them to shadow IT systems, doesn’t force them to use their personal email addresses to send a document because your server’s blocking all attachments, or convince them that they should go upload something to a filesharing site because they can’t possibly get it done over email. But it is also an enforcement tool – something that’s able to identify threats or risks in real time, and mitigate them. There are certainly still places in the email landscape where known bad attacks need to be removed. That’s baseline; that’s what most SEGs do well. But it’s the gray areas – the uncertain emails, the ones that might be threats, or might actually be legitimate mail from an executive who inadvertently sent from a personal email address but still needs to get the message across – where giving someone technology that lets them do their job changes the narrative.
Ultimately, technology can also simplify the work that we do as professionals. When an attack happens, think back to the earlier point about silver bullets: sometimes, somewhere, things are going to slip past your perimeter, or get into your mailboxes. How long does it take you to execute a search and find every instance of an attack so it can be mitigated? This is the other half of the security posture: the email security lifecycle doesn't stop at the mailbox. It has to extend long after delivery, and equip your team with the capabilities to redact or remove threats, even if those messages weren't threats when they were delivered.
For example, a phishing kit that gets activated two hours after an otherwise innocuous link in an email is allowed to reach your userbase is something that you have minutes to go and remediate, not hours or days. And that delta between detection and response, or remediation, is the window through which an attacker will get into your organization. That is one of the foundational approaches to using technology: it can make the task that you're doing easier or faster. That is the appropriate place for a technology investment, but it has to exist as part of a comprehensive approach, alongside process and thoughtfulness about how your people actually need to engage, without technology becoming a pure block on the business.
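The remediation loop described here – a link turning malicious after delivery, and a minutes-long window to pull it back – can be sketched in a few lines. This is an illustrative model only; `Message` and `remediate` are hypothetical names for the sake of the example, not GreatHorn's or Microsoft's actual API.

```python
# Illustrative sketch: post-delivery remediation. Given an index of
# already-delivered messages, find every copy containing a URL that was
# later flagged as malicious, so it can be pulled from mailboxes.
from dataclasses import dataclass

@dataclass
class Message:
    mailbox: str
    subject: str
    urls: list
    quarantined: bool = False

def remediate(delivered, bad_url):
    """Quarantine every delivered message that contains bad_url."""
    hits = [m for m in delivered if bad_url in m.urls]
    for m in hits:
        m.quarantined = True
    return hits

# Example: a link that was clean at delivery, flagged two hours later.
inbox = [
    Message("alice@example.com", "Invoice", ["https://evil.example/kit"]),
    Message("bob@example.com", "Lunch?", ["https://ok.example/menu"]),
]
flagged = remediate(inbox, "https://evil.example/kit")
```

The point of the sketch is the shape of the workflow: search and quarantine are one operation over delivered mail, not a separate e-discovery project.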
Where do we go from here? Obviously, [37:00] you need to first assess your environment. There are a lot of ways you can do this. There are existing cybersecurity frameworks that you can apply, and there are technologies that you can invest in that will tell you whether you set up mail authentication – SPF, DKIM, and DMARC – correctly. You also need to do something with that analysis. You need to set up a security program for email, and for messaging writ large. And then create an integrated response plan saying, for example: we're a financial institution, we occasionally have to communicate sensitive wire information over email; here's how it should happen, this is where it can come from, these are the users who are allowed to execute that kind of change, and here is how we are going to enforce that so that somebody can't levy an imposter attack against our business. And then back it up, where not only do you say that these are the ways that that kind of transaction will occur, but when something falls outside those bounds, or is unusual, the technology will step in, arm the user who's the recipient of that attack with the information, and empower them to ask questions of the security team in a meaningful and timely way to reduce your risk.
Briefly, that’s what GreatHorn does. We’re going to transition over to questions in just a moment, but if you’re grappling with these problems, especially in the cloud email world, our team has spent the last four years building out process and working with some of the world’s largest brands to implement technology, to allow for this kind of protection in a timely and responsive fashion. That’s all we’ll say about that. Why don’t we transition over, and take some of the questions that I know we’ve got.
TR: OK, great. Thank you so much Kevin, and we are going to open it up to questions from the audience. Let's start with this one. We're happy with our vendor FireEye, but deleting known bad emails from Office 365 requires us to search for parameters, then run PowerShell. How can you help us with this?
KO: Very common example, and we have multiple customers who are using existing technologies like FireEye, which is a great suite of tech, but then have to go back to doing the e-discovery and audit process through Office 365's security and compliance console. And then remediation is all PowerShell, and the work is often measured, at best, in minutes, and often in hours or days. This is a bit of a vendor-specific answer, but the GreatHorn platform is a cloud-native approach that plugs directly into an Office 365 or G Suite environment, and search and remediation take two clicks, and often only a matter of seconds, once somebody has the technology integrated. That integration takes all of five minutes, and it can sit alongside an existing email gateway, like a FireEye device. [40:00]
TR: OK. Oh great. So how do we fix the inability to easily see metadata on a tablet? Also, the inability to hover over a hyperlink to see the real URL?
KO: That's a great question. Unfortunately, one of the challenges of mobile devices is that they have highly heterogeneous application environments. I might have Apple Mail, I might have Outlook, I might be on a browser and using the web portal. It's hard for a security team to know exactly what that's going to be. MDM is somewhat helpful here for ensuring that you provision a core set of applications, but often even those core applications, like Outlook, don't give you that kind of information. What we do with GreatHorn is provide a single integrated piece of functionality for end users that's the same experience on the desktop or on mobile, called GreatHorn Reporter. It will highlight where destination links are going, flag them if they're suspicious or risky, and provide that level of contextual information. And you don't have to see all the email headers in order to parse it; we highlight those things that are anomalous or potentially suspicious, so a user can, with one click, get access to that information, no matter what device they're on.
TR: OK. How does GreatHorn determine to what extent the sender of an email has spoken to the recipient, or the recipient’s colleagues, before?
KO: Sure. So staying in the vendor-specific world for a bit, what we do is what we call GreatHorn's deep relationship analytics. It is a function of being integrated through those cloud APIs, so we are seeing, technologically, that mail exchange over time, and building a record of who inside of your company knows whom. That information doesn't require that we ever transfer your mail to our servers – we're not an MX gateway – but it gives us the data exhaust: the view that something has occurred, whether there was unidirectional or bidirectional communication, the ability to rank the frequency and the reciprocity of those different email exchanges, and use them to build a risk score, or a communication score. All of that happens behind the scenes through that API, between the in and out of a mailbox, and over time allows us to build upon that deep relationship analytics model, and see who knows whom, and how well.
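As a rough illustration of the idea (not GreatHorn's actual deep relationship analytics model), a communication score can be derived purely from message metadata, weighting both frequency and reciprocity of exchange:

```python
# Illustrative sketch: score a (sender -> recipient) relationship from
# mail metadata alone -- how often they exchange mail, and whether the
# communication is bidirectional. The weighting is an assumption.
from collections import Counter

def communication_score(log, sender, recipient):
    """log: list of (from_addr, to_addr) tuples observed over time."""
    counts = Counter(log)
    outbound = counts[(sender, recipient)]   # sender -> recipient
    inbound = counts[(recipient, sender)]    # replies the other way
    frequency = outbound + inbound
    if outbound and inbound:
        reciprocity = min(outbound, inbound) / max(outbound, inbound)
    else:
        reciprocity = 0.0
    # Frequent, bidirectional exchanges score highest.
    return frequency * (1 + reciprocity)

score = communication_score(
    [("ana@corp.example", "ben@corp.example")] * 3
    + [("ben@corp.example", "ana@corp.example")] * 2,
    "ana@corp.example", "ben@corp.example",
)
```

A first-time sender scores zero, a one-way bulk sender scores low, and a colleague you trade mail with daily scores high – which is the signal an impostor attack violates.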
TR: Thanks. If we as a company are already committed to a vendor like FireEye, what can you add?
KO: So, we've already had one question about an area where there is a need for that kind of additional support when it comes to incident response. One place where we often come into an existing secure email gateway customer's environment is by layering in that real-time search and remediation. In addition, those technologies are, generally speaking, bound to the perimeter, meaning that they have the ability to block or quarantine things, but often can't do very much once the message has reached a user. And so, out of necessity, they're tuned and configured to put more things into the quarantine, rather than fewer. That can delay someone from getting access to critical mail, and ultimately we see those organizations get pressure from the business group to turn down that level of filtering. What's nice about GreatHorn is that it can sit alongside a gateway and modify messages that do reach a user – for example, including a warning banner, or sandboxing URLs, or even providing a stripped-down view where things like attachments aren't there but are still available on request. So you don't have that binary good/bad, black/white, allow-or-quarantine model; you have a far more nuanced set of controls. And those can exist alongside the use of a legacy technology to simply blacklist known bad things.
TR: OK. I have struggled with my security awareness program on what’s the best approach for repeat clickers. What do you think is the best effective approach for these associates to learn?
KO: That's a great question. You know, one of the challenges is that people are going to click. We are almost instinctively trained to click on links when we see them in email, and all of those social engineering techniques that we talked about at the start of the presentation are part of why that happens. We take a different approach: take every link and run it through a rewrite sandbox, giving your administrative team the ability not only to control whether or not a link is accessible, but also to provide a combination of time-of-delivery and time-of-click protection, so that if someone is a repeat clicker, and they're always falling for it, there is a pause in their workflow where the technology is reinforcing that analysis. And rather than just blocking things, you can do something pretty sophisticated with that kind of tech. You can give that user an isolated sandbox view of the destination, and say hey, you're about to click on something, and it looks like an Office 365 login page, but it's not. And we're not going to let you through, and here's why – and reinforce that security awareness training at the moment of interaction by the end user. And over time, take that repeat clicker, and rather than just giving them those abstract security awareness training sessions, or those videos they have to watch when they fall for it, actually enforce it in the real world when they click on something. And start to get that recognition that oh, this is what's meant by what I saw in the awareness training that I did last week, last month, last quarter.
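The time-of-click protection described here starts with rewriting every link at delivery so it points at an inspection service, which can re-evaluate the destination when the user actually clicks. A minimal sketch, assuming a hypothetical redirector endpoint (`linkcheck.example.com` is a placeholder, not a real service):

```python
# Illustrative sketch of URL rewriting at time of delivery: every link
# in the message body is replaced with a pointer at an inspection
# service, so the real destination can be re-checked at time of click.
import re
from urllib.parse import quote

REWRITE_BASE = "https://linkcheck.example.com/r?u="  # hypothetical redirector

def rewrite_links(body):
    """Route every http(s) URL in body through the inspection redirector."""
    return re.sub(
        r"https?://[^\s\"'<>]+",
        lambda m: REWRITE_BASE + quote(m.group(0), safe=""),
        body,
    )

rewritten = rewrite_links("Reset your password: https://login.example.net/reset")
```

Because the original destination is percent-encoded into the rewritten URL, the redirector can decode it at click time, re-scan it, and either pass the user through or show the kind of interstitial warning described above.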
TR: If an attacker is using SMTP header rewriting on a trusted external domain, so it comes from their actual email address, how does GreatHorn detect it if the trusted party hasn't set up SPF and DKIM?
KO: Yeah, so when somebody hasn't set up SPF, DKIM, or DMARC, there are still factors – like the sending IP address, the actual mail agent that's being used, the geo-IP of the sender – that can be used to determine whether or not it is likely to be the same sender. Now, it's not categorical; somebody could be traveling and using a non-authenticated mail system. But it's still unusual, and having the ability to track what we call authentication drift, and looking for an SMTP rewrite that might suddenly be sending through a different mail relay than the one that's typically used, is a key factor for finding those kinds of threats or attacks. In addition, you're not bound to this idea of blocking mail as you would with a traditional SEG. So you can say hey, you just got this message from a trusted external sender, and we're going to let it through, but be careful: it looks a little unusual for how this person typically sends mail. And if that message then also says something like I need you to process a wire request for me, or are you in the office? Please call me right away – that kind of messaging, combined with that kind of authentication drift, or that unusual sending characteristic, is an opportunity to again pause the user, and have an interaction that says there's something happening here, maybe I should just slow down.
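Authentication drift, as described, boils down to comparing a message's sending infrastructure against the sender's observed history. A toy sketch (the fields and threshold logic are assumptions for illustration, not GreatHorn's implementation):

```python
# Illustrative sketch of "authentication drift": when SPF/DKIM are
# absent, compare a message's mail relay against the relays this
# sender has historically used, and flag an unexpected change.
from collections import defaultdict

class DriftDetector:
    def __init__(self):
        self.history = defaultdict(set)  # sender -> relays seen before

    def observe(self, sender, relay):
        """Record a relay this sender has legitimately used."""
        self.history[sender].add(relay)

    def is_drifting(self, sender, relay):
        """True when a known sender suddenly uses an unseen mail relay."""
        seen = self.history[sender]
        return bool(seen) and relay not in seen

d = DriftDetector()
d.observe("cfo@partner.example", "mail.partner.example")
suspicious = d.is_drifting("cfo@partner.example", "relay.bulk-mailer.example")
```

Note the asymmetry: a brand-new sender isn't "drifting" (there is no baseline), which is why drift works best combined with the relationship and content signals discussed earlier rather than as a blocking rule on its own.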
TR: OK. Well how important is it to implement DMARC and other authentication standards then?
KO: It is important enough that the federal government in the United States has made it a mandate for all of their agencies. And it is important enough that nearly all of the enterprise brands that we interact with are either underway, or are planning to do so in 2019. These are good security hygiene projects, and there are companies – very good companies like dmarcian, founded by some of the people who created the DMARC standard – who can help you if you're struggling with this. But it's also something you can do for free yourself by modifying DNS records and doing SPF and DKIM setup. In fact, we wrote a couple of blogs, now a few years back, on the GreatHorn.com site, under our blog: our five-minute guide to SPF and our five-minute guide to DKIM. DMARC takes a little bit longer than five minutes. But these are things you should just do, and you should implement them, and if you need help, there are companies that make that their core practice. It's something so essential from an overall internet security hygiene perspective that you really should be, if you haven't already, undertaking it in 2019.
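For reference, the records themselves are just DNS TXT entries. An illustrative, minimal set for a domain that sends all mail through its own MX hosts might look like the following (the selector name, public key, and report address are placeholders, and DMARC policies typically start at p=none while you monitor aggregate reports before tightening to quarantine or reject):

```
example.com.                  IN TXT "v=spf1 mx -all"
sel1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```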
TR: OK. How do we better protect workers who primarily access email on their phones?
KO: I'm certainly one of them, so I am sympathetic. Again, everything that we talked about here from a technology enforcement perspective is predicated on the concept of putting those alerts or warnings into the mail message itself. We're seeing the platform providers, Google and Microsoft, do a pretty good job inside of their native applications at putting some basic warning banners up, or moving things to the junk or spam folders, respectively. But a lot of that falls by the wayside if you're on a mobile device, or you're not using, say, Outlook, or Chrome if you're accessing Google. So being able to protect the part of your userbase that's on the road, or on a mobile device, means modifying the mail message itself: adding those alerts, those warning banners, that functional reminder of something they learned in training, without requiring them to have something additional, or a specific environment. We have done that now for the last four years, it's one of our most widely adopted pieces of technology, and that in-message rewrite or banner is a core part of the answer to that challenge.
TR: OK. Microsoft has come out with a lot of security features, aren’t they good enough to address the phishing issue?
KO: So it's great to see Microsoft doing this, but it's not best of breed. That sounds like a vendor answer, but if you go and look at the analyst community, and do a bit of digging on what they have to say about things like Exchange Online Protection, or Advanced Threat Protection, you'll find there are some risk areas that they're very good at. Malware, for instance, is generally speaking well protected by Microsoft. And then there are just huge gaps when it comes to end user engagement, the ability to build nuanced policy around the kinds of threats we're seeing, as well as massive deliverability challenges – 15, 20, 30-minute delays on mail reaching users. It really feels like, in some ways, they're ahead of the curve, and in other ways, they're bringing technology from the '90s to bear on this problem. So no is the short answer. It lacks the incident response capabilities, as a previous questioner noted about using PowerShell and the audit and recovery functionality inside of Exchange, and it is also just not well tuned to the needs of the modern cloud-enabled worker who's going to be demanding mail in real-time.
TR: OK. (laughter) Should we focus on our business processes first, or user education?
KO: Business process has to be defined first. You can't educate your users if you don't know what you want the outcome to be. If you don't have a policy that says we never do wire transfers over email, then those users have a very low likelihood of actually adhering to that policy, because you haven't written it down and told them. For business policy, you can look to the SANS Institute, you can look to NIST, you can look to a number of different third-party frameworks, many of them free, to start to understand what that process should be, and what the best practices are. I would actually argue that then jumping to training is the wrong move. Training is a last step. You can go from process definition to process enforcement with technology, and then reinforce it with training to test the efficacy of your existing programs, rather than using training as a security tool. Training is a compliance tool. Compliance is not security, and vice versa. So start with process, then enforce it, then enforce compliance and assessment through security awareness training.
TR: OK. Well you know, that’s going to have to be it for this session. Just a reminder that today’s session will be available for download tomorrow at SCmagazine.com under events. Thank you so much Kevin for being with us today, and thanks to all of you for tuning in.
KO: Thanks everyone.
Request a Demo
Like what you hear? Contact us to learn more about GreatHorn's sophisticated email security platform and how easy it is to set up for your Office 365 or G Suite environment.