Email Security Redefined: An Introduction to GreatHorn
Good afternoon everyone. And, welcome from snowy Boston. I’m Lorita Ba. I’m the vice president of marketing for GreatHorn, and I want to welcome you to our webinar today: “Email Security Redefined: An Introduction to GreatHorn.” This is our series of monthly webinars where we give you a little bit of an inside peek from a demonstration perspective on what the platform is, what it does, and how it works. And obviously, the goal here is to help answer some questions that you have, as well as provide you with an opportunity to see it in action.
Before we get started, I just wanted to cover a few logistics. You will be on mute. You can submit questions in the Q&A box, which is probably on the right side of your screen, at any time. We will be answering questions at the end of the webinar. The webinar is being recorded and will be available for replay, and the slides will be made available after the webinar as well. I’m joined by EJ Whaley, one of our solutions engineers here at GreatHorn, who will be conducting the demo, so you’ll hear from him in just a minute.
So let’s just go ahead and dive right into this, right? First, just a quick overview of who GreatHorn is and what we do. Our goal here at GreatHorn is really to protect the life cycle of email. What we mean by that is protecting email through every potential vulnerability point within the cycle, rather than just focusing on the detection piece. So we have some defense strategies that we’ll walk through, as well as integrated real-time incident response capabilities, to help really simplify your ability to manage your email security and to respond to incidents that make it through.
In addition, because of where GreatHorn came from, you know, a lot of our focus here is on not just the malware threat, and malicious links, and sort of more traditional threats, but the really pernicious and invasive social engineering-type threats, CEO impersonations, business email compromise, and most recently, we’ve seen a huge rise in credential theft attempts. So we’ll talk through how we combat some of those as well.
Quick overview: the reality is that we serve a lot of different customers of many, many different sizes, from Fortune 500s all the way down to smaller mid-market organizations, and across a number of different industries. The common thread across all of these is that they’re organizations that have entrusted either Google or Microsoft, through Office 365 or G Suite, with their email, and so they tend to also have cloud initiatives in general.
What you see on the quote on the left here, is that our customers really trust us to help protect them. And again, it’s not just about detection, it’s not just about blocking, but a full comprehensive protection strategy.
So why do we need this? It’s 2019, and the reality is that email security, and email itself, has been around for decades. You’d think that we’d have solved this problem by now, but it’s actually just getting quite a lot worse, and the reason for that, as you can see here, is that there’s a wide variety of threats making it into inboxes today. What you see here is just a small sampling of real attacks that we’ve seen and caught in our own environment, or in our customer environments, just in the past few months; I think the earliest one here is from November.
And you see a wide variety; you see, in your upper left, CEO spoofing. Kevin O’Brien is GreatHorn’s CEO. This was directed at one of our sales execs. And what you can see here is that there is no link and no attachment to help give away that this is a spoofing attempt, right? And we see that increasingly, which makes it significantly harder for organizations and email security platforms to detect, because there aren’t the kinds of markers that they’re accustomed to, and many of the methods to try to detect such things result in significantly high false positive rates.
You also see here brand impersonation: LogMeIn, Microsoft. These are common brands that people use and rely upon in the business environment, trying to convince people to go ahead and click through, either to a malicious site or to a credential theft situation, as you see in the bottom left. That, again, was a real credential theft site that was designed to look exactly like the Microsoft Office 365 login page. [05:00] And so it’s no surprise that, according to Verizon, one in 25 people will click on any given phishing attack.
The challenge here is that legacy email security vendors are simply failing, right? We did a survey mid-last year of 300 professionals; 200 of them were email security professionals, and 100 were average white-collar workers. What we found was a significant disparity between what email security professionals identified as making it into their inboxes and what white-collar professionals did. And what’s interesting about that is that, despite the millions upon millions of dollars that organizations are devoting to user awareness training, there’s still simply a low bar for understanding what is truly a threat versus what they consider to be just spam. Average white-collar workers, for the most part, characterize all unwanted email as spam, despite the fact that some of it poses quite a bit greater a threat than the rest.
The other thing to note about this graphic is that these numbers are after whatever email security provider they already have in place. So this is what’s making it to the inbox despite having a Secure Email Gateway, or despite having Microsoft Advanced Threat Protection in place within their organization. Sixty-four percent of email security professionals say they absolutely see impersonations — of executives or other internal employees, or of external parties like a business partner, customer, or vendor — making it through to inboxes, and those of course are among the most dangerous.
So why are legacy email defenses failing? Well, traditionally with a Secure Email Gateway, there’s an over-reliance on threat intelligence and a kind of binary attitude: things are either good or they are bad. The bad things get quarantined or trashed; the good things make it through to the end user’s inbox. And the problem with that, of course, is that when it comes to advanced email threats like phishing and impersonations, no email security solution can be 100 percent effective without also significantly impeding business operations and the agility of the business to do its job, right? They’ve tried to come up with some tools since then to address this, but this perimeter mindset really leads to a lack of robustness in how you protect your users from these threats from start to finish — not just preventing them from getting through, but also helping to prepare and protect users from things that make it past those initial defenses.
The other thing is that this threat-intel-heavy approach also requires that these threats have been seen before; that they are volumetric, effectively. And so while there are common threads that you can see in a lot of phishing attempts, they aren’t so universal that you can really detect on them at the top level, right? And so there needs to be a greater consideration of other threat-detection techniques to have a better, clearer understanding of what’s really a threat versus what’s normal communication.
So the result of all of this is that, according to that survey that I mentioned earlier, one-in-five security professionals indicate that they have to take direct remediation action on at least a weekly basis. What do I mean by that? That was anything from needing to run PowerShell scripts to try to take email out of their user inboxes, the threats that they had identified and that their email security tool had missed, or even shutting down compromised accounts, right? One in five on a weekly basis.
And so it’s no surprise. This is a sample customer — a real-life example, a customer of ours — and while the vast majority of our customers use us in place of other tools, as their sole email security provider, we do have a couple of customers that use us as part of a multi-layer defense that includes a Secure Email Gateway as well as Microsoft ATP, and that’s the case for this customer that you’re seeing here. And so what GreatHorn sees is everything that the Secure Email Gateway and ATP have missed. And over the course of 12 months, this 20,000-person organization had, as you can see here, a huge number of impersonations and a huge number of phishing attempts make it through, right? On top of that, for the very threats that you would expect those tools to be particularly good at — malicious links, malicious attachments — there were still some that made it through those defenses as well.
So, the challenge here is that despite all of the work that’s been put into these products, and despite the work that they have continued to try to put in to address some of these deficiencies, they are continuing to fail to protect their customers from the most pernicious threats.
So let’s talk about GreatHorn. Why do we think that we’re better, and why do our customers see such results? Well the first is that we are a cloud-native platform. [11:00] We protect Office 365 and G Suite through a cloud API, just their regular API deployment, and that deployment strategy both provides us with a far greater insight into the organization; so for instance, we see internal person-to-person, or employee-to-employee communication as well as all emails coming from external recipients, but we also are afforded more robust action, and the ability to not just limit ourselves to something like a quarantine, or a shunting-off of the email to something else, but the ability to start thinking about, what kind of user protection actions can we provide to help really service that broader email security life cycle that I mentioned earlier?
I mentioned earlier that we do protect against all manner of threats — impersonations, credential theft, ransomware and malware — and that ability to see internal mail also helps us identify account takeover attempts quite a lot more quickly.
Now, let’s talk about this email life cycle I’ve mentioned a few times. From our perspective, email security needs to be treated as a continuous improvement cycle, right? It’s not just a question of, let’s block all threats to users. Of course we’re going to do as much of that as possible. But given the sophistication of these threats and the need to balance business agility with security, there’s a constant source of tension around how tightly to turn those controls, right? And so GreatHorn was built with the idea of taking a look at the email life cycle — from the moment a message is delivered to the moment it’s deleted from your environment — and identifying where the most dangerous points of vulnerability are for your employees and for your organization.
And so what you see here under Advanced Threat Defense and Email Fraud Defense is that we use very sophisticated threat detection techniques. Of course we use threat intelligence, both our own as well as third-party threat intelligence. But we combine that with an understanding and a recognition of the communication patterns that your organization is accustomed to. What does “normal” look like, in terms of both how an individual within your organization communicates and with whom, as well as organizationally? So that’s a combination of deep relationship analytics — understanding what the bidirectional communication is — and also an understanding of the technical fingerprint of organizations: having seen this organization before, what does it typically look like? This is where, for instance, authentication might come into play, and it’s not simply an “oh, well, you failed DKIM, DMARC, or SPF.” Frankly, if that were the barrier, quite a lot of mail wouldn’t get through. But rather: what does your typical authentication look like, and does this differ from that? Is that a concern, right?
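To make the relationship-analytics idea concrete, here is a deliberately tiny sketch of scoring sender familiarity from prior bidirectional traffic. The class, the weighting, and the addresses are illustrative assumptions for this sketch, not GreatHorn’s actual model:

```python
from collections import defaultdict

class RelationshipGraph:
    """Toy model: count who has mailed whom, in each direction."""
    def __init__(self):
        # (sender, recipient) -> message count in that direction
        self.sent = defaultdict(int)

    def record(self, sender: str, recipient: str):
        self.sent[(sender, recipient)] += 1

    def familiarity(self, sender: str, recipient: str) -> float:
        """Bidirectional traffic counts for more than one-way traffic:
        a sender the recipient has also replied to is far more familiar."""
        inbound = self.sent[(sender, recipient)]
        outbound = self.sent[(recipient, sender)]
        return inbound + 2 * outbound

g = RelationshipGraph()
g.record("ceo@example.com", "exec@example.com")   # CEO mailed the exec
g.record("exec@example.com", "ceo@example.com")   # exec replied

print(g.familiarity("ceo@example.com", "exec@example.com"))  # 3
print(g.familiarity("ceo@example.co", "exec@example.com"))   # 0: never seen
```

A real system would weigh recency, volume, and org-level history too; the point of the sketch is only that a first-time sender scores very differently from an established correspondent.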
In addition, because we are a cloud-based tool, we’re able to identify emergent threats quite quickly, because there’s effectively a database of all of our customers’ metadata — the threats that are coming through and the threat patterns that we’re seeing — and we’re able to apply that knowledge across our entire client base as an emerging threat appears.
In addition to that, we have substantial fraud defense capabilities around impersonation and business services spoofing — understanding, based on our deep expertise in the subject, what to be looking for: the anomalies, both to typical communication patterns and to typical email patterns in general, that tend to indicate that this is an email that isn’t to be trusted.
But then what we do is work with your team to identify the risk that you’re willing to bear, right, and we provide these additional tools in the form of end-user protection to determine how we inform your users of potential threats. The known threats, the dangerous threats, we will absolutely be blocking from end users. But we also provide context-based understanding to your users. So unlike security awareness training, which is somewhat theoretical and point-in-time based, the end-user protection that we’re providing is really focused on informing your end user of the anomalies that we’re seeing — warning them, for instance, that your CEO doesn’t typically email you from this address; we’ve never seen this email address before; you should probably confirm before responding to it. Or alternatively, reminding your end users of a business process that you have in place — you know, wire transfers should never be authorized over email; they need a verbal confirmation as well. So whatever processes you have in place, we can reinforce in this manner, but we can also identify those anomalies and say, look, it’s possible that this is legitimate, but there are a number of indicators here that suggest that it is not.
In addition to that, we have robust link protection capabilities that not only give administrators an understanding of whether or not a user has interacted with a suspicious email, but, for links that we deem suspicious, provide the user with a warning page that gives them a live preview of the destination site, so that they can compare whether or not it is what they expect it to be, right? And EJ’s going to walk through a lot of these in detail in a minute.
In addition to that, EJ will also walk through a relatively new product that we have, GreatHorn Reporter, which is an Outlook plugin that helps provide even greater context to users — helping them understand whether or not they’ve communicated with a given sender before, whether or not there are authentication issues with the message, and whether or not there are suspicious links within the email.
Finally, we have real-time incident response, and this is a combination of a deep forensic capability — the ability to understand, hey, of the 30 people that received this mail, this particular threat, what happened — and a robust search engine that can combine different factors; it’s not a simple keyword-based search, for instance. So I can use this robust search engine to identify everybody that has received this email, and I can of course bulk-remove it from inboxes with just two clicks. But I can also, if there are links present, for example, identify those users that have actually engaged with the email, so that I can target my incident response much more carefully and easily, and stop the potential threat much faster.
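The search-then-remediate workflow just described can be modeled in miniature. Everything here — the `Message` type, the field names, the search criteria — is a hypothetical stand-in for whatever a real mailbox API exposes, not GreatHorn’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Message:
    recipient: str
    subject: str
    sender_domain: str
    clicked_links: bool = False
    removed: bool = False

def search(mailboxes, *, subject_contains=None, sender_domain=None):
    """Combine multiple criteria, rather than a single keyword match."""
    hits = []
    for msg in mailboxes:
        if subject_contains and subject_contains not in msg.subject:
            continue
        if sender_domain and msg.sender_domain != sender_domain:
            continue
        hits.append(msg)
    return hits

def bulk_remove(messages):
    """Mark every matched copy removed (in practice, an API call per mailbox)."""
    for msg in messages:
        msg.removed = True
    return len(messages)

inbox = [
    Message("a@corp.example", "Board meeting reschedule", "adp.cm", clicked_links=True),
    Message("b@corp.example", "Board meeting reschedule", "adp.cm"),
    Message("c@corp.example", "Lunch?", "corp.example"),
]

threat = search(inbox, subject_contains="Board meeting", sender_domain="adp.cm")
print(bulk_remove(threat))                               # 2 copies removed
print([m.recipient for m in threat if m.clicked_links])  # ['a@corp.example']
```

The last line is the part that narrows response: of everyone who received the message, only those who actually engaged with a link need credential resets or follow-up.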
So what does this mean in general? It means that we’re providing you with a multi-layer defense strategy, right? So this is a real attack that came in; we saw this in about seven percent of our customers. This happened, I think, two weeks ago, and the senior officers within an organization were targeted. We actually received the attack as well; our CEO and CTO received it. In other organizations, we saw the VP of finance, the CFO, and definitely the CEO receive this attack. And it looked like a board meeting reschedule. The email was designed to look like a Doodle poll — if you’re not familiar with Doodle, it’s a pretty common scheduling app designed to help figure out meeting times. And there was nothing in it giving it away to the recipient; it looked like it came from “meetings,” right? But the email address was their own email address. So interestingly, on an iPhone or another mobile device, it might show up as a note to yourself — I think that was in the native Microsoft Outlook iOS app. And so there were a number of factors here that might cause someone to easily click on the link and try to see what was going on: hey, there’s a board meeting I need to reschedule; I need to tell them my schedule.
In our case, of course, we noticed that there were definite issues with this email, and so we slapped a big red banner on it that said, hey, there are unexpected authentication issues; it looks like this email is trying to impersonate a greathorn.com sender. There are more details that you can click through to, as you see there, via the “Why am I seeing this?” function. If someone were to click the link, they’d get sent to our suspicious link page — what you see on the right — which provides a quick, live preview of the destination site. You can see it looks exactly like an Office 365 login page, complete with the person’s email address already pre-populated. But right above it they see the destination URL, which raises quite a lot of concern, right? It does not look like the traditional Microsoft Office 365 URL you all know. And then they see, on the right, some additional details on why they’re seeing this and what they need to be looking out for. They can still go through this page to the destination site, but in that case an administrator would be able to see immediately that they did so, as well as track when they did so.
So this is a perfect example of multiple layers of defense: educating the user about something that could be real — you don’t want to stop the pace of business, right — while surfacing the concerns and issues that should cause the user to stop and reconsider whether or not this is what it said it was.
And so with that, I’m going to go ahead and turn it over to EJ so that he can really walk through with you what the platform looks like, and answer any questions that you have as well. So EJ, I’ve just turned over control, so go ahead and start sharing your screen.
All right, thanks Lorita.
So, as far as the time today that’ll be spent in the demo here, we’re going to do a couple of different things. I think what’s important is not only what’s happening from an administrator perspective, but also what’s happening from an end-user perspective; a lot of what Lorita talked about in the early part of the presentation is a lot of what we hear back from folks, in terms of what their users are seeing, why they’re struggling with it, et cetera.
This example here is actually really illustrative of part of what we saw when we did that survey that we had performed, right? Folks don’t necessarily realize what they’re getting, and I think there are oftentimes two reasons behind that. One, when the attacks are more sophisticated, an end user might not even realize that he or she has received a phishing email because it just looks so compelling; it looks so true to form. Occasionally, there might be one little bit about the email that’s off that someone might catch; for instance, a grammatical mistake within a sentence, but otherwise, the rest of the email looks legitimate. It looks like it’s coming from an employee that they commonly work with internally; it looks like it’s coming from a third-party vendor that they commonly communicate with. Maybe it looks like it’s coming from Amazon, or FedEx, or DocuSign, or Microsoft, or Google, things like that. So when the user receives those, they’re not even necessarily in their mind saying, well that’s a phishing email. In many cases, they’re probably simply looking at it as, that’s an email that I should be receiving.
This particular example here goes to illustrate that point even further. This particular message, for all intents and purposes, appears to be from a user within my organization. So right now, I’m logged into the Outlook Web application as Emily. Emily is the CEO at Flying Deliveries, and Emily has gotten this message from an employee that she works with — that’s Aaron here. A lot of organizations say, well, we go through training and awareness, so users know to hover over the sender; they know to hover over URLs. In this case, if I go ahead and do that, you’re going to notice that once I hover over the sender, the information that comes up appears to show me that this is, in fact, from Aaron. I see his title; I see the group that he’s a member of there. I see his addresses, his phone numbers. I see some previous mail history from him, right? But this particular message is actually a spoof. Short of a user having any idea how to pull up the message details and parse them in real time, it’s going to be pretty difficult for them to identify this as an impersonation; again, the way that Microsoft handled this more or less tells the user, hey, you can put your guard down; this is who we think it is.
From an analysis perspective, GreatHorn is doing a handful of things to identify this as an impersonation attempt. In terms of this level of analysis here, we’re going to provide this regardless of whether we’re flagging a message via policy or not. What that means is that, [26:00] if you’re ever in a situation where perhaps a user’s reporting a message to you, saying, hey I think this might be a phishing attempt; can you take a look? Even in those instances where it’s a perfectly legitimate message, and there’s some sort of confusion, you can come in and take a look at this analysis and say, well what does GreatHorn think about it?
In this particular case though, you can see that we’ve actually flagged this under a couple of different policies, one being the “Direct Spoofs” policy, the other being the “Auth Risks” policy. Those represent two of a handful of out-of-the-box policies that are looking at different types of fraud attempts, business email compromise attempts, phishing attempts, et cetera. The “Direct Spoof” piece is looking at the routing information that we see, and the “Auth Risks” piece is going to be looking at the authentication that we’re looking at here.
Now, here’s what’s important: when it comes to the data analysis that we’re performing, you can see up here we have the full header available. We reformat this for administrators to make this a little bit easier if there’s ever an instance where they want to go in and take a look at this. But the idea is that, we’re pulling from that header the interesting points for us from an analysis perspective, and that ultimately looks like this on a per-email basis.
And one of the questions we oftentimes get, though, is: well, what is different about this; why is this different than a gateway? How is this any different than me, as an individual, going through and pulling the header apart myself? The difference is in the automation around the analysis, and the dataset that we have access to. What that ends up looking like is, when we perform our analysis, you’ll note here that we said, hey, there’s something a little bit amiss about the return path, and there’s something a little bit amiss about DKIM. Those two points in and of themselves aren’t necessarily the end-all be-all of our analysis, right? We see that organizations send from third-party tools all the time; it’s not that uncommon. And it’s also not that uncommon that, when messages are sent from those third-party tools, DKIM is signed by the third-party tool and not the organization’s domain. From an authentication perspective, we might also see a number of different issues if you’ve accidentally misconfigured your SPF record, or if the third party sending to you has not properly configured its SPF record to include the third-party tool. SPF might fail or soft-fail in those instances, right?
So, if you just look at it from a more static perspective of saying, hey, the return path doesn’t match the domain, or we’re having some sort of undesirable result from an authentication perspective, that can really leave you unaware of the rest of what’s going on within a message. Instances like that make it more guesswork than actual data-specific analysis.
Where GreatHorn differs is that we are looking at the entirety of the dataset that we have access to, to say: when we see mail from this domain, or when we see mail from this sender, what does it look like? Is there typically a reply-to there? What do we typically see from a return path perspective? What do we typically see from an IP perspective? What are the authentication results? We don’t want to be putting administrators in a situation where they’re having to go through and make corrections to the system solely based upon what the authentication looks like for a third party, or what third-party tools a third party might be sending from. We also don’t want to prevent mail flow for end users based on those same factors. Instead, what’s taking place is that we’re pointing out that, hey, with this particular domain and this return path, it’s not that they don’t match; it’s that, based upon the entirety of the dataset that we’ve seen within this customer’s environment, as well as across the entirety of our dataset, we would not commonly associate this particular return path with this particular domain or this particular address. Similarly, when it comes to authentication, we would expect SPF to pass, but we certainly wouldn’t expect DKIM to be signed by a domain other than the sending domain.
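The baseline-driven idea here can be sketched as follows. The counters, domains, and the 5 percent threshold are made-up illustrations standing in for whatever the real analysis uses; the point is comparing against history rather than a static match rule:

```python
from collections import Counter

class SenderProfile:
    """Tracks what mail from one sending domain has historically looked like."""
    def __init__(self):
        self.return_path_domains = Counter()
        self.dkim_domains = Counter()
        self.total = 0

    def observe(self, return_path_domain: str, dkim_domain: str):
        self.return_path_domains[return_path_domain] += 1
        self.dkim_domains[dkim_domain] += 1
        self.total += 1

    def anomalies(self, return_path_domain, dkim_domain, min_share=0.05):
        """Flag values rarely or never associated with this sender,
        rather than flagging every mismatched return path outright."""
        flags = []
        if self.return_path_domains[return_path_domain] < min_share * self.total:
            flags.append("unusual return path")
        if self.dkim_domains[dkim_domain] < min_share * self.total:
            flags.append("unusual DKIM signing domain")
        return flags

profile = SenderProfile()
for _ in range(100):  # history: this sender always mails via a known ESP
    profile.observe("mailer.example.net", "mailer.example.net")

print(profile.anomalies("mailer.example.net", "mailer.example.net"))  # []
print(profile.anomalies("bulk.attacker.example", "attacker.example"))
# ['unusual return path', 'unusual DKIM signing domain']
```

Note that the third-party mailer never gets flagged, even though its return path and DKIM domain don’t match the sending domain — exactly the false-positive case a static rule would trip on.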
This philosophy is what we’re looking at across the board, and it’s how we really apply the different policy threats. It’s not just about what’s present in the individual message at the point of analysis; it’s taking that information and comparing it up against what we’ve seen previously in order to make more specific determinations, more exact determinations on what we should be doing there.
So we covered the Direct Spoofs policy; we covered the Auth Risks policy. We have a handful of other out-of-the-box policies that are again looking at very specific types of business email compromise and broad attempts. We have domain look-alikes, the idea being that when we’re looking at mail — both potentially from your sending domains as well as from third-party sending domains — we’re looking for any sort of anomalies there. In this case, what we’re noting is that in the display name we have an address that’s spelled correctly — “flyingdeliveries” — but in the actual address field there’s an additional “l.” So this particular policy is going to be looking at additional letters, letters that might be missing, and numbers in place of letters. We’ll also look at the top-level domains that are used; in this case, we can see that we’re seeing mail from “adp.cm” rather than — as we say from an analysis perspective, it’s similar to — “adp.com.” This is also going to be applicable if you have a .com and someone goes out and registers a .net or a .io address; for an end user, whether they’re looking or not, it’s very easy to convince yourself that that could be normal, just given the propensity for organizations to go out and register new domains. It’s not uncommon to see a new domain being utilized at any level.
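The look-alike patterns described here — an added letter, a missing letter, a swapped or truncated TLD — can be approximated with a simple edit-distance comparison. This is a minimal sketch with illustrative domains and a deliberately crude threshold, not the product’s actual detection logic:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """Flag domains within one edit of a trusted domain (but not equal),
    or sharing a trusted base name under a different TLD."""
    base = sender_domain.rsplit(".", 1)[0]
    for trusted in trusted_domains:
        if sender_domain == trusted:
            return False  # exact match: not a look-alike
        if edit_distance(sender_domain, trusted) <= 1:
            return True   # e.g. an added letter, or a dropped one
        if base == trusted.rsplit(".", 1)[0]:
            return True   # same name, different TLD (.cm vs .com, .io vs .com)
    return False

# The two examples from the talk: an extra "l", and a truncated TLD
print(is_lookalike("flyingdelliveries.com", ["flyingdeliveries.com"]))  # True
print(is_lookalike("adp.cm", ["adp.com"]))                              # True
print(is_lookalike("adp.com", ["adp.com"]))                             # False
```

Production systems also consider homoglyphs, registration age, and which domains a given mailbox actually corresponds with, but the edit-distance core is the same intuition.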
One of the things that people commonly ask us is, hey, you know, we got this message; we’re not sure if it’s from Microsoft or not. Microsoft is notorious for utilizing different addresses — not necessarily different domains, but different addresses — constantly, for the same services, when they’re sending out notifications or regular communications about billing. That can keep people on their toes in terms of saying, well, I haven’t seen that one before, but it appears to be from Microsoft — how do I know? The idea here is that we can keep track of all that just based upon the routine scanning and analysis the platform does.
We’re also looking at name spoofs. Name spoofing is one of the most common attacks, and the idea behind this policy is that we’re looking at instances where a particular sender doesn’t typically send from this address. Whether it’s your executives, your finance team, accounts payable, or your controller, there are a number of folks in the organization who have access either to money or to records that an attacker would want — whether we’re talking about W-2s, or about convincing the controller that I’m the CEO, I have a new bank account, and you need to change my routing information — it could be any number of things, right? Given how simple it is to register a new email address — whether through Gmail, through Yahoo, or by creating my own domain — and then just alter the display name as you attack different organizations, or even different individuals within the same organization, the problem ends up being that for an end user, even if you’re telling them, hey, scroll over, take a look, it can be really difficult to know every single personal email address, for instance, of every employee at your organization. All right, so if we imagine this is Stephen McWilliams — let’s just say Stephen is our head of HR — I might look at this and say, OK, well, they’re the head of HR; that seems like a perfectly reasonable request, and that very well could be Stephen’s personal email address. It’s just nearly impossible for an end user to keep track of all that, and that’s the whole idea behind this particular policy. The system automates that for you; it detects it on its own, and then we can take an automated response option.
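A toy version of the check this policy automates might look like the following. The directory data and the address for “Stephen McWilliams” are hypothetical examples, not real records:

```python
# Hypothetical sender directory: display name -> addresses previously
# observed for that person (in practice, learned from mail history).
KNOWN_SENDERS = {
    "Stephen McWilliams": {"smcwilliams@flyingdeliveries.com"},
}

def is_name_spoof(display_name: str, from_address: str) -> bool:
    """True when a familiar display name arrives from an address we have
    never associated with that person before."""
    known = KNOWN_SENDERS.get(display_name)
    if known is None:
        return False  # unfamiliar name: left to other policies
    return from_address.lower() not in known

print(is_name_spoof("Stephen McWilliams",
                    "smcwilliams@flyingdeliveries.com"))  # False: known address
print(is_name_spoof("Stephen McWilliams",
                    "steve.mcw.hr@gmail.com"))            # True: name spoof
```

This is exactly the lookup a user can’t realistically do in their head for every colleague’s personal address, which is why automating it matters.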
We also have our malicious payloads policies, so if there’s a malicious attachment or a malicious URL that we identify within the message, we’re going to flag that under these policies. Then we have a handful of other policies that we work with clients to configure in the early going — in testing, not even in production — to be specific to what it actually is that they’re looking to accomplish on a couple of fronts.
We’ll pivot to those in a moment. But before we do that, I think what’s important to note here is: we have all these policies; that’s great. We go through, we do our detection, but what happens once a message has been detected? Over here, we have the management column, and you’ll notice that we have these icons below it. Each of the policies has its own management screen. The idea is ultimately that we’re able to come in and choose the policy actions that we think best fit the policy in question. Where this becomes important, and different from the standard gateway approach and some of the other approaches out there, is that we don’t simply have to quarantine things; we have other options. One of the things that I think we in information security as a whole don’t do a great job of is communicating information to our end users that is ultimately actionable by them. If there’s an instance where we know something is bad and we don’t want it to reach the end user, quarantining that message makes a lot of sense. But the reality is that, based upon the way a lot of other tools go about detection, there’s not a 100 percent success rate in terms of what ends up in the quarantine folder. Depending upon which tool you’re looking at, and what that quarantine workflow looks like, there are going to be instances where messages end up in that quarantine folder that shouldn’t. And if the end users have access to those, they’re going to get into the habit of going in there and rooting around.
The same applies when you talk about something like ATP for Microsoft, or some of what Google does natively with G Suite, where things get placed in a spam folder or a junk folder. It only takes one instance of one legitimate message for one person ending up in there, where they have to go in and pull it out, and suddenly they tell themselves, I always need to check junk; I always need to check spam. Then they start telling their colleagues, and their colleagues get in the habit of doing the same thing. And to go back to the example that I gave previously, some of these truly malicious emails look really legitimate, and it can be very easy to convince yourself at that point: well hey, that invoice request that I was expecting from that vendor we work with ended up in junk, so this Amazon request could be the same thing; this DocuSign request could be the same thing.
So when a tool forces administrators’ hands and says, this is the way to do things, you either quarantine a message or move it to the junk folder or the spam folder, we don’t necessarily agree with that approach. And on the flipside, we also have this mindset of, well, what can we tell end users, what can we give to end users, that will make their experience of interacting with email more scalable, and just safer overall, right? So in the case of this particular policy, we’re looking at domain look-alikes. More often than not, if we identify a domain look-alike, we probably want to quarantine that message; that’s one of those policies that rises to the level of, that’s probably not something an end user should be seeing.
That said, we have a number of other options. We can move the message to the trash; we can move it to a specific folder. So again, maybe you’ve trained your users really well, and they know what the spam folder looks like; they know what the junk folder looks like. They know how to handle things within that. You can say, you know what, we’re comfortable moving it to that folder, or we’re comfortable moving it to a folder that GreatHorn creates for us. You can add labels or categories, obviously dependent upon whether you’re in G Suite, or whether you’re in Office 365. You can archive.
One of the pieces that I am personally a big fan of, and our clients are a big fan of, is the email banner, and this speaks directly to the idea of, how exactly can we inform end users, and how can we get them involved with the email security process in a better way than we are today? How do we reduce the administrative workload without increasing risk significantly?
I’m going to use the “Wire Transfer Content” policy as the example here. We come in and take a look; you’ll note that we can scope the policy down: who does this policy apply to? Do we want it to apply to everybody? Should it apply to just a specific user? Should it apply to a group, whether that’s in Active Directory or a group that you’ve created within Google itself? Then you start to look at different factors, right? In this case, with this particular policy, what I’m concerned about is a message that is ultimately from an external address, from a sender with whom our recipient doesn’t have a relationship, and that contains at least one of these keywords or key phrases related to wire transfers. If we come down and take a look, we can see here that I’m going to add a banner, right? So if we think about the factors in play here, there’s nothing about the content that’s malicious, right? It’s not a known bad sender; it’s not a known bad IP; it’s not a known bad domain. There’s nothing malicious within the keywords, right? But the keywords are interesting to us in the sense that, well hey, we’re receiving a wire transfer request. And because we’re tracking the sender-recipient relationship, something that’s largely absent from most of the other tools in the market, we can also tell the user what exactly it is that they’re looking at.
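To make the logic of that policy concrete, here is a minimal sketch of the kind of rule evaluation being described: an external sender, plus no prior relationship, plus at least one wire-transfer keyword, triggers a banner rather than a quarantine. All names, keywords, and thresholds here are illustrative assumptions, not GreatHorn’s actual implementation.

```python
# Hypothetical sketch of a "Wire Transfer Content" style policy.
# The content isn't known-bad, so a match warns (banners) instead of
# quarantining. Keyword list and function names are illustrative.

WIRE_KEYWORDS = {"wire transfer", "wire instructions", "routing number"}

def evaluate_wire_policy(sender_domain, internal_domains,
                         messages_exchanged, body_text):
    """Return the action this policy would take, or None if no match."""
    is_external = sender_domain not in internal_domains
    no_relationship = messages_exchanged == 0
    text = body_text.lower()
    has_keyword = any(kw in text for kw in WIRE_KEYWORDS)
    if is_external and no_relationship and has_keyword:
        return "banner"  # warn the user; nothing here is known-malicious
    return None

print(evaluate_wire_policy("vendor.example", {"greathorn.com"},
                           0, "Please update our wire transfer details."))
```

A message from an established correspondent, or one without the keywords, simply falls through with no action, which is exactly why the banner stays rare enough for users to take seriously.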
So, what I mean by that is you can see here, we have this particular message already ready to go. We can change the language on this, depending upon how you as an administrator see fit to communicate to the end user.
But what this ultimately looks like in the inbox is something like this. You’ll note that this particular banner is only going to apply in the instances where all the policy criteria are met: I don’t have a significant relationship with the sender, and the message contains at least one of these wire transfer keywords or key phrases, and in this case we have a couple. I’m also able to give the end user a little bit of a warning in terms of, hey, this is the company policy around this, right? This differs significantly from the templates that a lot of folks push out within Office 365, where, at the bottom, there’s a really long-winded message about, “Wire transfer threats are real. Please do these things: hover over any URLs. Don’t download attachments from senders that you don’t know,” et cetera, et cetera. Those end up on every single message, or on every single message from an external sender. People start to ignore those very quickly; it’s almost as if they’re not there.
The benefit of the banner is that it’s only going to apply in the instances that you want it to apply, and it’s only going to apply to the users you want the banner to apply to; it doesn’t have to apply to everybody. If you have an entire group of the organization that can’t access everyone else’s W-2s, or they can’t execute wire transfers, you don’t necessarily need to go through the hassle of informing all of them, because they can’t do anything anyways; this can be very specific to again, a specific user, or a specific group of users.
Continuing along the line of how else we can involve end users in phishing detection, and in just being more aware in general, we have an instance like this here where we’re going to go ahead and look at the message: hey, it looks like a legitimate alert from Microsoft. What attackers are very fond of doing, if you look at the bottom-left-hand corner where you can see my various URLs, is putting in legitimate URLs that go back to Microsoft, go back to DocuSign, et cetera, but oftentimes the one URL that we’re most concerned about is the one that says click here; click here to download; click here to view; please sign in; log in here. What we do in those instances, first and foremost, is perform analysis on URLs at the time of ingest. We’re looking both at known bad and known good, and we’ll have that classifier within the dashboard itself, and we’re also looking at URLs that we deem to be suspicious. Those suspicious URLs are ones that we don’t have on a threat intel list telling us, hey, these are bad, but that are also not part of the intel telling us, hey, that’s a known good website based upon the data we have about it.
What’s going to happen is, when I click here, we’re going to perform time-of-click analysis: do we have an updated threat intel match? In the absence of that match, the user will be brought to this page, and again, this only happens for URLs that are not verifiably known good or known bad. The page tells them why they’re seeing this and what they should do. We’re going to have a real-time preview of the destination itself, and unfortunately, it looks like this destination has been taken down, so we might not get the actual preview here, but the idea is that there’s going to be a destination preview as well as the destination URL. Those two factors become very important, because again, if you think about where we are here, if an end user looks at this and it says, sign in to the customer portal, they don’t necessarily know where the URL is going. They might hover over it and say, I don’t know; I think it’s OK, or maybe they don’t even look at all; they just click. And then they’re brought to the destination, and the destination looks like a passable login page, and they just log in and they’re off to the races.
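The ingest-time triage and time-of-click flow just described can be sketched as a simple three-way decision: a fresh known-bad match blocks, a known-good match passes straight through, and everything else gets the interstitial preview page. The function and list names below are hypothetical, purely to illustrate the branching.

```python
# Illustrative sketch of time-of-click URL handling. Real systems would
# query live threat intel; here the intel is just two in-memory sets.

def time_of_click_action(url, known_bad, known_good):
    if url in known_bad:
        return "block"      # hard warning page, no destination preview
    if url in known_good:
        return "redirect"   # pass the user straight to the destination
    return "preview"        # interstitial with destination preview + URL

known_bad = {"http://phish.example/login"}
known_good = {"https://portal.microsoft.com/"}

print(time_of_click_action("http://unknown.example/signin",
                           known_bad, known_good))
```

The point of the middle bucket is exactly what the transcript stresses: a suspicious URL is not provably bad, so instead of a binary allow/deny, the user gets context and a preview to make the judgment themselves.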
From a similar perspective, when we do get a positive threat intel match at time of click, we get this experience instead. “Why am I seeing this?” “What should I do?” But you’ll note that we don’t give the end user the same level of context around this.
All of that interaction is actually tracked within the dashboard. You can go in and change the default settings for URLs: if there are specific domains that you want to block, or specific domains or URLs that you always want to warn on or block, we have the ability to do that. We also have the ability to say, how many users have seen that message? How many users have clicked on the URL? All of that gets tracked, and it’s very easy to go in and alter those settings from an administrative perspective.
Finishing up in the inbox here, the last piece is going to be the reporter, as Lorita mentioned. You’ll notice here that I have this add-in, our shield logo. This applies within Outlook on the web, within the desktop application, and within the mobile application. We can go ahead and leave this pinned, or I can pull it up, and the idea is that it’s going to give me some information about who the sender is: do my colleagues know him; have I communicated with this person before, right? [46:00] I always like to poke a little bit of fun at our CEO when it comes to this, where he’ll be having conversations, and he’ll claim, “I have never spoken to that person before,” when in reality, they’ve spoken a half-dozen times via email, and they’ve met twice in person. The idea is that it can be tough for someone to remember, have I talked to this person before? And at times, when you go to search for those messages within Office 365 or G Suite, it’s not that easy to find a message from a particular sender. So we tell you: how well do I know this sender? How well do my colleagues know this sender? Is this likely from the sender it purports to be? And if there are any URLs in here that we deem to be suspicious, we’re going to give you a little bit of info on those as well.
As an end user, I have the ability to mark this as spam and block the sender. I can also mark it as spam without blocking the sender if I so choose. This ends up becoming an individual blacklist rather than a universal blacklist. This spam report will also go back to the admin team so they can take a look, maybe they do want to add it to a more universal blacklist; maybe they don’t.
Additionally, we have the ability to report as phish. One of the common concerns that we hear from organizations is, hey, when something gets through, we require that our end users forward the message. It’s great because we have eyes and ears everywhere, but the difficulty is that we lose the original header; we have to go through the compliance center, or jump into G Suite and run a search on that message to find the original. When a user reports this as phish, it’s not forwarding the message; it ends up in the admin console under this particular section here, the phish reports. We give you all the original header data, and then you can take action on that message on an individual basis.
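The reason the original headers matter is that fields like Received, Return-Path, and Authentication-Results are rewritten when a user forwards a message; working from the raw original keeps them intact for investigation. A quick stdlib sketch of pulling those fields from a raw message (the sample message below is fabricated for illustration):

```python
# Parse the kind of original headers a phish report preserves.
# Forwarding a message would replace these; the raw original keeps them.
from email import message_from_string

raw = """\
Return-Path: <attacker@lookal1ke.example>
Received: from mail.lookal1ke.example ([203.0.113.7])
Authentication-Results: mx.example.com; spf=fail
From: "CEO Name" <ceo@lookal1ke.example>
Subject: Urgent wire transfer

Please process the attached invoice today.
"""

msg = message_from_string(raw)
for header in ("Return-Path", "Received", "Authentication-Results", "From"):
    print(f"{header}: {msg[header]}")
```

With those fields intact, an investigator can see the true sending host, IP, and authentication verdict rather than the forwarding user's mail server.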
The last piece that we’ll go through here is the search functionality. Whether you get a phish report like that, whether you get a piece of intel, or you just want to go take a look at something, you can come in, and we have 30 days’ worth of data for the entirety of your organization’s mail. We can search by one of these criteria here, or by multiple. Maybe it’s a specific link; you might have a specific file hash or file type that you want to look at. It could be a sender; it could be a display name; it could be a subject line. The idea ultimately is that I can come in and look for a sender, or in this case a partial sender, a partial address. I can do a one-day search; I can also do a multi-day search. I’m going to go back the full 30 days and just say, you know, what have we seen from this sender previously?
We have these 13 results here. If I know this to be a bad sender, I can just do a “Select All” here, and in the dropdown, click “Remove from User’s Mailbox,” then “Apply to Selected.” That’s a full removal; that’s not moving to the trash; that’s not moving to the archive; it’s removing those messages from the users’ inboxes. [49:00] They don’t have access to them. Because we know that at times this is a little bit hasty, and we have to get these out as quickly as possible, you might retroactively note, we’ve removed too many; we need to release five back. We have the ability to undo that action; it’s really simple. And again, the thing to keep in mind with this is, you’ll note here, there’s no bouncing around from the admin consoles to GreatHorn and back again; it’s all done right here. And we also have all the details about all these messages. So this isn’t a case of saying, hey, we’re going to pull these because we think the sender is the factor we want to look at, and we’re just going to remove them and deal with the fallout later in terms of whether or not we’ve removed anything accidentally. We pull the search results, and you have the ability right here to say, hey, let’s go and look; let’s make sure these are what we’re looking at, that this is what we want to pull. Did we get it right? Did we pull too many? Did we pull too few? The increased visibility around this, and just the ease of use from an IR perspective, is really straightforward, and it’s something that really rounds the platform out, right? You have the protection up front via the policies and the analysis that’s going on there. We have our automated response actions.
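The search-remove-undo workflow being demonstrated can be modeled generically like this. This is a sketch of the console behavior under simple assumptions (an in-memory message store, a 30-day window, and a removal log), not GreatHorn’s actual API.

```python
# Generic model of a post-delivery IR workflow: search, bulk-remove,
# and undo. Field names and the 30-day window are illustrative.

def search(messages, sender_fragment, max_age_days=30):
    return [m for m in messages
            if sender_fragment in m["sender"] and m["age_days"] <= max_age_days]

def remove(matches, removed_log):
    for m in matches:
        m["in_mailbox"] = False   # full removal, not trash or archive
        removed_log.append(m)

def undo(removed_log, count):
    for m in removed_log[-count:]:
        m["in_mailbox"] = True    # release messages back to mailboxes

store = [
    {"sender": "billing@bad.example", "age_days": 2, "in_mailbox": True},
    {"sender": "billing@bad.example", "age_days": 40, "in_mailbox": True},
    {"sender": "friend@ok.example", "age_days": 1, "in_mailbox": True},
]

log = []
hits = search(store, "bad.example")   # 40-day-old message is outside window
remove(hits, log)
print(len(hits), store[0]["in_mailbox"])
```

Keeping a removal log is what makes the "we pulled too many, release five back" scenario a one-step undo instead of a restore-from-backup exercise.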
We have our end user piece, where they can report things and blacklist certain senders if they don’t want to see mail from them, and then lastly here, we have the ability to go in and say, is there anything that we’re missing; is there anything that we need to pull? It’s all done right here from the GreatHorn console; you’re not having to involve anything from the Office 365 side, or anything from the G Suite side.
So with that, that was what we were hoping to get through from a demo perspective. I’ll open things back up. Lorita, I don’t know if we have any questions from the group today.
Sorry, thanks. We do have a couple of questions, but I will remind everybody that you’re welcome to ask additional questions in the GoToWebinar Control Panel; there’s a questions tab there. Feel free to go ahead and enter your question.
One question here that we’ve got is, if we have two domains, one that uses O365, and one where we’re using G Suite, can I manage them from the same console?
Yes, you can; it’s really straightforward. I know we didn’t get too deep into the architecture during the call today, but in terms of the setup process, it’s essentially the same. We actually have a number of clients that are utilizing both, whether that’s by choice, where they said, hey, we want one particular set of activities through G Suite and another through Office 365, or through acquisition, where the company you’re acquiring may not necessarily be on the same email platform as you. So we’ve seen it, we’ve handled it, and it’s really easy to get set up. And yes, it’s all manageable through the same console, and there’s really no limit in terms of the total number of environments; it can be one of each; it can be five of each; there’s really no limiting factor there.
Great. A second question here is about implementation. Do I need to change my MX records?
Yeah, and again, I apologize; I didn’t do a great job with the architecture today, guys. (laughter) No, you do not have to change your MX records. There are no DNS changes that go into place. That has a number of advantages. First and foremost, just from a setup perspective, we can get up and running within five minutes. People always say that sounds crazy, that there’s no way, but that’s the reality. At any given time we run probably a couple dozen POCs, and if it took a really long time to get those set up, it’d be pretty tough to manage them, right? It’s really easy to get those up and running; it’s a really light lift from an administrative perspective. And then, from the opposite perspective, the architecture also affords us the ability to not in any way, shape, or form impede mail flow. [53:00]
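One way to see the architectural difference: a gateway deployment requires pointing a domain’s MX records at the vendor, while an API-based deployment leaves them pointing at Microsoft or Google natively. A small sketch of classifying a set of MX hosts by that pattern; the suffix list is illustrative and not exhaustive.

```python
# With an API-based integration, MX records keep pointing at the native
# mail provider; a gateway deployment points them at the vendor instead.
# Suffixes below are examples of native Microsoft/Google MX hosts.

NATIVE_SUFFIXES = (".mail.protection.outlook.com", ".google.com",
                   ".googlemail.com")

def deployment_style(mx_hosts):
    if all(h.endswith(NATIVE_SUFFIXES) for h in mx_hosts):
        return "api-based (no DNS changes)"
    return "gateway (MX points at a third party)"

print(deployment_style(["example-com.mail.protection.outlook.com"]))
print(deployment_style(["mx1.somegateway.example"]))
```

This is also why API-based tools sit outside the delivery path: since mail never routes through them, they cannot add delivery latency, which is the point made in the next answer.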
One of the complaints that we oftentimes hear about some of the gateway technologies, or even the way that Microsoft and Google analyze things, is that certain emails require much more analysis horsepower, if you will, whether they’re heavily laden with URLs or heavily laden with attachments, and sometimes the delays before those messages reach end users can be pretty significant: five, ten, 15, 20 minutes. When you’re in a really time-sensitive situation, where you’re at work and trying to get things done, it’s not really acceptable to be delaying messages that much for your end users. So that’s the other advantage that we have from an architectural perspective: it’s really quick and easy to get set up, and we’re also not impeding mail flow.
Thanks, EJ. Sorry, just taking a look to see whether or not there are any additional questions. Looks like we are done for the afternoon, and we’re right up against the hour anyway.
I wanted to take the time to thank everybody for joining us on today’s webinar. Hopefully you found it useful and informative. We will be running another one of these in about a month if there are questions you didn’t have a chance to ask, or of course you can just drop us a line and set up a one-on-one conversation; we’re happy to answer any questions you have that are specific to your environment.
EJ, I want to thank you for taking the time out of your day as well to run this demonstration, provide our viewers that additional information, and as you can see here, we do have a webinar next week, also starring EJ, with Security Week, and that’s going to be called, “Protecting Against What Gets Inside the Perimeter,” and we’ll also be at RSA, so please swing by booth 4200 to say hello. EJ and I will both be there as will a number of other members of our team. [55:00] So, keep an eye out for the email with regard to the Security Week webinar in case you’d like to join that, but in the meantime, have a fantastic afternoon, and we’ll look forward to chatting with you soon. Thanks so much, bye.
END OF VIDEO FILE
Hello, everyone, and thank you for joining us today. We’re here with our monthly webinar, “Email Security Redefined: An Introduction to GreatHorn.” I’m Lorita Ba. I’ll be your moderator for today’s webinar. And I’m joined by EJ Whaley, our Solutions Engineer here at GreatHorn.
A few logistics to get us started. First, you will be on mute. You’ll have the ability to submit questions in the Q&A panel, which should be on the right-hand side of your screen in the GoToWebinar Control Panel. We’ll be taking a look at the questions throughout the webinar, which should take about 30 to 40 minutes, including time for questions. The webinar is being recorded and will be available for replay, and the slides will be made available after the webinar as well.
[01:00] So we’re going to go ahead and get started. First, who are we? Well, the reality is that GreatHorn’s goal is to make email security simpler. We’re doing this in a couple of ways. The first is obviously from a threat detection perspective. We’ve focused our company on protecting your organization from all kinds of email threats: comprehensive protection against not just the traditional malware and malicious links and things that we’ve been fighting for decades, but also the nefarious and sophisticated phishing attempts that are increasingly becoming a problem.
The second thing that you can expect from GreatHorn is that we treat email security as a life cycle. It’s not just a question of detecting and preventing email threats from entering your environment; it’s also about supporting the entire life cycle, from detection through remediation and incident response. [02:00] No email security tool is going to be 100% perfect. Frankly, if anybody’s telling you that, they’re probably not accurate. So our goal here is to help ensure that we’re supporting people for all of their email security needs, from detection through to remediation.
We’ve been trusted by a number of organizations across a number of industries, from small organizations to multi-billion-dollar Fortune 500s. And our customers trust us precisely because of what I just said: we make email security easy and simple, and we protect their environment in a way that it hasn’t been protected in the past.
So let’s get into it, right? How big a problem is phishing? Well, according to the FBI, American businesses lose an average of $2 million to phishing every day, and when you add all of that up, [03:00] it results in 48% of all internet-driven crime that was reported to the FBI in 2017. So even though the incidence of phishing is actually smaller than a lot of these other threats that you see over on the right, the actual cost to American businesses is significantly larger. And, of course, that’s compounded across the world, not just in the U.S.
The problem, of course, is that as we look at email threats and how they’ve evolved, phishing attacks, in particular targeted phishing and spear phishing, look like real emails. So what you’re seeing on your screen here are examples of actual emails that we’ve received very recently within the GreatHorn environment. Now, our product of course stops them, but we were able to take a look through them and capture the screenshots. Kevin O’Brien is GreatHorn’s CEO. [04:00] As a venture-backed organization, we’re often a target, especially because we’re in the security industry. And so people go through extraordinary efforts to try to personalize things, and [commit to our control?] over that. There’s an invoice that needs to be paid, for example, or an attempt to get us to log into a fake Office 365 credentials page that looks quite real and give up our credentials. And this is the same kind of pattern that you’re seeing in organizations like ours and in organizations that are much, much larger.
According to Verizon, one in 25 people will click on any given phishing attack. And so the threat is real, right? And if you start talking about your accounting department, your HR department, or anybody that has access to sensitive information, the impact is substantial.
The reason for this, in large part: what you’re seeing here is the result of a [05:00] survey that we ran over the summer of about 300 personnel. About two thirds of them were email security professionals; the rest were laypeople. I think what’s really interesting on this slide is that if you compare the responses from email security professionals to the average businessperson, the average businessperson characterizes almost all of the threats that they see as simply spam. It’s not that they’re getting fewer threats; in fact, they’re probably getting at least as many, if not more, than email security professionals. But 66% of them, nearly two thirds, characterize these things that they see as simply spam. And the challenge, of course, is that they’re therefore not taking extra precautions. They’re not informing their email security professionals that there is maybe a widespread attack going on, or they’re just ignoring it entirely. Or, in some cases, they’re actually responding to it, and that response is resulting in 20% of email security professionals having to take some kind of direct and impactful remediation action at least weekly. [06:00] That might be running PowerShell scripts, or shutting down an account.
And so again, just to emphasize: the threats that we see within this category have measurable impact, not just in terms of the security of the organization, its employees, and its data, but also in terms of the time that is being spent to manage and remediate such threats.
The challenge, of course, is that when we think about traditional — oh, and by the way, that last graph was about what actually reaches inboxes. So this is after whatever email security solutions these employees had in place, right? Whatever they’re seeing in their inboxes, they’re seeing even though it’s already gone through email security. And the reason that’s a problem is because there’s a philosophical IT shift that we all know about, right? This movement from perimeter-based networks to [07:00] cloud architectures. And the challenge from an email security perspective is that the philosophy behind these infrastructures is fundamentally different, right? In the perimeter-based world, we are thinking about things in terms of, how do we create a wall, right? We’re going to be very authoritative. It’s very permissions-based. There’s a gate, and anything that gets through the gate is good. Anything that we’re worried about, we stop at that gate, right? And even today’s other email security products that don’t have that gateway heritage, a lot of them still have this kind of binary good/bad analysis that’s really indicative and reflective of the perimeter-based network.
The challenge is that, as organizations continue to move toward cloud architectures, the philosophy of IT has changed as well. We don’t talk about shadow IT very often anymore, because everybody’s spinning up a new EC2 server or [08:00] creating the resources that they need in the cloud. It is self-service. It’s user-defined. There’s an expectation of business enablement rather than hindering business for the purposes of security. There’s also this idea that security and failure handling are architected into cloud platforms, right? In order for us to trust AWS, AWS and Microsoft have spent countless, countless dollars on making sure that their systems could handle things like failure and security issues. And so we have a certain expectation that these things are built in.
So what’s happened from a practice perspective is that traditional email security solutions have this judgment, right? There’s a judgment day when an email comes in. It’s either good or it’s bad. It’s passed on to the inbox, or it’s sent to trash or quarantine. That doesn’t leave a lot of room [09:00] for the nuance that is required by these very sophisticated phishing attacks, many of which don’t have payloads attached to them. There are no links attached to them. They’re just trying to create a certain amount of trust to get the information that they need out of your employees.
What we believe is that email security is a life cycle. There isn’t a single point in time when you should be managing it. Of course you’re going to be checking email as it comes in, but you also need to be protecting the email and the inbox at all stages of the email life cycle, right? And so that means we’re not just talking about threat detection, which is, of course, important, but also about automated threat defense. And then, as I said earlier, not just the automated threat defense, but also what kind of incident response tools we are providing you in order to address any additional threats that have made it through.
So if we take a look at that, and we [10:00] consider email security as a life cycle, I want to show you how we’ve done that at GreatHorn. What you’re seeing here is the GreatHorn email security platform, and what you see really reflects that idea of protecting email at all stages, right? At the top, you see detection. And, of course, we’ve got common threat intelligence feeds from trusted security providers, right? We also have our own proprietary community threat intelligence that we’ve developed from the millions of emails that we see within our environment, which helps us identify emergent, zero-day-type threats. But what really helps to set us apart is what we call adaptive threat analytics, and that’s where we’re taking a look at the communication patterns at not just the organization level but also the individual level. What’s the relationship between people? What’s the relationship with other organizations? [11:00] What does that look like on a normal basis? What are our expectations from a technical and organizational fingerprinting perspective? Does a particular domain typically fail authentication? Well, then, for them to fail it is not unusual. But if they typically pass it and one day fail it, now we’ve suddenly got an anomaly. And we take all of that information — sender reputation, fingerprinting, deep relationship analytics — and that all feeds into our threat detection engine.
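The authentication-fingerprinting idea above can be sketched as a baseline comparison: a domain that usually passes SPF/DKIM but suddenly fails is an anomaly, while a domain that always fails is just behaving at its (sloppy) baseline. The function name and thresholds here are illustrative assumptions, not the platform’s actual model.

```python
# Sketch of authentication-anomaly detection against a per-domain
# baseline. history is a list of past True/False auth results.
# min_history and baseline_pass_rate are illustrative thresholds.

def is_auth_anomaly(history, latest_passed, min_history=10,
                    baseline_pass_rate=0.9):
    if len(history) < min_history:
        return False              # not enough data to establish a baseline
    pass_rate = sum(history) / len(history)
    # Anomalous only if the domain normally passes and just failed.
    return pass_rate >= baseline_pass_rate and not latest_passed

print(is_auth_anomaly([True] * 50, latest_passed=False))
print(is_auth_anomaly([False] * 50, latest_passed=False))
```

The same shape of check (observed behavior versus learned baseline) applies to the relationship analytics mentioned above: it is the deviation from the norm, not any single signal, that flags the message.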
At the bottom of your screen you’re seeing our threat response, right? And this is where we talk about both the automated threat defense and the post-delivery incident response. And so for the automated defense, as I said, the advantage of being cloud-native and being connected directly to cloud email APIs is that we actually have access to far wider remediation actions than [12:00] a secure email gateway might have, for instance. So that means that, yes, of course we can quarantine things, and things that we absolutely know are bad will likely be handled in that way. But we also have the ability to provide nuance and context to the users.
And so what do I mean by that? In a lot of instances you may have an email that looks anomalous, that doesn’t fit the pattern we expect, but it’s possible that an actual person is breaking the pattern that we’ve come to expect. And so, in such an instance, we might banner the email and say, “Hey, this doesn’t look like the person that you think you’re speaking with,” or “There’s something suspicious about this (inaudible) based on our threat detection. Be careful.” Right? And so having that additional context and warning really helps to reinforce the security awareness training that you probably already have in place, but does so in an immediate and context-driven manner.
[13:00] On top of that, we have the ability to reinforce business processes and policies that we have in place. So, for example, if you have a policy that wire transfers should not be authorized through email, then we can add a reminder that says, “This email looks like it’s about wire transfers. Remember, you aren’t allowed to authorize wire transfers through email. Please call to confirm,” right? And so all of these kinds of options provide that greater context to protect the user and the organization, and to give the user the knowledge and information that they need to protect the organization as well.
That final piece is the incident response, right? We’re all too familiar with the requirement to use PowerShell scripts after an incident has made it through email security, desperately trying to figure out, okay, how widespread is it, [14:00] how quickly can I remediate it. And the need to do that both lengthens the time to remediation and is often inaccurate, right? Our post-delivery incident response capabilities enable you to do multi-vector searching within our environment, do a quick forensic analysis to identify what the problem is and how widespread it is, and with a couple of clicks take all of those emails directly out of inboxes, no matter when they were delivered, whether it was five seconds ago, five days ago, or five months ago. So that ability to respond quickly really helps, as well.
And so what you see in the middle here is the combination of all of what we’ve learned, right? So we’ve created these four modules that are specific to the threat topic, that are turnkey solutions, and built on the best practices and the knowledge that we’ve learned, [15:00] as well as the specific functionality required for each of them. So, for example, imposter protection focuses on domain lookalikes, executive impersonations, direct spoofing, business services spoofing, all of those things where an email is pretending to be someone or something that you trust, okay?
Link protection — and EJ is going to really go through this in his demonstration — we handle in a little bit of a different way, right? It’s not just a question of, okay, we’re going to block malicious URLs. Of course we will do that. But in addition to that, we have the ability to take a look at things and determine whether or not they have suspicious characteristics. And so what we’ll do is we’ll, A, warn the user, but we’ll also automatically sandbox that URL and provide previews of the end page, so that the end user can, again, take a look and make sure the destination’s [16:00] where they intend to go.
And then attachment protection and mailbox protection are fairly self-explanatory, in terms of really supporting the users, and protecting against malicious and suspicious attachments.
So that’s really the high level of the GreatHorn security platform. I’ve talked long enough. (laughs) I’m going to go ahead and turn it over to EJ so that he can really give you an understanding from a demonstration perspective of how the platform works. EJ?
Sure. Thanks, Lorita. So we’re going to shift gears a bit here. Rather than jumping right into the platform, we’re going to start in an inbox. The reason for that: at the end of the day, it’s really about the end users. There are a number of things that we do, of course, to enhance the amount of tools that are at information security teams’ disposal. There are obviously a number of things that we’re doing to lighten the workload of the information security team, or of the IT team, but, again, at the end of the day [17:00] the threats that are being faced by our organizations are the things that are in the purview of the end user.
Something that Lorita had alluded to earlier in the presentation was really the complexity of some of these attacks, and we’ll get into a couple of other examples, but I’d really like to start here. At this point, we’re in a real Office 365 inbox, belonging to Emily Post of Flying Deliveries. And if we look at what we have here, for all intents and purposes, as far as Microsoft is concerned, this particular message is from Aaron. It’s from an internal user at Flying Deliveries. We have his photo here. We take a look. We have his address. If we hover over, we get the information that pops up. We’re going to say, yeah, this all seems to check out. We have some mail history down here. We can send him an email. The problem here, though, is that this isn’t actually a message that’s from [18:00] Aaron. And this (inaudible) something where, again, the platform itself is indicating to the user that this should be from Aaron, right? You train users to hover over the user information. You train users to hover over URLs. But what happens in a situation like this where, again, this appears to be from Aaron?
From the GreatHorn perspective, what we’re seeing here are a couple of different things. We flagged this via two of our out-of-the-box policies, one being an Auth Risk; the second being Direct Spoof. You’ll notice that we don’t have any alerts set up or any policy actions set up on this particular one for the purposes of the demo so that that initial message isn’t muddied in any way, but we’ve flagged some things here that are of note and of interest.
First and foremost, the direct spoof, as I alluded to, is going to fall under one of our imposter protection categories. We have a mismatch, [19:00] or an anomalous return path, as it relates to this domain. Now, what’s important to note here is that this particular anomaly isn’t pulled out in and of itself because it’s a mismatch; it’s pulled out because, based upon what we’ve seen historically for Aaron and for the Flying Deliveries domain, we would not expect this return path. And there are likely a number of messages that your organizations receive on a day-to-day basis where users have gone out and signed up for a mailing list, for instance. In all likelihood that mailing list is coming from a particular domain, but it probably has a different return path, whether it’s something like HubSpot or Marketo, or an email configuration tool that the sending organization has set up themselves. Whatever the case may be, there’s likely going to be some kind of a mismatch between that domain and that return path, and there are a number of other examples of that [20:00] taking place. That doesn’t mean that that particular message is malicious, or even suspicious, for that matter; it just means that it came from a source other than the actual domain. That tends to be [fairly common?], so in an instance where you’re trying to make a determination based upon just what you’re seeing at that point in time, you may end up with a lot of messages being flagged because of something like that.
Again, the difference with GreatHorn is that we scan and analyze both in your environment and across the entirety of our dataset, all of our clients’ environments. We’re making those associations. We’re determining what the normal pattern looks like from a sending configuration perspective. Now, that also applies to the IP address, whether or not there’s a reply-to, etc. And rather than simply saying, hey, that return path isn’t the same as the domain, we’re going to tell you that particular return path isn’t normally used when sending from that domain.
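(For readers following along: here is a minimal sketch of the baseline idea EJ is describing, comparing a message’s return-path domain against what has historically been seen for that sending domain. All class names, thresholds, and example domains are illustrative, not GreatHorn’s actual implementation.)

```python
# Illustrative baseline-vs-anomaly check for sender metadata.
# Names and thresholds are invented for this sketch.
from collections import defaultdict

class SenderProfile:
    """Tracks the return-path domains historically seen per sending domain."""
    def __init__(self):
        self.history = defaultdict(lambda: defaultdict(int))

    def observe(self, sending_domain, return_path_domain):
        self.history[sending_domain][return_path_domain] += 1

    def is_anomalous(self, sending_domain, return_path_domain, min_seen=5):
        seen = self.history[sending_domain]
        # Too little history to judge: treat as unknown, not anomalous.
        if sum(seen.values()) < min_seen:
            return False
        # Anomalous only if this return path was never seen for the domain,
        # not merely because it differs from the domain itself.
        return seen.get(return_path_domain, 0) == 0

profile = SenderProfile()
# A mailing list legitimately sending flyingdeliveries.com mail via Marketo:
for _ in range(20):
    profile.observe("flyingdeliveries.com", "mktomail.com")

profile.is_anomalous("flyingdeliveries.com", "mktomail.com")    # expected pattern
profile.is_anomalous("flyingdeliveries.com", "evil-relay.net")  # never seen before
```

The key point, matching the talk: a domain/return-path mismatch alone is not flagged, only a return path that deviates from the learned pattern.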
Another area where this comes into play is around authentication. [21:00] When it comes to this particular message, again, we flagged this as an auth risk as well, and this is why. What we’re seeing here is that DKIM is being signed, but it’s being signed by a domain other than the sending domain itself. Again, we see a number of different things across the entirety of our clients’ data. There are a number of environments where folks don’t have SPF, DKIM, and/or DMARC configured. Other times, they do have it configured, but they have it configured very poorly. In those instances where someone hasn’t configured it, or has gone and tried to configure it but done it the wrong way, perhaps you’re getting authentication results from them that are less than stellar, we’ll say. You don’t necessarily want to drop those messages at the perimeter. You don’t necessarily want to quarantine those messages and prevent them from getting to end users, because they could be, again, perfectly legitimate. What tends to be the recourse from there is that either those messages don’t reach end users [22:00] as seamlessly, or you’re having to go in and whitelist entire domains altogether, because you don’t want to prevent those messages from getting in anymore, but at that point you’re also opening up a fairly significant gap in your defenses.
Similar to what we’re doing from an email [eradicator?] perspective, we’re also doing that for authentication. In this case, in the Flying Deliveries environment, you would expect SPF to pass, and you would expect DKIM to pass, but what we wouldn’t expect is, again, for DKIM to be signed by a domain other than the Flying Deliveries domain. The same would apply across the board. If SPF normally passes and we’re seeing a soft fail, we’re going to note that. If DKIM is normally not configured and it’s suddenly failing, we’re going to note that. Things of that nature. That’s all based upon the scanning and analysis that we’re doing. It’s all based upon the pattern matching, etc. It’s not just about that one anomalous characteristic.
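(A rough sketch of that authentication-baseline idea. The function name, baseline shape, and example domains are hypothetical; the real product’s logic is proprietary. In DKIM terms, the signing domain is the `d=` tag of the signature.)

```python
# Illustrative check: flag DKIM results that deviate from a domain's norm.
def auth_risk(sending_domain, dkim_result, dkim_signing_domain, baseline):
    """baseline maps a domain to its usual (dkim_result, signing_domain).
    Returns a list of anomaly descriptions; empty list means no auth risk."""
    expected_result, expected_signer = baseline.get(sending_domain, (None, None))
    anomalies = []
    if expected_result is None:
        return anomalies  # no history for this domain: nothing to compare
    if dkim_result != expected_result:
        # e.g. DKIM normally passes but is suddenly failing
        anomalies.append(f"DKIM normally {expected_result}, got {dkim_result}")
    if dkim_result == "pass" and dkim_signing_domain != expected_signer:
        # Passing, but signed by an unexpected d= domain
        anomalies.append(
            f"DKIM signed by {dkim_signing_domain}, expected {expected_signer}")
    return anomalies

baseline = {"flyingdeliveries.com": ("pass", "flyingdeliveries.com")}
auth_risk("flyingdeliveries.com", "pass", "attacker-infra.net", baseline)
# flags the unexpected signing domain, mirroring the demo's auth-risk example
```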
[23:00] So a couple of the other areas from an imposter protection perspective that we talk about, they relate to things like this where we’re seeing that Sherry is sending this message along to Emily, but we’ll note here, based upon the banner that we’re seeing, that, hey, you know, that’s not typically an address that Sherry sends from. This particular banner constitutes one of our automated response actions that we have. We’ll get into a couple of the others in a second. But the idea here is that we can give more than just simply a “this is coming from an external user.” We can give more than that context to a given recipient of these types of messages. We can say things like “This isn’t the address that Sherry typically sends from.” If a user sees this and you have some kind of a training awareness program in place, something that they may do from here is to say, “Hey, you know what? I may not have checked before but I certainly should scroll over,” and they scroll over the name and they take a look, and they realize that, yeah, that’s probably not the address that Sherry typically sends from. [24:00] So from an administrative perspective, that particular example is going to fall under our name spoofs category, so messages that are coming from addresses with display names that are typically associated with other addresses.
The other area that we’re also looking at from an imposter protection perspective in regard to these types of impersonations is domain lookalikes. So is the message coming from an address or domain that is similar to one of your own, or similar to a third party’s? So if we come in here and we take a look, again, we’re in the Flying Deliveries environment. You have the correct spelling of Flying Deliveries up here, but we’ll note in this particular domain we have that second L. If we zoom out just a little bit, we’ll note, again, this domain name is similar to one of the organization’s domains. This would apply to your primary domain and any other domains that you may typically send from, and, again, we’re also looking at third-party domains that we see both in your environment [25:00] and across the entirety of our clients’ dataset.
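(The “second L” example above is classic lookalike detection. One common way to sketch it is edit distance, shown below; GreatHorn presumably uses richer signals, so treat this purely as an illustration.)

```python
# Lookalike-domain detection via Levenshtein edit distance (illustrative).
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_lookalike(candidate, protected_domains, max_distance=2):
    """Close to a protected domain, but not an exact match."""
    return any(0 < edit_distance(candidate, d) <= max_distance
               for d in protected_domains)

protected = ["flyingdeliveries.com"]
is_lookalike("flyingdelliveries.com", protected)  # one extra 'l': lookalike
is_lookalike("flyingdeliveries.com", protected)   # exact match: not a lookalike
```

The `0 <` guard matters: the organization’s own domain is distance zero away and should never be flagged as impersonating itself.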
We’re also going to be looking at things like business services impersonations — we’ll get into a couple of examples of those in a moment — things that are coming across where they might look like they’re coming from someone’s bank, or they might look like they’re coming from Google or Microsoft. That’s really what, for the most part, encompasses that impersonation protection piece. I’d mentioned, as well, that we have, beyond banners, other automated incident response options. We have quarantine. We can move messages to the trash. We can remove attachments, so if a message comes across and it meets a certain set of policy criteria, and we want to go ahead and just remove the attachments that are there, and still enable the users to request them back but remove the attachments nonetheless, we can do that. Again, we can add a banner. We can move the message to a folder. We can add a label or a category, depending on whether you’re in G Suite or Office 365. We can archive, [26:00] and then we can do any and all of the above in conjunction with, or without, any sort of admin email alerts or end user email alerts.
The idea behind the policy actions is that you can choose them to fit different risk profiles, depending upon the policy. You can choose them to fit different roles and responsibilities within the organization. So, for instance, you may want to handle the executives’ mail a bit differently than you handle everyone else’s mail. Or when it comes to something like a wire transfer content policy, that might mean something a little bit different to the accounts payable team than it does to all the other teams in the organization who can’t actually execute a wire transfer. So because of that, you can go ahead and treat those policies in a unique fashion, depending upon which active directory group a given user may belong to.
Moving over into the link protection piece and the attachment protection piece, [27:00] the first part of that is going to be here. We’re utilizing both third-party threat intel as well as our own threat feeds to scan and analyze URLs and attachments. That said, there’s obviously something to be said about the fact that not all URLs are going to be known bad URLs. Not all attachments are going to be known bad attachments. So what we’ve done is we’ve gone ahead and created a third classification of URLs. We have known bad; we have common, right — things that we can tell you with a high degree of certainty are known good; and then we have sort of the grey area in the middle, and those are going to be classified as unusual or suspicious. You’re going to end up seeing emails like this that come across, again, as I mentioned a moment ago — they may look like they’re coming from your bank, or they may look like they’re coming from Microsoft, right?
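(The three-tier classification described here can be sketched as below. The category names come from the talk; the heuristics, keyword list, and example domains are invented for illustration.)

```python
# Illustrative three-tier URL classification: known_bad / common / grey area.
SUSPICIOUS_HINTS = ("login", "verify", "account", "secure")  # assumed keywords

def classify_url(url, known_bad, known_good_domains):
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    if url in known_bad or host in known_bad:
        return "known_bad"          # threat-intel match
    root = ".".join(host.split(".")[-2:])
    if root in known_good_domains:
        return "common"             # high-confidence known good
    # Grey area: not in intel either way; phishing-style keywords push it
    # from merely "unusual" to "suspicious".
    if any(h in url.lower() for h in SUSPICIOUS_HINTS):
        return "suspicious"
    return "unusual"

classify_url("https://accounts.google.com/signin", set(), {"google.com"})
classify_url("https://off.ice365.com/login", set(), {"google.com"})
```

Messages in the grey area are exactly the ones that get the rewrite-and-warn treatment EJ demonstrates next, rather than an outright block.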
The user is going to take a look — and obviously some of the messages that come across aren’t even as well put together as this — but they’ll [28:00] have things like, if we look in the bottom left-hand corner, what appear to be perfectly legitimate URLs. And we’re going to go through, and all of these seem to be just fine. And then you’ll notice on this last one here, in the bottom left-hand corner, that one’s rewritten. The other tenet behind link protection, beyond the threat intel piece, is identifying these types of URLs so that we can rewrite them. If I, as an end user, now click on this URL, the URL itself will be subject to time-of-click analysis. So at the time of ingest we perform our analysis on the URL. We don’t have a threat intel match at that point, so we’re not marking it as known bad, but we have identified it as being something that is suspicious. When I interact, I’m going to get that time-of-click analysis to see if we have perhaps an updated threat intel match.
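(Mechanically, rewrite-plus-time-of-click looks roughly like this. The redirect host and verdict logic are entirely hypothetical; the point is that the original destination is re-checked against current intel at the moment of the click, not just at delivery.)

```python
# Sketch of URL rewriting with time-of-click re-analysis (illustrative names).
import urllib.parse

REWRITE_HOST = "https://links.example-protect.com/v1/click"  # hypothetical

def rewrite_url(original_url, message_id):
    """Wrap a suspicious URL so every click routes through analysis."""
    query = urllib.parse.urlencode({"url": original_url, "mid": message_id})
    return f"{REWRITE_HOST}?{query}"

def handle_click(rewritten_url, threat_intel):
    """At click time, re-check the destination against current intel."""
    params = urllib.parse.parse_qs(urllib.parse.urlparse(rewritten_url).query)
    destination = params["url"][0]
    if destination in threat_intel:
        return ("block", destination)   # intel updated since delivery
    # No known-bad match: show the warning page with the original link.
    return ("warn", destination)

wrapped = rewrite_url("http://off.ice365.com/login", "msg-123")
handle_click(wrapped, threat_intel=set())
# → ("warn", "http://off.ice365.com/login"): no intel match yet, so warn
```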
In the absence of that, the user clicks and is brought to this particular warning page, where I’m given more intel or more [29:00] context behind what’s there. So just to go back to the inbox again for a moment, you’ll notice that this says “Sign in to the customer portal with your user ID.” My user ID is posted right there, very conveniently. But when I come here, we’re actually giving the original link. Now, I as an end user can utilize that and say, “Well, that seems a little bit odd. off.ice365.com — that doesn’t really look right.” And then I come here and we can see, again, we have what appears to be a Microsoft login portal. So the user now has this context. The user can still say “Take me there” in instances where there might be a false positive. We can report that. But ultimately what we’re doing here is giving the user this warning.
But then from the administrative perspective, we’re also going to be giving context into the different URLs that a user might be accessing. We can go ahead and block certain domains at the top level here, so things like poker-academy.bid or annas-soft.com. Probably don’t want users going there. [30:00] Things like Google, not all Google URLs are bad but some certainly are, so we can give you a little bit more granularity around those.
We take a look here, we can see, again, we can block certain URLs. We can warn on others. We can allow others. We’re also going to give you context behind who’s received a given URL, how many times that particular warning message has been displayed — so, in other words, how many times have they clicked the URL. If we go in and we take a look, and we’ll note that, hey, this particular user’s clicked on that URL ten times, that could be an indication to me that perhaps we need to send the [training?] (inaudible) module off to that particular user. We’ll also see in this case, hey, this user has actually gone ahead and clicked through, so they’ve clicked that “Take Me There” button. If I then go back, right, and I take a look and I say, huh, well, that’s actually what appears to be a credential theft site, I can grab this URL. I can go do my own research on it. I might say that user’s clicked through. That’s a credential theft site. I’m not sure whether or not they actually entered their credentials, but just to be safe [31:00] I’m going to go ahead and I’m going to reset their passwords for their accounts.
The same can also hold true if a user has potentially gone to a malware website, right? You can isolate the machine, run an [AD?] scan, perhaps reimage the machine itself, whatever the case may be.
The last piece we’ll discuss is just the incident response piece. So we’ve talked about imposter protection. We’ve talked about link protection. We’ve talked about attachment protection. We’ve gone through the automated response piece when it comes to those, right? So when we detect something, we flag it, we’re going to take that automated action. We’re going to give the end user context. We’re going to give the end user a warning page, or subject URLs to time-of-click analysis at the inbox level, whatever the case may be.
But on the other side of the coin is what happens if we want to go out and search for something [in the?] environment, in a given mailbox, wherever we (inaudible)? [32:00] That’s where this particular search functionality comes into play. You can use this on a one-off basis, so you could search by just one piece of criteria. You can search in a (inaudible) fashion. We have a number of different options, as you can see. The reality is that when these types of attacks come across you may want to go back and, for peace of mind, check to see, well, did we get anything from that sender previously. Are we seeing a particular display name over and over again in attacks, but then maybe they’re also in, perhaps, a Microsoft announcement that is legitimate? Well, maybe we want to go and take a look at those. Perhaps you’re seeing the same URL coming across. Perhaps there’s a message with the same attachments, or the same file name associated, or the same file types that you’re concerned about.
The idea is that you can, again, use this as you see fit, and when we go in we can plug in — in this case we’ll look up a sender. [33:00] We can search on just one day’s worth of data. We can also search across a week’s worth, a month’s worth. We’ll go back. And, again, like I said, we’ll search here for about 35 days’ worth of data, just to see how many messages we got from this particular sender. Okay, we’ve gotten quite a few. We’ve taken action on all of these, but I just want to go ahead and get rid of them altogether. So I can do a quick select all, and in this dropdown I can come down, click “Remove from user’s mailbox,” click “Apply,” and click “Apply” again. As easy as that, we’ve gone ahead and removed all these messages from all of these end users’ inboxes. No need to go into the Microsoft security console and do any research there. No need to run a PowerShell script. Everything is done very simply and easily from the console here.
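(GreatHorn’s internals aren’t public, but since the demo contrasts this with PowerShell, here is a rough sketch of what that kind of sender sweep looks like against Microsoft Graph’s documented message endpoints. Auth, paging, and error handling are omitted; the `session` is assumed to be an authenticated HTTP client.)

```python
# Illustrative post-delivery sweep using Microsoft Graph-style endpoints.
import urllib.parse
from datetime import datetime, timedelta, timezone

GRAPH = "https://graph.microsoft.com/v1.0"

def cutoff_iso(days):
    """ISO-8601 timestamp for `days` days ago, as Graph $filter expects."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

def build_search_url(mailbox, sender, days=35):
    """Query finding one sender's messages in one mailbox over a window."""
    flt = (f"from/emailAddress/address eq '{sender}' "
           f"and receivedDateTime ge {cutoff_iso(days)}")
    return f"{GRAPH}/users/{mailbox}/messages?$filter={urllib.parse.quote(flt)}"

def remove_messages(session, mailbox, message_ids):
    """Issue a DELETE per matched message; retries and batching elided."""
    for mid in message_ids:
        session.delete(f"{GRAPH}/users/{mailbox}/messages/{mid}")

url = build_search_url("emily@flyingdeliveries.com", "attacker@evil-relay.net")
```

The console’s “select all, remove from user’s mailbox” flow is effectively this loop run across every affected mailbox at once, which is what replaces the per-incident PowerShell scripting.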
I know we covered quite a bit of ground there. I imagine you probably have a couple of questions, so [34:00] with that, Lorita, I’ll pass it back to you and see if we have any questions from anyone in the audience here.
Yeah, absolutely. Thanks so much, EJ. So yes, as a reminder, please feel free to submit your questions in the GoToWebinar control panel. EJ, we did have a couple of questions that came in. One specifically asked “Do I need to use any particular client in order to see the banners or the user notifications that you mentioned?”
Yeah, great question. The short answer is no, and that’s something that I should’ve mentioned during the presentation for you folks. I apologize there. There’s nothing — there’s no limitation from that perspective when it comes to both the Office side of things, as well as the Google side of things, regardless of the mail client you’re accessing from, regardless of the device you’re accessing from. The experience is going to be the same. The banners are going to show up the same way. The labels and categories would show up the same way. We also don’t require [35:00] any sort of secondary application. You know, that comes up from time to time in conversation, particularly around quarantine. There are tools out there that will require an end user to, particularly if they’re on mobile, download a secondary app where that’s how they have to manage the quarantine in terms of releasing anything from there. We don’t require anything of that nature. There’s no additional infrastructure on that front. And, again, the experience ends up being the same across devices and clients.
Great, thanks, EJ. The next question is more about implementation. Do we need to make any changes to, like, our MX records or anything like that?
Yeah, another really good question. So, no, you don’t have to make any changes. One of the core tenets of GreatHorn is the fact that we are API-driven. We tie directly into Office 365 or G Suite, or both in some situations. We have a handful of clients that actually do use both of the [36:00] mail platforms. But we utilize APIs that both of those organizations make available. Because of that, we don’t require you to make any changes to your MX record. There are no DNS changes. The other side of that, too, and one of the other big benefits, is the fact that there are a lot of different setups, or a lot of different ways, that people may be currently running their mail environment, from a security perspective as well as just from an IT architectural perspective. Again, because of where we tie into the mail environment, you don’t have to make any of those changes, and you can obviously save a lot of time, and a lot of headaches, when it comes to potentially having to entirely re-architect what is already there.
If I already have Microsoft ATP, do I need this? Does it conflict with my environment?
Yeah, that’s another really good question. It bleeds a bit into the prior one. In terms of any existing technology, you don’t technically need to get rid of anything. [37:00] Depending upon your licensing level, obviously, you’re likely in a position where you may already have access to these things. We’re certainly not going to encourage folks to get rid of something, or to stop using something, that they’re already paying for. So from the technological perspective, there wouldn’t necessarily be any overlap or any conflict. That said, in terms of ATP itself, it’s not exactly a one-to-one comparison. With ATP, they’re really heavily reliant upon threat intel, which is helpful, but it’s not the end-all be-all. As an added layer, it’s not the worst thing in the world, but at the same time we don’t really see a ton of the organizations that we’re working with running ATP in conjunction with GreatHorn.
Great. Well, that’s actually all the questions that we have, and we’re coming right up to that 40-minute timeframe that I had promised everybody we would keep it to. So if you did not have the chance to ask a question but would like to, please feel free to email us at [email protected] [38:00] and we’ll get back to you as soon as possible. We’ll be following up with this recording, as well as the slides from today’s presentation, following the event, but in the meantime I want to thank you all for joining us today, and to thank EJ for conducting the demonstration and the technical discussion. We hope you have a great afternoon and will join us on future webinars. Thanks so much. Bye-bye.