WEBINAR REPLAY

Evolving Email Security: An Introduction to GreatHorn

In this 45-minute webinar, GreatHorn Solutions Engineer EJ Whaley and Vice President of Marketing Lorita Ba discuss the challenges with legacy email security strategies, why they are ill-suited for cloud architecture environments, and what an ideal solution looks like. EJ then provides a live walk-through of the platform and its capabilities.

Request a Demo

Like what you hear? Contact us to learn more about GreatHorn’s sophisticated email security platform and how easy it is to set up for your Office 365 or G Suite environment.

VIDEO TRANSCRIPT:

LORITA BA:

Hello, everyone, and thank you for joining us today.  We’re here with our monthly webinar, “Email Security Redefined:  An Introduction to GreatHorn.”  I’m Lorita Ba.  I’ll be your moderator for today’s webinar.  And I’m joined by EJ Whaley, our Solutions Engineer here at GreatHorn.

A few logistics to get us started.  First, you will be on mute.  You’ll have the ability to submit questions in the Q&A panel.  This should be on the right-hand side of your screen, under “Webinar Control Panel.”  We’ll be taking a look at the questions throughout the webinar, which should take about 30 to 40 minutes, including time for questions.  The webinar is being recorded and will be available for replay, and the slides will be made available after the webinar, as well.

[01:00] So we’re going to go ahead and get started.  First, who are we?  Well, the reality is that GreatHorn’s goal is to make email security simpler.  We’re doing this in a couple of ways.  The first is obviously from a threat detection perspective.  We’ve focused our company on protecting your organization from all kinds of email threats: comprehensive protection not just against the traditional malware and malicious links that we’ve been fighting for decades, but also against the nefarious and sophisticated phishing attempts that are increasingly becoming a problem.

The second thing that you can expect from GreatHorn is that we treat email security as a life cycle.  It’s not just a question of detecting and preventing email threats from entering your environment; it’s also about supporting the entire life cycle, from detection through remediation and incident response.  [02:00] No email security tool is going to be 100% perfect.  Frankly, if anybody’s telling you that, they’re probably not accurate.  So our goal here is to help ensure that we’re supporting people for all of their email security needs, from detection through to remediation.

We’ve been trusted by a number of organizations across a number of industries, from small organizations to multi-billion-dollar Fortune 500s.  And our customers trust us precisely because of what I just said: making email security easy and simple, and protecting their environment in a way that they’ve not been able to be protected in the past.

So let’s get into it, right?  How big a problem is phishing?  Well, according to the FBI, American businesses lose an average of $2 million to phishing every day, and when you add all of that up, [03:00] that results in 48% of all internet-driven crime that was reported to the FBI in 2017.  So even though the incidence of phishing is actually smaller than a lot of these other threats that you see over on the right, the actual cost to American businesses is significantly larger.  And, of course, that’s compounded across the world, not just in the U.S.

The problem, of course, is that as we look at email threats and how they’ve evolved, phishing, and in particular targeted phishing, spear phishing, looks real.  So what you’re seeing on your screen here are examples of actual emails that we’ve gotten very recently within the GreatHorn environment.  Now, our product, of course, stops them, but we were able to take a look through them and capture the screenshots.  And what you can see here is that these target Kevin O’Brien, GreatHorn’s CEO.  [04:00] As a venture-backed organization, we’re often a target, especially because we’re in the security industry.  And so people go through extraordinary efforts to try to personalize things, and [commit to our control?] over that.  There’s an invoice that needs to be paid, for example, or they try to get us to log into a fake Office 365 credentials page that looks quite real and give up our credentials.  And this is the same kind of pattern that you’re seeing in organizations like ours and in organizations that are much, much larger.

According to Verizon, one in 25 people will click on any given phishing attack.  And so the threat is real, right?  And if you start talking about your accounting department and your HR department, or anybody that has access to sensitive information, the impact is substantial.

The reason for this, in large part, is shown here: what you’re seeing is the result of a [05:00] survey that we ran over the summer of about 300 personnel.  About two thirds of them were email security professionals; the rest were laypeople.  I think what’s really interesting on this slide is that if you compare the responses from email security professionals to the average businessperson, the average businessperson characterizes almost all of the threats that they see as simply spam.  It’s not that they’re getting fewer threats; in fact, they’re probably getting at least as many as email security professionals, if not more.  But 66% of them, nearly two thirds, characterize these things that they see as simply spam.  And the challenge with that is that they’re not taking extra precautions.  They’re not informing their email security professionals that there is maybe a widespread attack going on, or they’re just ignoring it entirely.  Or, in some cases, they’re actually responding to it, and that response is resulting in 20% of email security professionals having to take some kind of direct and impactful remediation action at least weekly.  [06:00] That might be running PowerShell scripts, or shutting down an account.

And so again, just to emphasize: the threats that we see within this category of email threats have measurable impact, not just in terms of the security of the organization, its employees, and its data, but also in terms of the time that is being spent to manage and remediate such threats.

The challenge, of course, comes when we think about traditional solutions.  Oh, and by the way, that last graph addressed the question of what actually reaches inboxes.  So this is after whatever email security solutions these employees had in place, right?  Whatever they’re seeing in their inboxes, they’re seeing it even though it’s already gone through email security.  And the reason that that’s a problem is because there’s a philosophical IT shift that we all know about, right?  This movement from perimeter-based networks to [07:00] cloud architectures.  And the challenge from an email security perspective is that the philosophies behind these infrastructures are fundamentally different, right?  In the perimeter-based world, we are thinking about things in terms of how do we create a wall.  We’re going to be very authoritative.  It’s very permissions-based.  There’s a gate, and anything that gets through the gate is good.  Anything that we’re worried about, we stop at that gate, right?  And even today’s other email security products that don’t have that gateway heritage, a lot of them still have this kind of binary good/bad analysis going on that’s really indicative and reflective of the perimeter-based network.

The challenge is that as organizations continue to move toward cloud architectures, the philosophy of IT has changed, as well.  We don’t talk about shadow IT very often anymore, because everybody’s spinning up a new EC2 server, or [08:00] meeting the requirements that they have in the cloud.  It is self-service.  It’s user-defined.  There’s an expectation of business enablement rather than hindering business for the purposes of security.  There’s also this idea that security and failure handling are architected into cloud platforms, right?  In order for us to trust them, AWS and Microsoft have spent countless dollars on making sure that their systems can handle things like failure and security issues.  And so we have a certain expectation that these things are built in.

So what’s happened from a practice perspective is that traditional email security solutions have this judgment, right?  There’s a judgment day when an email comes in.  It’s either good or it’s bad.  It’s passed on to the inbox, or it’s sent to trash or quarantine.  That doesn’t leave a lot of room [09:00] for the nuance that is required for these very sophisticated phishing attacks, many of which don’t have payloads attached to them.  There are no links attached to them.  They’re just trying to create a certain amount of trust to get the information that they need out of your employees.

What we believe is that email security is a life cycle.  There isn’t a single point in time when you should be managing it.  Of course you’re going to be checking email as it comes in, but you also need to be protecting the email and the inbox at all stages of the email life cycle, right?  And so that means we’re not just talking about threat detection, which is, of course, important, but we’re also talking about automated threat defense.  And then, as I said earlier, not just the automated threat defense, but also the kind of incident response tools we provide you to address any additional threats that have made it through.

So if we take a look at that, and we [10:00] consider email security as a life cycle, I want to show you how we’ve done that at GreatHorn.  What you’re seeing here is the GreatHorn email security platform, and what you see here really reflects that idea of protecting email at all stages, right?  At the top, you see detection.  And, of course, we’ve got common threat intelligence feeds from trusted security providers, right?  We also have our own proprietary community threat intelligence that we’ve developed from the millions of emails that we see within our environment, which helps us identify emergent zero-day-type threats.  But what really helps to set us apart is what we call adaptive threat analytics, and that’s where we’re taking a look at the communication patterns at not just the organization level but also the individual level.  What’s the relationship between people?  What’s the relationship with other organizations?  [11:00] What does normal look like?  What are our expectations from a technical and organizational fingerprinting perspective?  Does a particular domain typically fail authentication?  Well, then, for them to fail it is not unusual.  But if they typically pass it and one day fail it, now we’ve suddenly got an anomaly.  And we take all of that information, the sender reputation, the fingerprinting, the deep relationship analytics, and it all feeds into our threat detection engine.
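To make the adaptive threat analytics idea concrete, here is a minimal sketch of how several per-sender baseline signals could be combined into a single anomaly score.  It is illustrative only, not GreatHorn’s actual engine: the profile fields, the weights, and the 0.5 review threshold are all assumptions made for the example.

```python
# Hypothetical sketch: combine per-sender baseline signals into an anomaly score.
# Field names, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    """Historical baseline learned for a sender (illustrative fields only)."""
    usual_return_paths: set = field(default_factory=set)
    usually_passes_spf: bool = True
    usually_passes_dkim: bool = True
    known_recipient: bool = True   # has this sender emailed this recipient before?

def anomaly_score(profile: SenderProfile, msg: dict) -> float:
    """Score how far a message deviates from the sender's learned baseline."""
    score = 0.0
    if msg["return_path_domain"] not in profile.usual_return_paths:
        score += 0.4   # unexpected return path for this sender
    if profile.usually_passes_spf and msg["spf"] != "pass":
        score += 0.3   # SPF result breaks the historical pattern
    if profile.usually_passes_dkim and msg["dkim"] != "pass":
        score += 0.3   # DKIM result breaks the historical pattern
    if not profile.known_recipient:
        score += 0.2   # no prior relationship between sender and recipient
    return score

profile = SenderProfile(usual_return_paths={"example.com"})
message = {"return_path_domain": "bulk-mailer.example.net", "spf": "softfail", "dkim": "pass"}
print(anomaly_score(profile, message))   # 0.7, above an illustrative 0.5 review threshold
```

The structure is the point: the same SPF or DKIM result can be normal for one sender and anomalous for another, because every check is made relative to that sender’s learned baseline rather than against a fixed rule.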

At the bottom of your screen you’re seeing our threat response, right?  And this is where we talk about both the automated threat defense and the post-delivery incident response.  As for the automated defense, as I said, the advantage of being cloud-native and being connected directly to cloud email APIs is that we have access to far wider remediation actions than, [12:00] say, a secure email gateway might have.  So that means that, yes, of course we can quarantine things, and things that we absolutely know are bad will likely be handled in that way.  But we also have the ability to provide nuance and context to the users.

And so what do I mean by that?  In a lot of instances you may have an email that looks anomalous, that doesn’t fit the pattern that we expect, but it’s possible that an actual person is simply breaking the pattern that we’ve come to expect.  And so, in such an instance, we might banner the email and say, “Hey, this doesn’t look like the person that you think you’re speaking with,” or “There’s something suspicious about this (inaudible) based on our threat detection.  Be careful.”  Right?  And so having that additional context and warning really helps to reinforce the security awareness training that you probably already have in place, and to do so in an immediate and context-driven manner.

[13:00] On top of that, we have the ability to reinforce business processes and policies that you have in place.  So, for example, if you have a policy that wire transfers should not be authorized through email, then we can add a reminder that says, “This email looks like it’s about wire transfers.  Remember, you aren’t allowed to authorize wire transfers through email.  Please call to confirm,” right?  And so all of these kinds of options provide that greater context to protect the user and the organization, and to give the user the knowledge and information that they need to protect the organization, as well.

That final piece is the incident response, right?  We’re all too familiar with the requirement to use PowerShell scripts after an incident has made it through email security, desperately trying to figure out, okay, how widespread is it, [14:00] how quickly can I remediate it?  The need to do that lengthens the time to remediation, and it’s also often inaccurate, right?  Our post-delivery incident response capabilities enable you to do multi-vector searching within our environment, do a quick forensic analysis to identify what the problem is and how widespread it is, and, with a couple of clicks, take all of those emails directly out of inboxes, no matter when they were delivered, whether it was five seconds ago, five days ago, or five months ago.  So that ability to respond quickly really helps, as well.

And so what you see in the middle here is the combination of all of what we’ve learned, right?  We’ve created these four modules that are specific to the threat topic, that are turnkey solutions, and that are built on the best practices and the knowledge that we’ve learned, [15:00] as well as the specific functionality required for each of them.  So, for example, imposter protection focuses on domain lookalikes, executive impersonations, direct spoofing, and business services spoofing: all of those cases where an email is pretending to be someone or something that you trust, okay?

Link protection, which EJ is going to really go through in his demonstration, we handle in a little bit of a different way, right?  It’s not just a question of, okay, we’re going to block malicious URLs.  Of course we will do that.  But in addition to that, we have the ability to take a look at links and determine whether or not they have suspicious characteristics.  And so what we’ll do is, A, warn the user, but, B, also automatically sandbox that URL and provide a preview of the destination page, so that the end user can take a look and make sure that the destination is [16:00] where they intend to go.

And then attachment protection and mailbox protection are fairly self-explanatory, in terms of really supporting the users, and protecting against malicious and suspicious attachments.

So that’s really the high-level view of the GreatHorn email security platform.  I’ve talked long enough.  (laughs) I’m going to go ahead and turn it over to EJ so that he can really give you an understanding, from a demonstration perspective, of how the platform works.  EJ?

EJ WHALEY: 

Sure.  Thanks, Lorita.  So we’re going to shift gears a bit here.  Rather than jumping right into the platform, we’re going to start in an inbox.  The reason for that: at the end of the day, it’s really about the end users.  There are a number of things that we do, of course, to enhance the tools that are at information security teams’ disposal, and there are obviously a number of things that we’re doing to lighten the workload of the information security team, or of the IT team.  But, again, at the end of the day, [17:00] the threats that are being faced by our organizations are the things that are in the purview of the end user.

Something that Lorita alluded to earlier in the presentation was the complexity of some of these attacks, and we’ll get into a couple of other examples, but I’d really like to start here.  At this point, we’re in a real Office 365 inbox.  It belongs to Emily Post of Flying Deliveries.  And if we look at what we have here, for all intents and purposes, as far as Microsoft is concerned, this particular message is from Aaron.  It’s from an internal user at Flying Deliveries.  We have his photo here.  We take a look, and we have his address.  If we hover over, we get the information that pops up.  We’re going to say, yeah, this all seems to check out.  We have some mail history down here.  We can send him an email.  The problem, though, is that this isn’t actually a message from [18:00] Aaron.  And this (inaudible) something where, again, the platform itself is indicating to the user that this should be from Aaron, right?  You train users to hover over the user information.  You train users to hover over URLs.  But what happens in a situation like this where, again, this appears to be from Aaron?

From the GreatHorn perspective, what we’re seeing here are a couple of different things.  We flagged this via two of our out-of-the-box policies, one being an Auth Risk, the second being a Direct Spoof.  You’ll notice that we don’t have any alerts or policy actions set up on this particular one, for the purposes of the demo, so that that initial message isn’t muddied in any way, but we’ve flagged some things here that are of note and of interest.

First and foremost, the direct spoof, as I alluded to, that’s going to fall under one of our imposter protection categories.  We have a mismatch, [19:00] or an anomalous return path, as it relates to this domain.  Now, what’s important to note here is that this particular anomaly isn’t pulled out in and of itself because it’s a mismatch; it’s pulled out because based upon what we’ve seen historically for Aaron, based upon what we’ve seen historically for the Flying Deliveries domain, we would not expect this return path.  And there are likely a number of messages that your organizations receive on a day-to-day basis where users have gone out and signed up for a mailing list, for instance, and in all likelihood that particular mailing list is coming from a particular domain, but in all likelihood it probably has a different return path, whether it’s something like HubSpot or Marketo, whether it’s an email configuration tool that the organization that is sending the message has set up themselves.  Whatever the case may be, there’s likely going to be some kind of a mismatch between that domain and that return path, and there are a number of other examples of that [20:00] taking place.  That doesn’t mean that that particular message is malicious or even suspicious, for that matter; it just means that it came from a source other than the actual domain.  Tends to be [fairly common?], so in an instance where you’re trying to make a determination based upon just what you’re seeing at that point in time, you may end up with a lot of messages being flagged because of something like that.

Again, the difference with GreatHorn is that we scan and analyze both your environment and the entirety of our dataset, across all of our clients’ environments.  We’re making those associations.  We’re determining what the normal pattern looks like from a sending configuration perspective.  Now, that also applies to the IP address, whether or not there’s a reply-to, etc.  And rather than us simply saying, hey, that return path isn’t the same as the domain, we’re going to tell you that that particular return path isn’t normally used when sending from that domain.
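As a rough illustration of that baseline idea, the sketch below learns which return-path domains a sending domain normally uses from previously observed mail, and flags only deviations from that learned pattern rather than flagging every From/return-path mismatch.  The domains and message records are invented for the example; this is not GreatHorn’s implementation.

```python
# Illustrative sketch: learn normal return paths per sending domain, flag only deviations.
from collections import defaultdict

history = [
    {"from_domain": "vendor.example", "return_path_domain": "hubspotemail.net"},
    {"from_domain": "vendor.example", "return_path_domain": "hubspotemail.net"},
    {"from_domain": "flyingdeliveries.example", "return_path_domain": "flyingdeliveries.example"},
]

baseline = defaultdict(set)
for msg in history:
    baseline[msg["from_domain"]].add(msg["return_path_domain"])

def return_path_is_anomalous(msg: dict) -> bool:
    """True only when the return path breaks the sender's learned pattern,
    not merely because it differs from the From: domain."""
    seen = baseline.get(msg["from_domain"], set())
    return bool(seen) and msg["return_path_domain"] not in seen

# A mailing-list style message with its usual (mismatched) return path is not flagged...
print(return_path_is_anomalous({"from_domain": "vendor.example",
                                "return_path_domain": "hubspotemail.net"}))      # False
# ...but an unexpected return path for a domain that normally matches is flagged.
print(return_path_is_anomalous({"from_domain": "flyingdeliveries.example",
                                "return_path_domain": "attacker.example"}))      # True
```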

Another area where this comes into play is around authentication.  [21:00] When it comes to this particular message, again, we flagged it as an auth risk as well, and here’s why.  What we’re seeing is that DKIM is being signed, but it’s being signed by a domain other than the sending domain itself.  Again, we see a number of different things across the entirety of our clients’ data.  There are a number of environments where folks don’t have SPF, DKIM, and/or DMARC configured.  Other times, they do have it configured, but they have it configured very poorly.  In those instances where, perhaps, someone hasn’t configured it, or someone has tried to configure it but done it in the wrong way, you’re getting authentication results from them that are less than stellar, we’ll say.  You don’t necessarily want to drop those messages at the perimeter.  You don’t necessarily want to quarantine those messages and prevent them from getting to end users, because they could be, again, perfectly legitimate.  What tends to be the recourse from there is, again, either those messages don’t reach end users [22:00] as seamlessly, or you’re having to whitelist entire domains altogether, because you don’t want to prevent those messages from getting in anymore, but at that point you’re also opening up a fairly significant gap in your defenses.

Similar to what we’re doing from an email [eradicator?] perspective, we’re also doing that for authentication.  In this case, in the Flying Deliveries environment, you would expect SPF to pass and you would expect DKIM to pass, but what we wouldn’t expect is, again, for DKIM to be signed by a domain other than the Flying Deliveries domain.  The same would apply across the board.  If SPF normally passes and we’re seeing a soft fail, we’re going to note that.  If DKIM is normally not configured and it’s suddenly failing, we’re going to note that.  Things of that nature.  That’s all based upon the scanning and analysis that we’re doing.  It’s all based upon the pattern matching, etc.  It’s not just about that one anomalous characteristic.
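For readers who want to see what that DKIM alignment check looks like mechanically, here is a rough sketch that reads a message’s Authentication-Results header and notes when DKIM passes but the signature belongs to a domain other than the From: domain.  The header value, the domains, and the regex-based parsing are simplified assumptions; production parsing of Authentication-Results is considerably more involved.

```python
# Simplified sketch: DKIM passes, but the signing domain (d=) is not the From: domain.
import email
import re

raw = (
    "From: Aaron <aaron@flyingdeliveries.example>\r\n"
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass header.d=bulk-sender.example\r\n"
    "Subject: Quick favor\r\n"
    "\r\n"
    "Are you at your desk?\r\n"
)
msg = email.message_from_string(raw)

from_domain = msg["From"].split("@")[-1].rstrip(">").lower()
auth_results = msg["Authentication-Results"] or ""
dkim_result = re.search(r"dkim=(\w+)", auth_results)
dkim_domain = re.search(r"header\.d=([\w.\-]+)", auth_results)

if dkim_result and dkim_result.group(1) == "pass":
    if dkim_domain and dkim_domain.group(1).lower() != from_domain:
        # Worth flagging if this sender's mail is normally signed by its own domain.
        print(f"DKIM signed by {dkim_domain.group(1)}, not {from_domain}")
```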

[23:00] So a couple of the other areas from an imposter protection perspective that we talk about relate to things like this, where we’re seeing that Sherry is sending this message along to Emily, but we’ll note here, based upon the banner, that, hey, that’s not typically an address that Sherry sends from.  This particular banner constitutes one of our automated response actions; we’ll get into a couple of the others in a second.  But the idea here is that we can give a given recipient more context than just a simple “this is coming from an external user.”  We can say things like “This isn’t the address that Sherry typically sends from.”  If a user sees this and you have some kind of a training awareness program in place, something that they may do from here is to say, “Hey, you know what?  I may not have checked before, but I certainly should scroll over,” and they scroll over the name, take a look, and realize that, yeah, that’s probably not the address that Sherry typically sends from.  [24:00] So from an administrative perspective, that particular example is going to fall under our name spoofs category: messages that are coming from addresses with display names that are typically associated with other addresses.

The other area that we’re also looking at from an imposter protection perspective in regard to these types of impersonations is domain lookalikes.  So is the message coming from an address or domain that is similar to one of your own, or similar to a third party’s?  So if we come in here and we take a look, again, we’re in the Flying Deliveries environment.  You have the correct spelling of Flying Deliveries up here, but we’ll note in this particular domain we have that second L.  If we zoom out just a little bit, we’ll note, again, this domain name is similar to one of the organization’s domains.  This would apply to your primary domain and any other domains that you may typically send from, and, again, we’re also looking at third-party domains that we see both in your environment [25:00] and across the entirety of our clients’ dataset.
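A domain lookalike check of the kind described here can be sketched with nothing more than a string-similarity measure.  In the hedged example below, the protected-domain list, the 0.9 similarity threshold, and the use of difflib are all assumptions made for illustration.

```python
# Illustrative lookalike check: flag sender domains that are very similar to,
# but not exactly, one of the organization's own domains.
from difflib import SequenceMatcher

PROTECTED_DOMAINS = {"flyingdeliveries.com", "greathorn.com"}   # assumed for the example

def looks_like_protected_domain(sender_domain: str, threshold: float = 0.9) -> bool:
    """True if the domain nearly matches a protected domain (extra or swapped characters)."""
    sender_domain = sender_domain.lower()
    if sender_domain in PROTECTED_DOMAINS:
        return False   # an exact match is the real domain, not a lookalike
    return any(
        SequenceMatcher(None, sender_domain, legit).ratio() >= threshold
        for legit in PROTECTED_DOMAINS
    )

print(looks_like_protected_domain("flyingdelliveries.com"))   # True: the extra "l"
print(looks_like_protected_domain("flyingdeliveries.com"))    # False: the real domain
print(looks_like_protected_domain("unrelated-vendor.com"))    # False
```

In practice the same comparison would also run against well-known third-party domains, as EJ notes, and would need to account for tricks such as homoglyphs and subdomain abuse that a plain similarity score does not catch.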

We’re also going to be looking at things like business services impersonations, and we’ll get into a couple of examples of those in a moment: things that come across where they might look like they’re coming from someone’s bank, or from Google or Microsoft.  That’s really what, for the most part, encompasses that impersonation protection piece.  I mentioned, as well, that beyond banners we have other automated incident response options.  We have quarantine.  We can move messages to the trash.  We can remove attachments: if a message comes across and it meets a certain set of policy criteria, we can remove the attachments that are there and still enable the users to request them back.  Again, we can add a banner.  We can move the message to a folder.  We can add a label or a category, depending on whether you’re in G Suite or Office 365.  We can archive, [26:00] and we can do any and all of the above in conjunction with, or without, admin email alerts or end user email alerts.

The idea behind the policy actions is that you can choose them to fit different risk profiles, depending upon the policy, and to fit different roles and responsibilities within the organization.  So, for instance, you may want to handle the executives’ mail a bit differently than you handle everyone else’s mail.  Or when it comes to something like a wire transfer content policy, that might mean something a little bit different to the accounts payable team than it does to all the other teams in the organization who can’t actually execute a wire transfer.  Because of that, you can treat those policies in a unique fashion, depending upon which Active Directory group a given user belongs to.
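One way to picture the policy model EJ describes is as a mapping from a policy and a user’s directory groups to a set of response actions.  The sketch below is purely illustrative: the policy names, group names, actions, and lookup scheme are invented for the example and are not GreatHorn’s configuration schema.

```python
# Hypothetical policy table: the same detection can trigger different actions per group.
POLICIES = {
    "wire_transfer_content": {
        "accounts_payable": ["banner", "notify_admin"],   # can actually execute wires
        "default":          ["banner"],                   # everyone else: context banner only
    },
    "executive_name_spoof": {
        "default": ["quarantine", "notify_admin"],
    },
}

def actions_for(policy_name: str, user_groups: list) -> list:
    """Pick the most specific action set for the user's directory groups."""
    rules = POLICIES[policy_name]
    for group in user_groups:
        if group in rules:
            return rules[group]
    return rules["default"]

print(actions_for("wire_transfer_content", ["accounts_payable", "all_staff"]))
# ['banner', 'notify_admin']
print(actions_for("wire_transfer_content", ["engineering", "all_staff"]))
# ['banner']
```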

Moving over into the link protection piece and the attachment protection piece, [27:00] the first part of that is here.  We’re utilizing both third-party threat intel and our own threat feeds to scan and analyze URLs and attachments.  That said, not all URLs are going to be known bad URLs, and not all attachments are going to be known bad attachments.  So what we’ve done is we’ve added a third classification of URLs.  We have known bad; we have common, so things that we can tell you with a high degree of certainty are known good; and then we have the grey area in the middle, and those are going to be classified as unusual or suspicious.  You’re going to end up seeing emails like this that come across, again, as I mentioned a moment ago, that may look like they’re coming from your bank, or from Microsoft, right?

The user is going to take a look, and obviously some of the messages that come across aren’t even as well put together as this, but they’ll [28:00] have things like, if we look in the bottom left-hand corner, what appear to be perfectly legitimate URLs.  We go through, and all of these seem to be just fine.  And then you’ll notice that this last one here, in the bottom left-hand corner, is rewritten.  The other tenet behind link protection, beyond the threat intel piece, is identifying these types of URLs and rewriting them.  If I, as an end user, now click on this URL, the URL itself will be subject to time-of-click analysis.  So at the time of ingest we perform our analysis on the URL.  We don’t have a threat intel match at that point, so we’re not marking it as known bad, but we have identified it as something that is suspicious.  When I interact with it, I’m going to get that time-of-click analysis to see if we have, perhaps, an updated threat intel match.

In the absence of that, the user clicks and is brought to this particular warning page, where they’re given more intel or more [29:00] context behind what’s there.  So just to go back to the inbox again for a moment, you’ll notice that this says “Sign in to the customer portal with your user ID.”  My user ID is posted right there, very conveniently.  But when I come here, we’re actually showing the original link.  Now, I as an end user can look at that and say, “Well, that seems a little bit odd.  off.ice365.com?  That doesn’t really look right.”  And then I come here and we can see, again, we have what appears to be a Microsoft login portal.  So the user now has this context.  The user can still click “Take Me There” in instances where there might be a false positive, and we can report that.  But ultimately what we’re doing here is giving the user this warning.
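The rewrite-and-recheck flow can be sketched in a few lines.  In the example below, the link-check endpoint URL, the verdict labels, and the threat-intel lookup are all hypothetical; the point is only the sequence: analyze at ingest, rewrite the grey-area URLs, and analyze again at the moment of click.

```python
# Simplified sketch of rewrite-at-ingest plus time-of-click re-analysis.
from urllib.parse import quote, unquote

REWRITE_ENDPOINT = "https://link-check.example.com/r?u="   # hypothetical redirect service

def rewrite_if_suspicious(url: str, verdict_at_ingest: str) -> str:
    """At ingest: leave known-good URLs alone, rewrite the suspicious grey area."""
    if verdict_at_ingest in ("suspicious", "unknown"):
        return REWRITE_ENDPOINT + quote(url, safe="")
    return url

def handle_click(rewritten_url: str, lookup_threat_intel) -> str:
    """At click time: re-check the original URL against (possibly fresher) threat intel."""
    original = unquote(rewritten_url[len(REWRITE_ENDPOINT):])
    if lookup_threat_intel(original) == "malicious":
        return "BLOCK"   # intel caught up since delivery
    return f"WARN_PAGE showing original destination: {original}"

link = rewrite_if_suspicious("https://off.ice365.example/login", "suspicious")
print(handle_click(link, lambda url: "unknown"))   # still no intel match: show the warning page
```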

But then from the administrative perspective, we’re also going to be giving context into the different URLs that a user might be accessing.  We can go ahead and block certain domains at the top level here, so things like poker-academy.bid or annas-soft.com.  Probably don’t want users going there.  [30:00] Things like Google, not all Google URLs are bad but some certainly are, so we can give you a little bit more granularity around those.

We take a look here, we can see, again, we can block certain URLs.  We can warn on others.  We can allow others.  We’re also going to give you context behind who’s received a given URL, how many times that particular warning message has been displayed — so, in other words, how many times have they clicked the URL.  If we go in and we take a look, and we’ll note that, hey, this particular user’s clicked on that URL ten times, that could be an indication to me that perhaps we need to send the [training?] (inaudible) module off to that particular user.  We’ll also see in this case, hey, this user has actually gone ahead and clicked through, so they’ve clicked that “Take Me There” button.  If I then go back, right, and I take a look and I say, huh, well, that’s actually what appears to be a credential theft site, I can grab this URL.  I can go do my own research on it.  I might say that user’s clicked through.  That’s a credential theft site.  I’m not sure whether or not they actually entered their credentials, but just to be safe [31:00] I’m going to go ahead and I’m going to reset their passwords for their accounts.

The same can also hold true if the user is potentially going to a malware website, right?  You can isolate the machine, run an [AD?] scan, perhaps reimage the machine itself, whatever the case may be.

The last piece we’ll discuss is just the incident response piece.  So we’ve talked about imposter protection.  We’ve talked about link protection.  We’ve talked about attachment protection.  We’ve gone through the automated response piece when it comes to those, right?  So when we detect something, we flag it, we’re going to take that automated action.  We’re going to give the end user context.  We’re going to give the end user a warning page, or subject URLs to time-of-click analysis at the inbox level, whatever the case may be.

But on the other side of the coin is what happens if we want to go out and search for something [in the?] environment, in a given mailbox, wherever we (inaudible)?  [32:00] That’s where this particular search functionality comes into play.  You can use this on a one-off basis, so you could search by just one piece of criteria.  You can search in a (inaudible) fashion.  We have a number of different options, as you can see.  The reality is that when these types of attacks come across you may want to go back and, for peace of mind, check to see, well, did we get anything from that sender previously.  Are we seeing a particular display name over and over again in attacks, but then maybe they’re also in, perhaps, a Microsoft announcement that is legitimate?  Well, maybe we want to go and take a look at those.  Perhaps you’re seeing the same URL coming across.  Perhaps there’s a message with the same attachments, or the same file name associated, or the same file types that you’re concerned about.

The idea is that you can, again, use this as you see fit.  When we go in, we can plug in our criteria; in this case we’ll look up a sender.  [33:00] We can search on just one day’s worth of data, or we can search across a week’s worth or a month’s worth.  We’ll go back and, like I said, we’ll search here across about 35 days’ worth of data, just to see how many messages we got from this particular sender.  Okay, we’ve gotten quite a few.  We’ve taken action on all of these, but I just want to go ahead and get rid of them altogether.  So I can do a quick select all, and in this dropdown I can come down, click “Remove from user’s mailbox,” click “Apply,” and click “Apply” again.  As easy as that, we’ve removed all of these messages from all of these end users’ inboxes.  No need to go into the Microsoft security console and do any research there.  No need to run a PowerShell script.  Everything is done very simply and easily from the console here.
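For comparison, the same kind of one-off sweep can be expressed directly against the Microsoft Graph REST API, which is one way a cloud-native tool can act on Office 365 mailboxes without PowerShell.  This is a hedged sketch, not GreatHorn’s code: the token handling, query syntax, and required permissions are simplified, and the user and sender addresses are placeholders, so check the Graph documentation before relying on it.

```python
# Rough sketch: find recent mail from a sender in one mailbox via Microsoft Graph
# and move it to Deleted Items. Token, permissions, and query details are simplified.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<application access token with Mail.ReadWrite>"   # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def remove_messages_from_sender(user_id: str, sender: str, since_iso: str) -> int:
    """Move messages received since `since_iso` from `sender` out of the user's mailbox."""
    query = (
        f"{GRAPH}/users/{user_id}/messages"
        f"?$filter=from/emailAddress/address eq '{sender}' "
        f"and receivedDateTime ge {since_iso}&$select=id"
    )
    resp = requests.get(query, headers=HEADERS)
    resp.raise_for_status()
    removed = 0
    for msg in resp.json().get("value", []):
        requests.post(f"{GRAPH}/users/{user_id}/messages/{msg['id']}/move",
                      headers=HEADERS, json={"destinationId": "deleteditems"})
        removed += 1
    return removed

# e.g. sweep roughly 35 days of mail from a suspicious sender in one mailbox:
# remove_messages_from_sender("emily.post@flyingdeliveries.example",
#                             "suspicious.sender@example.net", "2019-01-01T00:00:00Z")
```

A platform-level remediation like the one in the demo would repeat this across every affected mailbox and fold it into a single console workflow, which is exactly the tedium the point-and-click removal avoids.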

I know we covered quite a bit of ground there.  I imagine you probably have a couple of questions, so [34:00] with that, Lorita, I’ll pass it back to you and see if we have any questions from anyone in the audience here.

LORITA:

Yeah, absolutely.  Thanks so much, EJ.  So yes, as a reminder, please feel free to submit your questions in the GoToWebinar control panel.  EJ, we did have a couple of questions that came in.  One specifically asked “Do I need to use any particular client in order to see the banners or the user notifications that you mentioned?”

EJ:

Yeah, great question.  The short answer is no, and that’s something that I should’ve mentioned during the presentation, so I apologize there.  There’s no limitation from that perspective on either the Office side of things or the Google side of things, regardless of the mail client you’re accessing from and regardless of the device you’re accessing from.  The experience is going to be the same.  The banners are going to show up the same way.  The labels and categories will show up the same way.  We also don’t require [35:00] any sort of secondary application.  You know, that comes up from time to time in conversation, particularly around quarantine.  There are tools out there that will require an end user, particularly if they’re on mobile, to download a secondary app, and that’s how they have to manage the quarantine in terms of releasing anything from there.  We don’t require anything of that nature.  There’s no additional infrastructure on that front.  And, again, the experience ends up being the same across devices and clients.

LORITA:

Great, thanks, EJ.  The next question is more about implementation.  Do we need to make any changes to, like, our MX records or anything like that?

EJ:

Yeah, another really good question.  So, no, you don’t have to make any changes.  One of the core tenets of GreatHorn is the fact that we are API-driven.  We tie directly into Office 365 or G Suite, or both in some situations.  We have a handful of clients that actually do use both of the [36:00] mail platforms.  But we utilize APIs that both of those organizations make available.  Because of that, we don’t require you to make any changes to your MX record, and there are no DNS changes.  The other side of that, too, and one of the other big benefits, is the fact that there are a lot of different ways that people may currently be running their mail environment, from a security perspective as well as from an IT architectural perspective.  Again, because of where we tie into the mail environment, you don’t have to make any of those changes, and you can obviously save a lot of time and a lot of headaches when it comes to potentially having to entirely re-architect what is already there.

LORITA:

If I already have Microsoft ATP, do I need this?  Does it conflict with my environment?

EJ:

Yeah, that’s another really good question.  It bleeds a bit into the prior one.  In terms of any existing technology, you don’t technically need to get rid of anything.  [37:00] Depending upon your licensing level, you’re likely in a position where you may already have access to these things, and we’re certainly not going to discourage folks from using something that they’re already paying for.  So from the technological perspective, there wouldn’t necessarily be any overlap or any conflict.  That said, in terms of ATP itself, it’s not exactly a one-to-one comparison.  ATP is really heavily reliant upon threat intel, which is helpful, but it’s not the end-all be-all.  As an added layer, it’s not the worst thing in the world, but at the same time we don’t see a ton of the organizations that we’re working with running ATP in conjunction with GreatHorn.

LORITA:

Great.  Well, that’s actually all the questions that we have, and we’re coming right up to that 40-minute timeframe that I had promised everybody we would keep to.  So if you did not have the chance to ask a question but would like to, please feel free to email us at [email protected] [38:00] and we’ll get back to you as soon as possible.  We’ll be following up with the recording, as well as the slides, from today’s presentation following the event, but in the meantime I want to thank you all for joining us today, and to thank EJ for conducting the demonstration and the technical discussion.  We hope you have a great afternoon and that you’ll join us on future webinars.  Thanks so much.  Bye-bye.