Beyond the Nigerian Prince:
The Evolution of Phishing & How We Fight Against It
In this 45-minute webinar, dmarcian CEO Tim Draegen joins GreatHorn CEO and co-founder Kevin O’Brien as they walk through the history of phishing attacks – from the early days of the widespread Nigerian prince schemes to today’s sophisticated and highly targeted spear phishing threats. They explore how we’ve had to shift our philosophical approach to these threats as they’ve changed their approach, and discuss how the technology we use has evolved to keep up.
Lorita Ba: Hello, everyone. Thank you for joining our webinar today, “Beyond the Nigerian Prince: The Evolution of Phishing and How We Fight Against It.” I’m Lorita Ba. I’m the vice president of marketing for GreatHorn, and I’m joined today by Kevin O’Brien, GreatHorn’s CEO, as well as Tim Draegen, CEO of our partner, dmarcian. Before we get started, I just want to cover a few logistics. The webinar should take about 40 to 45 minutes. You will be on mute during the duration of the webinar, but you have the opportunity to submit questions at any time in the Q&A panel on the right side of your screen, in the go-to webinar control panel. The webinar is being recorded and will be available for replay for all registrants, as will the slides. And so with that, without further ado, I’m going to go ahead and turn it over to Kevin to give a short introduction of himself, followed by Tim, who will introduce himself as well.
Kevin O’Brien: Thanks, Lorita. Nice to be here. Kevin O’Brien, CEO and cofounder at GreatHorn. Looking forward to the [01:00] conversation. A little bit of brief biographical information. I’m a serial entrepreneur in the cyber security space. I’ve been doing this for about 20 years. We’ll be talking today about the evolution of email security threats. I’m really excited to be joined by Tim. Tim, I’ll let you introduce yourself as well.
Tim Draegen: All right, thanks, Kevin. Hi, everyone, I’m Tim Draegen, the CEO of dmarcian. Hey, thanks very much for taking time to listen in today. Today is a special day, as this webinar is the first collaboration between GreatHorn and dmarcian. The two companies both participate in the email industry, but in very different ways, and so partnering together and comparing notes has already led to some really great insight, and sharing this, on this webinar, makes the day special, for me and hopefully for you. OK, next slide.
I get the honor of kicking us off today. This is myself. This is a picture of myself as a baby, looking into the crystal ball of my future. I had no idea back then that I’d end up spending so much time working on email. [02:00] I like to think that the baby isn’t on the verge of crying, but instead, he’s expressing a look of awe. That’s what I’d like to think, at least. I’m a software engineer by trade. I spent the first 30-something years of my life in Silicon Valley, trying to work on the most difficult technology problems that I could find. When my wife and I decided to slow down and start a family, I naturally took a job at an email company, because as we all know, there really isn’t much to email. This was back in the early 2000s. It’s supposed to be a joke. During a Friday afternoon beer bash way back when, I was pulled aside by a VP to get my thoughts on the company’s plans to make the best anti-spam filter in the world. Back then, the company had a super-fast platform for sending email, because the company needed a plan to break into the enterprise. At that time, people were just starting to realize the importance of security. So why not tap into this enterprise security market with an anti-spam solution? Easy money.
Let’s see. [03:00] At the time, I made an off-the-cuff remark along the lines of, “We’ll make a lot of money, but it won’t solve the problem.” The problem back then, as I saw it, was one of basic email identity. People can’t easily tell if a piece of email is real or not. That off-the-cuff remark kicked off my own involvement in a huge cross-industry effort that finally resulted in the public release of the DMARC technical specification around 2012. Since then, I’ve been doing everything I can to get the world to adopt DMARC. But today’s webinar is not all about DMARC. Next slide, please.
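For readers unfamiliar with it: DMARC is published as a plain DNS TXT record that anyone on the internet can look up. A minimal sketch, using the placeholder domain example.com (the policy tags shown are standard DMARC tags; the reporting address is hypothetical):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p` tells receivers what to do with mail that fails authentication (`none`, `quarantine`, or `reject`), and `rua` is where aggregate feedback reports are sent.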
What I still struggle with today is the size and the scope of email in today’s world. Everyone uses email all the time, everywhere. It’s still mind-bogglingly large. The scale of email, I believe, puts it at the same level as a utility like water, electricity, but what makes email complicated is that it isn’t a single thing. It’s better thought of as a giant pile of technical specifications that describe how all the pieces can interoperate together. [04:00] In my opinion, what makes email beautiful is that anyone can make their own version of the pieces and pull it together with the existing ecosystem. The end result is the world’s largest online communication medium, and it’s wholly owned by no one, which I think makes it worth working on. I have to think like this, or else I probably would have gone insane a long time ago.
All right, so with all this background out of the way — thanks for sticking with it — we can get to the point of this webinar: why is phishing still a problem, and what can be done about it? To answer this question, we have to consider the larger context. As the internet continues to evolve and create opportunities for people, criminals also evolve to take advantage of new opportunities. If email is viewed as an ecosystem with lots of pieces, all interoperating with each other, and not as a monolithic thing, hopefully it starts to become clear that the pieces themselves evolve at different rates, and that there are industries in place that focus on specific pieces. So due to this kind of piecemeal evolution, email has seen significant [05:00] investment over the past couple of decades. An analyst piece from, I think, five years ago or so, put the annual spend on anti-spam technology somewhere between $7 and $10 billion a year, and that’s just for the technology dedicated to blocking spam, and that’s dated information. But despite this massive and continued investment in email security, certain threats continue to sail through existing email security solutions. I really want to jump ahead, but I’m not going to. So before we ask how this can be, we’ll take a look at what the threats look like. And Kevin, you’re on the front lines of this, so for the next slide, it’s all yours.
Kevin O’Brien: Thanks, Tim. It’s a good story, and since you broached the idea of analysts and their input on this, allow me to quote from one of my favorite analysts who work in the space. In a presentation they gave in a ballroom, in a hotel somewhere, they said, “Email is one of the few enterprise systems that can truly be defined as venerable, yet utterly vulnerable.” I think that’s a really salient [06:00] point for the conversation we’re having today, because email is nearly 47 years old, and yet it still has some of the largest impact on the overall cyber security industry, and by extension, on the organizations for whom those cyber security impacts have the most direct bottom-line consequences. And yet, if you ask the average organizational user of email, as we did back over the summer — we took a survey of approximately 300 professionals, split roughly 50/50 between information security professionals and non-information security professionals — most people say the biggest problem they have is spam. That’s an intriguing misunderstanding, because it’s not one that’s mirrored on the info sec side.
If you survey — and you can see it here on the slide, the left-hand column — [07:00] the organizational users responsible for thinking about cyber security, most of them say: we understand that there is a continuum of threat that we receive over the communication medium of email, ranging from phishing to targeted phishing, from direct financial requests that come from fraudulent senders through malware distribution, so-called payload attacks. Moreover, 20 percent of the time, those attacks are bypassing all of the legacy security solutions that these organizations have put in place, meaning that a remediation step is needed. In the info sec world, we think about two key metrics: time to detection, how long does it take you to find out that something bad is happening, and time to response, how long does it take you to go deal with that problem.
Email is ubiquitous. Every professional has it, and [08:00] 100 percent of surveyed professionals open all of their work email. They may not respond to it or do something with it, but you can guarantee that you can get to somebody by sending them an email in a professional context. The right-hand column here, though, points out that the layperson — that is, the non-info sec professional — generally speaking categorizes all of the, quote, bad email they get as spam. And so we have this really intriguing moment that has started to happen in the info sec industry, where we recognize that email is, in fact, this primary threat vector, and yet we still have this misunderstanding, by most of the people who are directly affected by that, that they are solving a spam problem. Over the course of the rest of our time today, we’re going to talk in some detail about what that really looks like, why spam is only part of the problem, [09:00] and how you can begin to think about creating an info sec response that has some nuance to it, and can respond to the real threats you get.
But I want to put some context around that first, and that is that this is not a tech problem. You have two CEOs of two tech companies presenting to you, but this isn’t a product pitch, and you can’t solve the email security problem, or the phishing problem, simply by putting tech in place, or buying something. Instead, there’s a continuum here, and that continuum ranges from industry-level controls, some of which we’ll cover, to organizational process, to individual responsibility [in context?]. Having a response plan that runs the gamut from the industry to the individual is an essential part of how we start to solve for this email problem. [10:00] Before we get there, I want to turn it over to Tim, given the background that he shared. Tim, you’ve been in the trenches. You’ve seen this evolution of threat. As we think through this idea of this continuum of response, can you take us through the history of what the problem has been, and where we are today, and how we got here?
Tim Draegen: Yes, I can, but before we go there, I just wanted to add a sidebar. While collaborating on this webinar, GreatHorn and dmarcian, we wanted to put together a clear picture of how to think about email security in today’s modern world. Doing so, it became clear that an effective response to phishing has to include more than just technology, as Kevin just said. The effective response has to look at the different actors at play. There are individuals within organizations. There are organizations that coexist on the open internet, represented on the previous slide as the industry. Traditionally, security people use the concept of defense in depth, and they apply the concept largely to things that fall into the realm of the organization: [11:00] firewalls, gateways, packet inspection, logging, event analysis. By broadening the perspective and looking at the relationship between individuals and organizations, between organizations and industries, we were able to put together a pretty clear picture on how to think about email security in a way that addresses today’s advanced threats.
We can’t get very far, though, unless we look at how the risk has evolved alongside the internet. That brings us to the guy coming out of the envelope right in the middle of the slide there. What we call email security, and how we manage the risk around it today, really started off as something different. Way back when, people ran email servers on real hardware, in a closet or under someone’s desk, believe it or not. The people doing so were largely known as sys admins. These were the smart people keeping all the machines running. They were largely viewed as people that shunned daylight and spoke in riddles, also known as best friends. At that time, spam [12:00] and unwanted email was largely viewed as a nuisance. There were some commercial offerings at the time, but those were mostly sold to people that did not have a sys admin. I think that’s represented by the frayed cables in the IT closet there. But what happened was, over time, nuisance email, it evolved to be operated by professional spammers, and the job of dealing with all the crappy email that they sent fell to dedicated IT professionals within a company. Instead of being a nuisance, the amount of spam received, it became a detriment to business operations, and was viewed as an impediment to business. Budgets became available, and the commercial anti-spam industry basically took off. In managing crap email, it was a problem that could be solved by purchasing a filter that scrubbed stuff as it flowed into your network.
But at some point, the professional spammers realized that the technology and the operational expertise that they developed while spamming, they could also use it in a different [13:00] way. Not to send stock, scam, or sex pill email, but instead to deliver email to people with an intention to deceive. It was then that the economic driver behind email security risk, it changed from using email as an inexpensive marketing channel to using email as a way to expose huge numbers of people to con jobs, at basically no cost to the criminal. This is the story of how crap email went from being a nuisance to becoming weaponized. To really understand why today’s situation is bad, we have to look at what technology has been doing for us. That should bring us to the next slide, which is way back when.
Most anti-spam technology was developed to block waves of spam, as you can see here, four of them. The waves are different, and they happen at different times. But if you look closely, you can start to see patterns. Next slide, please.
The [14:00] anti-spam technology that was developed largely works by collecting huge amounts of spam, collecting huge amounts of not-spam, also known as ham, and then using machine-learning techniques to identify patterns that match only against spam. Those patterns, they’re then productized, and they’re used to block these waves of spam as they slosh around the internet. The better you do, the more money you can make. But when spammers realized that using email to deceive could be far more lucrative than selling sex pills to a few suckers, the anti-spam technologies still had a role to play. GreatHorn had a great phrase for this that I’m going to adopt. Call it volumetric phishing. That’s the next slide. Thank you.
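As a rough illustration of the spam/ham pattern-matching idea Tim describes, here is a minimal naive Bayes sketch in Python. The toy corpora and simple word-level features are assumptions for illustration only; production filters use far richer signals, but the core idea of learning patterns from labeled spam and ham is the same:

```python
from collections import Counter
import math

def train(spam_docs, ham_docs):
    """Count word frequencies in each corpus; these counts are the 'patterns'."""
    spam_counts = Counter(w for d in spam_docs for w in d.lower().split())
    ham_counts = Counter(w for d in ham_docs for w in d.lower().split())
    return spam_counts, ham_counts

def spam_score(message, spam_counts, ham_counts):
    """Naive Bayes log-odds: positive means the message looks more like spam."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    score = 0.0
    for w in message.lower().split():
        # Laplace smoothing so unseen words don't zero out the estimate
        p_spam = (spam_counts[w] + 1) / (spam_total + 1)
        p_ham = (ham_counts[w] + 1) / (ham_total + 1)
        score += math.log(p_spam / p_ham)
    return score
```

With two spammy and two hammy training samples, a message like "cheap pills" scores positive while "meeting notes" scores negative — which is exactly why this approach needs many samples of each wave, and why it struggles when there is only one sample.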
This is where anti-spam technology really starts to break down. The bad actors, they continue to evolve beyond this defense. These are some samples. Kevin, I’m not sure if you wanted to talk to this slide or not.
Kevin O’Brien: Yeah. I think [15:00] when we talk about volumetric phishing, one of the things we see is that we have all received messages like this to our personal addresses. If you’ve had email, you’ve seen this. It’s become a cultural joke based on how prevalent it was. The Nigerian prince, the title of the webinar itself, comes from a particularly virulent strand of this particular kind of phishing. But we saw this start to also move into the corporate world, and you still see this. The dates on these emails are real. These are real screenshots, and these are coming to real professionals from our customer set, and in some cases (inaudible) organizations. Now, I think we can lay claim to the fact that we’ve gotten pretty good at volumetric phishing detection. You can also see that a lot of these are in junk folders, and that’s because this problem, to Tim’s earlier point, has now had almost 20 [16:00] years of spam and ham collection to block what is coming in that’s unwanted when it’s something that looks very much like one of these messages. Of course, that’s not the whole problem. Back over to you, Tim.
Tim Draegen: Yeah. Next slide. This is the big uh-oh. If there’s only one bad email, then you can’t collect samples about it, you can’t write rules, and the badness has a chance of getting in front of a human. This kind of small-batch, call it artisanal, phishing, people refer to as spear phishing. It’s one thing to cast a wide net to try to defraud a bunch of people at once, as volumetric phishing does, but when the focus is on tricking a specific individual to do something that they shouldn’t, that’s a special category of phishing, hence the “spear phishing” moniker. It’s become the norm, though, so people call this type of attack just phishing now, instead of calling it spear phishing. But Kevin sees a lot more of these samples than I do at dmarcian, and that should be the next slide, our samples.
Kevin O’Brien: When I [17:00] first started GreatHorn, our initial iteration of the company was to look only at spear phishing and focus on that one aspect of email security. We do more now, but that was the starting point. I was in New York City, and I was presenting to a room full of people about this problem, and four, four and a half years ago, spear phishing was still the term that was used in the industry to describe the challenge. I remember that the venture capitalist whom I was speaking to listened politely to me for a few moments, and then scrunched her face up and asked me how we were starting a technology company that had anything to do with the maritime trade. It was obviously a misunderstanding, and I knew then that that probably wasn’t the investor for us. At least, we hadn’t impressed her sufficiently to get her to want to invest in what she thought we were doing. But the reality was that the term was one that everyone had seen; they just didn’t know it.
Think back to the survey we put up at the beginning of this presentation. [18:00] There is still a misunderstanding, and mis-differentiation, between spam and spear phishing. So what is it? Let’s clear it up. Targeted phishing, targeted attacks, spear phishing: they represent an attack where somebody impersonates typically an executive in a company, over email, through a variety of exploitation techniques, to get someone else inside of that business to do something. Give them access to sensitive data, wire them money, maybe give them additional information for a subsequent and more advanced attack. It can also be represented as an impersonation of a trusted service. You have all probably seen something like this at some point, where you get what looks like a cloud-hosted file — that is, a file that’s sitting on a Dropbox account, or a Google Drive account, or a SharePoint account — and someone’s asking you to open it up. It might be, as you can see on the bottom left, Microsoft [19:00] itself is claiming that you need to go and log in to release messages that are being delayed or failed to be delivered. Of course, these aren’t really messages from Microsoft or your executives. They are spear phishing attacks, designed to harvest your credentials or get you to do something.
I particularly like the one in the top right, because we really received that, and Lorita, who opened this webinar and did the introductions, got a message supposedly from me. I’ve been called many things in my career, but Earl Jordan 114 is not one of them, and this was a complete fraud. But it shows the kinds of attacks that people are relying upon, and so this door-knock attack that this represents is indicative of someone trying to execute what presumably will be a more complex attack in the future, using something like a lookalike domain, like you see on the bottom right, where there’s a DocuSign document that supposedly needs to be signed. But that “U” has an umlaut over it, and those two little dots are the difference between signing a sales contract and giving your credentials [20:00] to an attacker.
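One way technology can help with lookalike domains like the umlauted DocuSign example is to compare a sender’s domain against a trusted list after stripping diacritics. A minimal Python sketch, where the trusted-domain set is a hypothetical allow-list; real systems use the full Unicode confusables data (UTS #39), which also covers Cyrillic and Greek homoglyphs:

```python
import unicodedata

TRUSTED_DOMAINS = {"docusign.com", "greathorn.com"}  # hypothetical allow-list

def skeleton(domain: str) -> str:
    """Strip combining marks so a spoof like an umlauted 'u' collapses to 'u'."""
    decomposed = unicodedata.normalize("NFKD", domain.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def is_lookalike(domain: str) -> bool:
    """Flag a domain that is not trusted itself but mimics a trusted one."""
    d = domain.lower()
    return d not in TRUSTED_DOMAINS and skeleton(d) in TRUSTED_DOMAINS
```

The key design point: the real domain passes (it is in the allow-list), the spoof fails (it is not in the list, but its de-accented skeleton is), and unrelated domains are left alone.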
What do we do about this? As Tim pointed out, we can’t rely exclusively on technology to solve this problem, or on the traditional approach to thinking about spam: that is, consuming large amounts of data, and once you have that corpus of data, tuning a machine to look at it and figure out where the spam is and where the ham is. That approach doesn’t work on these low and slow attacks. And so, as we alluded to earlier, there is a confluence of process, technology, and people investment that leading organizations are now bringing to bear on the spear phishing problem, and on the email security problem writ large. I want to speak briefly about that, and I think we need to understand, first, what we mean by these three things.
Process is the formal, written-down set of rules you put in place around your business. When somebody says, “I need you to respond to my email,” or “I need you to wire [21:00] out money based on an outstanding account,” or “You have to go click on this link and release these messages,” a process-level control would say, “We don’t wire money without having a verbal confirmation and a phone call,” or “If you don’t recognize the sender, you don’t click on the link.” This is good. Most organizations at scale have audit requirements that in fact require that their processes be aligned to certain standards, and that they have to demonstrate, on a typically annual basis, that these are being communicated to staff.
One of the challenges with that is that process doesn’t always get followed, and so the second tier of the response framework that we propose is an investment in technology, but technology that doesn’t attempt to solve the problem by blocking things, but rather to remind and arm the end user of what the process is and how they should behave. I’m going to let Tim speak to that in more detail in [22:00] just a moment, because contextualizing and thinking about technology and how it aligns to email is a major topic, and one we’re going to spend some time on, both today and in the future.
But the last leg of this stool is the people component, and obviously you’ve seen the examples we got, where someone targets Lorita, or a member of a team, and says, “I need you to go and do something.” There is a misunderstanding and a misconception amongst security professionals that, quote, humans are the weakest link. It’s a really unfortunate way to try to use fear to sell software and technology. We think that an end user is your best opportunity to block or prevent an attack, and having strong alignment between process and people, through the use of technology, turns that supposedly weakest link into one of your strongest assets. Tim, do you want to put a little bit of [23:00] color in here, and maybe specifically to that defense in depth concept we were talking about?
Tim Draegen: Yeah. To the point here about this is more than just technology, there’s a feeling of shared responsibility by talking about these things. When we were collaborating, we thought along these lines. We were trying to find a tidy way of how process and technology and people, how they interact at the levels of individuals, organizations, and across industries. Putting this together, we didn’t feel that a matrix with rows and columns really captured the insight. The key was in the relationships, as opposed to the traditional way of defense in depth applied to specific target areas. A better way to get there, though, I think, is for us to compare and contrast what people have been doing with what they’ve been given, versus what is effective when approached from a broader perspective, that broader [24:00] perspective being, consider how industries are interoperating, consider what the organization is doing, and consider individuals that compose an organization. So looking at it across the entire perspective, as opposed to defense in depth to shield people, to shield an organization, and to educate the general public. I think that gets us to the first part, which is user engagement, which is the next slide.
Kevin O’Brien: One of the interesting things, Tim, about user engagement is that, as you just said, we have this thinking around defense in depth that it involves preventing people from being able to get access to data. It’s a real misunderstanding and misappropriation of the term. Defense in depth is a concept that we first see emerging from the National Institute of Standards and Technology, NIST, largely out of academic research, and then later into formal policy recommendations in the mid-’90s and early 2000s. [25:00] The idea is that defense in depth is a concept that you apply to a security program to make sure you don’t have a single point of failure. The implementation, sadly, has been that we have an idea that users can’t be trusted, so we will put as many levels of, quote, defense between them and doing their work as possible. If you look at users as the problem, and you lock users out, and you use technologies that were designed for an era when the primary problem that you would have from an email perspective is that you would get, quote, unwanted mail, spam — right? — then having a modality of security that said, “Just don’t let the users see that stuff, just put it in the junk folder, just quarantine it and leave it up to the info sec team to figure out when something had broken business process,” really doesn’t align. I think when we look at the evolution of cloud infrastructure, the ability for users today to walk into [26:00] a job with their personal tablet, phone, the ability to easily gain access to what would have been enterprise-grade email systems 10 or 15 years ago, with nothing more than a five-minute sign-up to get a Gmail account, or a modern email account on the web somewhere, really shifts the landscape, and blocking users just doesn’t work. Tim, do you want to talk a little bit about the modern approach, and a bit more about what we’re seeing work today?
Tim Draegen: I just have to first agree that, from my perspective, isolating users just hasn’t worked. There’s a lot of guidance around training people to not click on, quote, bad things, but the companies that specialize in performing this kind of training, they’re reporting that users are constantly failing to avoid the bad things. That kind of approach, it’s good in terms of being a training program, but it’s not actually effective at preventing the sharpest kinds of email that come through and really damage things. So in my opinion, clearly people [27:00] need better tools when going through their email. And so the modern approach there at the bottom of the slide there, we’re starting to see a lot more functionality where users themselves are given better tools when processing email, so that if they see something strange, they can get a lot better response. It’s not the model of pushing an email into a quarantine that eventually gets released by an expert, but rather the users themselves get enough visual signal to — I don’t want to go as far as say perform their own investigation, but a lot of the sharp edges have been worn off by the technology part, so that there’s less things for users to get hurt by.
Kevin O’Brien: That’s a great segue into talking about where technology does have a role to play, of course. I like the idea that we round off the sharp edges. Where I think technology in the modern approach does have some ability to help is in taking [28:00] what are categorically bad emails out of mailboxes. This is an area where having that good/bad binary classification view of the world works well, in the same way that it works well on an endpoint device, or it works well at a network firewall. Some things — malware caught by file-hash-based detection or static analysis, malicious URLs that exist on multiple real-time blacklists — these kinds of things we can get rid of. I think you have to acknowledge that security professionals aren’t ill-informed. They’re not stupid. These things work really well in all the other areas of security. But what doesn’t work is to then take that idea of binary classification from a technological perspective, and think that you can apply it to the problem that we’re describing. I think, to Tim’s slide, looking at [29:00] the one fish jumping out of the waves, that isolated, well-crafted, and dangerous attack message will bypass the kinds of things that would work if you were solving for volumetric malware distribution.
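The “categorically bad” cases Kevin mentions, such as known-malware hash matching, really do reduce to a simple set lookup, which is why binary classification works there. A minimal Python sketch, where the hash set stands in for a hypothetical threat-intelligence feed (the sample entry is just the SHA-256 of empty input, used here as a stand-in):

```python
import hashlib

# Hypothetical feed of known-malware hashes; this entry is SHA-256 of empty input,
# standing in for a real indicator-of-compromise list.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def attachment_is_known_malware(data: bytes) -> bool:
    """Binary classification fits here: the hash either matches or it doesn't."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256
```

The limitation follows directly from the code: an attacker who changes a single byte of the payload produces a new hash, and a one-off spear phishing message has no hash on any feed at all.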
And so the second thing we saw businesses start to do was say, “All right, we know that we’re not going to catch 100 percent of the bad stuff. Let’s train our users.” Training users has its own challenge to it. First, one of the things that’s very difficult to do is impact end user behavior. People are resistant to change, regardless of whether we’re talking about professionals in an office or somewhere else. And so telling people that they can’t use email — remember, 47-year-old system. Almost everyone has been using it for at least a decade or two. That’s not going to work. And secondly, the kinds of things we’re asking people to do don’t match our infrastructure expectations [30:00] as they exist in 2018. If you tell someone, “Hover your mouse cursor over a link that you think might be suspicious,” you’re asking them to, one, be a security expert and understand if that link is suspicious, and two, hover their mouse cursor. What if they’re on an iPad? What if they’re in an airport? What if they’re at home, looking at their smartphone, and it’s 9:00 PM, and they’re checking their email before turning in for the night? What mouse cursor? So this idea that technology and training together will solve for the phishing problem is utterly erroneous, and it really leads to why this problem is so significant and large today. But there is a better way. Tim?
Tim Draegen: Yeah, I think just a side note, in the info sec community, there’s the perception that users are the problem, and I agree, to some extent, that users are a problem. But treating people like static assets, I think, is not the most effective way to [31:00] deal with the users as a problem. People are not static assets. To Kevin’s point, they move around the world. They’re essentially a dynamic relationship with the organization, and so you have to treat them like things that are going to move around the world. So the traditional way of surrounding a user with a layer of armor to prevent bad things from getting to them, we just have to recognize that people are going to be out there in the wild, experiencing (inaudible) anticipate. So a better way would be to embed people into a process that can be tuned to reduce the risk. It’s better to treat it as a combination, as opposed to thinking that there’s one technology solution that’s going to solve everything.
Kevin O’Brien: Don’t take our word for it. Look at that survey that we started the presentation with. If the idea that technology and training alone could solve for this problem was true, [32:00] then you wouldn’t find that 20 percent of the respondents from the info sec side were reporting that, on a weekly basis, they were taking remediation action. That is a high rate of failure when you consider that a typical organization might get, say, 100 million email messages over the course of a week. Having a 20 percent failure rate, where they have to go and take manual action on messages that are targeting their users, when every individual incident of a cyber security breach can cost tens of thousands to hundreds of thousands of dollars, is not an acceptable margin for error. Technology has a role to play, perhaps, but we can’t allow that erroneous understanding, that what works for spam will work for phishing and targeted phishing, to persist. Thankfully, though, there are some things we can do. Tim, why don’t you talk about it from the industry perspective to start?
Tim Draegen: This is my favorite [33:00] one. Thank you. This one is near and dear to my heart. Most organizations do not take advantage of technologies that are available right now and are used to build and maintain trust across the internet. I’m not talking about securing an online asset, like a website, using SSL certs, or protecting a perimeter using firewalls or anything like that. There are significant efforts underway, right now, to get organizations to deploy technology that allows everyone else on the internet to reliably trust that whatever is online purporting to be you really is you. If email is any indication, getting organizations to adopt these technologies will take a long time, but it has to be done. The difference is that interoperating on the open internet is not a product. It’s basically a set of technology specifications, just like email is. So if you’re a business and you’re on the internet, you should figure out how to make it so that everybody else who’s going to communicate with you can do so in a secure way. [34:00] There are companies and products you could pursue that will help you deploy these technologies, but again, the technologies themselves are all about interoperability. Using them so that anybody on the open internet can actually tell that you are you approaches the problem in a new way: it’s less of a defensive posture, and more like allowing yourself to be trusted on the internet. So don’t let people guess. Really make it so that you have an established and stable presence online. Kevin (overlapping dialogue; inaudible)
Kevin O’Brien: Yeah, I do. I think what aligns to that is that, from an information security perspective — and let’s differentiate: there is an industry-standard technology adoption problem that organizations can solve by using some of these standards. You can also then take an info sec perspective, look at that same challenge, and begin to determine whether [35:00] your organization is in a good state or not with respect to that kind of standards adoption. Why does this matter? Take the examples of the impersonations of me that we saw a few moments ago. I made a joke about the fact that the email address was someone else’s name. That’s hard to tell as an end user. If a Kevin O’Brien, spelled the way that I spell my name, sends an email to someone on my staff, and we haven’t done the work of putting that kind of industry protection in place, that attacker can not only use the friendly name that you see when you look at mail on your phone — Kevin O’Brien — he can also use my domain name, at GreatHorn dot com. And that’s a problem, because now there is no way that my end user can differentiate between the real Kevin and the impostor. But if you do implement these standards — and they’re not technologies, they’re not things [36:00] you buy; they’re standards — an attacker can’t become someone inside that company in literally the same way as the real sender, and so you end up with it being someone’s fake name, or someone’s Gmail address. What’s nice about taking that approach is that it opens the door to then move back down the stack from a spectrum perspective, and think about how you can protect yourself using more than just one layer of defense. Tim, over to you.
Tim Draegen: I can’t get enough of this picture. We’ve gone over a lot in the webinar so far. When GreatHorn and dmarcian sat down and started collaborating, we generated a lot of material; I believe this is going to be part one of three. This picture, though, summarizes it. There are three high-level categories to pay attention to: industry, organization, and the individual. [37:00] Traditionally, these are viewed as silos, but in our experience, a more effective response to phishing requires looking at the space between these areas: the relationships between individuals and their organization, and between organizations and their industries. The relationship part really matters. From an industry perspective, there are standards-based ways to interoperate on the internet so that your organization can be trusted, and nobody has to guess at whether something is real or not. From an info sec perspective, an organization should do this first and foremost: adopt the technologies to interoperate correctly with the internet and make itself trustworthy. From the individual perspective, GreatHorn has been pioneering in looking at the relationships between individuals, and between individuals and their organization. Looking at the entire spectrum of the problem space that phishing represents, [38:00] this is the most concise diagram we’ve been able to create so far.
Kevin O’Brien: What I think is important to hear is that there’s overlap between these, and part of the reason Tim and I are mutually excited by the prospect of a partnership between our organizations is that we solve different sides of this equation, but they have a section in the middle where they come together. Putting industry protection in place, and then putting context in front of a user, requires organizational commitment, and it’s often commitment from the same group of people. If you have responsibility for email, if you’re thinking about information security, if you’re protecting your business from the advanced cyber threats that will be deployed against your executives based on where you are, the kinds of data you have, and the financial information that’s put out on the public internet, then having an industry and individual response plan at the organizational [39:00] level is the only way that you can protect yourself effectively. I think this is a really good chance for us to begin to talk about what that looks like in practice. We will have subsequent webinars going into more detail on how a plan like that comes together, but so you know where we’re coming from, Tim, do you want to say a few words about what you do? I’ll talk about GreatHorn, and then we can maybe open it up for some questions.
Tim Draegen: Oh, yeah, just real quick. dmarcian itself was created immediately after DMARC was made public, and its sole mission is to fold DMARC into the internet. DMARC itself is just a technology. It’s a building block, or rather a foundation, upon which trust can be built across the internet. It’s one of those pieces of interoperability that allows you to send email so that other people don’t have to guess whether it’s real. Our focus is squarely in that space between organizations and their industries.
Kevin O’Brien: And conversely, GreatHorn is an email [40:00] security provider, and what we do is help organizations that have adopted cloud email systems understand where the threats in their mailboxes are, and then, through automated as well as incident-response-driven policies, take action on them. That means we can provide an end user with context about whether a given message is fraudulent, and we can help an information security team respond to those threats in seconds rather than hours or days.
Lorita Ba: Great. Thanks to both of you for your time and the content that you’ve provided so far. I want to remind everybody that the questions tab on your GoToWebinar control panel is available for questions. We do have a number that have come in already, and I’m going to get to them in just a second. But just as a reminder, we will be making the recording and the slides available after the webinar via email, [41:00] and as both CEOs have mentioned, we will be doing multiple webinars. The next two are listed here. We’re still finalizing the dates, but we’ll certainly let you know. The intention is that the second webinar will be a tactical follow-up to some of the philosophical discussions here, and the third will be an in-depth look at what the two companies do.
With regard to the questions, the first one here is, “How do you guys feel about running phishing campaigns on your own employees to raise awareness?”
Kevin O’Brien: Sure, I’ll take a first crack at that, and then Tim, if you want to, add some color to it. There are reasons why you would do something like that. The first is that a phishing campaign run against your own employees can give you a benchmark of where you are. Benchmarking is a good practice, and knowing what you’re trying to change, and what the metrics you’re trying to move look like, gives you [42:00] at least a baseline for knowing where you’re at. What doesn’t make sense is to think that there is a correlation between phishing your own employees and solving the fact that they are going to fall for it to a degree that will concern you when you see the results. Unfortunately, budgets are limited, time is finite, and we see information security teams sometimes think, “We can just keep phishing our employees, do it often enough, push content to them, make them watch a video every time they click on a link, and now (inaudible) hands off, we’re done, we’ve solved the problem.” It doesn’t work. Stats show that, within six months of your most recent phishing simulation, your numbers go back to baseline. You have employee drift, where people change roles. And worse, you end up with a security team that believes it has taken effective action against the phishing problem, [43:00] but in reality has either driven it underground or created business-side resentment of the security team for making people look foolish. So it’s a good technique for understanding where you start, but it’s a starting point, not a stopping point. Anything you’d add to that, Tim?
Tim Draegen: Yeah. In my personal opinion, I think it’s a great way to raise awareness across an organization that there’s a problem. A lot of folks, especially if they’re not in the tech industry, might not know that their email clients aren’t doing anything to protect them from fraud, or about the shocking number of very dangerous, sharp things that end up in people’s inboxes. Most people I speak with who aren’t in info sec or email just assume that the mailbox provider, or the IT staff, or whomever, is doing the job of making things safe. So if they go into their inbox and start clicking links, the assumption is: we’ve launched a car into space, we’ve landed people on the moon, we’ve got satellites orbiting the earth; how can there be something I can do with my personal computer [44:00] that can actually harm someone, right? You’re just pushing buttons. So a lot of people just have no awareness that, in fact, the email space is quite dangerous for end users. As a way to raise awareness, it’s definitely useful. It doesn’t exactly solve the problem, but it might raise awareness and justify taking future action. It’s kind of an info sec business case builder, if you will.
Lorita Ba: Great, thank you. The next question, “One of our customers recently got phished by someone pretending to be us. How would we prevent that?”
Kevin O’Brien: Tim, why don’t you take the first pass at that?
Tim Draegen: Let me make sure I understood the question correctly: someone sent an email to a customer pretending to be you. What you should do is make it very easy for your email to be identified. There are technologies like DMARC that you can use; DMARC was invented to make email easy to identify. That would probably be the first step that I would recommend. [45:00] However, there are more robust things you can do so that you don’t have to rely on the person sending you email to do the right thing, and I think, Kevin, you can probably speak to that.
Kevin O’Brien: When this kind of attack happens, we typically see that an organization either hasn’t done something like implement DMARC and everything that goes into email authentication writ large, or has only done a partial job of it. Take a Fortune 500 company we work with: they have nearly a thousand domains affiliated with their brand, and developers on their engineering teams are creating new services and new machine instances that can send mail as them. They’ve done a great job shoring up their fundamental email infrastructure, but monitoring that in an ongoing way, and thinking about where new opportunities for threat might emerge given the normal operational cadence of the business, is a daunting task if you’re trying to do it manually. Having an approach that lets you [46:00] pull all of the mail that’s going through your system, and think about protecting against a potential security vulnerability being created, is the second half of the equation there. You want to do both of those things together — which legacy approaches don’t do, by the way. If you’re just taking a “quarantine the bad stuff” model, thinking about bad things coming to your users, you don’t have visibility into what Tim’s talking about: what do we look like to the outside world? You can measure that. There are technologies and processes that you can put in place that will help you get your hands around that problem. If you do, you’re far less likely to be a company that is impersonated to your customers, meaning you don’t have those emails going out to the market.
Lorita Ba: Thank you. The next question, I think, is principally for Kevin. There’s a scenario: “Full reject with DMARC in place already. G Suite is the email backend, but most of the users prefer Outlook for [47:00] viewing and using email. They don’t use the Gmail web client. So does the approach of supplying additional context to users for suspicious emails work in this hybrid situation?”
Kevin O’Brien: The answer is that you can make it work, but you can’t do it with just what the platform provider — in this case, Google — gives you. There’s a huge gap there, and it’s not obvious unless you’ve encountered the situation the question-asker brought up. Google, whether you use it for your personal email or at work, has a pretty robust set of anti-spam filters, and it has started to extend some alerting based on basic context analytics to say, “Hey, this message talks about financial information. It might be fraudulent. Make sure you know the sender.” That’s great if you’re in Chrome and logged into Google. If you’ve got a user base, like many organizations, that’s on Outlook, suddenly you’re not getting that level of protection. It is possible — and this is something specific to [48:00] GreatHorn, but there are a variety of ways that you can do this — to drive that context into the message itself, so it’s not something you have to rely on the platform provider to do for you. If you undertake a process like that, then in the same way that you’ve already gotten yourself to a full reject policy on DMARC and taken preventative steps against your brand being impersonated, you can also take preventative steps against somebody targeting your users by getting around the protections that, in theory, are coming from your provider. And you can do that agnostic of the end-user experience: an iPhone, an iPad, Outlook, or a browser looking at webmail.
Lorita Ba: Thanks, Kevin. This next question is for both of you. I’m going to start with Tim, and then, Kevin, you can follow up with any additional thoughts. “If an organization does not have a current user training program or technology to detect phishing attacks, how would you begin? What would be the first priority?”
Tim Draegen: [49:00] Oh, that’s a good question. Let me rephrase it, and Lorita, maybe help me make sure I’ve got it right: there’s no training program in place yet, so how do you get to the first part, raising awareness? To color in the background a little: you’re an info sec professional at a company, you have no budget, you’re beating the drum, but no one’s listening. Lorita, is that a fair take?
Lorita Ba: Yeah. I think what it sounds like is, if you’re starting from ground zero and you don’t have either training or technology in place, so you want to tackle the phishing threat, what’s the first thing that you start with? How do you start the process?
Tim Draegen: Oh, right, OK. I think visibility is the first thing. In order to raise awareness, you have to have something to look at and report on. You can use DMARC to start collecting data for free; it will tell you how people are using your domains across the open internet. Oftentimes, info sec professionals can use that [50:00] as a starting point. You can collect the data, process it, and present it to other people to show them that there is an issue: there are people out on the open internet pretending to be your own organization. Normally that’s enough to get the first conversation kicked off, but there are other techniques for collecting such information. You can also look at your own email servers and try to figure out how much spam is being blocked; that’s an easy metric to show managers and colleagues that, hey, there is a real issue out there on the open internet. Those are some very basic things you can do with very little help from anyone else, and a way for info sec professionals to get started building visibility.
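The free data collection Tim mentions comes from publishing a monitoring-only DMARC record in DNS. As a minimal sketch (the domain and report address below are hypothetical), it looks like this:

```dns
; Monitoring-only DMARC record, published as a TXT record at _dmarc.<your-domain>.
; p=none asks receivers to take no enforcement action yet; rua tells them where
; to send aggregate XML reports about mail that claims to come from your domain.
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Receivers that honor DMARC will begin emailing aggregate reports to that address, providing the visibility Tim describes before any enforcement is turned on.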
Kevin O’Brien: I think the other thing you can start to look at there is, when you have a user base and you’re receiving messages that are dangerous or potential phish, and you don’t have a program in place around training, how do you build awareness? Well, [51:00] we’ve talked a lot about context, right? An info sec team doesn’t need to get budget to write good policy, and to say that we’re going to have a set of processes around how we handle wire transfers, how we’re going to handle requests for sensitive information, and how we want users to respond to potentially dangerous senders, especially those who might be impersonating either core executives in the business, as is often the case with a spear phishing attack, or business services that the company uses. You can put that kind of policy in place, and then put context in front of the user.
This is very different from a traditional email security approach, where you have to go buy a technology, change your mail routing to push all your mail through it, and start teaching users how to interact with a quarantine. That’s a heavy lift. Telling them, “Hey, you just got a message from someone you’ve never received mail from before. It’s asking about a wire transfer, it’s using the display name of somebody inside the business, and it might be a threat, because it could be an impersonation attack. [52:00] And by the way, here’s the policy that we have in place when that happens.” Those kinds of things are great starting points, and in some ways more effective than trying to start with a general conversation about phishing with a user base that may not yet understand what that means, because they haven’t been told, “That email you just got, that’s what we’re talking about when we talk about being careful around email security.”
Lorita Ba: Great, thank you. We’re coming up at the top of the hour, so I’m just going to bring in one other question. For anybody else that has questions too, please feel free to go ahead and continue to submit them. We’ll answer any questions that we didn’t have time to answer personally via email. This one is really, I think, more for you, Tim. It’s, “How is DMARC different than something like DKIM?”
Tim Draegen: DMARC itself is an overlay on top of DKIM. DKIM is a technology that allows someone to essentially attach a domain to a piece of [53:00] email, and that attachment travels with the message in the form of a DKIM signature. DMARC is very different. It’s more like a framework that DKIM can plug into so that DKIM itself becomes more useful. DMARC is an overlay. It gives you feedback if you’re the domain owner, so you can actually see how your domain is being used across the internet. When you go through the process of telling the world that you’re doing the work of making your email easy to identify, you use DKIM, and you use another technology called SPF, to establish that actual link between an email and a domain. DMARC itself, again, is an overlay that gives you feedback and also provides the policy mechanism. The policy mechanism part of DMARC allows you to tell the world, “Hey, we’ve gone through all the work. All of our legitimate email can be identified using either SPF or DKIM.” Once you do that, you can then throw a switch to tell the world, “Hey, we’ve done all the work. If you get something that purports to [54:00] be from us but it’s not compliant with DMARC, feel free to drop it outright.” And so they’re different technologies, but they build upon each other.
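To make the relationship Tim describes concrete, here is a rough sketch of how the three mechanisms show up as DNS TXT records for a hypothetical domain (the names, selector, and truncated key are illustrative only, not a definitive deployment):

```dns
; SPF: lists which servers are allowed to send mail for the domain.
example.com.  IN  TXT  "v=spf1 mx include:_spf.example.com -all"

; DKIM: publishes the public key that receivers use to verify the DKIM
; signature attached to each message (key shortened here for readability).
selector1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

; DMARC: the overlay. It builds on SPF and DKIM, requests feedback (rua),
; and carries the policy switch: p=reject tells the world to drop
; non-compliant mail outright, as Tim describes.
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Moving from p=none to p=reject is the “throw a switch” step: it only makes sense once all legitimate mail sources pass SPF or DKIM with proper alignment.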
Lorita Ba: Great, thank you, Tim. As I mentioned, we’re a little bit over the 45 minutes that I promised at the top of the hour, but hopefully the additional questions we were able to answer were worth everybody’s time. I want to thank you all again for taking the time to listen to Tim and Kevin as they walked you through the evolution of phishing, and really encourage you to keep an eye out for the next two webinars. As I said, the next one, “A How-To Guide for Modernizing Your Phishing Defenses,” will be more tactical and a little more prescriptive, and in the third we’ll talk about both companies. In the meantime, keep an eye out for our follow-up email with a link to the slides and the webinar recording, and feel free to reach out to either company if [55:00] you have any additional questions. We’ll be happy to answer them. Thanks again for your time, and to Tim and Kevin for your time and content as well, and I hope that everybody has a great day. Thanks so much. Bye-bye.
END OF VIDEO FILE
Request a Demo
Like what you hear? Contact us to learn more about GreatHorn’s sophisticated email security platform and how easy it is to set up for your Office 365 or G Suite environment.