Unleash the Full Potential of Network Detection and Response
Consider the anatomy of a cyberattack. Look at the gaps in security tools and the cost of a breach. Current challenges, gaps, and different approaches to NDR. Scalable deep packet inspection requirements and considerations. Application of machine learning and encryption for the next generation of Network Detection and Response.
Paul Barrett is Chief Technology Officer for NETSCOUT’s Enterprise and Federal Businesses. Paul’s role encompasses NETSCOUT’s enterprise service assurance, cybersecurity and DDoS products.
Paul has over two decades of expertise in the analysis of network communications – areas of focus have included the human perception of voice and video transmission quality, the health of applications and services delivered over IP networks, and most recently cybersecurity.
Paul has more than 25 years of experience in the IT industry and has been actively involved in international standardization for much of that time. Between 2005 and 2016, Paul was appointed to a number of ITU-T study group vice-chair and working party chair positions; he was also the United Kingdom’s head of delegation for these study groups for much of this period. The ITU is the United Nations specialized agency for information and communication technologies. Paul is a named inventor on 20 patent applications.
Full Session Transcript
I’m Paul Barrett, CTO for NETSCOUT’s enterprise and federal businesses. If you’re not familiar with NETSCOUT, we have been in business for about 30 years, and pretty much everything we do relates to analyzing traffic flows on networks.
Traditionally we were doing that for large enterprises, government agencies, and communication service providers, helping them use deep packet inspection to understand the health of the network, but also the health of the applications and services running on those networks. Then in 2015 Arbor Networks joined NETSCOUT. They are one of the market leaders in the area of DDoS detection and mitigation, and that really got us thinking. We said, isn’t it about time we took deep packet inspection technology and started to think about how we can apply it at scale to solve some of the hardest problems in cybersecurity, particularly in the area of network detection and response? So what I’m going to do, in probably about 30 minutes to give time for some questions at the end, is take a look at some of the big challenges in cybersecurity.
I want to focus on some specific gaps, and then we can talk about different approaches. Then we can bring in scalable deep packet inspection, and I actually want to go through some of the requirements and considerations that you have to think about if you’re really going to leverage the true benefit of this technology.
I also want to touch on machine learning and the relationship between deep packet inspection and successful machine learning. And I want to touch on encryption, because people often say, hey, deep packet inspection, what about encryption? So I think it’s worthwhile addressing that.
And then we’ll just finish off by talking a bit about how all of this can come together to deliver advanced network detection and response. So let’s start with that big picture. Debbie touched on this topic in the previous session: none of us have enough security staff, and it’s worse than that, because we’re overwhelming them with far too many events and possible incidents.
Then we’ve got to think about the data breaches. If anyone thinks they can eliminate data breaches, they are frankly kidding themselves. That’s just not going to happen. But we can either reduce the number of data breaches or, if we accept that some are going to occur, stop them from progressing too far and so control the cost of a data breach.
That is a very valuable thing to be able to do, but the area I really want to focus on is the existing security tools and the gaps. A typical large enterprise or government agency has something like 70 to 80 cybersecurity tools deployed. Now, the problem arises if you keep investing but you keep buying tools that basically do the same thing.
You’re not really moving the needle. So the question is, are there gaps in our existing toolset that need to be filled? Let’s just start off with the anatomy of a cyberattack. I’m not going to focus on cyber warfare or the actions of nation-state actors. I want to focus on what is, for most of us, our biggest problem, which is basically cybercriminals: people out there to make money.
Now, the way I like to think about this is that for any crime, we always talk about there being three things: a motive, an opportunity, and a means. In the case of a cyberattack, the motive is access to sensitive information. It could be things like credit card records; it could be healthcare records.
And by the way, a healthcare record is far more valuable on the dark web than a list of credit card numbers. It could be intellectual property. It could be my future product plans, or quite honestly, it could be any kind of confidential information, such as sensitive financial information. Therein lies the motive for the attacker.
Then we need to think about the opportunity, which we normally think of as vulnerabilities. Now, I’m going to focus on the attack surface, and the fact of the matter is, if we’re going to deliver applications and services to our employees and customers, we’re going to have to expose those applications and services to the internet. One of the challenges we have today is that we’re all driving forward with our digital transformation initiatives. And that’s great, because the whole point of digital transformation is business or mission agility: the ability to quickly stand up an application to address a new opportunity or respond to a threat or disruption.
But that has a consequence. It means that my attack surface is also changing very fast. And of course, perhaps the biggest vulnerability we have is that ultimately we have humans involved in all of this. Then we get to the means, or in this case, what we tend to talk about as threats. If an attacker has successfully infiltrated my network, they will have installed some form of malware inside my enterprise network.
And then of course I’ve got attacks coming from the outside in, I’ve got people launching DDoS attacks, and once again I’ve got those pesky humans: I’ve got insider threats. Now, there’s a fairly well-established concept that when you have the combination of threats and vulnerabilities, that’s when risk arises. Or to use my other language, when I have means that can be applied to an opportunity, I have risk. And of course that can be the risk of theft of information, or it can be denial of access in the form of a DDoS attack or encryption of data. Ultimately, within the scope we’re talking about here, that’s going to lead to some kind of extortion demand.
Now, as we said, you’re never going to completely eliminate breaches. That’s not a realistic goal. We just need to contain them as best we can. If I have small, occasional, well-controlled breaches, I can probably survive that. But if I have many breaches, or I have a small number of very large breaches, that can start to damage the reputation of my business or my agency.
And sadly, we’ve all seen examples where that kind of reputational damage has actually taken businesses completely out. Before I move on, I just want to mention the way that ransomware attacks have evolved, and certainly how we’ve observed them evolve in the last, say, 18 months to two years. The original form of ransomware attack was where somebody gets into your network, they find your data, they encrypt it, and then they send you an extortion demand saying, we’ll only give you the keys if you pay us this much Bitcoin. But we’ve all got better at backing up our data. So what a modern ransomware attack looks like is that the attacker will first of all infiltrate your network, find that sensitive information, and exfiltrate it. Only then, when they have your information, will they actually contact you and say, hey, I have your sensitive information.
Here’s proof that I have it, and if you don’t pay me what I want, I’m going to put that information on the internet, for example. They will also follow up with the threat of encryption, or even actually perform the encryption, because even though you may have a really good backup, you’re still going to experience a major outage
if you’ve got to recover all of your systems from those backups. And the interesting third leg to this stool that we’ve seen added to these kinds of attacks is launching a DDoS attack. Unfortunately, if you want to launch a DDoS attack today, you basically go on the dark web, you find a provider, and you enter the details of the victim.
You basically choose how large you want the attack to be, which relates to how much you’re willing to pay, and then you can launch a DDoS attack. We call this triple extortion, and it’s all about putting as much pressure on the victim as possible, turning the screw so that in the end they just throw their hands up and say, okay, how much do you need me to pay you? So let’s start to think about what the gaps in our existing toolsets are. I’m going to focus on these four; I think they’re really important. The first is advanced early warning. I need to be able to detect those intrusions as they happen, in real time. That’s really important if I’m going to get ahead of the data breach and stop it as soon as possible, but I also want to be able to triage.
I want to understand who is affected and how badly. Then we get to that attack surface. I want to understand what my attack surface really looks like. And this is interesting, because in my experience most enterprises don’t really have accurate information about what is truly exposed to the internet.
We often find that there are applications and services that people didn’t know were actually active and being actively connected to. And then we’re back to this point about digital transformation: how is that attack surface changing, how is it evolving, and how do I stay on top of that? Now, so far I’ve been giving examples around exposing applications and services to the internet.
But if I’m thinking about deploying zero trust policies, for example in the form of network segmentation, I can apply the same concept to a network segment and think about what the attack surface of that network segment is. Now, these first two really fall under the broad category of vulnerabilities. The third is contact tracing.
Clearly it’s a concept that has very much come to the fore during the pandemic, but as a notion it actually goes back to the beginning of the last century, when people realized that when there was an outbreak or an epidemic, if they could identify patient zero for a particular outbreak and then understand who they interacted with, and who those people interacted with,
they could focus their energies. They could basically almost ring-fence the incident and, as I say, focus all of the energy that they needed to control it. And really, the same applies in cybersecurity when I’m thinking about lateral movement: how an attacker comes into the network and then starts to seek elevated privileges and work their way towards that sensitive information.
If I can identify the entry point and then understand the lateral movement, once again I can focus my energy and my resources. And lastly, I want to have the ability to go back in time. Oftentimes, as we all know, breaches aren’t discovered until some time after they actually occurred. So I have to be able to go back and say, okay, how did the attacker get in?
What tactics and techniques did they use? Did they actually successfully take anything? Did they perform data exfiltration? And, I think really importantly, how can I stop that particular attack vector or those particular techniques from being used again in the future? You can think of these two on the right as being more related to threat detection.
As I said, a typical large enterprise has perhaps 70 cybersecurity tools, but I just want to focus on these four classes of tool for a moment. The first is your security information and event management, your SIEM. Pretty much any organization has a SIEM, and most organizations, certainly large organizations, are deploying a security orchestration, automation and response (SOAR) system.
This is where your playbooks live. So for a particular type of incident, I’ve already thought about how I want to address that incident and how I want to respond, and hopefully I can do that in an automated way. Then I’ve got my endpoint detection and response (EDR). This is where I’m installing agents on computing hosts.
Now, I would say that all three of these are absolutely necessary as part of a complete cybersecurity portfolio, but they’re necessary and not necessarily sufficient, because it’s really important that we understand and have visibility of what’s happening on the network. An attacker almost never enters the network at the location where they ultimately want to be, because they’re either coming in through a vulnerability they found via the internet or, possibly more likely, they’ve got a foothold through something like a phishing or spear-phishing attack.
The point is, they don’t instantly land on the database with all the goodies. They’ve got to perform lateral movement, elevate those privileges, and move from host to host until they actually get to where the data is. So I believe it’s really important to have powerful network detection and response, and to really reap the full benefit of this technology,
if we’re going to complete the picture and have a comprehensive portfolio. Now, it may come as no surprise to you that I’m going to propose that deep packet inspection, and in particular highly scalable deep packet inspection, has a very important role to play. So I thought I would take a bit of a step back here and pause and ask: if I truly want to deliver DPI at scale, what are the considerations?
What are the requirements I have to address? The first one is that you need to work with hundreds of protocols. If you sit down and look at everything going on in your network, chances are 80% of the traffic belongs to one of, let’s say, 10 common protocols. But if you’re serious about protecting all of your mission-critical applications and services, they’re going to be using lots of other protocols.
So you’ve got to have packet parsers that work with all of these protocols. You need to generate compact but highly actionable metadata. I know it’s a bit of a mouthful, but I’m going to expand on that in a moment. Really what we’re saying here is that I can’t store packets forever, so I’ve got to decide which pieces of information I’m going to use to generate metrics and which pieces of information I’m actually going to keep in their original form.
And we can talk more about that in a moment. You potentially need to scale to hundreds or thousands of monitoring locations, and aligned with that is the requirement to operate across all multi-cloud locations. All large enterprises are dealing with this now; I’m not sure the idea that everything’s going to move to public cloud is still totally valid, in my experience. Most organizations are trying to decide, for each application and service, what the best location is: public cloud, private cloud, or a colocation facility. We will see in a moment that we end up with these very complex environments, and of course you need to work at whatever the latest networking speed is.
So let’s start with that highly actionable metadata. What I’ve drawn here is a representation of the different types of information I can pull from the network, and the width of each of these segments represents the quantity of information that we need to store. Starting at the bottom are the packets themselves.
And there are times when information in packets is just really useful. I can generate all of the metadata I want, but sometimes being able to drill down and look at the original content of a packet gives me the answer that I need. The point is, in a typical system I can probably keep packets for a few days. Realistically, there are techniques I can use: maybe I only keep the important parts of the packets and discard some of the payloads, which might help me increase the retention time of my packet store. I can even get sophisticated and say, if I see a suspicious event, maybe I put those packets into a separate store. But ultimately we need to be looking at generating metadata, as in the sketch below.
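To make the tiered-retention idea concrete, here is a minimal, illustrative sketch. The snap length, buffer sizes, and storage choices are all assumptions for the example, not a description of any particular product.

```python
from collections import deque

SNAPLEN = 128            # bytes of each packet kept long term (headers only); illustrative
SHORT_TERM_MAX = 10_000  # full packets held in the short-term ring buffer; illustrative

short_term = deque(maxlen=SHORT_TERM_MAX)  # full fidelity, oldest overwritten first
long_term = []                             # truncated copies, far cheaper to retain
suspicious = []                            # full packets tied to flagged events

def retain(ts: float, packet: bytes, flagged: bool = False) -> None:
    """Apply tiered retention to a single captured packet."""
    short_term.append((ts, packet))           # keep everything, but only briefly
    long_term.append((ts, packet[:SNAPLEN]))  # keep headers for much longer
    if flagged:
        suspicious.append((ts, packet))       # divert incident packets to their own store
```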
Now, the next level up. For me, this is the most precious part of deep packet inspection. This is where we’re basically using protocol-specific parsing of packets. So if it’s a decrypted HTTPS packet, I can pull out the URL and the user agent. If it’s DNS, I can see what the query type was, what the domain was, and what IP addresses came back. If it’s SMTP, for example, I can see the name of an email attachment. I can look at the certificate in a TLS handshake. So this is very rich and very actionable. Above that, it’s also often useful to just be able to know who was talking to whom, on what protocol, at what time, and where. For understanding things like lateral movement, that’s really valuable information.
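To give a flavor of what protocol-specific parsing looks like in practice, here is a minimal sketch using the open-source scapy library to pull DNS metadata out of a capture file. The file name is a placeholder, and a production DPI engine would of course do this at line rate rather than from a pcap.

```python
# A sketch only: requires scapy (pip install scapy) and a capture file to read from.
from scapy.all import rdpcap, DNS, DNSQR, DNSRR, IP

def dns_metadata(pcap_path: str):
    """Yield a compact metadata record for each DNS response in a capture."""
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(DNS) and pkt[DNS].qr == 1 and pkt.haslayer(DNSQR):
            dns = pkt[DNS]
            answers = [dns.an[i].rdata for i in range(dns.ancount)
                       if isinstance(dns.an[i], DNSRR)]
            yield {
                "ts": float(pkt.time),
                "client": pkt[IP].dst if pkt.haslayer(IP) else None,  # responses flow back to the client
                "qname": pkt[DNSQR].qname.decode(errors="replace"),
                "qtype": pkt[DNSQR].qtype,
                "answers": answers,
            }

for record in dns_metadata("capture.pcap"):  # placeholder file name
    print(record)
```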
And then at the top we have the opportunity to generate metrics, key performance indicators if you like. Here I can summarize the behavior of each of my servers, my clients, and my applications. Sometimes when an application begins to misbehave, it could be an indicator that the application has a problem, but it’s also oftentimes an indicator that an attack is starting or an attack is in progress.
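As a toy illustration of how a KPI can double as a security signal, here is a sketch that baselines a per-server request rate and flags a sudden deviation. The window size and threshold are arbitrary assumptions for the example.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 60        # one-minute samples of history kept per server; illustrative
THRESHOLD = 4.0    # z-score beyond which we raise an alert; illustrative

history = defaultdict(lambda: deque(maxlen=WINDOW))

def check_kpi(server: str, requests_per_min: float) -> None:
    """Compare the latest per-server KPI sample against its rolling baseline."""
    window = history[server]
    if len(window) >= 10:  # wait for a minimal baseline before judging
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(requests_per_min - mu) / sigma > THRESHOLD:
            print(f"ALERT: {server} at {requests_per_min:.0f} req/min "
                  f"(baseline {mu:.0f}, stdev {sigma:.1f})")
    window.append(requests_per_min)
```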
So it’s really helpful to be able to look at performance anomalies as well. The other thing I can do with this kind of summary information is use it to get my initial indication that something anomalous is happening. I can start to triage that problem and understand the extent of it and who potentially is impacted.
Then I can start to troubleshoot. When I have the context that I need, I can start to drill down through these increasingly rich layers of information, either to the transaction information or sometimes all the way down to the packet contents themselves. I also mentioned multi-cloud. This is very stylized, but really these are all the components of a typical multi-cloud environment.
So I have my data center, my colo, my public cloud, my SaaS services, and I’ve probably got some remote sites. In terms of where to put packet visibility, typically people will seek to put a physical appliance at the edge of their data center, because that’s a really important location: I get to see what my applications and my attack surface look like to the outside world.
I might also instrument a bit deeper into the data center, perhaps in front of mission-critical applications or in front of my shared enabling services. We’re finding a lot of customers are deploying in colos now, and that’s because you’ve got these really important peering connections with your public cloud provider, but also, increasingly, with your SaaS providers.
So you’ll have direct peering connections with, say, Office 365 or Salesforce. And of course, in a large campus it makes sense to deploy an appliance. But there are locations where I can’t go and install a network tap. Public cloud is the obvious one. There are, however, techniques and methods whereby the public cloud providers will give you a copy of the packets in their networks. And if they won’t let you do that, there are other techniques, such as deploying inline and getting access to packets that way. But you’re going to need a virtual probe, because essentially your packet analysis is going to be running as a workload in that cloud. And if you take that same concept of a fully virtualized software agent, I can use it in the colo.
I can also use it in my data center to dig deeper into the application stack and start to get visibility of east-west traffic. And of course, I can also use that approach for locations where it’s just not economically viable, or it doesn’t make sense, to deploy a physical appliance, but where I probably can deploy some kind of virtualized network function.
So if we take this approach of using the appropriate form factor for our deep packet inspection engine, then we can cover all of these multi-cloud environments.
Now, I put a throwaway line in the slide with the requirements, where I talked about working at the latest networking speeds. I just want to give people an exercise, and you don’t have to do it right now. We throw around numbers like 10 gigabits per second without really thinking about it.
Just take a moment at some point to write out 10 billion: that’s a 1 with 10 zeros after it. Start to think about increasingly large examples of numbers until you get to 10 billion. That’s how many bits you need to process every second if you’re going to perform deep packet inspection on a 10 gigabit per second link.
Now, just for fun, I came up with this number. I said, let’s say you’ve got a 40 gig link, occupied on average at 10 gig in each direction. How many bits of information is that in a year? And I came up with this number. I’m not entirely sure how you say it: it’s 630 million billion, or 6.3 × 10^17.
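(For anyone who wants to check that back-of-the-envelope figure: 10 gig in each direction is 2 × 10^10 bits per second, and a year is roughly 3.15 × 10^7 seconds, so

$$
2\times10^{10}\ \tfrac{\text{bits}}{\text{s}} \;\times\; 3.15\times10^{7}\ \tfrac{\text{s}}{\text{yr}} \;\approx\; 6.3\times10^{17}\ \text{bits per year.}
$$

)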
It’s a big number, and I backed into it because it also happens to be approximately the estimated width of our galaxy, the Milky Way, in miles. It’s an enormous number, and yet that represents the number of bits passing a single network monitoring location. Yes, that might put some people off, and they’ll say, oh, how on earth
am I ever going to process that much information? But think about it for a moment. Our core networking and switching infrastructure is now operating at a hundred gig and even 400 gig. We already have network equipment that operates at that speed. We have firewalls, web application firewalls, and other security tools.
So actually delivering deep packet inspection for the purposes of cybersecurity is entirely possible, and there are a number of companies who have invested in this technology over the years, have basically solved the problems in this area, and have been able to keep up with ever-increasing network speeds.
Putting all of this together, we’re starting to think about how to apply deep packet inspection to network detection and response. So maybe we can start top left. I need that affordable visibility everywhere. Think back to the multi-cloud diagram: I’ve got to make sure I’ve got different probe form factors so that I can put a probe everywhere I need one.
Now, I want early detection, and if I’m processing every packet in real time, then I’m in a perfect position to spot anomalies occurring on the network and to alert somebody as soon as they happen. And I get attack surface visibility from that as well. This is really important: I get insight into how the attack surface is actually being used.
What connections are being made right now, rather than relying on somebody to run a scan every week, when quite frankly anything can happen between one scan and another. If somebody sets up a rogue service or server that is being connected to, then as soon as I see a connection to it, I can detect that and I can alert, as in the sketch below.
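Here is a deliberately simplified sketch of that idea: every observed inbound connection is checked against an inventory of sanctioned services, so a rogue server is flagged on its first connection rather than at the next scheduled scan. The inventory and all addresses are placeholders.

```python
# Toy attack-surface monitor driven by observed connections, not periodic scans.
known_services = {("10.0.1.5", 443), ("10.0.1.9", 22)}  # hypothetical sanctioned inventory

def on_connection(dst_ip: str, dst_port: int, src_ip: str) -> None:
    """Called for every inbound connection the packet feed observes."""
    if (dst_ip, dst_port) not in known_services:
        print(f"ALERT: unsanctioned service {dst_ip}:{dst_port} "
              f"receiving traffic from {src_ip}")
        known_services.add((dst_ip, dst_port))  # alert once, then keep tracking it

on_connection("10.0.3.77", 8080, "203.0.113.14")  # a rogue server is flagged immediately
```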
Then we get to what I’ve written as the separation of the signal from the noise. This is really about that generation of metadata: deciding which parts of the packets I’m going to convert into metrics and which parts I’m going to keep. Maybe it’s something like a URL or the results of a DNS query, which I can choose to save, because if I’m going to do retrospective analysis, I have to be able to keep data for long periods.
And that means data reduction. So there’s a balance here: I’ve got to decide which information I’m going to keep and which I can actually safely get rid of. Now, the way I like to think of generating this kind of metadata is that you’re building a record, a history of everything that happened on the network, and the great thing is you’re adding to it in real time. You’re constantly adding, hey, what’s the latest thing that’s happening on my network? But at the same time, because I have a historical record, I can go back and investigate incidents in the past. And the last point, about consistent data, is really just saying that I want to be receiving the same metadata regardless of whether it comes from a large appliance, a small appliance, or a virtual appliance; that affordable visibility everywhere needs to be providing me with consistent information.
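As a minimal sketch of that always-appending historical record, here is the idea reduced to a JSON-lines log you add to in real time and query back over a time window. The file name is a placeholder, and a real system would use a purpose-built time-series store with retention policies.

```python
import json
import time

METADATA_LOG = "network_history.jsonl"  # placeholder path; one JSON record per line

def record(event: dict) -> None:
    """Append one metadata record to the history in real time."""
    event["ts"] = time.time()
    with open(METADATA_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def investigate(start_ts: float, end_ts: float):
    """Go back in time: yield every record in the window of interest."""
    with open(METADATA_LOG) as f:
        for line in f:
            rec = json.loads(line)
            if start_ts <= rec["ts"] <= end_ts:
                yield rec

record({"src": "10.0.2.14", "dst": "10.0.5.8", "proto": "SMB"})  # hypothetical event
```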
People often ask about machine learning, and I want to make a statement: in my mind, network detection and response is not synonymous with machine learning or artificial intelligence. Network detection and response, and I always like to spell it out, is a hugely powerful technology, and when you drive it with deep packet inspection there’s a whole host of analytics you can do that do not require AI or ML. That said, it’s not a technology we should dismiss, because there are some very powerful use cases for machine learning. So I’m just going to work through this, and we can think about how deep packet inspection relates to machine learning. Now, I’m going to assume that we have some operational challenge, whether it’s a network operations challenge or a security operations challenge.
And I want to operate at massive scale. I want to have digital transformation. I want to be agile. The only way I can achieve that is through automation. I think we can all agree on that. And this is the point at which I say, maybe I can introduce a bit of machine learning to apply an extra level of automation, an extra level of detection, of classification, of prediction.
Now, if you’ve ever spent time working with machine learning, you know you really have to use high-quality data. If it’s a supervised algorithm and you’ve got a training set, it’s really important that you have a high-quality training set that covers all of your areas of anticipated operation. But even for an unsupervised approach, you’ve got to have high-quality data.
It’s like the old phrase from the early days of computing, I think from the 1970s or eighties: garbage in, garbage out. It really is as simple as that. I can have the best analytics in the world, but if I’m not feeding those analytics with high-quality data, I’m probably going to be in trouble.
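To give a flavor of the unsupervised case, here is a toy sketch using scikit-learn’s IsolationForest on hypothetical per-flow features of the kind DPI metadata could supply. The feature set and the numbers are invented purely for illustration.

```python
# Requires numpy and scikit-learn. Features per flow (all hypothetical):
# [bytes_out, bytes_in, duration_s, distinct_ports_contacted]
import numpy as np
from sklearn.ensemble import IsolationForest

flows = np.array([
    [1_200,   8_400, 2.1,  1],
    [  900,   7_100, 1.8,  1],
    [1_500,   9_300, 2.4,  1],
    [450_000, 1_200, 0.3, 40],  # e.g. a scan-then-exfiltrate pattern
])

model = IsolationForest(contamination=0.25, random_state=0).fit(flows)
print(model.predict(flows))  # -1 marks the outlier flow, 1 marks the rest
```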
And of course, with deep packet inspection, and I certainly passionately believe this, having been in this area for over 25 years, the information you get from the network, from the packets, really does constitute one of the highest-quality sources of data you can have. In our industry we have this phrase, packets don’t lie, and they really don’t, because you are looking at the actual transactions, the actual interactions between your computing hosts.
Now, this is another point I always like to make with this slide. I’ve heard people say, hey, I’ve eliminated human error because I’ve automated everything. And I like to say to them, yeah, but hang on, most of that automation is probably software, isn’t it? And who wrote the software?
At some point there’s a human involved. All you’ve done with automation is provide the opportunity to take those mistakes and errors and deploy them at massive scale and massive speed. So we cannot take it for granted that our automation is doing what we think it is. And the same is true of machine learning. I’ve spent quite a lot of my career around machine learning, and I will guarantee you this: it will take you by surprise. There will always be situations where it ends up doing something you just didn’t anticipate. So there is also, for both automation and machine learning, a very important requirement to have some form of independent visibility into what’s happening
in my environment, whether it’s just the operation of my applications or, for example, making sure that my SOAR has actually done what I thought it was going to do. And again, this is a great application for deep packet inspection, because you can passively take a copy of what’s happening on the network and perform independent analytics.
Okay. Often people say, wow, deep packet inspection is great, but isn’t everything encrypted now? So I wanted to talk about that. The other development there has been in the area of encryption is the emergence of something called perfect forward secrecy (PFS), where there is no longer a kind of master private key that you can share with your decryption devices.
Now, TLS 1.3 pretty much has perfect forward secrecy on by default, and you can use it with TLS 1.2 by enabling an ephemeral Diffie-Hellman cipher suite. So what do we do in this situation? My first answer to people is: don’t panic. We’ve actually been dealing with this situation for a long time, particularly in highly regulated industries such as healthcare and financial services, and the trick is to identify those aggregation locations, because the way I’m going to gain authorized visibility into the traffic is that I’m going to take the encrypted feed,
terminate the encryption layer, perform my analysis, and then re-encrypt the traffic as it goes on towards its eventual destination. If we think about what I’ve shown here, I’ve shown a data center, a public cloud zone, and some customers and employees connecting either over an enterprise network or via the internet. So if I’m interested in connections coming from my customers or employees into the applications and services that I’m hosting, I can use what’s called a reverse proxy, and that can perform the termination and re-encryption we spoke about. And if I’m interested in connections being made from inside my own enterprise to the outside world, it could be a business-to-business application for example, I could deploy a forward proxy, which uses essentially the same approach.
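To make the terminate-inspect-re-encrypt pattern concrete, here is a heavily simplified, single-connection sketch using Python’s standard ssl module. The certificate paths and backend host are placeholders, and a real proxy would be event-driven, stream in both directions, and handle many concurrent sessions.

```python
import socket
import ssl

def handle_one_connection(listen_port: int = 8443,
                          backend: tuple = ("app.internal.example", 443)) -> None:
    # Terminate the client's TLS session using the proxy's own certificate.
    server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # placeholder files
    with socket.create_server(("0.0.0.0", listen_port)) as srv:
        raw_client, _addr = srv.accept()
        with server_ctx.wrap_socket(raw_client, server_side=True) as client:
            request = client.recv(65536)  # plaintext here: inspect or parse it
            print(f"inspected {len(request)} plaintext bytes")
            # Re-encrypt towards the real backend on a fresh TLS session.
            backend_ctx = ssl.create_default_context()
            with socket.create_connection(backend) as raw_be, \
                 backend_ctx.wrap_socket(raw_be, server_hostname=backend[0]) as be:
                be.sendall(request)
                client.sendall(be.recv(65536))  # relay the response back
```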
But there are locations inside my data center and public cloud where I can also do this. I may want to do it in my security stack, because I’m going to have a number of tools that want to be able to look at the raw packet content.
I might put it in front of my shared enabling services. Now, there are situations where I might want to actually decrypt traffic deeper into my application stack; at the moment, what we’re looking for there is a passive approach to decryption. And I’ll talk about that a little bit more on my next slide.
So I got asked this question a few months ago. Someone said, hey, Paul, what do you think enterprise encryption should look like in the near future? Now, the first answer is kind of obvious, but it still needs to be said: it has to use best-in-class encryption. It has to be secure. We need to minimize the attack surface of our environment.
Another observation is that it needs to be quantum-safe. I think quantum computing is a lot closer than many of us realize. Quantum computing is really good at solving hard mathematical problems, and a lot of encryption relies on the fact that we can’t solve those difficult mathematical problems. So we have some really smart people figuring out which parts of our encryption algorithms are safe from a quantum computing attack
and which ones are vulnerable, with the idea that the next generation of encryption will only use quantum-safe algorithms. Now, why am I mentioning this? Here’s the problem we have. We have all these super smart people worrying about quantum-safe encryption, and we have a whole bunch of super smart people worrying about privacy on the public internet, and quite rightly so. What we’re lacking is people thinking about how to design visibility into the infrastructure of my data centers and my public cloud,
so that I can have sufficient visibility of what’s happening on the network for my cybersecurity needs and for my service assurance needs. And I’ve also got to be thinking about compliance. We need to be really deliberate about this. Now, if I do it right, I can also potentially achieve that passive decryption we spoke about.
I was involved in the definition of a standard called Enterprise Transport Security, which introduces the notion of using all of the TLS 1.3 stack but with an externally managed key; we’ve called it managed Diffie-Hellman. We presented this at a number of conferences, including ones organized by NIST.
And in the next couple of months, working with our partners, we’re actually going to be demonstrating how, if you use this managed-key architecture, you can produce a very secure framework that leverages all the best parts of TLS 1.3 but retains these all-important components of visibility.
Okay, so just a couple of slides left. In my view, no cybersecurity product can operate in isolation; it’s never an island. Think back to those four quadrants I showed earlier. So I’m just going to briefly mention one of our products, Omnis Cyber Investigator. This is an advanced NDR tool that’s based on scalable DPI.
I just want to talk through two workflows. The first one is a northbound workflow: we’ve got our packets, we’re applying scalable deep packet inspection, generating our metadata, and then we can perform analysis, identify events or even incidents, and send those up to our SIEM, along the lines of the sketch below.
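As an illustration of that northbound hand-off, here is a minimal sketch that formats a detection as a CEF event and forwards it to a SIEM’s syslog listener. The host, port, vendor/product strings and field values are all placeholders; real products ship their own connectors.

```python
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("ndr")
logger.setLevel(logging.INFO)
logger.addHandler(SysLogHandler(address=("siem.example.local", 514)))  # UDP syslog

def send_to_siem(name: str, severity: int, src: str, dst: str) -> None:
    """Emit one CEF-formatted event towards the SIEM."""
    cef = (f"CEF:0|ExampleVendor|ExampleNDR|1.0|{name}|{name}|{severity}|"
           f"src={src} dst={dst}")
    logger.info(cef)

send_to_siem("lateral-movement-suspected", 8, "10.0.2.14", "10.0.5.8")
```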
The second workflow runs the other way. I need to acknowledge that I’ve got all these other sources of data in my network being fed into the SIEM, a large number of logs and other sources of machine data, and any of those can identify a potential incident. As long as I have some network context, that incident can be passed over to the NDR tool, and then we can drill down and investigate it properly. Think about that pyramid: I’ve got the starting point at the top of that pyramid, I’ve got the context. Now I can drill down into the conversational information.
I can drill down into that transactional information. And if it’s an event that occurred fairly recently, I can even go back and look at those all-important packets. So let’s just say this is our approach, and we very much see it as a component of the overall ecosystem. So just to wrap up, starting at the bottom of this: we’re doing everything from the network, because packets don’t lie.
We’re looking at applying scalable deep packet inspection, and we need to be able to do it at almost unlimited scale, across all the different parts of the multi-cloud environment. If I do that, I get my advanced early warning. I get visibility and understanding of how the attack surface is changing.
I’ve got an understanding of my vulnerabilities. Then, on the right, I’ve got my ability to understand lateral movement, with that contact tracing analogy, through my metadata record, and the ability to go back in time and look at a particular incident. That’s my understanding of my threats. And as we spoke about at the beginning, if I have threats acting on vulnerabilities, I have risk.
If I can get a good understanding of those vulnerabilities and a good understanding of those threats, I’m actually in a position to reduce my risk. So just to finish up: I started off talking about the big-picture challenges, and we spoke about some of the gaps that we certainly see in the existing toolsets.
We then took almost a retrospective look at what you actually need to consider if you want to deliver deep packet inspection at scale; we went through those requirements and considerations. We spoke a bit about machine learning: it doesn’t have to be a part of NDR, but it is nevertheless a very powerful source of analytics, and you need to drive it with the best quality data.
I spoke a bit about encryption. I regard it as an entirely navigable problem; we have lots of customers who are handling it today, and there are plenty of products out there that can help you with it. But at the same time, we do need to keep an eye on the future and think about what our next-generation architectures are going to look like.
And then we finished off by talking about how all of this can come together to deliver the benefits of advanced network detection and response. So I’m going to stop sharing and have a look at the Q&A. Here’s a question; we can maybe just take a couple of questions. How can you collect packets in the public cloud if you don’t have access to the network?
I touched on this earlier. Nearly all of the major cloud providers, AWS and Google Cloud, and I think Oracle have just announced it, actually provide a mechanism or means of receiving a copy or a mirror of the packets on their network. But even in the environments that don’t, so for example at the moment Azure doesn’t have a direct packet mirror,
I can use techniques such as deploying myself as a virtualized network function, so effectively going inline, just like a firewall or really any other network function: basically provide a single-hop router and take a copy of the packets. And in terms of resiliency, there are lots of ways of approaching that: active-standby architectures, or you can use a load balancer.
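For a concrete flavor of the packet-mirroring mechanism mentioned above, here is a hedged sketch of setting up an AWS VPC Traffic Mirroring session with boto3, assuming the mirror target and filter already exist. All the resource IDs are placeholders, and the call needs valid AWS credentials.

```python
# Requires boto3 (pip install boto3) and configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",     # the source ENI to mirror
    TrafficMirrorTargetId="tmt-0123456789abcdef0",  # where the virtual probe listens
    TrafficMirrorFilterId="tmf-0123456789abcdef0",  # which traffic to copy
    SessionNumber=1,                                # priority among sessions on the ENI
    Description="Mirror packets to the DPI probe",
)
print(resp["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```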
And I’ve got one other question here, which I thought I’d take: if deep packet inspection is so powerful, why aren’t more security products using it? That’s a great question. As you saw through this presentation, making DPI work in a practical and scalable way is a big engineering challenge.
A lot of people have given up, but as I’ve mentioned, there are a small number of vendors, such as ourselves, who’ve been doing this for a long time and have persevered. As a result, we are able to overcome these challenges, deliver the technology, and continue to keep up with ever-increasing network speeds. Those were, I think, the only questions I had; let me just check whether we had any more. It doesn’t look like we did, so in that case, thank you very much for your attention, and I’m going to close this session.