The Future of Data Loss Prevention
Nov. 29, 2022


This week, Ryan chats with Chris Denbigh-White, global director of customer success at Next DLP, about data loss prevention in the hybrid work era.


Meet Our Guest
Chris Denbigh-White is the global director of customer success for Next DLP. He has over 14 years of experience in the cyber security space including in the office of the CISO at Deutsche Bank as well as cyber intelligence for the Metropolitan Police.

Connect with Chris on LinkedIn.

Show Links
Follow us on Twitter: @thedwwpodcast 

Email us: podcast@digitalworkspace.works 

Visit us: www.digitalworkspace.works 

Subscribe to the podcast: click here
YouTube channel: click here

★ Support this podcast on Patreon ★

Transcript

Ryan Purvis  0:00  
Hello and welcome to the Digital Workspace Works podcast. I'm Ryan Purvis, your host, supported by producer Heather Bicknell. In this series, you'll hear stories and opinions from experts in the field, stories from the frontlines: the problems they face, how they solve them, and the areas they're focused on, from technology, people and processes to the approaches they took, all to help you get to grips with the digital workspace's inner workings.

Welcome, Chris, to the Digital Workspace Works podcast. Do you want to tell us a bit about yourself and who you are?

Chris Denbigh-White  0:34  
Yes, hello, my name is Chris Denbigh-White. I'm the Global Director of Customer Success for a little company called Next DLP. We're a next-generation data loss protection platform. My sojourn into the vendor space is pretty new for me, though. Prior to that, I was working at Deutsche Bank in the office of the CISO, running their access provisioning teams. And prior to that, I was working with the Metropolitan Police in cyber intelligence in their counterterrorism division. So, about 15 years all told.

Ryan Purvis  1:06  
Great. Well, thanks for sharing that. I think there are a few things we could talk about there. Maybe let's start with an easy question: what does the digital workspace mean for you?

Chris Denbigh-White  1:14  
Well, the digital workspace, in a broader sense, is people's use of IT and communications platforms to collaborate, to do business, and to understand the world around them. It's actually quite a broad-reaching definition.

Ryan Purvis  1:32  
And how have you found going from the client side to the vendor side? Has that shifted much for you?

Chris Denbigh-White  1:38  
It has, actually. You know, I've always been very interested in understanding what's out there technology-wise. So whenever I'd go to a trade show, from the client side, I'd make a point of speaking to all the booths to understand: hey, what are people doing? What are the problems that vendors are trying to solve? There's a big learning opportunity there. And in those situations, all these different companies are more than happy to talk to me, because I am, at the end of the day, a potential lead. Something I hadn't necessarily been as aware of as maybe I should have been is that when you flip over to not being on the client side, suddenly your attractiveness for conversations plummets somewhat. And you're met by a spectrum of different responses, some quite cynical and paranoid: am I trying to spy on another company? And the answer to that is generally no, absolutely not. I'm just, as I have always been, really interested in learning what's out there, what challenges people are facing and how people are combating those challenges. So that cultural difference of ducking behind the curtain of vendor land has been somewhat stark.

Ryan Purvis  2:54  
Yeah, I think I've been through that as well. You get to know certain people as the customer, and then you go back to being a vendor, and they're like, ah, now you're selling the pie instead of being paid to buy the pie. So it's an interesting dynamic. You said next-generation DLP; what makes it different to the traditional DLP tools that are out there?

Chris Denbigh-White  3:15  
Well, that's a great question. You know, I've worked on various DLP projects when I've been on the purchasing side, and they've always been quite slow: categorise your data, then have some static rules. For a long time it was a box that you put inside your organisation with a whole load of rules to stop bad things happening. And from having implemented it and having used it, to my mind it caused a lot of friction. When I was working in government, and when I was working in the bank, you'd be trying to do your everyday job and all of a sudden you'd get blocked from doing something. I'm not a malicious user, but I'm stopped from doing essentially my day job, so I'm like, okay, how do I actually do this the correct way? Now, we at Next DLP have turned this on its head. First and foremost, we're not a blinky box in a data centre using legacy technology. Because if you remember, a lot of the technologies in the data loss protection market are almost 20 to 25 years old, and they're addressing the problems of 20 to 25 years ago, when people were sitting in offices behind computers on corporate networks. So what we do is start with the notion of visibility, because, as the SANS Institute says, if you want to identify evil, well, if you want to identify malicious or potentially dangerous activity, you have to first understand what normal is. So we have a lightweight endpoint agent that sits on Windows, Linux and Mac, and it first sees all of the activity on the computer. From there you can determine what actions you want to take, or what's risky and what's not risky. Layered on top of that is that we don't just block people. We enter into a situation where we protect the users and the data from themselves, and then point them to the correct way.
For example, a lot of people use PDF conversion sites. That was always a big bugbear of mine when I was on the other side. Somebody has a very sensitive spreadsheet, and they need it as a PDF to send as a document. They usually go to Google and say, how do I convert this from Excel to PDF? And Google will tell them, you know, just upload it to Honest John's PDF conversion site, who promises that once he's done the PDF conversion, he's definitely going to destroy that data, and his servers are definitely not in the garage of his mum's house. Which we can all agree is quite risky. So rather than just block those activities at the perimeter, our platform can understand this is happening. We can block it, protect them from themselves, but also, at the same time, at the point of risk, inform the user: hey, it looks like you're trying to convert an Excel file to a PDF. Did you know that you can click File > Save As > PDF in most Microsoft applications? Or: this is the security policy that you are running the risk of breaching. So it's reducing friction by lifting people off a path of non-compliance, enabling them to do their job, and putting them on the safe path in one smooth process. And that is kind of the key difference.

Ryan Purvis  6:17  
Yeah, I think there's a level of the conscious: doing the thing knowing that you're breaching the policy and trying to get away with it. Then there's the unconscious: doing something to get your work done and not realising you're breaching policy. And there's probably a murky place in the middle between those, where you did something you knew was potentially a breach, but you weren't sure how to get approval to do it.

Chris Denbigh-White  6:42  
Absolutely. Because when people get onboarded into companies, they get front-loaded with all these information security policies. Maybe the company has really advanced cyber training, and that's great, but often it's a next, next, next, next, complete endeavour with the cyber training. When users actually need the information to make wise decisions about protecting company data is at the point of risk. So I'm not saying don't do cybersecurity training; absolutely, that's an important thing to do. But when it comes to these specific use cases of doing the wrong thing, it's not just about saying bad user, naughty user. It's about saying: look, we know you have a job to do, this is the correct way, let's facilitate a way in which you can do this safely. And I think it's that murky middle that is the greatest risk.

Ryan Purvis  7:33  
Yeah, I've had situations where we've had consultants come in, they've done some work, and then they've emailed the work home, or tried to upload it somewhere, and we've had sophisticated DLP teams that have picked it up, and it's been escalated. It was probably an avoidable thing if we'd just caught it with something simple at the beginning, saying: look, here's the warning, don't do this; if you still proceed, this will be flagged. And the guy pleaded ignorance, that he didn't know he was breaching the policy. But I think it's one of those things: when you work as a consultant, you know the content's not yours; you've generated it for a company and it belongs to the company. And then I've had the other one, where a business user has used a personal laptop for something, and they've deployed the corporate stuff on it under the BYOD policies, and then dragged files from their personal Dropbox to their corporate OneDrive, and vice versa. And they never thought for a second that something private they brought in belongs to the company now.

Chris Denbigh-White  8:34  
No, absolutely. And that's something we see a lot, and almost in reverse as well. A lot of companies I've worked with, and currently work with, are embracing this Microsoft journey or GCP journey (other cloud platforms are available, of course), and they're saying: okay, SharePoint Online and OneDrive for Business, that is the place to put your sensitive core data. Companies build their security frameworks and their security controls around that. Yet in the back of their minds, many companies are thinking: hang on, if we're protecting this OneDrive for Business, what's to stop someone signing in with a Hotmail account or a Live account and putting things, either deliberately or accidentally, into their personal account, when actually it shouldn't be there and may have a lesser level of security? That's one of the main use cases that we address at Next DLP as well. We've got this really cool thing where we can identify the difference between personal and corporate, and alert, block and guard on that basis too. So where you have laptops configured to sync to the wrong place, we can go: hey, look, it's syncing to the wrong place, let's put this data in the right place. Either way, you don't have that issue. And that's a big, big thing when we talk about these kinds of storage providers.

Ryan Purvis  9:50  
So if we look at the kind of information that is at risk for DLP, what do you currently see? What are the things a person wouldn't know about? Because you mentioned at the beginning that people get this training, and the training is always generic, even though it's not meant to be. If I think about some of the training I've been on: I worked in banks, so you'd see the banking example. But it was never stuff that I would ever do. I wouldn't be dealing with clients, or with client documentation, you know, bank statements and all that kind of stuff. I was always on the technology side. So my example should have been an architecture document, a set of IP addresses, that sort of stuff that should be protected. But the training never covers that; it always covers the banking, financial services stuff. If you want to give examples of things that people should look out for, and be aware that they're potentially bordering on DLP territory, what would you say?

Chris Denbigh-White  10:46  
That's a great point, and you're absolutely right, it does depend on the industry and the use case. But in the first part, some of the easy stuff is things that contain PII or PCI data: personally identifiable information, so names, email addresses, identifiers; and, as you say, things like credit card numbers and identity numbers. That's the easy stuff in data loss prevention. But then there are the other things around cyber hygiene and making it difficult for attackers. You've alluded to some of them yourself: things like architecture diagrams. We also do a lot of business with software companies, and in their use cases it's things like snippets and elements of their source code, ensuring that they are indeed pushed to the correct GitHub and not somebody's personal one, either on purpose or by accident, a similar use case to the Google Drive question. It's all around the idea that data should be in its proper place. In fact, when we talk about data loss, it's often more about data tracking, because you can have data loss without the data ever leaving the company walls. If you imagine the consultancy world, as you've mentioned: if you have, for example, a Chinese wall situation within a consultancy, information can't pass between different clients, or between different areas of the company. The same in banking, between investment banks and private banks and core banks. Being able to track the movement of that data, as well as the specifics of it, is really, really key.

Ryan Purvis  12:23  
Are you doing something around tagging the content?

Chris Denbigh-White  12:28  
Well, that's a really interesting thing. We don't do the discovery and the tagging of the content. We do, however, integrate if people have done tagging already, with things like AIP or MIP, and we can use those. Our technology focuses more on real-time content inspection, and for a reason: we really want to do data loss prevention in an entirely different way. Because the classic way of implementing data loss prevention, speaking from experience, I've seen this, is that the first step is a discovery exercise. You buy some software, or a big beefy server, that crawls literally all of your data, in use or not, across your whole infrastructure. And what you get back is a big report that says something along the lines of: you have 586,000 files that may contain sensitive data, and the upshot is, please go and read these 586,000 files to confirm whether or not this is true. At this point, these files haven't moved anywhere; they're sitting on a server inside a perimeter. That kind of exercise potentially may need to happen if we're talking about retention exercises and records mapping, but that isn't really about data loss or cybersecurity; it's more about the information governance piece. So what we do is implement real-time content inspection at the point of risk. You have classifiers to identify sensitive data, and it's when that data is touched, or moved, or passes through a clipboard, that we expose it, at the point of access, to basically a question: is this sensitive, or is it not? And if it is sensitive, okay, what decisions around this data do you want to make? Do you want to block the fact that this document goes to a PDF conversion site? Or is emailed to these specific people? Or is even accessed at all, or is written to USB?
So through that process we enable companies to install the software and start seeing insights and start controlling things very, very quickly, without having to go through that often multi-year process of tagging and classifying everything.
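To make the idea concrete, here is a minimal sketch of a point-of-access check of the kind described above. The patterns, names and the blocking rule are all invented for illustration; a real DLP engine uses far richer detection (validation checksums, proximity rules, machine learning) than a couple of regexes.

```python
import re

# Toy classifiers: name -> compiled regex. Purely illustrative patterns.
CLASSIFIERS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect(content: str) -> set[str]:
    """Return the set of classifier names that match the content."""
    return {name for name, rx in CLASSIFIERS.items() if rx.search(content)}

def decide(content: str, destination: str) -> str:
    """Block sensitive content headed to a risky destination; else allow."""
    risky = destination.endswith("pdf-convert.example.com")  # hypothetical site
    if inspect(content) and risky:
        return "block-and-educate"  # and show the user the safe alternative
    return "allow"
```

The point of the shape is that the decision happens only when content is actually touched or moved, not during an up-front crawl of every file in the estate.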

Ryan Purvis  14:37  
Yeah, that's a nightmare, it really is. I was thinking about something while you were explaining this, and wondering: is there a performance impact when you do that? Okay, copy and paste is probably not an issue, because you're either copying an image or copying text, and that's pretty small, usually, depending on what you're copying. But is there any lag or delay that the user would see with your technology?

Chris Denbigh-White  14:59  
No, and that's the beauty of it: not necessarily at all. Our agent is designed to be specifically lightweight. Not wanting to throw around vendor buzzwords, but just to clear a few things up: we're a kernel-level agent, which means we're not a log-forwarding technology. What we're not doing is reading a whole load of Windows, Mac or Linux log files, trying to figure out what's happening, putting that into a platform and then making decisions. We understand what's happening at the operating system level, as the operating system does it. In relation to content inspection, we don't try to reinvent the wheel either, because operating systems like Windows and Mac already do content inspection for the majority of file types, for their indexing. When you do a search on a machine, the OS knows what's inside those files already, because it's already indexed them. So we jump on the back of that. We can do full-fat content inspection as well, but more often than not our agent doesn't need to. The inspection is lightning quick, like I say, at the point of access. It's not a case of: access a file, and ten minutes later you have a decision as to whether or not you're allowed to access it. These things, within our clients, happen instantaneously, which is one of the reasons I joined the company, because it is pretty cool to see.

Ryan Purvis  16:15  
Yeah, look, my experience with some of the vendors in DLP is that it's quite the opposite, and that's why this is cool. You're obviously applying some sort of heuristics depending on the context of the content being transferred, cheating a little bit, and I don't mean that in a nasty way. You're presuming what you have to do because of the channel of communication and the content, which optimises the data review. Is that the right way to put it?

Chris Denbigh-White  16:49  
Not so much heuristics at all; it is as simple as it sounds. The real-time content inspection is based on pattern matching within the file. However, we don't necessarily have to read every line of the file, because the OS has already done that at the point of writing, so that builds in an optimisation as well. Additionally, unlike a lot of other platforms, all of this happens on the endpoint, with our agent doing it. What we don't want to do is fire a load of stuff up into an analytics cloud to determine what's inside the file, for two reasons. One, it's slow. And two, there's a certain irony to purchasing a piece of software to stop sensitive data being transferred out of your environment, when that piece of software identifies sensitive data and then transfers it out of your environment. It just doesn't make a lot of sense to us to do it that way. So things like the machine learning and the content inspection all happen on the endpoints, and it's the results of those actions, the metadata, that is transferred into our platform, where the control plane sits. As a software provider, we don't pull any customer data into any of our cloud infrastructure. Because, you know, it's the customer's data, it's not ours. Why would we want it?

Ryan Purvis  18:15  
No, that makes sense, because it's an edge computing play. You've got localised learning that's shared centrally, and it's the learnings that are shared, not the content, which makes it a very secure way of doing it, and fast locally as well. Exactly. And how often is that updated? Is it updated every day, every week, every month?

Chris Denbigh-White  18:34  
That's an interesting thing. We are a full-on DevSecOps, or whatever the latest buzzword is, company, so we are constantly innovating on our platform. Generally speaking, the cloud platform gets updated every month, sometimes sooner; as new features are developed, tested and implemented, they get put into the platform straight away. The agent gets regular updates as well, sometimes twice a month, sometimes monthly, as new features and augmentations roll out. And that happens seamlessly for the end customer: the platform updates the agents with zero downtime, which is pretty cool.

Ryan Purvis  19:13  
Cool, that's really clever. What does the end user experience? I mean, do they know you're there? Do they know that it's Next DLP, or is it branded as the customer?

Chris Denbigh-White  19:22  
They know it's Next DLP. We don't garishly splatter our logo, you know, "your data security brought to you by Next DLP", at the end user. They know we're there when they're required to know we're there. So, for example, when a user breaches a rule or represents a higher level of risk, at the point of risk, the educational pop-ups I mentioned, to put people on the right track, come up. Those are deliberately quite distinctive windows, because something we didn't want to do is go down the standard tooltip route. We found, and I've found in the past, that where you get standard Windows alert boxes popping up, users tend to just click through them, like the accept-cookies button; there's a lot of desensitisation to that. So we've deliberately made them eye-catching and customisable. Users will see we're there, but it's quite clear why we're there as well. So it's not a kind of super-creepy "what's this awful software on my endpoint?" Generally speaking, when a pop-up comes up and gives users information, it's something that's going to make their day easier rather than more difficult.

Ryan Purvis  20:32  
No, it sounds good. And I think it's such an important thing nowadays, especially if you're working from home. It's so easy to forget the housing situation when we work from home. If you're in a house share or something like that, where the screen's up and someone can see stuff, you can't obviously protect against that without some sort of physical boundary. But, you know, if you leave your laptop unattended and your housemate comes and emails themselves something, that's a risk that needs to be mitigated somehow.

Chris Denbigh-White  21:02  
Well, absolutely, and we've got an interesting answer to that. It's strange you mention it. We've got the DLP policies, your classic ones, and that's all very good, but we've also got a certain degree of machine learning on the endpoint as well. And one of the policies we've got is "unexpected user typing on a keyboard". Now, I know inside offices, and certainly inside the police, it was a common prank slash punishment, if you left your screen unlocked, that depending on how nice your coworker was, it would either be a rather rude email to the boss, or an email to the team offering to buy everyone beer or doughnuts, depending on what time of day it was. With our platform, you're able to identify and alert when, based on typing cadence and a learned pattern, someone other than the usual person types on the keyboard. Alerts and remediation actions can then take place, all the way up to locking the machine until security unlocks it, or prompts and warnings in a Security Operations Centre as well, which is quite neat.
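As a rough illustration of how an "unexpected typist" signal might work (the statistics, threshold and function names here are my own simplification, not Next DLP's actual model, which isn't public):

```python
from statistics import mean, stdev

def cadence_profile(intervals_ms: list[float]) -> tuple[float, float]:
    """Summarise a user's normal inter-keystroke gaps as (mean, stdev)."""
    return mean(intervals_ms), stdev(intervals_ms)

def is_unexpected_typist(profile: tuple[float, float],
                         observed_ms: list[float],
                         z_threshold: float = 3.0) -> bool:
    """Flag a typing burst whose average gap deviates strongly from the
    learned profile (a crude z-score test; real systems model per-key and
    digraph timings, not just averages)."""
    mu, sigma = profile
    return abs(mean(observed_ms) - mu) / sigma > z_threshold
```

A markedly slower or faster typist than the profile owner trips the threshold, which could then trigger the lock-out or SOC alert described above.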

Ryan Purvis  22:08  
That's it, yeah. I'll be honest, we were very suspicious of those sorts of techniques when I was at UBS, but that's obviously not to say it doesn't work. I think it's a great way of doing it in terms of the biometric markers, some sort of keying cadence. I'm just thinking of what we used to do when guys left their machines unlocked, like writing resignation emails for a person. Quite immature, to be honest.

Chris Denbigh-White  22:34  
And if you think about it, that's a breach of the Computer Misuse Act, but nobody necessarily thinks it constitutes that, because you're using somebody else's IT infrastructure under the wrong credentials.

Ryan Purvis  22:47  
But my favourite one, which I don't know if you've tried, and I wondered how your tool would pick it up, was to screenshot what was on the screen, then hide all the icons and replace the background image with the screenshot while they were still logged in. And then the person would reboot the machine about ten times trying to work out why they couldn't click on anything.

Chris Denbigh-White  23:08  
Yes, I've seen that happen before. That's actually a classic one; I really do like that one. It's almost an older version of the thing during lockdown where I saw people recording short videos of themselves on a loop, sitting at their desk, nodding occasionally, and then setting that as their Zoom or Teams background, so they could join a meeting, appear to be engaging in it, but actually be off somewhere, you know, eating pizza or something. Cheating, basically.

Ryan Purvis  23:39  
I never thought to do that. I'm clearly too committed. 

Chris Denbigh-White  23:44  
No, I only saw that because there was one where, you know, this guy's at the meeting, nodding and stuff, and then all of a sudden a five-year-old comes and sits in the chair, breaking the whole green-screen illusion, and is busy playing at the desk. So it becomes quite clear that the person is not actually sitting at their desk.

Ryan Purvis  24:04  
No, yeah. I'm trying to think how I'd want to try that out, just to see how you'd do it. But most people just turn the camera off and walk around with headphones on; that's how we used to do it before we had video cameras on all the time. It's more effort to do that than to actually just do the meeting and be engaged. But they were probably doing other work, and that's probably what they had to do.

Chris Denbigh-White  24:30  
No, exactly. And it's strange you mention that, about it being more effort to do the wrong thing than the right thing. Again, that's something we want to achieve in our platform: present the options, block the bad things, but make it easier for end users to do the right thing rather than the wrong one. But yeah, it applies across video conferencing and the DLP space, the wider cybersecurity space even.

Ryan Purvis  24:56  
Yeah, I was just thinking about setting that up with what you're doing. And then, wondering about this maybe from a different type of security point of view: you mentioned the biometrics with keyboard cadence. Have you ever thought of looking at the actual camera and seeing who's sitting in the seat and comparing them?

Chris Denbigh-White  25:17  
That is an interesting proposition. As far as I know, as a company, we haven't. And I think one of the perceived issues that people see with cybersecurity and user-behaviour-monitoring software is the creepiness factor of it all. On one hand, you want visibility into what's happening; on the other hand, especially in the wider European Union and certain other countries as well, there is a lot of resistance, because security is always in tension with privacy. So I think that could potentially have a lot of collateral intrusion issues in relation to capturing and analysing that kind of data, and that's something we take really, really seriously as a company. Our platform does act a lot like a real-time flight recorder for computers: everything that happens, even outside of the rules, is captured and analysed, and risk is determined. However, we are acutely aware of some of the privacy concerns that can cause. So our platform can be operated in standard mode, where you just see all of the events, but we also have a pseudo-anonymised mode, where all of the individually identifiable data is pseudo-anonymised, a lot like the way we used to do it when I worked in intelligence, substituting names for different names. So, for example, Chris Denbigh-White on the platform would appear as, say, John Martinson, and throughout the entire platform I would remain John Martinson. The upshot is that investigators and security professionals can still conduct their investigations and identify risks without actually knowing who the people under investigation are, until the point at which an ICO or someone needs to know. And then, with the right access roles and privileges, those identities can be de-anonymised for those specific people.
So it's a really neat feature. It enables you to have the visibility without encroaching too much on that creepiness factor or invading people's privacy.
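The substitution Chris describes can be sketched as a keyed, deterministic pseudonym lookup. Everything here, the HMAC construction, the key, the pseudonym pool, is an assumption for illustration; the product's internal mechanism isn't public.

```python
import hmac
import hashlib

# Hypothetical pseudonym pool; a real system would need a much larger pool,
# a collision strategy, and an access-controlled reverse lookup table.
PSEUDONYMS = ["John Martinson", "Jane Okafor", "Sam Petrov", "Alex Nguyen"]

def pseudonymize(real_name: str, key: bytes) -> str:
    """Map a real identity to the same pseudonym every time. Keying with
    HMAC means only key holders can recompute the mapping; authorised
    roles would de-anonymise via a separate lookup, not by brute force."""
    digest = hmac.new(key, real_name.encode(), hashlib.sha256).digest()
    return PSEUDONYMS[int.from_bytes(digest[:4], "big") % len(PSEUDONYMS)]
```

An investigator then always sees the same stand-in name for the same person, and a different key (say, per tenant) yields a completely different mapping.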

Ryan Purvis  27:25  
Yeah, I think it's so important, especially with, I'd say, a difference of approach around privacy in the US versus the rest of the world in some respects. And it's fractured, in that some states are doing things; California, I think, is one. I was talking to my cousin the other day, and he's doing some work on that now on the East Coast. And I was talking to someone yesterday in Cape Town, asking about getting some internet connectivity data, and with POPIA, which is what GDPR is basically based on, the rigmarole to give it to you is just too much to even start the conversation. I said to him, what if we sanitised the data, and so on? And he said it's not even worth it; the guidelines aren't out yet. It's just one of those things. So if something like that is almost pre-packaged into your activity, that's a win for everyone, because it's already happening automatically. There's no need for someone to look at it in an unsanitised state and then sanitise it, or de-anonymise an anonymised state.

Chris Denbigh-White  28:29  
Exactly. And, you know, I've seen other products in the past that have done anonymisation, and sometimes what you get is the old CIA-redacted-document effect, like you see in the movies: a sheet of A4 paper with one word at the top and one word at the bottom and the rest blacked out. Generally speaking, that's useful to no one. So we wanted to take the approach of being able to properly investigate, start to finish, whilst ensuring that the person doing the investigating doesn't know who the person being investigated is. Partly for privacy, but also partly for things like inbuilt biases, or if it's the person sitting next to them. It not only protects privacy, it also protects the integrity of an investigation.

Ryan Purvis  29:20  
Yeah, totally great. I think we've had a good conversation. Anything else you want to add?

Chris Denbigh-White  29:25  
No, not really, outside of this: if there are people listening who would like a sensible conversation about DLP that just works, one that won't result in being hounded by salespeople, because, as I mentioned before, I'm not a salesperson for our platform, I work with current customers. But I am super passionate about helping people do the best they can in cybersecurity, because if people can achieve things individually as companies, it has a knock-on effect for the rest of us: the more secure we are individually, the more secure we are as a community. So if anyone is interested, please do reach out for a no-nonsense conversation about what we do and how we might be able to help.

Ryan Purvis  30:02
And what's the best way: LinkedIn, the website, or an email address?

Chris Denbigh-White 30:19
Either on LinkedIn, please feel free to reach out and connect with me, or you can visit our website, which is www.nextdlp.com. But yeah, like I say, I really enjoy engaging with the community, so feel free to hit me up on LinkedIn, ask any questions, and start a conversation. I love that kind of thing.

Ryan Purvis 30:38
Fantastic, great stuff. Thanks for coming on and sharing your expertise.

Chris Denbigh-White 30:42
That's no problem at all. It's been a great pleasure, Ryan.

Ryan Purvis  31:21
Thank you for listening to today's episode. Heather Bicknell, our producer and editor: thank you, Heather, for your hard work on this episode. Please subscribe to the series and rate us on iTunes or the Google Play Store. Follow us on Twitter at the DWW podcast. The show notes and transcripts will be available on the website, www.digitalworkspace.works. Please also visit our website, www.digitalworkspace.works, and subscribe to our newsletter. And lastly, if you found this episode useful, please share it with your friends or colleagues.
