As demand from customers for better supply-chain security grows, so, too, does pressure on organisations to provide evidence of their security initiatives. This includes data loss prevention (DLP) programmes to protect sensitive customer and consumer information. Too often, organisations consider deploying DLP as an easy path to compliance.
Until they try traditional DLP and learn there’s a better way…
Traditional DLP often complicates securing data
It is simple to license a DLP solution but far more difficult to deploy one successfully. The problem comes from the approach taken by legacy DLP vendors. They designed their solutions for a world where all data existed within the corporate network and applications ran locally. This leads to several deployment challenges:
Legacy solutions require pre-classification of data
Before data protection can occur, all the sensitive data in an organisation must be identified and classified. This delays protection for months (or even years) while the solution scans network shares and endpoints, and the delay grows further as new types of data are identified and the classification exercise has to be repeated.
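To see where the months go, consider a minimal sketch of the kind of sweep a pre-classification scan performs. This is a simplified illustration in Python; the regex ‘detectors’ and the file handling are our own assumptions, not any vendor’s implementation:

```python
import re
from pathlib import Path

# Illustrative detectors only: real scanners use many more patterns plus
# fingerprinting, and must also parse office and binary formats.
DETECTORS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def classify_share(root: str) -> dict[str, list[str]]:
    """Walk a file share and tag every file whose contents match a detector."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable files are skipped
        labels = [name for name, rx in DETECTORS.items() if rx.search(text)]
        if labels:
            findings[str(path)] = labels
    return findings
```

Even this naive version reads every file on the share. Repeat that across terabytes of shares and every endpoint, then again each time a new data class is defined, and the months-long delay follows.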
As this process drags on, security teams may resort to running DLP in monitor mode, which lets users (and attackers) do whatever they want with data while the team hopes the security operations centre (SOC) can respond quickly when exfiltration begins.
Legacy solutions require granular rules
Once data is pre-classified, legacy DLP providers focus their efforts on rules dictating which users can take which actions with each class of data. As each new set of users or class of data is identified, rules must be added or modified. False positives are common; they cause alert fatigue in the SOC and impede legitimate workflows. Users respond by seeking alternative ways to obtain or share information, and unauthorised workarounds become the norm. Once again, the result is often a DLP solution deployed simply as a forensic tool in monitor mode.
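The combinatorics alone explain the maintenance burden. A toy illustration (the group, class and channel names are invented for the example):

```python
from itertools import product

# Every (group, data class, channel) combination needs an explicit decision.
groups = ["finance", "hr", "engineering", "sales"]
data_classes = ["pci", "pii", "source_code", "contracts"]
channels = ["email", "usb", "cloud_upload", "print"]

rules = {combo: "block" for combo in product(groups, data_classes, channels)}
print(len(rules))  # 64 rules before anyone has tuned a single threshold
```

Each new group or data class multiplies the table again, and every entry is another chance for a false positive or a blocked legitimate workflow.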
Creating a baseline of ‘normal’ behaviour takes months
Legacy DLP solutions attempt to identify singular incidents of ‘anomalous’ behaviour. This requires the system to ‘learn’ the behaviour patterns of each group of users and build a baseline of typical activity. When administrators create a new group or class of data, the learning must begin anew. In many organisations, building this baseline takes months, during which time to value is delayed and data remains at risk.
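The mechanics of a baseline can be as simple as flagging activity several standard deviations from the learned norm. The sketch below is our own illustration of the general idea, not any vendor’s algorithm, and it shows why nothing useful happens until enough history accumulates:

```python
import statistics

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it sits far outside the learned baseline."""
    if len(history) < 30:  # still "learning": no baseline yet,
        return False       # so nothing is ever flagged
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid dividing by zero
    return abs(today - mean) / stdev > z_threshold
```

Until the history window fills, the guard clause flags nothing at all; scale that warm-up across every group and data class, restarting whenever one is added, and the months of dead time follow.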
Machine learning on the endpoint provides compliance with speed to value
Data loss is an immediate problem deserving of an immediate solution, so for organisations deploying DLP, speed to value is paramount. Moving intelligence to the endpoint solves this problem: data protection without the time sink of pre-classification or the ongoing overhead of granular rule management.
Classification when users access data
Attempting to pre-classify data presents organisations with a monumental task that never ends. The solution is simple: classify data as it is used. Data is most at risk when users move it, via email, text message, upload, download, image capture and printing, or any movement to cloud storage or cloud applications. Machine learning on each endpoint allows organisations to classify data at the point of risk and eliminates the primary delay in gaining value from a DLP solution.
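In practice, this means classification runs inside the egress event itself. A minimal, self-contained sketch in which the event shape, detector and decisions are all hypothetical:

```python
import re
from dataclasses import dataclass

PCI = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # toy payment-card detector

@dataclass
class EgressEvent:
    user: str
    channel: str   # "email", "upload", "print", ...
    content: bytes

def on_egress(event: EgressEvent) -> str:
    """Hypothetical endpoint hook: classify data the moment a user moves it."""
    text = event.content.decode(errors="ignore")
    if PCI.search(text):                             # classified at the point of risk
        return f"warn:{event.user}:{event.channel}"  # or block, per policy
    return "allow"

# Data that never moves is never scanned, so there is no up-front sweep.
print(on_egress(EgressEvent("alice", "upload", b"card 4111 1111 1111 1111")))
```

The decision happens where the data is, at the moment it moves; the usage line at the end shows a toy upload being flagged as it happens.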
Policy-free protection
Legacy DLP solutions rely on centralised machine learning: they require months of training to baseline behaviour, and endpoints must communicate with a central intelligence engine to identify risks. Moving machine learning to each endpoint solves two problems simultaneously. First, baselining each individual user separately produces useful models in days instead of months, accelerating speed to value. Second, machine learning on the endpoint can identify data at risk without granular policies or calls back to the cloud, allowing the solution to identify and address risky actions on or off the corporate network.
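One way to picture per-user learning on the device is a streaming baseline that needs no central engine at all. This is a sketch of the general idea using Welford’s online mean and variance, our illustration with an invented warm-up threshold, not Next DLP’s actual model:

```python
class UserBaseline:
    """Per-user streaming baseline: all state lives on the endpoint."""

    def __init__(self) -> None:
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        """Fold one activity measurement into the running mean/variance."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_risky(self, x: float, z: float = 3.0) -> bool:
        """Flag activity far outside this one user's own norm."""
        if self.n < 50:   # one user's events accumulate quickly, so this
            return False  # warm-up takes days rather than months
        stdev = (self.m2 / (self.n - 1)) ** 0.5 or 1.0
        return abs(x - self.mean) / stdev > z
```

Because the model sees only one user’s events, it accumulates a meaningful history in days; and because the state lives on the endpoint, the risk check works identically on or off the corporate network.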
Enlisting end-user assistance at the moment of risk
While every user presents risk, not all users are threats. Machine learning on each endpoint can identify actions that could put data at risk and warn users before the action is allowed (or blocked). It can prompt users to review corporate security policies and provide security awareness training on specific use cases at the time the behaviour occurs. This ensures the user knows the right process to follow in the moment and in the future.
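Such a prompt amounts to a confirmation gate with teaching attached. A minimal sketch, with placeholder wording and a hypothetical policy URL:

```python
def prompt_user(channel: str, policy_url: str = "https://example.com/data-policy") -> bool:
    """Hypothetical just-in-time prompt shown before a risky action proceeds."""
    print(f"The file you are moving via {channel} appears to contain sensitive data.")
    print(f"Company policy: {policy_url}")
    answer = input("Proceed and record a business justification? [y/N] ")
    return answer.strip().lower() == "y"  # allow with justification, else block
```

Blocking outright remains an option for higher-risk data; the point is that the policy reminder arrives while the behaviour is happening, not in a training module months later.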
Today’s threat space requires today’s technology
Legacy DLP solutions were designed for a threat space very different from today’s. Instead of a walled-garden corporate network with gold images, today’s environment involves working and sharing information from anywhere, often on personal devices, with cloud applications dominant. This requires a solution designed for the modern technology stack, user and threat space. Machine learning on the endpoint provides that protection without the delays inherent to legacy solutions.
Next DLP helps organisations to discover risks, educate employees, enforce policies and prevent data loss. Its flagship data loss protection solution, Reveal Cloud, is targeted at the 90% of cyberattacks that involve a human attack vector and haven’t been stopped by traditional security solutions. Next DLP is a human-centric solution that learns, adapts to and strengthens the individual user. Whether the employee is working on the corporate network or at home, Next DLP can make sense of unstructured data across platforms, tools and networks to get the whole picture of what normal behaviour looks like and identify malicious actions.
For more, visit www.nextdlp.com, or find Next DLP on LinkedIn, Twitter or Vimeo.
- This promoted content was paid for by the party concerned