Data Security Best Practices


The Importance of Data Security

Today, every organization, no matter how big or small, needs to follow data protection best practices in order to reduce the risk from increasingly sophisticated ransomware, phishing and other cyberattacks. In addition, following data security guidelines is essential for achieving, maintaining and proving compliance with strict modern data protection laws like the GDPR and CCPA.

Unfortunately, 77% of organizations are underprepared for cyberattacks, according to research from the Ponemon Institute. This includes even tech giants — in 2022, hackers were able to exploit a weakness in cloud security at Microsoft, resulting in a major data breach.

Concerned about your own security? This white paper reveals data security best practices to know today in order to protect your organization against breaches and compliance penalties.

Top 14 Data Security Best Practices

1. Understand data technologies and databases 

Database models

Early database systems connected users directly to data through applications. In a private network, physical security was usually enough to protect the data. 

Today, modern databases allow data to be viewed in dynamic ways based on the user’s or administrator’s needs. Models include:

  • One-tier (single-tier) model — In this model, the database and the application exist on a single system. This is common on desktop systems running a standalone database. Early Unix implementations also worked in this manner; each user would sign on to a terminal and run a dedicated application that accessed the data.
  • Two-tier model — In a two-tier model, the client workstation or system runs an application that communicates with a database running on a different server. This is a common implementation that works well for many applications.
  • Three-tier model — Commonly used today, the three-tier model isolates the end user from the database by introducing a middle-tier server. This server accepts requests from clients, evaluates them and sends them to a database server for processing. The database server sends the data back to the middle-tier server, which then sends the data to the client system. The middle server can also control access to the database and provide additional security.

SQL vs NoSQL databases

The language most commonly used to communicate with databases is Structured Query Language (SQL). SQL allows users to send queries to database servers in real time. Most commercial relational database management systems — including Oracle, Microsoft SQL Server, MySQL and PostgreSQL — use SQL. (Don’t confuse the language SQL with Microsoft’s database product SQL Server.)

A NoSQL database is not a relational database and does not use SQL. These databases are less common than relational databases, but are often used where scaling is important. 

Here are some key differences: 

Feature | NoSQL database | SQL database
Database type | Non-relational/distributed | Relational
Schema type | Dynamic | Pre-defined
Data storage | Records are stored in a single document, often in JSON format | Records are stored as rows in tables
Benefits | Can handle large volumes of structured, semi-structured and unstructured data | Widely supported and easy to configure for structured data
Typical scaling model | Horizontal (add more servers) | Vertical (upgrade the server)
Popular vendors/implementations | MongoDB, CouchDB | Oracle, Microsoft, MySQL
Susceptible to SQL injection attacks? | No, but susceptible to similar injection-type attacks | Yes
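The injection row above is worth a concrete illustration. Here is a minimal, hedged sketch using Python’s built-in sqlite3 module and a hypothetical users table, showing how a parameterized query neutralizes a classic injection payload:

```python
import sqlite3

# Hypothetical table and data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# UNSAFE: string interpolation would let attacker input rewrite the query, e.g.:
#   query = f"SELECT role FROM users WHERE name = '{user_input}'"

# SAFE: a parameterized query treats the input strictly as data, never as SQL.
user_input = "alice' OR '1'='1"  # classic injection payload
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None: the payload matches no user instead of dumping every row
```

The same placeholder pattern (with driver-specific placeholder syntax) applies to any SQL database client.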

Big Data

Some organizations store more data than can fit on a single server, so instead it is stored on a storage area network (SAN). A SAN is a separate network that is set up to appear as a server to the main network. For example, multiple servers and network storage devices might be configured as a mini-network designed to store only several terabytes of data. It is connected to the main network so users can quickly and conveniently access data in the SAN.

SANs usually have redundant servers and are connected via high-speed fiber optic connections or iSCSI running on copper. However, Big Data may reach a size where it becomes difficult to search, store, share, back up and manage.

File systems

File systems are another way to store unstructured data and control how it is retrieved. Without a file system, information on a storage medium would be one large body of data with no indication of where one piece of information stops and the next begins. Separating the data into pieces and giving each piece a name makes the information far easier to isolate and identify.

File systems can be used on many different kinds of media, such as SSDs, magnetic tapes and optical discs. File system types depend on the operating system used. For example, Linux uses file systems such as the ext family, XFS and JFS; Windows uses FAT, FAT32 and NTFS; and macOS uses APFS and HFS+.

2. Identify and classify sensitive data

To protect your data effectively, you need to know exactly what types of data you have. Data discovery technology scans your data repositories and reports on the findings. From there, you can organize the data into categories using a data classification process. A data discovery engine usually uses regular expressions for its searches, allowing for more flexibility. 
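As a rough illustration of regex-based discovery, the following sketch scans text for a few common sensitive-data types. The patterns are deliberately simplified and hypothetical; real discovery engines ship far more refined rules and validation logic:

```python
import re

# Simplified, hypothetical detection patterns, for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def discover(text: str) -> dict:
    """Return the sensitive-data categories found in a blob of text."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.search(text)}

sample = "Contact jdoe@example.com, SSN 123-45-6789."
print(discover(sample))
```

A classification step would then map each detected category to a label such as Confidential or Restricted.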

Using data discovery and classification technology helps you control whether users can access critical data and prevent it from being stored in unsecure locations, reducing the risk of improper data exposure and data loss. All critical or sensitive data should be clearly labeled with a digital signature that denotes its classification, so you can protect it in accordance with its value to the organization. Third-party tools, such as Netwrix Data Classification, can make data discovery and classification easier and more accurate. 

Data should be classified based on its sensitivity and value. For instance, data can be grouped into the following categories:

  • Public data — Data that does not need special protection and can be shared freely.
  • Private data — Data that employees may access but that should be protected from the wider public.
  • Confidential data — Information that may be shared with only selected users, such as proprietary information and trade secrets.
  • Restricted data — Highly sensitive data, like medical records and financial information that is protected by regulations.

Controls should be in place to prevent users from improperly modifying the classification level of data. In particular, only selected users should be able to downgrade a classification, since that will make the data more widely available. 

Follow these guidelines to create a strong data classification policy. And don’t forget to perform data discovery and classification as part of your IT risk assessment process.

3. Create a data usage policy

Of course, data classification alone is not sufficient; you also need a policy that specifies access types, conditions for data access based on classification, who has access to data, what constitutes correct data usage, and so on. Don’t forget that all policy violations should have clear consequences. 

4. Implement access controls 

You also need to apply appropriate access controls to restrict access to your data, including requiring authentication for access to any data that is not public. Access rights should follow the principle of least privilege: Each user receives only those privileges essential to carrying out their assigned responsibilities.

Access controls can be physical, technical or administrative:

Administrative controls 

Administrative access controls are procedures and policies that all employees must follow. A security policy can list actions that are deemed acceptable, the level of risk the company is willing to undertake, the penalties in case of a violation, etc. The policy is normally created by an expert who understands the business’s objectives and applicable compliance regulations. Important components of administrative controls include:

  • Supervisory structure — Almost all organizations make managers responsible for the activities of their staff: If an employee violates an administrative control, the supervisor will be held accountable as well.
  • Training — All users should be educated on the company’s data usage policies and know that the company will actively enforce them. In addition, users should be periodically reeducated and tested to reinforce and verify their comprehension. Users also need to be educated about their level of access to data and any relevant responsibilities.
  • Employee termination procedure — To protect your systems and data, it’s critical that departing employees lose access to your IT infrastructure. Work with HR to develop an effective user termination procedure that follows these user termination best practices.

Technical controls 

Data storage

In most cases, users should not be allowed to copy or store sensitive data locally; instead, they should manipulate the data remotely. The caches on both the client and the server should be thoroughly cleaned after a user logs off or a session times out; alternatively, encrypted RAM drives can be used. Sensitive data should never be stored on a portable system of any kind. All systems should require a login and should lock themselves down in the event of suspicious use.


File permissions

User permissions should be granted in strict accordance with the principle of least privilege. Here are the basic file permissions in Microsoft operating systems: 

  • Full Control — The user can read, execute, modify, and delete files; assign permissions; and take ownership.
  • Modify — The user can read, write, and delete the file.
  • Read and Execute — The user can read and run the executable file.
  • Read — The user can read the file, but not modify it.
  • Write — The user can write to the file, but not read or delete it.

Folders have the same permissions, plus the list folder contents permission, which allows the user to see what is in the folder but not to read the files.

Access control lists 

An access control list (ACL) is a list of who can access what resource and at what level. It can be an internal part of an operating system or application. For example, a custom application might include an ACL that lists which users have what permissions in that system.

ACLs can be based on whitelists or blacklists. A whitelist is a list of items that are allowed, such as a list of websites that users are allowed to visit using company computers, or a list of third-party software that is authorized to be installed on company computers. A blacklist is a list of things that are prohibited, such as specific websites that employees are not permitted to visit or software that is forbidden to be installed on client computers. 
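The whitelist/blacklist distinction can be sketched in a few lines. The site names below are hypothetical:

```python
# Hypothetical site lists, for illustration only.
ALLOWED_SITES = {"intranet.example.com", "docs.example.com"}
BLOCKED_SITES = {"malware.example.net"}

def allowed_by_whitelist(host: str) -> bool:
    # Whitelist model: deny by default, permit only listed items.
    return host in ALLOWED_SITES

def allowed_by_blacklist(host: str) -> bool:
    # Blacklist model: permit by default, deny only listed items.
    return host not in BLOCKED_SITES

print(allowed_by_whitelist("docs.example.com"))    # True
print(allowed_by_whitelist("random.example.org"))  # False
print(allowed_by_blacklist("random.example.org"))  # True
```

Note the asymmetry: a whitelist fails closed (anything unknown is denied), while a blacklist fails open, which is why whitelists are generally preferred for sensitive resources.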

In file management, whitelist ACLs are more common. They are configured at the file system level. For example, in Microsoft Windows, you can configure NTFS permissions and create NTFS access control lists from them. You can find more information about how to properly configure NTFS permissions in this list of NTFS permissions management best practices. Remember that access controls should be implemented in every application that offers role-based access control (RBAC), such as Active Directory groups and delegation.

Security devices and methods 

Certain devices and systems help you further restrict access to data. Here are the most commonly implemented ones:

  • Data loss prevention (DLP) — These systems monitor workstations, servers and networks to make sure that sensitive data is not deleted, removed, moved or copied. They also monitor who is using and transmitting the data to spot unauthorized use.
  • Firewall — A firewall isolates one network from another. Firewalls can be standalone systems or can be included in other infrastructure devices such as routers or servers. Firewall solutions are available as both hardware and software. Firewalls exclude undesirable traffic from entering the organization’s network, which helps prevent malware or hackers from leaking data to rogue third-party servers. Depending on the organization’s firewall policy, the firewall might completely disallow some or all traffic, or it may allow some or all of the traffic only after verification.
  • Network access control (NAC) — NAC involves restricting the availability of network resources to endpoint devices that comply with your security policy. NAC can restrict unauthorized devices from accessing your data directly from your network. Some NAC solutions can automatically fix a non-compliant node to ensure it is secure before allowing access. NAC is most useful when the user environment is fairly static and can be rigidly controlled, such as in enterprises and government agencies. It can be less practical in settings with a diverse set of users and devices that are frequently changing.
  • Proxy server — These devices act as negotiators when client software requests resources from other servers. In this process, a client connects to the proxy server, asking for some service (for example, a website). The proxy server evaluates the request and then allows or denies it. Proxy servers are usually used for traffic filtering and performance improvement. Proxy devices can restrict access to your sensitive data from the internet.

Physical controls 

Although physical security is often overlooked in data security discussions, neglecting it could lead to your data or even your network becoming fully compromised. Each workstation should be locked down so that it cannot be removed from the area. Each computer case should also be locked so that its hard drives and other storage components cannot be removed and compromised. It’s also good practice to set a BIOS password to prevent attackers from booting into other operating systems using removable media.

Laptop and mobile device security

If a company laptop is lost or stolen, malicious parties may be able to access the data on its hard drive. Therefore, full-disk encryption should be used on every laptop used by an organization. Also, avoid using public Wi-Fi hotspots without first using a secure communication channel such as a VPN or SSH. Account credentials can be easily hijacked through wireless attacks and can lead to entire networks being compromised. 

Mobile devices can carry viruses or other malware into an organization’s network and extract sensitive data from your servers. Because of these threats, mobile devices need to be controlled especially strictly. Devices that are allowed to connect should be scanned for viruses, and removable devices should be encrypted.

It is important to focus your security policies around data, not what type of device it’s stored on. Smartphones often contain sensitive information, yet they are typically less protected than laptops, even when they contain the same information. All mobile devices that can access sensitive data should require equally complex passwords and use the same access controls and protection software.

Smartphones with a high-quality camera and mic are another common source of data leaks. It is very hard to protect your documents from insiders with these devices, or to detect a person taking a photo of a monitor or whiteboard with sensitive data. However, you should still have a policy that forbids using cameras in the building.

Network segregation 

Network segmentation involves segregating a network into functional zones. Each zone can be assigned different data classification rules, set to an appropriate level of security and monitored accordingly. 

Segmentation limits the potential damage from a security incident to a single zone. Essentially, it divides one target into many, leaving attackers with two choices: Treat each segment as a separate network, or compromise one and attempt to jump the divide. Neither choice is appealing. Treating each segment as a separate network creates a great deal of additional work, since the attacker must compromise each segment individually; this approach also dramatically increases the attacker’s exposure to being discovered. Attempting to jump from a compromised zone to other zones is also difficult, because if the segments are designed effectively, the network traffic between them can be restricted. While there are always exceptions — such as communication with domain servers for centralized account management — this limited traffic is easier to identify.

Video surveillance 

All your company’s critical facilities should be monitored using video cameras with motion sensors and night vision. This is essential for catching unauthorized entrants trying to directly access your file servers, archives or backups, and for spotting anyone who may be taking photos of sensitive data in restricted areas.

Locking and recycling

Your workspace area and any equipment in it should be secured before you leave it unattended. For example, check doors, desk drawers and windows, and don’t leave papers on your desk. All hard copies of sensitive data should be locked up, then destroyed when no longer needed. Also, never share or duplicate access keys, ID cards, lock codes or other access devices.

Before discarding or recycling a disk drive, completely erase all information from it and ensure the data can no longer be recovered. Old hard disks and other IT devices that contained critical information should be physically destroyed; assign a specific IT engineer to personally handle this process. 

5. Implement change management and database auditing

Another important security measure is to track all database and file server activities in order to spot access and changes to sensitive information and associated permissions. Records of login activity should be retained for at least one year for security audits. Any account that exceeds the maximum number of failed login attempts should automatically be reported to the information security administrator for investigation.

Using historical information to understand what data is sensitive, how it is being used, who is using it and where it is going helps you build accurate, effective policies and anticipate how changes in your environment might impact security. This process can also help you identify previously unknown risks. There are third-party tools that simplify change management and auditing of user activity, such as Netwrix Auditor.

6. Use data encryption

Encryption is one of the most fundamental data security best practices. All critical business data should be encrypted while at rest or in transit, whether via portable devices or over the network. Portable systems should use encrypted disk solutions if they will store important data of any kind. Encrypting the hard drives of desktop systems that store critical or proprietary information will help protect critical information even in the event physical devices are stolen. 

Encrypting File System (EFS)

The most basic way to encrypt data on your Windows systems is Encrypting File System (EFS) technology. If you use EFS to protect data, unauthorized users cannot view a file’s content even if they have full access to the device. When an authorized user opens an encrypted file, EFS decrypts the file in the background and provides an unencrypted copy to the application. Authorized users can view or modify the file, and EFS saves changes transparently as encrypted data. If unauthorized users try to do the same, they receive an “access denied” error. 

Another encryption tool from Microsoft is BitLocker. BitLocker complements EFS by providing an additional layer of protection for data stored on Windows devices. BitLocker protects devices that are lost or stolen against data theft or exposure, and it offers secure data disposal when you decommission a device. 

Hardware-based encryption

Hardware-based encryption can be applied in addition to software-based encryption. In the advanced configuration settings on some BIOS configuration menus, you can choose to enable or disable a Trusted Platform Module (TPM). A TPM is a chip that may be installed on a motherboard and can store cryptographic keys, passwords or certificates. A TPM can be used to assist with hash key generation and to protect smartphones and devices other than PCs. It can also be used to generate values for whole disk encryption, such as BitLocker.

7. Back up your data

Critical business assets should be duplicated to provide redundancy and serve as backups. At the most basic level, fault tolerance for a server requires a data backup. Backups are the periodic archiving of data so that you can retrieve it in case of a server failure. From a security standpoint, there are three primary backup types: 

  • Full — All data is archived. Making a full backup is very time consuming and resource intensive, and it will significantly impact server performance.
  • Differential — All changes since the last full backup are archived. While differential backups have less impact than full backups, they will still slow down your network.
  • Incremental — All changes since the last backup of any type are archived.

Normally, organizations use a combination of these types of backups. For example, you might take a full backup each day at midnight, and a differential or incremental backup every two hours thereafter. If the system crashes soon after midnight, you can restore from the last full backup; if it crashes later in the day, you need to use a combination of backups. 
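The restore logic described above can be sketched as follows. The backup catalog and timestamps are hypothetical, and the sketch models a full-plus-incremental rotation:

```python
from datetime import datetime

# Hypothetical backup catalog for one day: (timestamp, type) entries.
backups = [
    (datetime(2024, 1, 1, 0, 0), "full"),
    (datetime(2024, 1, 1, 2, 0), "incremental"),
    (datetime(2024, 1, 1, 4, 0), "incremental"),
    (datetime(2024, 1, 1, 6, 0), "incremental"),
]

def restore_chain(backups, crash_time):
    """Backups to restore, in order: the last full backup taken before
    the crash, then every later incremental taken before the crash."""
    usable = [b for b in backups if b[0] <= crash_time]
    last_full = max(t for t, kind in usable if kind == "full")
    chain = [(t, k) for t, k in usable
             if t == last_full or (k == "incremental" and t > last_full)]
    return sorted(chain)

crash = datetime(2024, 1, 1, 5, 0)
chain = restore_chain(backups, crash)
for t, kind in chain:
    print(t, kind)
# Restores the midnight full plus the 02:00 and 04:00 incrementals.
```

With differential backups, the chain would instead be just the last full backup plus the single most recent differential.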

Whatever backup strategy you choose, you must periodically test it by restoring the backup data to a test machine. Another key best practice is to store your backups in different geographic locations to ensure you can recover from disasters such as hurricanes, fires or hard-disk failures. 

8. Use RAID on your servers

A fundamental tool for fault tolerance, RAID is a redundant array of independent disks that allows your servers to have more than one hard drive, ensuring the system functions even if its main hard drive fails. The primary RAID levels are described here: 

  • RAID 0 (striped disks) — Data is distributed across multiple disks in a way that improves speed (read/write performance), but does not offer any fault tolerance. A minimum of two disks are needed.
  • RAID 1 — This RAID level introduces fault tolerance through mirroring: For every disk needed for operations, there is an identical mirrored disk. This requires a minimum of two disks and allocates 50 percent of your total capacity for data and the other 50 percent for the mirrors. When using RAID 1, the system keeps running on the backup drive even if the primary drive fails. You can add another controller to RAID 1, which is called “duplexing.”
  • RAID 3 or 4 (striped disks with dedicated parity) — Data is distributed across three or more disks, with one dedicated disk storing parity information, which reduces the array’s usable capacity by one disk. If a disk fails, no data is permanently lost: the data on the other disks, combined with the parity information, allows the failed disk’s contents to be reconstructed.
  • RAID 5 (striped disks with distributed parity) — This RAID level combines three or more disks in a way that protects data against the loss of any one disk. It is similar to RAID 3, but the parity is distributed across the drive array. This way, you don’t need to allocate an entire disk for storing parity bits.
  • RAID 6 (striped disks with dual parity) — This RAID level combines four or more disks while adding an additional parity block to RAID 5, protecting data even if the system loses any two disks. Each parity block is distributed across the drive array so that parity is not dedicated to any specific drive.
  • RAID 1+0 (or 10) — This RAID level is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the “1+0” name. Think of it as a “stripe of mirrors.” A RAID 1+0 array requires a minimum of four drives: two mirrored drives to hold half of the striped data, plus another two mirrored drives for the other half of the data.
  • RAID 0+1 — This RAID level is the opposite of RAID 1+0: the data is striped first (RAID 0), and the entire striped set is then mirrored (RAID 1). A RAID 0+1 array also requires a minimum of four drives: two striped drives, plus two more drives that mirror the striped set.
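The parity mechanism behind RAID 5 and 6 can be illustrated with XOR. This is a simplified single-stripe sketch, not a real RAID implementation: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Hypothetical equal-length data blocks, one per disk in the stripe.
data = [b"disk0 data...", b"disk1 data...", b"disk2 data..."]
parity = xor_blocks(data)

# Simulate losing disk 1, then rebuild it from the other disks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

This is why a dedicated or distributed parity disk costs exactly one disk of capacity: one XOR block protects the whole stripe against a single failure.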

9. Use clustering and load balancing

RAID does a fantastic job of protecting data on systems, which you can then protect further with regular backups. But sometimes you need to grow beyond single systems. Connecting multiple computers to work together as a single server is known as “clustering.” Clustered systems utilize parallel processing, which improves performance and availability, and adds redundancy (as well as costs). 

Systems can also achieve high availability through load balancing, which splits the workload across multiple computers, often servers answering HTTP requests (commonly called a server farm), which may or may not be in the same geographic location. If the servers are in different locations, the secondary location is called a mirror site. A mirrored copy helps prevent downtime, adds geographic redundancy and allows requests to be answered faster.

10. Harden your systems

Any technology that could store sensitive data, even temporarily, should be adequately secured based on the type of information that system could potentially access. This includes all external systems that could remotely access your internal network with significant privileges. However, usability must still be a consideration, with functionality and security appropriately determined and balanced.

Operating system baseline 

The first step to securing your systems is making sure the operating system is configured to be as secure as possible. Out of the box, most operating systems run unneeded services that give attackers additional avenues toward compromising your system. The only programs and listening services that should be enabled are those that are essential for your employees to do their jobs. If something doesn’t have a business purpose, it should be disabled. It may also be beneficial to create a secure baseline OS image for typical employees. If anyone needs additional functionality, those services or programs can be enabled on a case-by-case basis. 
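A baseline check like the one described can be sketched as a simple set difference: compare what is running against the approved baseline and flag everything else. The service names below are hypothetical:

```python
# Hypothetical approved baseline of services, for illustration only.
BASELINE = {"sshd", "cron", "rsyslog"}

def audit_services(running):
    """Return services that are running but not in the approved baseline."""
    return sorted(set(running) - BASELINE)

running_now = ["sshd", "cron", "telnetd", "rsyslog", "ftpd"]
print(audit_services(running_now))  # ['ftpd', 'telnetd']
```

In practice the "running" list would come from the operating system (for example, a service manager query), and each flagged item would be disabled or justified on a case-by-case basis.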

Windows and Linux operating systems each have their unique hardening configurations. 


Windows

Windows is by far the most popular operating system for consumers and businesses alike. But because of this, it is also the most targeted operating system, with new vulnerabilities announced almost weekly. There are a number of different Windows versions used throughout organizations, so some configurations mentioned here may not translate to all of them. Here are some procedures that should be done to enhance security: 

  • Disable LanMan authentication.
  • Ensure that all accounts have passwords, whether the account is enabled or disabled.
  • Disable or restrict permissions on network shares.
  • Remove all services that are not required, especially the clear-text protocols telnet and ftp.
  • Enable logging for important system events.

You can find more Windows hardening best practices in this Windows Server hardening checklist.


Linux

The Linux operating system has become more popular in recent years. Even though some claim that it’s more secure than Windows, some things still must be done to harden it correctly: 

  • Disable unnecessary services and ports.
  • Disable trust authentication used by “r commands.”
  • Disable unnecessary setuid and setgid programs.
  • Reconfigure user accounts for only the necessary users.

Web servers

Thanks to their wide network reach, web servers are a favorite area for attackers to exploit. If an attacker gains access to a popular web server and exploits a weakness there, they can reach thousands (if not hundreds of thousands) of site visitors and their data. By targeting a web server, an attacker can affect all the connections from users’ web browsers and inflict harm far beyond the one machine they compromised. 

Web servers were originally simple in design, used primarily to provide HTML text and graphic content. Modern web servers, meanwhile, allow database access, chat functionality, streaming media and many other services. But every service and capability supported on a website is a potential target, so make sure each one is kept up to date. You must also give users only the permissions necessary to accomplish their tasks. If users access your server via anonymous accounts, make certain those accounts have only the permissions needed to view web pages and nothing more.

Two particular areas of interest for web servers are filters and controlling access to executable scripts: 

  • Filters allow you to limit what traffic is allowed through. Limiting traffic to only what is required for your business can help ward off attacks. Filters can also be applied to your network to prevent users from accessing inappropriate or non-work related sites. Not only does this increase productivity, it also reduces the likelihood of users downloading a virus from a questionable site.
  • Executable scripts, such as those written in PHP, Python, various flavors of Java, and Common Gateway Interface (CGI) scripts, often run at elevated permission levels. Under most circumstances, this isn’t a problem because the user is returned to their regular permission level after the script is executed. Problems arise, however, if the user can break out of the script while at the elevated level. For administrators, the best course of action is to verify that all scripts on your server have been thoroughly tested, debugged, and approved for use.

Email servers 

Email servers provide the communications backbone for many businesses. They typically run as an additional service on a server or as dedicated systems. Adding an active virus scanner to email servers can reduce the number of viruses introduced into your network and prevent viruses from being spread by your email server. It is worth noting, though, that most file-level scanners can’t read Exchange’s open mail store files; to scan Exchange mail stores, you need a dedicated email AV scanner, some of which can even detect phishing and other social engineering attacks via machine learning.

Email servers are commonly inundated by automated systems that attempt to use them to send spam. Although most email servers have implemented measures against these attempts, the attacks are becoming increasingly sophisticated. You may be able to reduce them by entering the attackers’ TCP/IP addresses in your router’s ACL Deny list, which will cause your router to ignore connection requests from those addresses, effectively improving your security. You can also enforce the same policy with spam filters.
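Checking an address against a deny list like a router ACL can be sketched with Python’s ipaddress module. The blocked ranges below are hypothetical, drawn from documentation address space:

```python
import ipaddress

# Hypothetical deny list mirroring a router ACL: a range and a single host.
DENY = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_denied(addr: str) -> bool:
    """Return True if the address falls inside any denied network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DENY)

print(is_denied("203.0.113.45"))  # True: inside the blocked /24
print(is_denied("192.0.2.10"))    # False
```

The same membership test is how spam filters and firewalls typically evaluate IP-based block rules.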

FTP servers 

File Transfer Protocol (FTP) servers aren’t intended for high-security applications due to their inherent weaknesses. While most FTP servers allow you to create file areas on any system drive, it is far more secure to create a separate drive or subdirectory for file transfers. If possible, use virtual private network (VPN) or Secure Shell (SSH) connections for FTP-related activities. FTP is famously unsecured and exploitable; many FTP systems send account and password information across the network unencrypted. 

For maximum operational security, use separate logon accounts and passwords for FTP access. Doing so will prevent system accounts from being disclosed to unauthorized individuals. Also, regularly scan all files on FTP servers for viruses. 

To make FTP easier to use, most servers default to allowing anonymous access. However, these anonymous accounts should always be disabled. From a security perspective, the last thing you want is to allow anonymous users to copy files to and from your servers. Once anonymous access is disabled, the system will require the user to be a known, authenticated user in order to access it. 

But the best way to secure an FTP server is to replace it altogether. The same functionality can be found in more secure services such as Secure File Transfer Protocol (SFTP). 

11. Implement a proper patch management strategy

You need to have a patching strategy for both your operating systems and your applications. It may be tedious to keep every application in your IT environment up to date, but it's essential for data protection. One of the best ways to ensure security is to enable automatic antivirus and system updates. For critical infrastructure, patches need to be thoroughly tested to ensure that they do not affect functionality and will not introduce vulnerabilities.

Operating system patch management 

There are three types of operating system patches, each with a different level of urgency:

  • Hotfix — A hotfix is an immediate, urgent patch. In general, these represent serious security issues and are not optional.
  • Patch — A patch provides some additional functionality or a non-urgent fix. These are sometimes optional.
  • Service pack — A service pack is the full set of hotfixes and patches to date. These should always be applied.

Test all patches before applying them in production to be sure that the update won’t cause any problems.

Application patch management 

You also need to regularly update and patch your applications. Once an exploit in an application is discovered, an attacker can take advantage of it to enter or harm a system. Most vendors post patches on a regular basis, and you should routinely scan for any available ones. Many attacks today target client systems for the simple reason that clients do not always manage application patching effectively. Establish maintenance days dedicated to patching and testing all your critical applications. 
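At its core, scanning for available patches means comparing each installed version against the vendor's latest patched release. A minimal sketch of that comparison, assuming simple dotted version numbers (real version schemes with epochs or pre-release tags are messier and need a dedicated parser):

```python
def parse_version(version: str) -> tuple:
    """Split a dotted version string into comparable integer parts."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(installed: str, patched: str) -> bool:
    """Return True when the installed version is older than the patched release."""
    return parse_version(installed) < parse_version(patched)
```

Tuple comparison handles the per-component ordering, so "2.4.9" correctly sorts below "2.4.10".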

12. Protect your data from insider threats

Although organizations continue to spend an exceptional amount of time and money to secure their networks from external attacks, insider threats are a key cause of data exposure. A Netwrix survey found that insider incidents account for more than 60 percent of all attacks; however, many insider attacks go unreported for fear of business losses and damage to the company's reputation.

Insider threats come in two forms. An authorized insider threat is someone who misuses their rights and privileges, whether accidentally, deliberately or because their credentials were stolen. An unauthorized insider is someone who has connected to the network behind its external defenses. This could be someone who plugged into a jack in the lobby or a conference room, or someone accessing an unprotected wireless network connected to the internal network. Insider attacks can lead to data loss or downtime, so it is just as important to monitor activity inside your network as activity at the perimeter.

Insiders using remote access 

With users increasingly working from home, remote access to corporate networks is also becoming commonplace, so it’s critical to secure remote connections as well. Strong authentication processes are essential when connecting remotely. It is also important that the devices used for remote network access are secured properly. In addition, remote sessions should be properly logged, or even video recorded. 

13. Use endpoint security systems to protect your data

Your network endpoints are under constant attack, so endpoint security infrastructure is crucial to protecting against data breaches, unauthorized programs and advanced malware like rootkits. With the increased use of mobile devices, network endpoints are expanding and becoming increasingly undefined. Automated tools that reside in system endpoints are essential to mitigating damage from malware. At a minimum, you should use the following technologies: 

Antivirus software 

Antivirus software should be installed and kept up to date on all servers and workstations. In addition to actively monitoring incoming files, the software should regularly conduct scans to catch any infections that may have slipped through, such as ransomware.


Anti-spyware and anti-adware tools

Anti-spyware and anti-adware tools are designed to block or remove spyware. Spyware is computer software installed without the user's knowledge. Usually, its goal is to find out more information about the user's behavior and collect personal information.

Anti-spyware tools work much like antivirus tools, and many of their functions overlap. Some anti-spyware software is combined with antivirus packages, whereas other programs are available as standalone solutions. Regardless of which type of protection you use, you must regularly look for spyware, such as by identifying and removing tracking cookies on hosts.

Pop-up blockers 

Pop-ups are more than just irritating; they are a security threat. Pop-ups (including pop-unders) represent unwanted programs running on the system, so they can jeopardize the system's security and stability.

Host-based firewalls 

Personal firewalls are software-based firewalls installed on each computer in the network. They work in much the same way as larger border firewalls: by filtering out certain packets to prevent them from leaving or reaching your system. Many administrators may not see a need for personal firewalls, especially in corporate networks with large dedicated firewalls. However, those border firewalls can do nothing to stop internal attacks, which are often launched by malware that has already gotten inside the perimeter. Rather than disabling personal firewalls, configure a standard personal firewall according to your organization's needs and export those settings to the other personal firewalls.
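The standard configuration mentioned above can be modeled as an ordered, first-match rule list with a default-deny fallback. The rule format below is a simplified assumption for illustration, not any particular firewall product's syntax:

```python
from typing import List, NamedTuple, Optional

class Rule(NamedTuple):
    action: str          # "allow" or "deny"
    port: Optional[int]  # None matches any destination port

def evaluate(rules: List[Rule], port: int) -> str:
    """Apply the first matching rule; deny by default if nothing matches."""
    for rule in rules:
        if rule.port is None or rule.port == port:
            return rule.action
    return "deny"
```

Distributing one such vetted rule list to every host mirrors exporting a standard personal firewall configuration across the organization.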

Host-based intrusion detection systems (IDSs)

Host IDSs monitor the system state and check whether it matches expectations. Most host-based IDSs use integrity verification, which operates on the principle that malware will typically modify host programs or files as it spreads. Integrity verification detects unexpected modifications by computing the "fingerprints" (typically cryptographic hashes) of the files to be monitored while the system is in a known clean state. It then periodically re-scans those files, issuing an alert whenever the fingerprint of a monitored file changes.

However, integrity verification only detects the malware infection after the fact and will not prevent it. 
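A minimal sketch of this integrity verification approach, assuming SHA-256 as the fingerprint algorithm and a caller-supplied list of monitored paths:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Compute the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(paths) -> dict:
    """Record fingerprints while the system is in a known clean state."""
    return {str(p): fingerprint(Path(p)) for p in paths}

def scan(saved: dict) -> list:
    """Return the monitored files whose fingerprints no longer match the baseline."""
    return [p for p, digest in saved.items() if fingerprint(Path(p)) != digest]
```

Production file integrity monitors also protect the baseline database itself, since malware that can rewrite the stored fingerprints defeats the check entirely.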

14. Perform vulnerability assessments and cybersecurity penetration tests

Vulnerability assessments usually rely on port scanners and vulnerability scanning tools such as Nmap, OpenVAS and Nessus. These tools scan the environment from an external machine, looking for open ports and identifying the services listening on them and their version numbers. The results can be cross-referenced against the services and patch levels that are supposed to be running on the endpoint systems, allowing the administrator to verify that the systems adhere to endpoint security policies.
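Tools like Nmap perform far more sophisticated scans (SYN scans, service and version detection, OS fingerprinting), but the basic idea of probing for open ports can be sketched with a plain TCP connect scan:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the ports on the host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports
```

As with any scanning tool, only run such probes against systems you are explicitly authorized to test.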

Penetration testing is the practice of testing a system, network or application to find security vulnerabilities. It can also be used to test a security policy, adherence to compliance requirements, employee security awareness, and security incident detection and response. The process can be automated or conducted manually. Either way, organizations should perform penetration testing regularly — ideally, once a year — to ensure more consistent network security and IT management. 

Here are the main penetration test strategies used by security professionals:

  • Targeted testing is performed collaboratively by the organization's IT team and the penetration testing team. It's sometimes referred to as a "lights turned on" approach because everyone can see the test being carried out.
  • External testing targets a company's externally visible servers or devices, including domain servers, email servers, web servers and firewalls. The objective is to find out if an outside attacker can get in and how far they could go.
  • Internal testing performs an inside attack behind the firewall by an authorized user with standard access privileges. This kind of test is useful for estimating how much damage a regular employee could cause.
  • Blind testing simulates the actions and procedures of a real attacker by severely limiting the information given to the person or team performing the test. Typically, the penetration testers are given only the name of the company.
  • Double-blind testing takes the blind test and carries it a step further: Only one or two people within the organization are aware that a test is being conducted.
  • Black box testing is basically the same as blind testing, but the penetration testers receive no information before the test takes place; they must find their own way into the system.
  • White box (crystal box) testing provides the penetration testers with information about the target network before they start their work. This information can include IP addresses, network infrastructure schematics, the protocols being used and so on.

How Netwrix Can Help

As we have seen, data protection encompasses a lot of topics and controls, which can pose a daunting challenge for any security team. Working with a leading security firm like Netwrix can make the difference between success and failure.

Netwrix provides a comprehensive set of data protection solutions, including tools for data access governance, information governance and ransomware protection. Get in touch today to learn more about how Netwrix can help you ensure your sensitive data is properly protected.

