
Managing Windows Server 2012 Storage and File Systems : Storage Management (part 1) - Essential storage technologies

7/2/2013 9:49:44 PM

1. Essential storage technologies

One of the few constants in Microsoft Windows operating system administration is that data storage needs are ever increasing. It seems that only a few years ago a 1-terabyte (TB) hard disk was huge and something primarily reserved for Windows servers rather than Windows workstations. Now Windows workstations ship with large hard disks as standard equipment, and some even ship with striped or spanned multi-terabyte volumes—and all of that data must be backed up and stored somewhere other than on the workstations to protect it. This has meant that back-end storage solutions have had to scale dramatically as well. Server solutions that were once used for enterprise-wide implementations are now being used increasingly at the departmental level, and the underlying architecture for the related storage solutions has had to change dramatically to keep up.

INSIDE OUT: Storage technologies are in transition

Storage technologies are in transition from traditional approaches to standards-based approaches. As a result, several popular tools and favored features are being phased out. Officially, a tool or feature that is being phased out is referred to as deprecated. When Microsoft deprecates a tool or feature, that means it might not be in future releases of the operating system. Rather than not cover popular tools and features, I’ve chosen to discuss what is actually available in the operating system right now. That means I discuss both favored standbys and new options.

Like other Windows operating systems before them, Windows 8 and Windows Server 2012 will have long product life cycles. For most people deploying these operating systems today, what’s in the box right now is what matters most and not what might or might not be in the box in a future release. My recommendation is to continue to use your favorite tools and features for servers you’ve already deployed and then transition to Windows Server 2012. Before you deploy new servers on new hardware, however, you should review the available storage options and then make informed decisions as to the tools and features to use on those new servers.

Using internal and external storage devices

To help meet the increasing demand for data storage and changing requirements, organizations are deploying servers with a mix of internal and external storage. In internal-storage configurations, drives are connected inside the server chassis to a local disk controller and are said to be directly attached. You’ll sometimes see an internal storage device referred to as direct-attached storage (DAS).

In external-storage configurations, servers connect to external, separately managed collections of storage devices that are either network-attached or part of a storage area network. Although the terms network-attached storage (NAS) and storage area network (SAN) are sometimes used as if they are one and the same, the technologies differ in how servers communicate with the external drives.

NAS devices are connected through a regular Transmission Control Protocol/Internet Protocol (TCP/IP) network. All server-storage communications go across the organization’s local area network (LAN), as shown in Figure 1, and typically use file-based protocols for communications, which can include Server Message Block (SMB), Distributed File System (DFS), and Network File System (NFS). This means the available bandwidth on the network can be shared by clients, servers, and NAS devices. For best performance, the network should be running at 1 gigabit per second (Gbps) or higher. Networks operating at slower speeds can experience a serious decrease in performance as clients, servers, and storage devices try to communicate using the limited bandwidth.

INSIDE OUT: Working with NFS

You add support for NFS by adding the Server For NFS role service to a file server. Windows Server 2012 supports NFS 3 and NFS 4.1. NFS 3 brings with it support for continuous availability. NFS 4.1 adds support for stateful connections with improved security and lower bandwidth utilization. Support for NFS 3 and NFS 4.1 also enables you to reliably deploy and run VMware ESX and VMware ESXi virtual machines from file-based storage accessed over NFS. You also can deploy Server For NFS reliably in a clustered configuration.
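A quick way to add this support is with Windows PowerShell. The sketch below assumes FS-NFS-Service is the role service name on your build; you can confirm the exact name with Get-WindowsFeature first:

```powershell
# Confirm the NFS-related feature names on this system
Get-WindowsFeature *NFS*

# Add Server For NFS along with its management tools
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools
```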

In a NAS, server-storage communications are on the LAN.
Figure 1. In a NAS, server-storage communications are on the LAN.

A SAN typically is physically separate from the LAN and is independently managed. As shown in Figure 2, this isolates the server-to-storage communications so that traffic doesn’t affect communications between clients and servers. Several SAN technologies are available, including Fibre Channel Protocol (FCP), a more traditional SAN technology that delivers high reliability and performance, and Internet SCSI (iSCSI), which delivers good reliability and performance at a lower cost than Fibre Channel. As the name implies, iSCSI uses TCP/IP networking technologies on the SAN so that servers can communicate with storage devices using IP. The SAN is still isolated from the organization’s LAN.

In a SAN, server-storage communications don’t affect communications between clients and servers.
Figure 2. In a SAN, server-storage communications don’t affect communications between clients and servers.

You should be aware that iSCSI uses traditional IP facilities to transfer data over LANs, wide area networks (WANs), or the Internet. Here, iSCSI clients (initiators) send Small Computer System Interface (SCSI) commands to targeted iSCSI storage devices (targets) on remote servers. iSCSI consolidates storage and allows hosts—which can include web, application, and database servers—to access the storage as if it were locally attached. Initiators can locate storage resources using Internet Storage Name Service (iSNS). iSNS isn’t required for communications, but it does provide management services similar to those for Fibre Channel networks. iSNS emulates the fabric services of Fibre Channel and can manage both Fibre Channel and iSCSI devices.

Although Fibre Channel requires special cabling, iSCSI uses standard Ethernet cabling and technically can operate over the same network as standard IP traffic. However, if iSCSI isn’t operated on a dedicated network or subnet, performance can be severely degraded.
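On the initiator side, the connection sequence described above can be sketched with the iSCSI cmdlets included with Windows Server 2012. The portal address below is a placeholder; substitute the address of your iSCSI target portal:

```powershell
# Make sure the Microsoft iSCSI Initiator service runs automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Register the target portal, then discover and connect to its targets
New-IscsiTargetPortal -TargetPortalAddress 192.0.2.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

The -IsPersistent flag re-establishes the connection after a restart, which is normally what you want for storage that hosts server data.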

With TCP/IP, TCP is the transport protocol for IP networks. With Fibre Channel, FCP is a transport protocol used to transport SCSI commands over the Fibre Channel network. Fibre Channel networks can use a variety of topologies, including the following:

  • Point-to-point (FC-PTP), where two devices are connected directly.

  • Arbitrated loop (FC-AL), where all devices are in a ring, similar to token ring networking.

  • Switched fabric (FC-SW), where all devices or device rings are connected to switches, similar to Ethernet.

The standard model for Fibre Channel has five layers:

  • FC0, the physical layer, which includes cables and connectors

  • FC1, the data-link layer

  • FC2, the network layer

  • FC3, the common services layer

  • FC4, the protocol-mapping layer

Windows Server 2012 includes support for Fibre Channel over Ethernet (FCoE), a technology that allows IP network and SAN data traffic to be consolidated on a single network. FCoE encapsulates Fibre Channel frames over Ethernet and supports 10-Gbps and higher networks. With FCoE, the FC0 and FC1 layers of the Fibre Channel model are replaced with Ethernet and FCoE operates in the FC2, or network, layer. This is different from iSCSI, which runs on top of TCP and IP. Additionally, while iSCSI is routable across IP networks, FCoE isn’t routable in the IP layer and won’t work across routed IP networks.

You should also note that although Fibre Channel has priority-based flow controls, these controls aren’t part of standard Ethernet. As a result, both FCoE and iSCSI needed enhancements to support priority-based flow controls and prevent the frame loss that might occur otherwise. These enhancements, provided in the Data Center Bridging suite of Institute of Electrical and Electronics Engineers (IEEE) standards, include the encapsulation of native frames, extensions to Ethernet to prevent frame loss, and mapping between ports/IDs and Ethernet media access control (MAC) addresses.

Several competing network protocols are available to provide fabric functionality to Fibre Channel devices over an IP network and to make the technology work over long distances. One is called Internet Fibre Channel Protocol (iFCP). iFCP uses gateways and routing to enable connectivity and TCP for error detection and correction as well as congestion control. A similar technology called Fibre Channel over IP (FCIP) also is available. FCIP uses storage tunneling, where Fibre Channel frames are encapsulated and then forwarded over an IP network using TCP.

Storage-management features and tools

Windows Server 2012 includes many features for working with SANs and handling storage management in general. Volume Shadow Copy Service (VSS) allows administrators to create point-in-time copies of volumes and individual files called snapshots. This makes it possible to back up these items while files are open and applications are running and to restore them to a specific point in time. You also can use VSS to create point-in-time copies of documents on shared folders. These copies are called shadow copies.


Users can recover their own files when VSS is enabled. After you configure shadow copy, point-in-time backups of documents contained in the designated shared folders are created automatically, and users can quickly recover files that have been deleted or unintentionally altered as long as the Shadow Copy Client has been installed on their computer. 

The basic VSS functionality is built into the file and storage services and accessed through the File Server VSS provider. You can extend the basic functions in several ways. One of these ways is to add the File Server VSS Agent Service. You use this role service to create consistent snapshots of server application data, such as virtual machine files from Hyper-V. You install the agent service on a file server when you want to back up applications that are storing data files on the file server. Here, you are backing up application data stored on file shares, which is different from user data stored on file shares (which is managed using the standard File Server VSS provider).
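If FS-VssAgent is the role service name on your build (worth confirming with Get-WindowsFeature), adding the agent service to a file server is a one-line operation:

```powershell
# Add the File Server VSS Agent Service role service
Install-WindowsFeature FS-VssAgent
```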

Windows Server 2012 also includes storage providers. Storage providers make it possible for storage devices from multiple vendors to interoperate. To do this, Microsoft provides Storage Management application programming interfaces (APIs) that management tools and storage hardware can use, allowing for a unified interface for managing storage devices from multiple vendors and making it easier for administrators to manage a mixed-storage environment. Standard storage providers are built into the file and storage services.

Windows Server 2012 also supports the Storage Management Initiative (SMI-S) standard and storage providers that are compliant with this standard. Add this support by adding the Windows Standards-Based Storage Management feature. This feature enables the discovery, management, and monitoring of storage devices using management tools that support the SMI-S standard. It does this by installing related Windows Management Instrumentation (WMI) classes and cmdlets.
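As a sketch, adding the feature and registering an SMI-S provider might look like the following; the provider URI is a placeholder for your storage vendor's SMI-S endpoint:

```powershell
# Add the Windows Standards-Based Storage Management feature
Install-WindowsFeature WindowsStorageManagementService

# Register a vendor's SMI-S provider, then list known storage providers
Register-SmisProvider -ConnectionUri https://smis.example.com:5989
Get-StorageProvider
```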

When your file servers are using iSCSI, Fibre Channel, or both storage device types, you might also want to install Multipath IO, iSNS Server service, and Data Center Bridging—all of which are installable features.

Multipath I/O supports SAN connectivity by establishing multiple sessions or connections to storage devices. Using Multipath I/O, you can configure as many as 32 separate physical paths to external storage devices that can be used simultaneously and load balanced if necessary. The purpose of having multiple paths is to have redundancy and possibly increased throughput. If you have multiple host bus adapters as well, you improve the chances of recovery from a path failure. However, if a path failure occurs, there might be a short period of time when the drives on the SAN aren’t accessible. Microsoft Multipath I/O (MPIO) supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS).
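Enabling MPIO for iSCSI-attached storage can be sketched as follows; the MSDSM cmdlets shown are part of the MPIO module that installs with the feature:

```powershell
# Add the Multipath I/O feature
Install-WindowsFeature Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached disks
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Review the default load-balancing policy currently in effect
Get-MSDSMGlobalDefaultLoadBalancePolicy
```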

iSNS Server service helps iSNS clients discover iSCSI storage devices on an Ethernet network and also automates the management and configuration of iSCSI and Fibre Channel storage devices (as long as Fibre Channel devices use iFCP gateways). Data Center Bridging helps manage bandwidth allocation for offloaded storage traffic on converged network adapters, which is useful with iSCSI and FCoE.

Other file and storage features you might want to install on file servers include the following:

  • Enhanced Storage Supports additional functions made available by devices that support hardware encryption and enhanced storage. Enhanced storage devices support IEEE standard 1667 to provide enhanced security, which can include authentication at the hardware level of the storage device.

  • Windows Search Service Allows for faster file searches for resources on the server from clients that are compatible with this service. Keep in mind, however, this feature is designed primarily for desktop and small office implementations (and not for large enterprises).

  • Windows Server Backup The standard backup utility included with Windows Server 2012.

Server Manager is your primary tool for managing storage. Windows Server 2012 also has several command-line tools for managing local storage and storage-replication services. These tools include the following:

  • DiskPart Used to manage basic and dynamic disks as well as the partitions and volumes on those disks. It is the command-line counterpart to the Disk Management tool and also includes features not found in the graphical user interface (GUI) tool, such as the capability to extend partitions on basic disks.


    DiskPart cannot be used to manage Storage Spaces. Windows 8 and Windows Server 2012 might be the last versions of Windows to support Disk Management, DiskPart, and DiskRaid. The Virtual Disk Service (VDS) COM interface is being superseded by the Storage Management API. You can continue to use Disk Management and DiskPart to manage basic and dynamic disks.

  • Dfsdiag Used to perform troubleshooting and diagnostics for DFS.

  • Dfsradmin Used to manage and monitor DFS replication throughout the enterprise. You’ll use this tool for troubleshooting and diagnosing problems as well. This tool replaces Health_Chk and the other tools it worked with.

  • Dfsutil Used to configure DFS, back up and restore DFS directory trees (namespaces), copy directory trees, and troubleshoot DFS.

  • Fsutil Used to get detailed drive information and perform advanced file system maintenance. You can manage sparse files, reparse points, disk quotas, and other advanced features of NTFS.

  • Mountvol Used to manage volume automounting. By using volume mount points, administrators can mount volumes to empty NTFS folders, giving the volumes a drive path rather than a drive letter. This means it is easier to mount and unmount volumes, particularly with SANs.

  • Vssadmin Used to view and manage the Volume Shadow Copy Service and its configuration.

Many Windows PowerShell cmdlets are available for managing storage as well. These cmdlets are module-specific and correspond to the storage component you want to manage. Available modules include

  • BitsTransfer Used to manage the Background Intelligent Transfer Service (BITS).

  • BranchCache Used to configure and check the status of Windows BranchCache.

  • DFSN Used to manage DFS Namespaces.

  • FileServerResourceManager Used to manage File Server Resource Manager.

  • iSCSI Used to manage iSCSI connections, sessions, targets, and ports.

  • IscsiTarget Used to mount and manage iSCSI virtual disks.

  • SmbShare Used to configure and check the status of standard file sharing.

  • Storage Used to manage disks, partitions, and volumes, as well as storage pools and Storage Spaces. It cannot be used to manage dynamic disks.

The easiest way to learn more about these PowerShell modules is to import a particular module, determine which cmdlets are associated with it, and then examine how the cmdlets are used. You import a module using the following syntax:

Import-Module ModuleName

Here, ModuleName is the name of the module to import, such as the following:

Import-Module iSCSI

You list the cmdlets associated with an imported module using

Get-Command -Module ModuleName

Here, ModuleName is the name of the module you want to examine, such as the following:

Get-Command -Module iSCSI

After you list the cmdlets associated with an imported module, you can get more information about a particular cmdlet using

Get-Help CmdletName -Detailed

Here, CmdletName is the name of the cmdlet to examine in detail, such as the following:

Get-Help Connect-IscsiTarget -Detailed

Storage-management role services

You use File And Storage Services to configure your file servers. Several file and storage services are installed by default with any installation of Windows Server 2012. These include File Server, which you use to manage file shares that users can access over the network, and Storage Services, which you use to manage various types of storage, including storage pools and storage spaces. Storage pools group disks so that you can create virtual disks from the available capacity. Each virtual disk you create is a storage space. 

Windows Server 2012 also supports thin provisioning of your storage spaces. With thin provisioning, you can create large virtual disks without having the actual space available. This allows you to provision storage to meet future needs and grow storage as needed. You also can reclaim storage that is no longer needed by trimming storage. To see how thin provisioning works, consider the following scenarios:

  • Your file server is connected to a storage array with 2 TBs of actual storage, but with the capability to grow to 10 TBs as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage was already available. One way to do this is to create a storage pool that has a total size of 10 TBs and then create 5 thin disks with 2 TBs of storage each.

  • Your eight file servers are connected to a SAN with 10 TBs of actual storage, but with the capability to grow to 80 TBs as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage was already available. One way to do this is to create a storage pool on each file server that has a total size of 10 TBs. Next, within each storage pool, you create 5 thin disks with 2 TBs of storage each.

With thin-disk provisioning, volumes use space from the storage pool as needed, up to the volume size. Here, the actual storage utilization for a volume is based on the total size of the data stored on the volume. If a volume doesn’t grow, the storage space is never allocated and isn’t wasted.

Contrast this to fixed-disk provisioning, where a volume has a fixed size and uses space from the storage pool equal to its volume size. Here, the storage utilization for a volume is fixed and based on the total size of the volume itself. Because the storage is pre-allocated with a fixed size, any unused space isn’t available for other volumes.
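The first scenario above can be sketched with the Storage module cmdlets. The pool and disk names are placeholders, and the resiliency setting is kept simple for illustration:

```powershell
# Gather the physical disks that aren't yet part of a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks
Get-StorageSubSystem -FriendlyName "*Storage Spaces*" |
    New-StoragePool -FriendlyName "Pool1" -PhysicalDisks $disks

# Create a 2-TB thin virtual disk; space is consumed only as data is written
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinDisk1" `
    -Size 2TB -ProvisioningType Thin -ResiliencySettingName Simple
```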

You can enhance file storage in many ways using the additional role services that are available for File And Storage Services. One of the first role services you might consider using is BranchCache For Network Files. You add the BranchCache For Network Files role service to enable enhanced support for Windows BranchCache on your file servers and to optimize data transfer over the WAN for caching.

Windows BranchCache is a file-caching feature that works in conjunction with BITS. By enabling branch caching in Group Policy, you allow computers to retrieve documents and other types of files from a local cache rather than retrieving files from servers over the network. This improves response times and reduces transfer times.

Branch caching can be used in either a distributed cache mode or a hosted cache mode. With the distributed cache mode, desktop computers running compatible versions of Windows host and send distributed file caches, and caching servers running at remote offices are not needed. With the hosted cache mode, compatible file servers at remote offices host local file caches and send them to clients. Generally, whether distributed or hosted, the caches at one office location are separate from caches at other office locations. That said, the Active Directory configuration and the way Group Policy is applied ultimately determine whether computers are considered to be part of one office location or another.

Branch caching is designed as a WAN solution. It optimizes bandwidth usage for files transferred with either SMB or Hypertext Transfer Protocol (HTTP). Your content servers can be located anywhere on your network, as well as in public or private cloud datacenters. You enable branch caching on web servers and BITS-based application servers by adding the BranchCache feature. If you are deploying hosted cache servers, you add the BranchCache feature to these servers as well. You don’t install this feature on your file servers, however. Instead, you add the BranchCache For Network Files role service.
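In PowerShell terms, that division of labor looks something like this sketch; FS-BranchCache and BranchCache are the expected feature names, which you can verify with Get-WindowsFeature:

```powershell
# On file servers: add the BranchCache For Network Files role service
Install-WindowsFeature FS-BranchCache

# On web servers, BITS-based application servers, and hosted cache servers:
Install-WindowsFeature BranchCache
```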

INSIDE OUT: Enhancing BranchCache

BranchCache For Network Files can take advantage of data deduplication techniques to optimize data transfers. Because of this, it is recommended that you also install the Data Deduplication role service on your file servers, but don’t do this without a firm understanding of what data deduplication is and how it works. If you have multiple file servers, you might also want to enable hash publication per share to improve performance. For file servers that aren’t domain members, you enable hash publication in local policy. For file servers that are domain members, you typically want to isolate your BranchCache-enabled file servers in their own organizational units (OUs) and then enable hash publication in the appropriate GPO (Group Policy Object) or GPOs that are applied to these OUs. Either way, the Hash Publication For BranchCache policy is what you want to work with. This policy is found under Computer Configuration\Administrative Templates\Network Lanman Server.

The Data Deduplication service can be installed with or without the BranchCache For Network Files role service. Data Deduplication uses subfile, variable-size chunking and compression to achieve higher storage efficiency. The service does this by segmenting files into 32-KB to 128-KB chunks, identifying duplicate chunks, and replacing the duplicates with references to a single copy. Because optimized files are stored as reparse points, files on the volume are no longer stored as data streams. Instead, they are replaced with stubs that point to data blocks within a common chunk store.
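Enabling deduplication on a data volume can be sketched as follows; the volume letter and file-age threshold are illustrative:

```powershell
# Add the Data Deduplication role service
Install-WindowsFeature FS-Data-Deduplication

# Enable deduplication on volume E: and only optimize files older than 30 days
Enable-DedupVolume E:
Set-DedupVolume E: -MinimumFileAgeDays 30

# Check progress and space savings after optimization jobs run
Get-DedupStatus
```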

Previously, I mentioned the File Server VSS Agent Service, which you install on file servers when you want to ensure that you can make consistent backups of server application data using VSS-aware backup applications. When working with iSCSI, you also must install the iSCSI target VSS hardware provider on the initiator server you use to perform backups of iSCSI virtual disks. This ensures that the snapshots are application-consistent and can be restored at the logical unit number (LUN) level. If you don’t use the iSCSI target VSS hardware provider on the initiator, server backups might not be consistent and you might not be able to completely recover your iSCSI virtual disks. On management computers running storage-management applications, you must install the iSCSI target Virtual Disk Service (VDS) hardware provider. The iSCSI target VSS hardware provider and the iSCSI target VDS hardware provider are part of the iSCSI Target Storage Provider role service.

Another role service you might want to use with iSCSI is the iSCSI Target Server service. This role service turns any computer running Windows Server into a network-accessible block storage device. You can use this continuously available block storage to support network/diskless boot, shared storage on non-Windows iSCSI initiators, and development environments where you need to test applications prior to deploying them to SAN storage. Because the service uses standard Ethernet for its transport, no additional hardware is needed.
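A minimal target setup might look like the following sketch; the target name, initiator IQN, and virtual disk path are placeholders:

```powershell
# Add the iSCSI Target Server role service
Install-WindowsFeature FS-iSCSITarget-Server

# Create a target that only a specific initiator may connect to
New-IscsiServerTarget -TargetName "AppTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver01.example.com"

# Create an iSCSI virtual disk and map it to the target
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\AppDisk1.vhd -Size 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "AppTarget" `
    -Path C:\iSCSIVirtualDisks\AppDisk1.vhd
```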

Although SMB is the default file-sharing protocol, other file-sharing solutions are available, including Network File System (NFS) and Distributed File System (DFS). To enable NFS on your file servers, you add the Server For NFS service. This service provides a file-sharing solution for enterprises with mixed Windows and UNIX environments. When you install Server For NFS, users can transfer files between Windows Server and UNIX operating systems using the NFS protocol. DFS, on the other hand, isn’t an interoperability solution. Instead, DFS is a robust, enterprise solution for file sharing that you can use to create a single directory tree that includes multiple file servers and their file shares.

The DFS tree can contain more than 5000 shared folders in a domain environment (or 50,000 shared folders on a standalone server), located on different servers, enabling users to find files or folders distributed across the enterprise easily. DFS directory trees can also be published in the Active Directory directory service so that they are easy to search.

DFS has two key components:

  • DFS Namespaces You can use DFS Namespaces to group shared folders located on different servers into one or more logically structured namespaces. Each namespace appears as a single shared folder with a series of subfolders. However, the underlying structure of the namespace can come from shared folders on multiple servers in different sites.

  • DFS Replication You can use DFS Replication to synchronize folders on multiple servers across local or wide area network connections using a multimaster replication engine. The replication engine uses the Remote Differential Compression (RDC) protocol to synchronize only the portions of files that have changed since the last replication.

You can use DFS Replication with DFS Namespaces or by itself. When a domain is running in a Windows 2008 domain functional level or higher, domain controllers use DFS Replication to replicate the SYSVOL directory.
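Setting up DFS can be sketched with the role services and the DFSN cmdlets; the domain and server names below are placeholders:

```powershell
# Add DFS Namespaces and DFS Replication with their management tools
Install-WindowsFeature FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools

# Create a domain-based namespace backed by a share on FileServer01
New-DfsnRoot -Path "\\example.com\Public" -TargetPath "\\FileServer01\Public" -Type DomainV2
```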

DFS supports multiple roots and closest-site selection

Windows Server 2012 supports multiple DFS roots and closest-site selection. The capability to host multiple DFS roots allows you to consolidate and reduce the number of servers needed to maintain DFS. By using closest-site selection, DFS uses Active Directory site metrics to route a client to the closest available DFS server.

File Server Resource Manager (FSRM) installs a suite of tools that administrators can use to better manage data stored on servers. Using FSRM, you can do the following:

  • Define file-screening policies You use file-screening policies to block unauthorized, potentially malicious types of content. You can configure active screening, which does not allow users to save unauthorized files, or passive screening, which allows users to save unauthorized files but monitors or warns about usage (or you can configure both).

  • Configure Resource Manager disk quotas Using Resource Manager disk quotas, you can manage disk space usage by folder and by volume. You can configure quotas with a specific limit as a hard limit (meaning a limit can’t be exceeded) or a soft limit (meaning a limit can be exceeded).

  • Generate storage reports You can generate storage reports as part of disk-quota and file-screening management. Storage reports identify file usage by owner, type, and other parameters. They also help identify users and applications that violate screening policies.
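Assuming the FileServerResourceManager cmdlets listed earlier are available on your build, the quota and screening tasks above can be sketched as follows; the path and limits are placeholders:

```powershell
# Add the File Server Resource Manager role service
Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools

# Apply a 5-GB hard quota to a shared folder
New-FsrmQuota -Path "D:\Shares\Projects" -Size 5GB

# Actively screen out audio and video files in the same folder
New-FsrmFileScreen -Path "D:\Shares\Projects" -IncludeGroup "Audio and Video Files" -Active
```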

Booting from SANs, and using SANs with clusters

Windows Server 2012 supports booting from a SAN, having multiple clusters attached to the same SAN, and having a mix of clusters and standalone servers attached to the same SAN. To boot from a SAN, the external storage devices and the host bus adapters of each server must be configured appropriately to allow booting from the SAN.

When multiple servers must boot from the same external storage device, you must either configure the SAN in a switched environment or directly attach each host to one of the storage subsystem’s Fibre Channel ports. A switched or direct-to-port environment allows the servers to be separate from each other, which is essential for booting from a SAN.

Fibre Channel–Arbitrated Loop isn’t allowed

The use of a Fibre Channel–Arbitrated Loop (FC-AL) configuration is not supported because hubs typically don’t allow the servers on the SAN to be isolated properly from each other—and the same is true when you have multiple clusters attached to the same SAN or a mix of clusters and standalone servers attached to the same SAN.

Each server on the SAN must have exclusive access to the logical disk from which it is booting, and no other server on the SAN should be able to detect or access that logical disk. For multiple-cluster installations, the SAN must be configured so that a set of cluster disks is accessible only by one cluster and is completely hidden from the rest of the clusters. By default, Windows Server 2012 will attach and mount every logical disk that it detects when the host bus adapter driver loads, and if multiple servers mount the same disk, the file system can be damaged.

To prevent file system damage, the SAN must be configured in such a way that only one server can access a particular logical disk at a time. You can configure disks for exclusive access using a type of logical unit number (LUN) management such as LUN masking, LUN zoning, or a preferred combination of these techniques. You can use the File And Storage Services node in the Server Manager console to manage Fibre Channel and iSCSI SANs that support Storage Management APIs and have a configured storage provider.

TROUBLESHOOTING: Detecting SAN configuration problems

On an improperly configured SAN, multiple hosts are able to access the same logical disks. This isn’t what you want to happen, but it does happen and you might be able to detect this configuration problem when you are working with the logical disks. Try using File Explorer from multiple hosts to access the logical disks on the SAN. If you try to access a logical disk and receive an Access Denied, Device Not Ready, or similar error message, this can be an indicator that another server has access to the logical disk you are attempting to use. You might see another indicator of an improperly configured SAN when you add or configure logical disks. If you notice that multiple servers report that they’ve found new hardware when adding or configuring logical disks, there is a configuration problem with the SAN. If there is a configuration problem with clusters, you can see the following error events in the System logs:

  • Warning event ID 11 with event source %DriverName%, “The driver detected a controller error on \Device\ScsiPortN.”

  • Warning event ID 50 with event source Disk, “The system was attempting to transfer file data from buffers to \Device\HarddiskVolumeN. The write operation failed, and only some of the data may have been written to the file.”

  • Warning event ID 51 with event source FTDISK, “An error was detected on device during a paging operation.”

  • Warning event ID 9 with event source %DriverName%, “Lost Delayed Write Data: The device, \Device\ScsiPortN, did not respond within the timeout period.”

  • Warning event ID 26 with event source Application Popup, “Windows—Delayed Write Failed: Windows was unable to save all the data for the file \Device\HarddiskVolumeN\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere.”
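To check a server for the warning events listed above without scrolling through Event Viewer, you can query the System log from PowerShell. A minimal sketch (Level 3 corresponds to Warning; adjust -MaxEvents as needed):

```powershell
# Scan the System log for the SAN-related warning events (IDs 9, 11, 26, 50, 51)
Get-WinEvent -FilterHashtable @{
    LogName = 'System'
    Level   = 3
    Id      = 9, 11, 26, 50, 51
} -MaxEvents 50 -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, ProviderName, Message |
    Format-List
```

Run this from an elevated prompt on each server attached to the SAN; recurring entries from the Disk or FTDISK sources across multiple hosts are a strong hint that they are contending for the same logical disk.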

Working with SMB 3.0

Server Message Block (SMB) is the standard technology used for file sharing. SMB 3.0 was released as part of Windows 8 and Windows Server 2012. Earlier releases of Windows support different versions of SMB: Windows 7 and Windows Server 2008 R2 support SMB 2.1, while Windows Vista and Windows Server 2008 support SMB 2.0.

SMB 2.1 was an incremental improvement over SMB 2.0, which brought several important changes for file sharing, including support for BranchCache and large maximum transmission units (MTUs). SMB 3.0 has the following important improvements:

  • SMB Direct Provides support for network adapters that have Remote Direct Memory Access (RDMA) capability, allowing fast, offloaded data transfers and helping achieve high speeds and low latency while using few CPU resources. Previously, this capability was one of the key advantages of Fibre Channel block storage.

  • SMB encryption Provides secure data transfer by encrypting data automatically and without having to deploy Internet Protocol security (IPsec) or another encryption solution. SMB encryption can be enabled for an entire server (meaning for all its file shares) or for individual file shares as needed.

  • SMB Multichannel Allows servers to simultaneously use multiple connections and network interfaces, increasing fault tolerance and throughput. Configure network interface card (NIC) teaming to take advantage of this feature.

  • SMB scale-out Allows clustered file servers in an active-active configuration to aggregate bandwidth across the cluster. This provides simultaneous access to data files through all nodes in the cluster and allows administrators to load balance across cluster nodes simply by moving file server clients.

  • SMB signing Introduces AES-CCM and AES-CMAC for signing. Typically, signing with Advanced Encryption Standard (AES) is dramatically faster than signing with HMAC-SHA256 (which was used by SMB 2/SMB 2.1).

  • SMB Transparent Failover Allows administrators to perform maintenance on nodes in a clustered file server without affecting applications storing data on the server’s file shares. If a failure occurs, SMB clients transparently reconnect to another cluster node. This provides the benefits of a multicontroller storage array without having to purchase one.

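You can verify which of these features are active on a given file server with the SmbShare module cmdlets included in Windows Server 2012. A quick sketch:

```powershell
# Review the server-wide SMB feature settings
Get-SmbServerConfiguration |
    Select-Object EnableMultiChannel, EncryptData, RequireSecuritySignature

# Show the network paths SMB Multichannel is currently using
Get-SmbMultichannelConnection

# Check which server interfaces are RDMA-capable (used by SMB Direct)
Get-SmbServerNetworkInterface |
    Select-Object InterfaceIndex, RssCapable, RdmaCapable, Speed
```

If Get-SmbMultichannelConnection returns nothing while clients are connected, check that multiple interfaces are available and that EnableMultiChannel has not been turned off.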

Not only can you use the SMB Direct, SMB Multichannel, and SMB scale-out features to implement manageable, scalable active-active file shares, you also can use these features to take an existing Fibre Channel SAN and share its storage over SMB 3.0. This gives you a gateway to a SAN and extends your storage options.

Keep in mind that SMB is a client/server technology. For backward compatibility, newer clients continue to support older versions of the technology. While establishing a connection to a file share, an SMB client negotiates the SMB version to use for that connection based on the highest commonly supported SMB version. This process is referred to as dialect negotiation.
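You can see the result of dialect negotiation directly. On a client, the Dialect property of each active connection shows the SMB version that was negotiated with each server:

```powershell
# List active SMB connections and the negotiated dialect for each
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```

A Dialect value of 3.00 indicates an SMB 3.0 connection; 2.10 or 2.02 indicates that the server (or an intermediary) supports only an older version.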

During dialect negotiation, the version downgrade is automatic, such that an SMB 3.0 client connecting to an SMB 2.1 server will use SMB 2.1 for that connection. Because older versions of SMB are less secure, forcing a client to downgrade the version used is one way someone might try to gain unauthorized access.

SMB 3.0 includes a security feature that attempts to detect forced downgrade attempts. If such an attempt is detected, the connection is disconnected and Event ID 1005 is logged in the Microsoft-Windows-SmbServer/Operational log. This security feature works only when a client tries to force a downgrade from SMB 3.0 to SMB 2.0/SMB 2.1. It doesn’t work if a client attempts to downgrade to SMB 1.0. For this reason, Microsoft recommends that you disable support for SMB 1.0.
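To check a file server for logged downgrade-detection events, you can query the operational log mentioned above with Get-WinEvent:

```powershell
# Look for Event ID 1005 (detected downgrade attempt) on this server
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-SmbServer/Operational'
    Id      = 1005
} -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Message
```

The -ErrorAction SilentlyContinue parameter suppresses the error Get-WinEvent raises when no matching events exist, which is the expected result on a healthy server.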

INSIDE OUT: Checking for and disabling SMB 1.0

Before you disable SMB 1.0, you should determine whether any clients are using SMB 1.0. SMB 1.0 is used by Windows 2000, Windows XP, and Windows Server 2003. Computer Browser functionality also relies on SMB 1.0. To determine whether any SMB clients are currently using SMB 1.0, you can run the following command on each file server:

Get-SmbSession | Select ClientUserName,ClientComputerName,Dialect |
Where-Object {$_.Dialect -lt 2.00}

Keep in mind this command must be run with elevated privileges and returns information only about active connections to SMB shares. To disable SMB 1.0 support, you can run the following command at an elevated PowerShell prompt on each file server:

Set-SmbServerConfiguration -EnableSMB1Protocol $false

You can easily run this command on multiple file servers. One technique is to invoke a remote command, as shown in this example:

Invoke-Command -ComputerName fileserver12, fileserver23, fileserver45 `
-ScriptBlock {Set-SmbServerConfiguration -EnableSMB1Protocol $false}

Here, you run the code block on FileServer12, FileServer23, and FileServer45.

If you want to ensure that SMB encryption is used whenever possible, you can enable SMB encryption on either a per-server or per–file share basis. To enable encryption for an entire server and all its SMB file shares, run the following command at an elevated PowerShell prompt on the server:

Set-SmbServerConfiguration -EncryptData $true

To enable encryption for a specific file share rather than an entire server, run the following command at an elevated PowerShell prompt on the server:

Set-SmbShare -Name ShareName -EncryptData $true

Here, ShareName is the name of the share for which encryption should be used when possible, such as the following:

Set-SmbShare -Name CorpData -EncryptData $true
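After enabling encryption on individual shares, it is worth auditing which shares actually have the setting applied. A quick check:

```powershell
# Review the encryption setting on every share the server hosts
Get-SmbShare | Select-Object Name, Path, EncryptData
```

Shares showing EncryptData as False will still accept unencrypted connections unless server-wide encryption is enabled.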

You can turn on encryption when you create a share as well. To do this, run the following command at an elevated PowerShell prompt on the server:

New-SmbShare -Name ShareName -Path PathName -EncryptData $true

Here, ShareName is the name of the share for which encryption should be used when possible and PathName is the path to an existing folder to share, such as the following:

New-SmbShare -Name CorpData -Path D:\Data -EncryptData $true

When you want to enable encryption support on multiple file servers, you can invoke remote commands. Consider the following example:

$servers = Get-Content C:\Files\Server-list.txt
Invoke-Command -ComputerName $servers `
-ScriptBlock {Set-SmbServerConfiguration -EncryptData $true}

Here, C:\Files\Server-list.txt is the path to a text file containing a list of the file servers to configure. In this file, each file server should be listed on a separate line, as shown here:

FileServer12
FileServer23
FileServer45

The command will then be invoked on each of the file servers.
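To confirm that the configuration change took effect, you can query each server remotely with the same technique:

```powershell
# Verify the encryption setting on each server listed in the text file
$servers = Get-Content C:\Files\Server-list.txt
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-SmbServerConfiguration | Select-Object EncryptData
}
```

Invoke-Command tags each result with the originating computer name, so any server still reporting EncryptData as False is easy to spot.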