If you would like to read the other parts in this article series please go to:
- Configuring iSCSI Storage (Part 2)
- Configuring iSCSI Storage (Part 3)
- Configuring iSCSI Storage (Part 4)
Internet Small Computer System Interface (iSCSI) is a protocol that defines how data can be transferred between host systems (such as servers) and storage devices (such as storage area networks or SANs). iSCSI was standardized by the IETF in 2003 and combines the Small Computer System Interface (SCSI) storage protocol with the TCP/IP protocol suite to define a mechanism for block transfer of data over Ethernet networks.
iSCSI enables the transmission of SCSI block storage commands within IP packets while allowing TCP to handle flow control and ensure reliable transmission. Block storage is how SANs communicate with applications running on host systems: data is transferred block by block in raw form between the SAN and the host system. In effect, this makes the storage on the SAN appear to the host system as if it were direct-attached storage (DAS) rather than network storage. The host system can create instances of disk storage called virtual disks (also called logical unit numbers or LUNs; in the Microsoft iSCSI Software Target implementation these are simply VHD files) within the storage array, create volumes on these virtual disks, format the volumes using a file system like NTFS, and use these volumes as if they were locally installed hard drives in the host system. By contrast, a network-attached storage (NAS) device uses a file transfer protocol such as SMB/CIFS or NFS to transfer data between the host system and the NAS device.
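To illustrate the point that a connected iSCSI LUN behaves like a local disk, the following PowerShell sketch initializes and formats one exactly as you would a locally installed drive. This is a minimal example, not part of the original article: the disk number, drive letter, and volume label are assumptions you would adjust for your environment.

```powershell
# A connected iSCSI LUN surfaces as an ordinary disk object.
# Disk number 1 and drive letter E: are assumptions for this sketch.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "iSCSI-Data"
```

From this point on, applications read and write to E: with no awareness that the blocks are traveling over the network.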
iSCSI was designed as an alternative to the existing Fibre Channel (FC) architecture used by SANs, which requires dedicated hardware such as FC host bus adapters (FC HBAs) and fiber optic cabling, making FC SANs a relatively costly way of implementing networked storage. Unlike FC, iSCSI can use your existing network infrastructure (switches, routers, network adapters, and so on) instead of requiring you to purchase additional hardware. As Figure 1 illustrates, the iSCSI architecture follows a client/server model that involves the following software components:
- iSCSI initiator – Software on the host system consuming the storage, which typically would be a server running some application. This client component initiates requests to and receives responses from an iSCSI target, which represents the server side of the iSCSI architecture. The iSCSI initiator can be implemented either as a driver installed on the host system or within the hardware of an iSCSI HBA, which is basically an iSCSI-capable network adapter card. In general, hardware iSCSI initiators provide better performance because they offload iSCSI processing from the host to the adapter.
- iSCSI target – Software on the system providing the storage, which could be either an iSCSI storage array or a Windows server that has the iSCSI Target role service installed. The iSCSI target is the server component of the iSCSI architecture and listens for and responds to commands from iSCSI initiators on other systems on the network. Management software included with the target allows you to create virtual disks and volumes and make them accessible to iSCSI initiators for use over the network.
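The target-side management tasks described above can also be scripted. As a rough sketch using the Windows Server 2012 iSCSI Target cmdlets (the VHD path, target name, and initiator IQN below are placeholders, not values from this article):

```powershell
# Create a 20 GB virtual disk (a VHD file) to serve as the LUN.
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LUN1.vhd" -SizeBytes 20GB

# Create a target and restrict access to one initiator by its IQN.
New-IscsiServerTarget -TargetName "AppServerTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver01.contoso.com"

# Assign the virtual disk to the target so that initiator can use it.
Add-IscsiVirtualDiskTargetMapping -TargetName "AppServerTarget" `
    -Path "C:\iSCSIVirtualDisks\LUN1.vhd"
```

Restricting a target to specific initiator IQNs is what keeps one host from mounting (and potentially corrupting) another host's LUN.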
Figure 1: How iSCSI storage works
Microsoft support for iSCSI
Microsoft first began to support iSCSI on the Windows 2000 Server platform so that small and mid-sized businesses could benefit from the less costly iSCSI technology. Version 1.0 of the Microsoft iSCSI Software Initiator supported Windows 2000, Windows XP SP2, and Windows Server 2003 SP1. Additional versions were later released that included various performance improvements and other enhancements. Beginning with Windows Server 2008 and Windows Vista, the Microsoft iSCSI Software Initiator was built into the operating system.
In addition to iSCSI initiator software, Microsoft also developed iSCSI target software called the Microsoft iSCSI Software Target. Initially, this iSCSI target software was included in Windows Unified Data Storage Server 2003 which was released only through OEM channels. In April 2011 Microsoft made version 3.3 of Microsoft iSCSI Software Target available as a free download for Windows Server 2008 R2 and Windows Server 2008 R2 SP1.
Beginning with Windows Server 2012, Microsoft iSCSI Software Target 3.3 is a built-in operating system component. This means you can use a server running Windows Server 2012 as an iSCSI storage array for iSCSI initiators on your network to connect to. This series of articles examines how to implement iSCSI storage using the built-in Microsoft iSCSI Software Target 3.3 component of Windows Server 2012, and it also demonstrates how systems running the Microsoft iSCSI Software Initiator 3.3 can connect to and utilize such iSCSI storage.
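As a preview of what the later parts walk through, enabling the built-in target component on Windows Server 2012 can be sketched in PowerShell like this (FS-iSCSITarget-Server is the role service name for iSCSI Target Server):

```powershell
# Install the iSCSI Target Server role service on Windows Server 2012.
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Confirm the role service is now installed.
Get-WindowsFeature -Name FS-iSCSITarget-Server
```

The same role service can also be added through the Add Roles and Features wizard in Server Manager, which is the route the next article covers.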
Benefits of iSCSI
Why deploy iSCSI storage in your environment? What advantages does it have over other forms of storage? I’ve already described how iSCSI storage arrays are usually cheaper to deploy than FC-based storage arrays, and they can provide roughly the same level of performance and reliability, except perhaps at very high transaction volumes. iSCSI also has other benefits over FC, such as simpler implementation (it uses familiar TCP/IP network protocols and infrastructure devices), which means deploying it requires less IT expertise, making it an attractive storage solution for smaller enterprises.
iSCSI also supports Multipath I/O (MPIO), a Microsoft framework that allows storage providers to develop multipath solutions for optimizing the performance and reliability of connections to storage arrays. MPIO is a protocol-independent technology that can be implemented with iSCSI, FC, and Serial Attached SCSI (SAS) interfaces. By implementing MPIO together with Microsoft iSCSI Software Target, administrators can provide increased reliability and load balancing by allowing iSCSI initiators to utilize multiple redundant network paths to iSCSI storage devices. Support for MPIO was first included as an optional feature in Windows Server 2008 and is also available in Windows Server 2008 R2 and Windows Server 2012.
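Pairing MPIO with the iSCSI initiator on Windows Server 2012 can be sketched as follows; this is an illustrative outline only, and the target node address shown is a placeholder:

```powershell
# Install the Multipath I/O feature.
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM automatically claim iSCSI-attached disks for MPIO.
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Connect to a target with multipath enabled (placeholder IQN).
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storage01-target" `
    -IsMultipathEnabled $true
```

With multiple sessions established over separate network paths, MPIO can fail over between paths or balance load across them according to the configured policy.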
iSCSI usage scenarios
How can you deploy iSCSI storage in your environment? There are a number of different scenarios where it can be useful. One key area is using it to provide shared storage for deploying failover clusters of Hyper-V hosts. For this scenario, your Hyper-V hosts will require at least two network adapters, with one dedicated to iSCSI shared storage traffic and the other to your production network. You might also need a third network adapter in each host for communications with your management network, for example when managing hosts using System Center Virtual Machine Manager (VMM).
Another possible usage scenario for iSCSI storage is consolidating storage used by multiple application servers. For example, if you have one server running Microsoft SQL Server and another running Microsoft Exchange Server, you could migrate the direct-attached storage used by each to a single iSCSI SAN or to a Windows server that has Microsoft iSCSI Software Target installed. Having a single dedicated storage array or device like this helps reduce the wasteful problem of overprovisioning direct-attached storage on your servers. It also makes it easier to manage the backup and recovery process.
A third usage is allowing diskless computers to boot remotely from a single operating system image on an iSCSI storage array. To implement such a scenario, these computers will need an iSCSI-boot-capable network adapter, a number of which are currently available on the market. You can also use this approach to boot virtual machines running on Hyper-V hosts in your datacenter.
iSCSI storage can provide real benefits in a number of different scenarios, and beginning with Windows Server 2012 all the components needed to implement iSCSI storage are provided in-box. The next articles in this series will show how to deploy a Windows Server 2012-based iSCSI storage solution using both the Server Manager user interface and Windows PowerShell.