Chapter 4 Creating and Managing Storage Devices


Fibre channel? iSCSI? NAS? Should you use all three? Is one better than the others? Should you create a few large LUNs? Should you create smaller LUNs? Where do you put your ISO files? Answers to all these common questions lie ahead as we dive into the vast array of storage options, architectures, and configuration settings for ESX Server. In this chapter you will learn to:

♦ Differentiate among the various storage options available to VI3

♦ Design a storage area network for VI3

♦ Configure and manage Fibre Channel and iSCSI storage networks

♦ Configure and manage NAS storage

♦ Create and manage VMFS volumes

Understanding VI3 Storage Options

VMware Infrastructure 3 (VI3) offers several options for deploying a storage configuration as the back-end to an ESX Server implementation. These options include storage for virtual machines, ISO images, or templates for server provisioning. An ESX Server can have one or more storage options available to it, including:

♦ Fibre Channel storage

♦ iSCSI software-initiated storage

♦ iSCSI hardware-initiated storage

♦ Network Attached Storage (NAS)

♦ Local storage

ESX Server can take advantage of multiple storage architectures within the same host or even for the same virtual machine. ESX Server uses a proprietary file system called VMware File System (VMFS) that provides significant benefits for storing and managing virtual machine files. The virtual machines hosted by an ESX Server can have the associated virtual machine disk files (.vmdk) or mounted CD/DVD-ROM images (.iso) stored in different locations on different storage devices. Figure 4.1 shows an ESX Server host with multiple storage architectures. A Windows virtual machine has the virtual machine disk and CD-ROM image file stored on two different Fibre Channel storage area network (SAN) logical unit numbers (LUNs) on the same storage device. At the same time, a Linux virtual machine stores its virtual disk files on an iSCSI SAN LUN and its CD-ROM images on an NAS device.

Figure 4.1 An ESX Server host can be configured with multiple storage options for hosting files used by virtual machines, ISO images, or templates.


The Role of Local Storage

During installation, an ESX Server host is configured with a local VMFS storage location, named Storage1 by default. The value of this local storage, however, is severely diminished because it cannot support VMotion, DRS, or HA; it should therefore be used only for non-mission-critical virtual machines, or for templates and ISO images that are not required by other ESX hosts.

For this reason, when you're sizing a new ESX host, there is little reason to dedicate time and money to large storage pools connected to the internal controllers of the host. Investing in large local RAID 5, RAID 1+0, or RAID 0+1 volumes for ESX hosts is unnecessary. To gain the full benefits of virtualization, the virtual machine disk files must reside on a shared storage device that is accessible by multiple ESX hosts. Direct fiscal and administrative attention to memory and CPU sizing, or even network adapters, rather than to locally attached hard drives.

The purpose of this chapter is to answer all your questions about deploying, configuring, and managing a back-end storage solution for your virtualized environment. Ultimately, each implementation will differ, and therefore the various storage architectures available might be the proper solution in one scenario but not in another. As you'll see, each of the storage solutions available to VI3 provides its own set of advantages and disadvantages.

Choosing the right storage technology for your infrastructure begins with a strong understanding of each technology and an intimate knowledge of the systems that will be virtualized as part of the VI3 deployment. Table 4.1 outlines the features of the three shared storage technologies.


Table 4.1: Features of Shared Storage Technologies

Feature                      Fibre Channel    iSCSI    NAS/NFS
Ability to format VMFS       Yes              Yes      No
Ability to hold VM files     Yes              Yes      Yes
Ability to boot ESX          Yes              Yes      No
VMotion, DRS, HA             Yes              Yes      Yes
Microsoft clustering         Yes              No       No
VCB                          Yes              Yes      No
Templates, ISOs              Yes              Yes      Yes
Raw device mapping           Yes              Yes      No
Microsoft Cluster Services

As of this writing, VMware had not yet announced support for building Microsoft server clusters with virtual machines running on ESX Server 3.5. All versions prior to 3.5 offered this support, and once VMware has performed its due diligence in testing server clusters on the latest release, it is expected that support will continue.

Once you have mastered the differences among the various architectures and identified the features of each that are most relevant to your data and virtual machines, you can feel confident in your decision. Equipped with the right information, you will be able to identify a solid storage platform on which your virtual infrastructure will be scalable, efficient, and secure.

The storage adapters in ESX Server will be identified automatically during the boot process and are available for configuration through the VI client or by using a set of command-line tools. The storage adapters in your server must be compatible. Remember to check the VI3 I/O Compatibility Guide before adding any new hardware to your server. You can find this guide at VMware's website (http://www.vmware.com/pdf/vi3_io_guide.pdf).

In the following sections, we'll cover both sets of tools, command line and GUI, and explore how to use them to create and manage the various storage types. Figure 4.2 and Figure 4.3 show the Virtual Infrastructure (VI) Client's configuration options for storage adapters and storage, respectively.

Figure 4.2 Storage adapters in ESX Server are found automatically because only adapters certified to work with VMware products should be used. Consult the appropriate guides before adding new hardware.


Figure 4.3 The VI Client provides an easy-to-use interface for adding new datastores located on fibre channel, iSCSI, or NAS storage devices.

Understanding a Storage Area Network

A storage area network (SAN) is a communication network designed to handle the block-level transfer of data between a storage device and the requesting servers or hosts. The block-level transfer of data makes for highly efficient and highly specialized network communication that enables the reliable, low-latency transfer of large amounts of data with minimal server overhead.

A SAN consists of several components that direct and manage the flow of data across the dedicated network. These components reside in one of three segments on the SAN:

♦ The hosts accessing the storage

♦ The network across which traffic runs

♦ The storage

The concepts of a storage area network have long revolved around using the Fibre Channel protocol for communication among nodes connected to the network. Recently, however, the rapid adoption of iSCSI storage area networks has introduced a strong competitor to the fibre channel incumbent. Whereas fibre channel storage networks use the Fibre Channel Protocol (FCP) for communication among nodes, iSCSI provides a similar block-level data transfer over standard IP networks.

As your virtualization career moves forward, you will, at some point, most certainly be in a position where you must understand and differentiate between the two most popular SAN architectures today: Fibre Channel and iSCSI. Both architectures offer significant benefits in the areas of reliability, redundancy, scalability, performance, and security. Incorporating a shared storage back-end helps eliminate many of the network failure issues that administrators find themselves constantly fixing. ESX Server with a back-end SAN offers:

♦ Automatic failover and multipathing at the host bus adapter (HBA) and storage port

♦ A high-performance file system in VMFS-3

♦ VMotion, Distributed Resource Scheduler (DRS), and High Availability (HA)

♦ Support for Microsoft Cluster Services (MSCS)

♦ VMware Consolidated Backup (VCB)

SAN devices offer additional benefits in the areas of storage replication and mirroring. Using third-party software, you can replicate or mirror the data on your LUNs to other LUNs on the same or even different storage devices. This feature offers administrators great possibilities in the areas of disaster recovery and business continuity.

Creating and Managing LUNs

After you finish your debate on fibre channel versus iSCSI and you purchase one or the other, you will then have to spend some time devising the proper procedure for managing and implementing LUNs. We are discussing LUN creation and management separately from the fibre channel and iSCSI sections because of its independence from the actual storage architecture. Details on the configuration of fibre channel and iSCSI will follow.

A logical unit number (LUN) is a logical configuration of disk space carved from an underlying set of physical disks. The physical disks on which LUNs are configured are most often arranged as a Redundant Array of Independent Disks (RAID) to support performance and/or redundancy for the data to be stored on the LUN. This section will look at RAID architectures, LUN addressing, and the age-old question of many little LUNs versus fewer big LUNs.

No matter your storage device, fibre channel or iSCSI, you will need to create LUNs, or at least work closely with someone who will create LUNs for you. Virtual machine performance can, in some cases, come down to a matter of having a solid LUN strategy in place for the activity level of that VM. Choosing the right RAID level for a LUN is therefore an integral part of your VI3 implementation. The most common types of RAID configurations are:

RAID 0 Disks configured in RAID 0 do not offer any type of redundancy or protection against drive failure. RAID 0 does, however, provide the fastest performance times because data is written simultaneously to all drives involved. A RAID 0 volume, also commonly referred to as a stripe, can have two or more disks as part of the array. Figure 4.4 outlines the structure of a RAID 0 configuration.

Figure 4.4 A RAID 0 disk configuration provides high-speed performance for data stored across a series of disks.


RAID 1 A RAID 1 configuration mirrors two identically sized allocations of space from two drives, so that either drive can fail and the data is still maintained. A RAID 1 volume, also commonly referred to as a mirrored array, loses 50 percent of the available drive space. For example, two 500GB drives configured as a RAID 1 array provide only 500GB of usable storage. Figure 4.5 outlines the structure of a RAID 1 configuration.

Figure 4.5 A RAID 1 disk array, or mirrored volume, provides redundancy in the event of a single drive failure. A RAID 1 array is the most expensive disk type because it incurs a 50 percent loss in the amount of storage.


RAID 5 A RAID 5 array writes data and parity across all drives involved in the array. RAID 5 arrays provide redundancy in the event of a single drive failure by distributing parity in equal increments across all drives. Parity is a mathematical calculation that allows the remaining N-1 drives to reconstruct the data held on a failed drive. RAID 5 is the most space-efficient of the redundant array types, losing only one drive's worth of capacity. For example, a RAID 5 array made up of four 250GB hard drives will have approximately 750GB of storage space available. Figure 4.6 outlines the structure of a RAID 5 configuration.

Figure 4.6 A RAID 5 array is commonly used because of its data protection and limited loss of space. Both data and parity are written equally across all drives in the array.


RAID 1+0/RAID 0+1 For a more advanced disk array configuration, a RAID 1+0 or RAID 0+1 might be used. These structures combine RAID 0 and RAID 1 technologies. RAID 1+0 stripes data across a set of mirrored pairs (a stripe of mirrors), while RAID 0+1 mirrors two striped sets (a mirror of stripes).

One of the most common challenges facing VI3 administrators is the process of sizing LUNs. Administrators can quite easily determine the RAID levels of a LUN (as explained in the preceding paragraphs), but sizing the LUN is an entirely different challenge. To determine the size of a LUN, administrators must have a game plan for testing virtual machine performance or have a solid understanding of the functions of the virtual machine(s) to be located on a LUN.

How Much Space Does a Virtual Machine Consume?

There is no definitive answer to this question, simply because administrators can choose to build virtual machines with virtual hard drives of varying sizes. There is, however, a generic but effective way of determining size requirements for a virtual machine. For each virtual machine, there is a set of associated files that have a direct influence on storage requirements, including the virtual machine hard disk, the suspended state, and the virtual machine swap file. Use the following formula to calculate the storage requirements for a virtual machine:

virtual disk (VMDK) size + suspended state file size + swap file size (memory limit - memory reservation) = minimum storage requirement for a virtual machine

For example, if a virtual machine consisted of a 25GB Virtual Machine Disk Format (VMDK) file, a memory limit of 4GB, and a memory reservation of 2GB, the minimum storage requirement could be calculated as follows:

25GB (virtual disk) + 25GB (suspended state) + 2GB (4GB limit - 2GB reservation) = 52GB

In this case, a 55GB LUN would suffice for the virtual machine. Keep in mind, however, that if you later increase the RAM limit to 8GB or reduce the RAM reservation to 0, the swap file grows and the original minimum storage calculation is no longer accurate.
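If you prefer to script this arithmetic, the following Service Console sketch applies the same formula. It is only an illustration; the variable names and sample values are hypothetical placeholders rather than values read from any real virtual machine.

# Minimum storage estimate: virtual disk + suspended state + (memory limit - memory reservation)
VMDK_GB=25
SUSPEND_GB=25
MEM_LIMIT_GB=4
MEM_RESERVATION_GB=2
echo "Minimum storage: $((VMDK_GB + SUSPEND_GB + MEM_LIMIT_GB - MEM_RESERVATION_GB)) GB"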

Storage space for snapshots should also be considered, even if only for the temporary duration of the snapshot process (see Chapter 6). Luckily for SAN and VI3 administrators, the placement of a virtual machine's files is not a permanent decision. Moving a virtual machine to different storage locations is a simple but offline process.

When it comes to LUN design and management, VMware defines two common philosophies:

♦ The adaptive scheme

♦ The predictive scheme

Each scheme offers its own set of advantages and disadvantages to VI3 administrators. Undoubtedly, you will find that neither option is the appropriate solution in every situation. It is safe to say that most administrators will find themselves incorporating a blend of both philosophies as a means of compromise and earning the best of both worlds.

Adaptive Scheme

We'll start by introducing the adaptive scheme because of its simplicity. The adaptive scheme involves creating a small number of larger LUNs for the storage of virtual machines. The adaptive scheme results in fewer requirements on the part of the SAN administrator, less effort when performing LUN masking, fewer datastores to manage, and better opportunities for virtual disk resizing.

The downside to the adaptive scheme is the increased contention for LUN access across all of the virtual machines in the datastore. For example, if a 500GB LUN holds the virtual machine disk files for 10 virtual machines, then there will be contention among all of the virtual machines for access to the LUN. This might not be an issue, as the virtual machines' disk files residing on the LUN may be for virtual machines that are not disk intensive — that is, they do not rely heavily on hard disk input/output (I/O). For the adaptive scheme to be a plausible and manageable solution, VI3 administrators must be proactive in monitoring the virtual machines stored together on a LUN. When the performance of the virtual machines begins to reach unacceptable levels, administrators must look to creating more LUNs to be made available for new or existing virtual machines. Figure 4.7 shows an implementation of the adaptive scheme for storing virtual machines.

Predictive Scheme

The predictive scheme overcomes the limitations of the adaptive scheme but introduces administrative challenges of its own. The predictive scheme involves the additional administrative effort of customizing LUNs to be specific for individual virtual machines. Take the following example: When administrators deploy a new server to play host to a database application, it is a common practice to enhance database performance by implementing multiple disks with characteristics specific to the data stored on the disk. On a database server, this often means a RAID 1 (mirror) volume for the operating system, a RAID 5 volume for the database files, and another RAID 1 volume for the database logs. Using the predictive scheme to architect a LUN solution for this database server would result in three SAN LUNs built on RAID arrays as needed by the database server. The sizes of the LUNs would depend on the estimated sizes of the operating system, database, and log files. Figure 4.8 shows this type of predictive approach to LUN design. Table 4.2 outlines all of the pros and cons for each of the LUN design strategies.

Figure 4.7 The adaptive scheme involves creating a small number of LUNs that are larger in size and play host to virtual machine disk files for multiple virtual machines.


Figure 4.8 The predictive scheme, though administratively more involved, offers better performance measures for critical virtual machines.


Table 4.2: Adaptive and Predictive Scheme Comparisons

Adaptive scheme
Pros: Less need for SAN administrator involvement; easy resizing of virtual disks; easy snapshot management; less volume management.
Cons: Possible undersizing of a LUN, resulting in greater administrative effort to create new LUNs; possible oversizing of a LUN, resulting in wasted storage space.

Predictive scheme
Pros: Less contention on each VMFS; more flexible share allocation and management; less wasted space on SAN storage; RAID specificity for VMs; greater multipathing capability; support for Microsoft clusters; greater backup policy flexibility.
Cons: Greater administrative overhead for LUN masking; greater administrative effort involved in VMotion, DRS, and HA planning.

As we noted earlier in this section, the most appropriate solution will most likely involve a combination of the two design schemes. You may find a handful of virtual machines where performance is unaffected by storing all the virtual machine disk files on the same LUN, and at the same time you will find those virtual machines that require a strict nonsharing approach for the virtual machine disk files. But in between the two extremes, you will find the virtual machines that require specific RAID characteristics but, at the same time, can share LUN access with multiple virtual machines. Figure 4.9 shows a LUN design strategy that incorporates both the adaptive and predictive schemes as well as a hybrid approach.

Figure 4.9 Neither the adaptive nor the predictive scheme will be the most appropriate solution in all cases, which means most environments will be built on a hybrid solution that involves both philosophies.


With all of the effort that will be put into designing the appropriate LUN structures, you will undoubtedly run into situations in which the design will require change. Luckily for the VI3 administrative community, the product is very flexible in the way virtual machine disk files are managed. In just a few short steps, a virtual machine's disk files can be moved from one LUN to another. The simplified nature of relocating disk files means that if you begin with one approach and discover it does not fit your environment, you can easily transition to a more suitable LUN structure. In Chapter 6, we'll detail the steps required to move a virtual machine from one datastore to another.

ESX Network Storage Architectures: Fibre Channel, iSCSI, and NAS

VMware Infrastructure 3 offers three shared storage options for locating virtual disk files, ISO files, and templates. Each storage technology presents its own benefits and challenges and requires careful attention. Despite their differences, there is often room for two or even all three of the technologies within a single virtualized environment.

Fibre Channel Storage

Despite its high cost, many companies rely on fibre channel storage as the backbone for critical data storage and management. The speed and security of the dedicated fibre channel storage network are attractive assets to companies looking for reliable and efficient storage solutions.

Understanding Fibre Channel Storage Networks

Fibre channel SANs can run at either 2GFC or 4GFC speeds and can be constructed in three different topologies: point-to-point, arbitrated loop, or switched fabric. The point-to-point fibre channel architecture involves a direct connection between the server and the fibre channel storage device. The arbitrated loop, as the name suggests, involves a loop created between the storage device and the connected servers. In either of these cases, a fibre channel switch is not required. Each of these topologies places limitations on the scalability of the fibre channel architecture by limiting the number of nodes that can connect to the storage device. The switched fabric architecture is the most common and offers the most functionality, so we will focus on it for the duration of this chapter and throughout the book. The fibre channel switched fabric includes a fibre channel switch that manages the flow of the SCSI communication over fibre channel traffic between servers and the storage device. Figure 4.10 displays the point-to-point and arbitrated loop architectures.

Figure 4.10 Fibre channel SANs can be constructed as point-to-point or arbitrated loop architectures.


The switched fabric architecture is more common because of its scalability and increased reliability. A fibre channel SAN is made up of several different components, including:

Logical unit numbers (LUNs) A logical configuration of disk space created from one or more underlying physical disks. LUNs are most commonly created on multiple disks in a RAID configuration appropriate for the disk usage. LUN design considerations and methodologies will be covered later in this chapter.

Storage device The storage device houses the disk subsystem from which the storage pools or LUNs are created.

Storage Processor (SP) One or more storage processors (SPs) provide connectivity between the storage device and the host bus adapters in the hosts. SPs can be connected directly or through a fibre channel switch.

Fibre channel switch A hardware device that manages the storage traffic between servers and the storage device. Although devices can be directly connected over fibre channel networks, it is more common to use a fibre channel switched network. The term fibre channel fabric refers to the network created by using fibre-optic cables to connect the fibre channel switches to the HBAs and SPs on the hosts and storage devices, respectively.

Host bus adapters (HBAs) A hardware device that resides inside a server that provides connectivity to the fibre channel network through a fibre-optic cable.

These SAN components make up the infrastructure that processes storage requests and manages the flow of traffic among the nodes on the network. Figure 4.11 shows a commonly configured fibre channel storage area network with two ESX Servers, redundant fibre channel switches, and a storage device.

Figure 4.11 Most storage area networks consist of hosts, switches, and a storage device interconnected to provide servers with reliable and redundant access to storage pools residing in the storage device.


A SAN can be an expensive investment, predominantly because of the redundant hardware built into each of the segments of the SAN architecture. As shown in Figure 4.11, the hosts were outfitted with multiple HBAs connected to the fibre channel fabric, which consisted of multiple fibre channel switches connected to multiple storage processors in the storage device. The trade-off for the higher cost is less downtime in the event of a single piece of hardware failing in the SAN structure.

Now that we have covered the hardware components of the storage area network, it is important that, before moving into ESX specifics, we discuss how the different SAN components communicate with one another.

Each node in a SAN is identified by a globally unique 64-bit hexadecimal World Wide Name (WWN) or World Wide Port Name (WWPN) assigned to it. A WWN will look something like this:

22:00:00:60:01:B9:A7:D2

The WWN for a fibre channel node is discovered by the switch and is then assigned a port address upon login to the fabric. The WWN assigned to a fibre channel node is the equivalent of the globally unique Media Access Control (MAC) address assigned to network adapters on Ethernet networks.

Once the nodes are logged in and have been provided addresses they are free to begin communication across the fibre channel network as determined by the zoning configuration on the fibre channel switches. The process of zoning involves the configuration of a set of access control parameters that determine which nodes in the SAN architecture can communicate with other nodes on the network. Zoning establishes a definition of communication between storage processors in the storage device and HBAs installed on the ESX Server hosts. Figure 4.12 shows a fibre channel zoning configuration.

Figure 4.12 Zoning a fibre channel network at the switch level provides a security boundary that ensures that host devices do not see specific storage devices.


Zoning is a highly effective means of preventing non-ESX hosts from discovering storage volumes that are formatted as VMFS. This process effectively creates a security boundary between fibre channel nodes that simplifies management in large SAN environments. The nodes within a zone, or segment, of the network can communicate with one another but not with other nodes outside their zone. The zoning configuration on the fibre channel switches dictates the number of targets available to an ESX Server host. By controlling and isolating the paths within the switch fabric, the switch zoning can establish strong boundaries of fibre channel communication.

In most VI3 deployments, only one zone will be created since the VMotion, DRS, and HA features require all nodes to have access to the same storage. That is not to say that larger, enterprise VI3 deployments cannot realize a security and management advantage by configuring multiple zones to establish a segregation of departments, projects, or roles among the nodes. For example, a large enterprise with a storage area network that supports multiple corporate departments (i.e., marketing, sales, finance, and research) might have ESX Server hosts and LUNs for each respective department. In an effort to prevent any kind of cross-departmental LUN access, the switches can establish a zone for each department ensuring only the appropriate LUN access. Proper fibre channel switch zoning is a critical tool for separating a test or development environment from a production environment.
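To make the concept more concrete, the following sketch shows what a departmental zone might look like when defined from a switch command line. The syntax is Brocade-style Fabric OS, and the alias names, zone name, configuration name, and WWPNs are hypothetical placeholders; the exact commands differ by switch vendor and firmware, so treat this strictly as an illustration of the workflow.

alicreate "esx_finance_hba1", "21:00:00:e0:8b:05:05:04"
alicreate "array_spa_port0", "50:06:01:60:41:e0:14:42"
zonecreate "finance_zone", "esx_finance_hba1; array_spa_port0"
cfgcreate "vi3_fabric_cfg", "finance_zone"
cfgenable "vi3_fabric_cfg"

Once the configuration is enabled, only the HBA and storage port members of finance_zone can see one another; every other node on the fabric remains outside that communication boundary.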

In addition to configuring zoning at the fibre channel switches, LUNs must be presented, or not presented, to an ESX Server. This process of LUN masking, or hiding LUNs from a fibre channel node, is another means of ensuring that a server does not have access to a LUN. As the name implies, this is done at the LUN level inside the storage device and not on the fibre channel switch. More specifically, the storage processor (SP) on the storage device allows for LUNs to be made visible or invisible to the fibre channel nodes that are available based on the zoning configuration. The hosts with LUNs that have been masked are not allowed to store or retrieve data from those LUNs.

Zoning provides security at a higher, more global level, whereas LUN masking is a more granular approach to LUN security and access control. The zoning and LUN masking strategies of your fibre channel network will have a significant impact on the functionality of your virtual infrastructure. You will learn in Chapter 9 that LUN access is critical to the advanced VMotion, DRS, and HA features of VirtualCenter.

Figure 4.13 shows a fibre channel switch fabric with multiple storage devices and LUNs configured on each storage device. Table 4.3 describes a LUN access matrix that could help a storage administrator and VI3 administrator work collaboratively on planning the zoning and LUN masking strategies.

Figure 4.13 A fibre channel network consists of multiple hosts, multiple storage devices, and LUNs across each storage device. Every host does not always need access to every storage device or every LUN, so zoning and masking are a critical part of SAN design and configuration.


Fibre channel storage networks are synonymous with “high performance” storage systems. Arguably, this is in large part due to the efficient manner in which communication is managed by the fibre channel switches. Fibre channel switches work intelligently to reduce, if not eliminate, oversubscription problems, in which traffic from multiple links is funneled into a single link and data is dropped as a result. With less loss of data on fibre channel networks, there is less need to retransmit data and, in turn, processing power remains available to service new storage requests instead of retransmitting old ones.

Configuring ESX for Fibre Channel Storage

Since fibre channel storage is currently the most efficient SAN technology, it is a common back-end to a VI3 environment. ESX has native support for connecting to fibre channel networks through the host bus adapter. However, ESX Server has limited support for the available storage devices and host bus adapters. Before investing in a SAN, make sure it is compatible and supported by VMware. Even if the SAN is capable of "working" with ESX, it does not mean VMware is going to provide support. VMware is very stringent with the hardware support for VI3; therefore, you should always implement hardware that has been tested by VMware.


Table 4.3: LUN Access Matrix

Host SD1 SD2 LUN1 LUN2 LUN3
ESX1 Yes No Yes Yes No
ESX2 Yes No No Yes Yes
ESX3 Yes No Yes Yes Yes
ESX4 No Yes Yes Yes Yes
ESX5 No Yes Yes No Yes
ESX6 No Yes Yes No Yes

Note: The processes of zoning and masking can be facilitated by generating a matrix that defines which hosts should have access to which storage devices and which LUNs.


Always check the compatibility guides before adding new servers, new hardware, or new storage devices to your virtual infrastructure.

Since VMware is the only company (at this time) that provides drivers for hardware supported by ESX, you must be cautious when adding new hardware like host bus adapters. The bright side, however, is that so long as you opt for a VMware-supported HBA, you can be certain it will work without incurring any of the driver conflicts or misconfiguration common in other operating systems.

VMware Fibre Channel SAN Compatibility

You can find a complete list of compatible SAN devices online on VMware's website at http://www.vmware.com/pdf/vi3_san_guide.pdf. Be sure to check the guides regularly as they are consistently updated. When testing a fibre channel SAN against ESX, VMware identifies compatibility in all of the following areas:

♦ Basic connectivity to the device.

♦ Multipathing capability for allowing access to storage via different paths.

♦ Host bus adapter (HBA) failover support for eliminating single point of failure at the HBA.

♦ Storage port failover capability for eliminating single point of failure on the storage device.

♦ Support for Microsoft Clustering Services (MSCS) for building server clusters when the guest operating system is Windows 2000 Service Pack 4 or Windows 2003.

♦ Boot-from-SAN capability for booting an ESX server from a SAN LUN.

♦ Point-to-point connectivity support for nonswitch-based fibre channel network configurations.

Naturally, since VMware is owned by EMC Corporation you can find a great deal of compatibility between ESX Server and the EMC line of fibre channel storage products (also sold by Dell). Each of the following vendors provides storage products that have been tested by VMware:

♦ 3PAR: http://www.3par.com

♦ Bull: http://www.bull.com

♦ Compellent: http://www.compellent.com

♦ Dell: http://www.dell.com

♦ EMC: http://www.emc.com

♦ Fujitsu/Fujitsu Siemens: http://www.fujitsu.com and http://www.fujitsu-siemens.com

♦ HP: http://www.hp.com

♦ Hitachi/Hitachi Data Systems (HDS): http://www.hitachi.com and http://www.hds.com

♦ IBM: http://www.ibm.com

♦ NEC: http://www.nec.com

♦ Network Appliance (NetApp): http://www.netapp.com

♦ Nihon Unisys: http://www.unisys.com

♦ Pillar Data: http://www.pillardata.com

♦ Sun Microsystems: http://www.sun.com

♦ Xiotech: http://www.xiotech.com

Although the nuances, software, and practices for managing storage devices across different vendors will most certainly differ, the concepts of SAN storage covered in this book transcend the vendor boundaries and can be used across various platforms.

Currently, ESX Server supports many different QLogic 236x and 246x fibre channel HBAs for connecting to fibre channel storage devices. However, because the list can change over time, you should always check the compatibility guides before purchasing and installing a new HBA.

It certainly does not make sense to make a significant financial investment in a fibre channel storage device and still have a single point of failure at each server in the infrastructure. We recommend that you build redundancy into the infrastructure at each point of potential failure. As shown in the diagrams earlier in the chapter, each ESX Server host should be equipped with a minimum of two fibre channel HBAs to provide redundant path capabilities in the event of HBA failure. ESX Server 3 supports a maximum of 16 HBAs per system and a maximum of 15 targets per HBA. The 16-HBA maximum can be achieved with four quad-port HBAs or eight dual-port HBAs provided that the server casing has the expansion capability.

Adding a new HBA requires that the physical server be turned off, since ESX Server does not support adding hardware while the server is running, otherwise known as a "hot add" of hardware. Figure 4.14 displays the redundant HBA and storage processor (SP) configuration of a VI3 environment.

Figure 4.14 An ESX Server configured through VirtualCenter with two QLogic 236x fibre channel HBAs and multiple SCSI targets or storage processors (SPs) in the storage device.


Once fibre channel storage is presented to a server and the server recognizes the pools of storage, then the administrator can create datastores. A datastore is a storage pool on an ESX Server host that can be a local disk, fibre channel LUN, iSCSI LUN, or NFS share. A datastore provides a location for placing virtual machine files, ISO images, and templates.

For the VI3 administrator, the configuration of datastores on fibre channel storage is straightforward. It is the LUN masking, LUN design, and LUN management that incur significant administrative overhead (or more to the point, brainpower!). For VI3 administrators who are not responsible for SAN management and configuration, it is essential to work closely with the storage personnel to ensure performance and security of the storage pools used by the ESX Server hosts.

Later in this chapter we'll discuss LUN design in greater detail, but for now let's assume that LUNs have been created and masking has been performed. With those assumptions in place, the work required by the VI3 administrator is quick and easy. Figure 4.15 identifies five LUNs that are available to silo105.vdc.local through its redundant connection to the storage device. The ESX Server silo105.vdc.local has two HBAs connecting to a storage device, with two SPs creating redundant paths to the available LUNs. Although there are six LUNs in the targets list, the LUN with ID 0 is disregarded since it is not available to the ESX Server for storage.

A portion of the ESX Server boot process includes LUN discovery. An ESX Server, at boot-up and by default, will attempt to enumerate LUNs with LUN IDs between 1 and 255.

Even though silo105.vdc.local is presented with five LUNs, it does not mean that all five LUNs are currently being used to store data for the server. Figure 4.16 shows that silo105.vdc.local has three datastores, only two of which are LUNs presented by the fibre channel storage device. With two fibre channel SAN LUNs already in use, silo105.vdc.local has three more LUNs available when needed. Later in this chapter you'll learn how to use the LUNs as VMFS volumes.

Figure 4.15 An ESX Server discovers its available LUNs and displays them under each available SCSI target. Here, five LUNs are available to the ESX Server for storage.


Figure 4.16 An ESX Server host with a local datastore named storage1 (2) and two datastores ISOTemps (1) and LUN10 on a fibre channel storage device.


When an ESX Server host is powered on, it will process the first 256 LUNs (LUN 0 through LUN 255) on the storage devices to which it is given access. ESX will perform this enumeration at every boot, even if many of the LUNs have been masked out from the storage processor side. You can configure individual ESX Server hosts not to scan all the way up to LUN 255 by editing the Disk.MaxLUN configuration setting. Figure 4.17 shows the default configuration of the Disk.MaxLUN value that results in accessibility to the first 256 LUNs.

LUN Masking at the ESX Server

Despite the potential benefit of performing LUN masking at the ESX Server (to speed up the boot process), the work necessary to consistently manage LUNs on each ESX Server may offset that benefit. I suggest that you perform LUN masking at the SAN.

To change the Disk.MaxLUN setting, perform the following steps:

1. Use the VI client to connect to a VirtualCenter Server or an individual ESX Server host.

2. Select the hostname in the inventory tree and select the Configuration tab in the details pane on the right.

3. In the Software section, click the Advanced Settings link.

4. In the Advanced Settings window, select the Disk option from the selection tree.

5. In the Disk.MaxLUN text box, enter the desired integer value for the number of LUNs to scan.
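If you prefer to query or change this value from the Service Console instead of the VI Client, the esxcfg-advcfg utility exposes the same advanced settings. The following is a sketch that assumes the option path mirrors the Disk.MaxLUN name shown in the VI Client; verify it with the -g query on your own host before changing anything.

# Display the current Disk.MaxLUN value
esxcfg-advcfg -g /Disk/MaxLUN
# Limit scanning to LUN IDs below 64 (only if you are certain higher IDs will never be used)
esxcfg-advcfg -s 64 /Disk/MaxLUN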

Figure 4.17 Altering the Disk.MaxLUN value can result in a faster boot or rescan process for an ESX Server host. However, it may also require attention when new LUNs must be made available that exceed the custom configuration.


You should alter the Disk.MaxLUN parameter only when you are certain that LUN IDs will never exceed the custom value. Otherwise, though a performance benefit might result, you will have to revisit the setting each time available LUN IDs must exceed the custom value.

Although LUN masking is most commonly performed at the storage processor, as it should be, it is also possible to configure LUN masking on each individual ESX Server host to speed up the boot process.

Let's take an example where an administrator configures LUN masking at the storage processor. Once the masking at the storage processor is complete, the LUNs that have been presented to the hosts are the ones numbered 117 through 127. However, since the default configuration for ESX Server is to enumerate the first 256 LUNs, the host will move through each potential LUN even if the storage processor is preventing the LUN from being seen. In an effort to speed up the boot process, an ESX Server administrator can perform LUN masking at the server. In this example, if the administrator were to mask LUN 1 through LUN 116 and LUN 128 through LUN 256, the server would enumerate only the LUNs it is allowed to see and, as a result, would boot more quickly. To enable LUN masking on an ESX Server, you must edit the Disk.MaskLUN option (which you access by clicking the Advanced Settings link on the Configuration tab). The Disk.MaskLUN text box requires this format:

<vmhba adapter>:<target>:<comma-separated LUN ranges>;

For example, to mask the LUNs from the previous example (1 through 116 and 128 through 256) that are accessible through the first HBA and two different storage processors, you'd enter the following in the Disk.MaskLUN text box:

vmhba1:0:1-116,128-256;vmhba1:1:1-116,128-256;

The downside to configuring LUN masking on the ESX Server is the administrative overhead involved when a new LUN is presented to the server or servers. To continue with the previous example, if the VI3 administrator requests five new LUNs and the SAN administrator provisions LUNs with LUN IDs of 136 through 140, the VI3 administrator will have to edit all of the local masking configurations on each ESX Server host to read as follows:

vmhba1:0:1-116,128-135,141-256;vmhba1:1:1-116,128-135,141-256;

In theory, LUN masking on each ESX Server host sounds like it could be a benefit. But in practice, masking LUNs at the ESX Server in an attempt to speed up the boot process is not worth the effort. An ESX Server host should not require frequent reboots, and therefore the effect of masking LUNs on each server would seldom be felt. Additional administrative effort would be needed since each host would have to be revisited every time new LUNs are presented to the server.

ESX LUN Maximums

Be sure that storage administrators do not carve LUNs for an ESX Server with ID numbers greater than 255. ESX hosts can address a maximum of 256 LUNs, with IDs ranging from 0 through 255. Clicking the Rescan link located in the Storage Adapters node of the Configuration tab on a host will force the host to identify new LUNs or new VMFS volumes; however, any LUNs with IDs greater than 255 will not be discoverable by an ESX host.

Although adding a new HBA to an ESX Server host requires you to shut down the server, presenting and finding new LUNs only requires that you initiate a rescan from the ESX Server host.

To identify new storage devices and/or new VMFS volumes that have been added since the last scan, click the Rescan link located in the Storage node of the Configuration tab. The host will launch an enumeration process beginning with the lowest possible LUN ID to the highest (1 to 255), which can be a slow process (unless LUN masking has been configured on the host as well as the storage processor).

You have probably seen by now, and hopefully agree, that VMware has done a great job of creating a graphical user interface (GUI) that is friendly, intuitive, and easy to use. Administrators also have the ability to manage LUNs from a Service Console command line on an ESX Server host.

The ability to scan for new storage is available in the VI Client using the Rescan link in the Storage Adapters node of the Configuration page, but it is also possible to rescan from a command line.

Establishing Console Access with Root Privileges

The root user account does not have secure shell (SSH) access by default. You must set the PermitRootLogin entry in the /etc/ssh/sshd_config file to yes to allow remote root access. Alternatively, you can log on to the console as a different user and use the su - command to elevate the logon permissions. Opting for su - still requires that you know the root user's password, but it does not expose the system by allowing remote root logon via SSH.
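If you do decide to permit direct root logons over SSH (generally discouraged), the change is made in the SSH daemon configuration. The following is a minimal sketch, assuming the standard Service Console file locations; make the edit as root from the physical console or an elevated session.

# In /etc/ssh/sshd_config, set: PermitRootLogin yes
vi /etc/ssh/sshd_config
# Restart the SSH daemon so the change takes effect
service sshd restart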

Use the following syntax to rescan vmhba1 from a Service Console command line:

1. Log on to a console session as a nonroot user.

2. Type su - and then press Enter.

3. Type the root user password and then press Enter.

4. Type esxcfg-rescan vmhba1 at the # prompt.

When multiple vmhba devices are available to the ESX Server, repeat the command, replacing vmhba# with each device.
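For example, on a host with two fibre channel HBAs you could rescan both in a single pass. The adapter names below are placeholders; substitute the vmhba numbers reported on your own host.

for hba in vmhba1 vmhba2; do
    esxcfg-rescan $hba
done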

You can identify LUNs using the physical address (i.e., vmhba#:target#:lun:partition); for example, vmhba1:0:12:1 refers to partition 1 on LUN 12, reached through target 0 on adapter vmhba1. The Service Console, however, references the LUNs using the device filename (i.e., sda, sdb, etc.). You can see the device filenames when installing an ESX Server that is connected to a SAN with accessible LUNs. By using an SSH tool (such as PuTTY) to establish a connection and then issuing the esxcfg commands, you can perform command-line LUN management.

To display a list of available LUNs with their associated paths, device names, and UUIDs, perform the following steps:

1. Log on to a console session as a nonroot user.

2. Type su - and then press Enter.

3. Type the root user password and then press Enter.

4. Type esxcfg-vmhbadevs -m at the # prompt.

Figure 4.18 shows the resulting output for an ESX Server with an IP address of 172.30.0.106 and a nonroot user named roottoo.

Figure 4.18 The esxcfg commands offer parameters and switches for managing and identifying LUNs available to an ESX Server host.


The UUIDs displayed in the output are unique identifiers used by the Service Console and VMkernel. These values are also reflected in the Virtual Infrastructure Client; however, we do not commonly refer to them because using the friendly names or even the physical paths is much easier.
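If you also want to see how those LUNs map to their physical paths, and which path is currently active, the Service Console provides a multipathing listing as well. This is a brief sketch assuming the esxcfg-mpath utility shipped with ESX Server 3.x; the output format may differ slightly between builds.

# List each LUN along with its available paths and current path state
esxcfg-mpath -l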

Fibre channel storage has a strong performance history and will continue to progress in the areas of performance, manageability, reliability, and scalability. Unfortunately, the large financial investment required to implement a fibre channel solution has scared off many organizations looking to deploy a virtual infrastructure that offers all the VMotion, DRS, and HA bells and whistles that VI3 provides. Luckily for the IT community, VMware now offers lower-cost (and potentially lower-performance) options in iSCSI and NAS/NFS.

iSCSI Network Storage

As a response to the needs of not-so-deep-pocketed network administrators, Internet Small Computer Systems Interface (iSCSI) has become a strong alternative to fibre channel. The popularity of iSCSI storage, which offers both lower cost and increasing speeds, will continue to grow as it finds its place in virtualized networks.

Understanding iSCSI Storage Networks

iSCSI storage provides a block-level transfer of data using the SCSI communication protocol over a standard TCP/IP network. By using block-level transfer, as in a fibre channel solution, the storage device looks like a local device to the requesting host. With proper planning, an iSCSI SAN can perform nearly as well as a fibre channel SAN — or better. This depends on other factors, but we can dive into those in a moment. And before we make that dive into the configuration of iSCSI with ESX, let's first take a look at the components involved in an iSCSI SAN. Despite the fact that the goals and overall architecture of iSCSI are similar to fibre channel, when you dig into the configuration details, the communication architecture, and individual components of iSCSI, the differences are profound.

The components that make up an iSCSI SAN architecture, shown in Figure 4.19, include:

Hardware initiator A hardware device referred to as an iSCSI host bus adapter (HBA) that resides in an ESX Server host and initiates storage communication to the storage processor (SP) of the iSCSI storage device.

Software initiator A software-based storage driver initiator that does not require specific hardware and transmits over standard, supported Ethernet adapters.

Storage device The physical device that houses the disk subsystem upon which LUNs are built.

Logical unit number (LUN) A logical configuration of disk space created from one or more underlying physical disks. LUNs are most commonly created on multiple disks in a RAID configuration appropriate for the disk usage. LUN design considerations and methodologies will be covered later in this chapter.

Storage processor (SP) A communication device in the storage device that receives storage requests from storage area network nodes.

Challenge Handshake Authentication Protocol (CHAP) An authentication protocol used by the iSCSI initiator and target that involves validating a single set of credentials provided by any of the connecting ESX Server hosts.

Ethernet switches Standard hardware devices used for managing the flow of traffic between ESX Server nodes and the storage device.

iSCSI qualified name (IQN) The full name of an iSCSI node, in the format iqn.<year>-<month>.<reversed domain name>:<alias>. For example, iqn.1998-08.com.vmware:silo1-1 reflects the registration of vmware.com on the Internet in August (08) of 1998. Nodes on an iSCSI deployment will have default IQNs that can be changed. However, changing an IQN requires a reboot of the ESX Server host.

iSCSI is thus a cheaper shared storage solution than fibre channel. Of course, the reduced cost comes at the expense of the higher performance that fibre channel offers. Ultimately, the question comes down to that difference in performance. The performance gap depends in large part on the storage design and on the disk intensity of the virtual machines stored on the iSCSI LUNs. Although the same is true for fibre channel storage, it is less of a concern there given the greater bandwidth available in a 4GFC fibre channel architecture. In either case, it is the duty of the ESX Server administrator and the SAN administrator to regularly monitor the saturation level of the storage network.

Figure 4.19 An iSCSI SAN includes an overall architecture similar to fibre channel, but the individual components differ in their communication mechanisms.


When deploying an iSCSI storage network, you'll find that adhering to the following rules can help mitigate performance degradation or security concerns:

♦ Always deploy iSCSI storage on a dedicated network.

♦ Configure all nodes on the storage network with static IP addresses.

♦ Configure the network adapters for full-duplex, gigabit communication (autonegotiation to these settings is recommended).

♦ Avoid funneling storage requests from multiple servers into a single link between the network switch and the storage device.

Deploying a dedicated iSCSI storage network reduces network bandwidth contention between the storage traffic and other common network traffic types such as e-mail, Internet, and file transfer. A dedicated network also offers administrators the luxury of isolating the SCSI communication protocol from "prying eyes" that have no legitimate need to access the data on the storage device.

iSCSI storage deployments should always utilize dedicated storage networks to minimize contention and increase security. Achieving this goal is a matter of implementing a dedicated switch or switches to isolate the storage traffic from the rest of the network. Figure 4.20 shows the difference between a dedicated iSCSI storage network and one that is integrated with the other network segments.

If a dedicated physical network is not possible, using a virtual local area network (VLAN) will segregate the traffic to ensure storage traffic security. Figure 4.21 shows iSCSI implemented over a VLAN to achieve better security. However, this type of configuration still forces the iSCSI communication to compete with other types of network traffic.
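As a rough sketch of how a VLAN-tagged VMkernel port for iSCSI traffic might be created from the Service Console, the commands below assume an existing vSwitch named vSwitch1, a port group named iSCSI, VLAN ID 20, and an example IP address; all of these values are hypothetical, so substitute the names, VLAN, and addressing used in your environment.

# Create a port group for iSCSI traffic and tag it with VLAN 20
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vswitch -v 20 -p iSCSI vSwitch1
# Add a VMkernel interface on that port group for the iSCSI traffic
esxcfg-vmknic -a -i 172.16.20.10 -n 255.255.255.0 iSCSI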




Figure 4.20 iSCSI should have a dedicated and isolated network.


Figure 4.21 iSCSI communication traffic can be isolated from other network traffic by using VLANs.


Real World Scenario

A Common iSCSI Network Infrastructure Mistake

A common deployment error with iSCSI storage networks is the failure to provide enough connectivity between the Ethernet switches and the storage device to adequately handle the traffic requests from the ESX Server hosts. In the sample architecture shown here, four ESX Server hosts are configured with redundant connections to two Ethernet switches, which each have a connection to the iSCSI storage device. At first glance, it looks as if the infrastructure has been designed to support a redundant storage communication strategy. And perhaps it has. But what it has not done is maximize the efficiency of the storage traffic.

If each link between the ESX Server hosts and the Ethernet switches is a 1Gbps link, there is a total of 8Gbps of potential storage bandwidth, or 4Gbps per Ethernet switch. However, the connection between each Ethernet switch and the iSCSI storage device consists of a single 1Gbps link. If the hosts maximize their throughput to the switches, the bandwidth demand will exceed the capacity of the switch-to-storage link and packets will be dropped. Since TCP is a reliable transmission protocol, the dropped packets will be re-sent as needed until they have reached their destination. All of the new data processing, coupled with the persistent retries of dropped packets, consumes more and more resources and strains the communication, resulting in degraded server performance.

To protect against funneling too much data into the switch-to-storage link, the iSCSI storage network should be configured with multiple links between the switches and the storage device. The image shown here represents an iSCSI storage network configuration that promotes redundancy and communication efficiency by increasing the available bandwidth between the switches and the storage device. This configuration reduces resource usage because fewer packets are dropped and retransmitted.

To learn more about iSCSI, visit the Storage Networking Industry Association website at http://www.snia.org/tech_activities/ip_storage/iscsi.

Configuring ESX for iSCSI Storage

I can't go into the details of configuring the iSCSI storage side of things because each product has nuances that do not cross vendor boundaries, and companies don't typically carry an iSCSI SAN from each potential vendor. On the bright side, what I can and most certainly will cover in great detail is how to configure an ESX Server host to connect to an iSCSI storage device using both hardware and software iSCSI initiation.

As noted in the previous section, VMware is limited in its support for hardware device compatibility. As with fibre channel, you should always check VMware's website to review the latest SAN compatibility guide before purchasing any new storage devices. While software-initiated iSCSI has maintained full support since the release of ESX 3.0, hardware initiation with iSCSI devices did not garner full support until the ESX 3.0.1 release. The prior release, ESX 3.0, provided only experimental support for hardware-initiated iSCSI.

VMware iSCSI SAN Compatibility

Each of the manufacturers listed here provides an iSCSI storage solution that has been tested and approved for use by VMware:

♦ 3PAR: http://www.3par.com

♦ Compellent: http://www.compellent.com

♦ Dell: http://www.dell.com

♦ EMC: http://www.emc.com

♦ EqualLogic: http://www.equallogic.com

♦ Fujitsu Siemens: http://www.fujitsu-siemens.com

♦ HP: http://www.hp.com

♦ IBM: http://www.ibm.com

♦ LeftHand Networks: http://www.lefthandnetworks.com

♦ Network Appliance (NetApp): http://www.netapp.com

♦ Sun Microsystems: http://www.sun.com

An ESX Server host can initiate communication with an iSCSI storage device by using a hardware device with dedicated iSCSI technology built into the device, or by using a software-based initiator that utilizes standard Ethernet hardware and is managed like normal network communication. Using a dedicated iSCSI HBA that understands the TCP/IP stack and the iSCSI communication protocol provides an advantage over software initiation. Hardware initiation eliminates some processing overhead in the Service Console and VMkernel by offloading the TCP/IP stack to the hardware device. This technology is often referred to as the TCP/IP Offload Engine (TOE). When you use an iSCSI HBA for hardware initiation, the VMkernel needs only the drivers for the HBA and the rest is handled by the device.

For best performance, hardware-based iSCSI initiation is the appropriate deployment. After you boot the server, the iSCSI HBA will display all of its information in the Storage Adapters node of the Configuration tab, as shown in Figure 4.22. By default, as shown in Figure 4.23, an iSCSI HBA is assigned an IQN in the BIOS of the card. Configuring hardware iSCSI initiation with an HBA installed in the host is very similar to configuring a fibre channel HBA: the device will appear in the Storage Adapters node of the Configuration tab. The vmhba numbers for fibre channel and iSCSI HBAs are enumerated sequentially. For example, if an ESX Server host includes two fibre channel HBAs labeled vmhba1 and vmhba2, adding two iSCSI HBAs will result in labels of vmhba3 and vmhba4. The software iSCSI adapter in ESX Server 3.5 is always labeled vmhba32. You can then configure the adapter(s) to support the storage infrastructure.

Figure 4.22 Supported iSCSI HBA devices will automatically be found and can be configured in the BIOS of the ESX host.


Figure 4.23 The BIOS of the QLogic card provides an opportunity to configure the iSCSI HBA. If it's not configured in the BIOS, the defaults pass into the configuration display in the VI Client.


iSCSI Host Bus Adapters for ESX 3.0

At the time this book was written, the only supported iSCSI HBA was a specific set of QLogic cards in the 4050 series. Although a card may work, if it is not on the compatibility list, obtaining support from VMware will be challenging. As with other hardware situations in VI3, always check the VMware compatibility guide prior to purchasing or installing an iSCSI HBA.

To modify the setting of an iSCSI HBA, perform the following steps:

1. In the Storage Adapters node on the Configuration tab, select the appropriate iSCSI HBA (i.e., vmhba2 or vmhba3) from the list and click the Properties link.

2. Click the Configure button.

3. For a custom iSCSI qualified name, enter a new iSCSI name and iSCSI alias in the respective text boxes.

4. If desired, enter the static IP address, subnet mask, default gateway, and DNS server for the iSCSI HBA.

5. Click OK. Do not click Close.

Once you've configured the iSCSI HBA with the appropriate IP information, you must configure it to find the targets available on the iSCSI storage device. ESX provides two types of target identification:

♦ Static discovery

♦ Dynamic discovery

As the names suggest, one method involves manual configuration of target information (static), while the other involves a less cumbersome, administratively easier means of finding storage (dynamic). The dynamic discovery method is also referred to as the SendTargets method in light of the SendTargets request made by the ESX host. To dynamically discover the available storage targets, you must manually configure the host with the IP address of at least one node. Ironically, when you configure a host to perform a SendTargets request (dynamic discovery), you enter the target on the Dynamic Discovery tab of the iSCSI initiator Properties box, and all of the dynamically identified targets then appear on the Static Discovery tab. Figure 4.24 details the SendTargets method of iSCSI LUN discovery.

Hardware-initiated iSCSI allows for either dynamic or static discovery of targets. The iSCSI software initiator built into ESX 3.0 allows only the SendTargets (dynamic) discovery method.

To configure the iSCSI HBA for target discovery using the SendTargets method, perform the following steps:

1. In the iSCSI Initiator Properties dialog box, select the Dynamic Discovery tab and click the Add button.

2. Enter the IP address of the iSCSI device and the port number (if it has been changed from the default port of 3260).

3. Click Close.

4. Click the Rescan link.

5. Review the Static Discovery tab of the iSCSI HBA properties.
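If you prefer the Service Console, the rescan in step 4 can also be performed from the command line. This is a minimal sketch that assumes the iSCSI HBA in question is vmhba2, as in the earlier example; substitute the vmhba# shown in your Storage Adapters list:

esxcfg-rescan vmhba2

The newly discovered targets then appear on the Static Discovery tab just as they do after a rescan from the VI Client.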

Figure 4.24 The SendTargets iSCSI LUN discovery method requires that you manually configure at least one iSCSI target to issue a SendTargets request. The iSCSI device will then return information about all the targets available.


In this section I've hinted, or, better yet, blatantly stated, that iSCSI storage networks should be isolated from the other IP networks already configured in your infrastructure. However, this is not always a possibility due to such factors as budget constraints, IP addressing challenges, host limitations, and more. If you cannot isolate the iSCSI storage network, you can configure the storage device and the ESX nodes to use the Challenge Handshake Authentication Protocol (CHAP). CHAP provides a secure means of authenticating a user account without the need for exchanging the password over publicly accessible networks.

To configure an iSCSI HBA to authenticate using CHAP, follow these steps:

1. From the Storage Adapters node on the Configuration page, select the iSCSI HBA to be configured and click the Properties link.

2. Select the CHAP Authentication tab and click Configure.

3. Insert a custom name in the CHAP Name text box or select the Use Initiator Name checkbox.

4. Type a strong and secure string in the CHAP Secret text box.

5. Click OK.

6. Click Close.


Software-initiated iSCSI is a cheaper solution than hardware initiation with an iSCSI HBA because it does not require any special hardware. Software-based iSCSI initiation, as the name suggests, begins in the VMkernel and utilizes a normal Ethernet adapter installed on the ESX Server host. Unlike the iSCSI HBA solution, software initiation relies on a set of drivers and a TCP/IP stack that resides in the VMkernel. In addition, the iSCSI software initiator, of which there is only one per host, uses a fixed name (vmhba40 in ESX 3.0.x, vmhba32 in ESX Server 3.5) rather than being enumerated with the rest of the HBAs in the host. Figure 4.25 outlines the architectural differences between the hardware and software initiation mechanisms on ESX Server.

Figure 4.25 ESX Server supports using an iSCSI HBA hardware-based initiation, which reduces overhead on the VMkernel. For a cheaper solution, ESX Server also supports a software-based initiation that does not require specialized hardware.


Using the iSCSI software initiation built into ESX Server provides an easy means of configuring the host to communicate with the iSCSI storage device. The iSCSI software initiator uses the SendTargets method for obtaining information about target devices. The SendTargets request requires the manual entry of one of the IP addresses on the storage device. The software initiator will then query the provided IP address in search of all additional targets.

To enable iSCSI software initiation on an ESX Server, perform the following steps:

1. Enable the Software iSCSI client in the firewall of the ESX Server host as shown in Figure 4.26:

♦ On the Configuration tab of the ESX host, click the Security Profile link.

♦ Click the Properties link.

♦ Enable the Software iSCSI Client checkbox,

or

♦ Open an SSH session with root privileges and type the following command:

esxcfg-firewall -e swISCSIClient
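To confirm that the firewall change took effect, you can query the firewall from the same SSH session. This is a quick check, assuming the -q (query) switch of esxcfg-firewall accepts a service name as it does on ESX 3.x:

esxcfg-firewall -q swISCSIClient

The output should report the swISCSIClient service as enabled; running esxcfg-firewall -q with no arguments lists the entire ruleset if you want the broader picture.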

Figure 4.26 The iSCSI software must be enabled in the Service Console firewall.


2. Create a virtual switch with a VMkernel port and a Service Console (vswif) port. Bind the virtual switch to a physical network adapter connected to the dedicated storage network. Figure 4.27 shows a correctly configured switch for use in connecting to an iSCSI storage device.

Creating a VMkernel Port from a Command Line

Log on to an ESX host using an SSH session and elevate your permissions using su - if necessary. Follow these steps:

1. Add a new port group named Storage to the virtual switch on the dedicated storage network:

esxcfg-vswitch -A Storage vSwitch2

2. Configure the VMkernel NIC with an IP address of 172.28.0.106 and a subnet mask of 255.255.255.0:

esxcfg-vmknic -a -i 172.28.0.106 -n 255.255.255.0 Storage

3. Set the default gateway of the VMkernel to 172.28.0.1:

esxcfg-route 172.28.0.1
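Before moving on, it is worth verifying the command-line work above. The following commands, which assume the port group and addresses used in the previous steps, list the virtual switch configuration, the VMkernel NICs, and the VMkernel default route:

esxcfg-vswitch -l

esxcfg-vmknic -l

esxcfg-route

The Storage port group should appear under vSwitch2, the VMkernel NIC should show 172.28.0.106 with a mask of 255.255.255.0, and esxcfg-route should return 172.28.0.1.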

Figure 4.27 The VMkernel and Service Console must be able to communicate with the iSCSI storage device.


3. From the Storage Adapters node on the Configuration tab, shown in Figure 4.28, enable the iSCSI initiator. Alternatively, open an SSH session with root privileges and type the following command:

esxcfg-swiscsi -e
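If you enabled the initiator from the command line, two related switches are useful here; this sketch assumes the -q (query) and -s (scan) options behave as they do on ESX 3.x:

esxcfg-swiscsi -q

esxcfg-swiscsi -s

The first confirms that the software initiator is enabled, and the second forces a scan for iSCSI disks once targets have been configured.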

Figure 4.28 Enabling the iSCSI software initiator will automatically populate the iSCSI name and alias for the software initiator.


4. Select the vmhba40 option beneath the iSCSI Software Adapter and click the Properties link.

5. Select the Dynamic Discovery tab and click the Add button.

6. Enter the IP address of the iSCSI device and the port number (if it has been changed from the default port of 3260).

7. Click OK. Click Close.

8. Select the Rescan link from the Storage Adapters node on the Configuration tab.

9. Click OK to scan for both new storage devices and new VMFS volumes.

10. As shown in Figure 4.29, any available iSCSI LUNs will now be reflected in the Details section of the vmhba40 option.

Figure 4.29 After configuring the iSCSI software adapter with the IP address of the iSCSI storage target, a rescan will identify the LUNs on the storage device that have been made available to the ESX host.


The vmkiscsi-tool Command

The vmkiscsi-tool [options] vmhba## command allows command-line management of the iSCSI software initiator. The options for this command-line tool include:

♦ -I is used with -l or -a to display or add the iSCSI name.

♦ -k is used with -l or -a to display or add the iSCSI alias.

♦ -D is used with -a to perform discovery of a specified target device.

♦ -T is used with -l to list found targets.

Review the following examples:

♦ To view the iSCSI name of the software initiator:

vmkiscsi-tool -I -l

♦ To view the iSCSI alias of the software initiator:

vmkiscsi-tool -k -l

♦ To discover additional iSCSI targets at 172.28.0.122:

vmkiscsi-tool -D -a 172.28.0.122 vmhba40

♦ To list found targets:

vmkiscsi-tool -T -l vmhba40

Network Attached Storage and Network File System

Although Network Attached Storage (NAS) devices cannot match the performance and efficiency of fibre channel and iSCSI networks, they most certainly have a place on some networks. Virtual machines stored on NAS devices are still capable of the advanced VirtualCenter features of VMotion, DRS, and HA. With a significantly lower cost and simplified implementation, NAS devices can prove valuable in providing network storage in a VI3 environment.

Understanding NAS and NFS

Unlike the block-level transfer of data performed by fibre channel and iSCSI networks, access to a NAS device happens at the file system level. You can access a NAS device by using Network File System (NFS) or Server Message Block (SMB), also referred to as Common Internet File System (CIFS). Windows administrators will be most familiar with SMB traffic, which occurs each time a user accesses a shared resource using a universal naming convention (UNC) like \\servername\sharename. Whereas Windows uses the SMB protocol for file transfer, Linux-based systems use NFS to accomplish the same thing.

Although you can configure the Service Console with a Samba client to allow communication with Windows-based computers, the VMkernel does not support using SMB and therefore lacks the ability to retrieve files from a computer running Windows. The VMkernel only supports NFS version 3 over TCP/IP.

Like the deployment of an iSCSI storage network, a NAS/NFS deployment can benefit greatly from being located on a dedicated IP network where traffic is isolated. Figure 4.30 shows a NAS/NFS deployment on a dedicated network.

Figure 4.30 An NAS Server deployed for shared storage among ESX Server hosts should be located on a dedicated network separated from the common intranet traffic.


Without competition from other types of network traffic (e-mail, Internet, instant messaging, etc.), the transfer of virtual machine data will be much more efficient and provide better performance.

NFS Security

NFS is unique because it does not force the user to enter a password when connecting to the shared directory. In the case of ESX, the connection to the NFS server happens under the context of root, thus making NFS a seamless process for the connecting client. However, you might be wondering about the inherent security. Security for NFS access is maintained by limiting access to only the specified or trusted hosts. In addition, the NFS server employs standard Linux file system permissions based on user and group IDs. The user IDs (UIDs) and group IDs (GIDs) of users on a client system are mapped from the server to the client. If a user or a client has the same UID and GID as a user on the server, they are both granted access to files in the NFS share owned by that same UID and GID. As you have seen, ESX Server accesses the NFS server under the context of the root user and therefore has all the permissions assigned to the root user on the NFS server.

When creating an NFS share on a Linux system, you must supply three pieces of information:

♦ The path to the share (i.e., /nfs/ISOs).

♦ The hosts that are allowed to connect to the share, which can include:

♦ A single host identified by name or IP address.

♦ Network Information Service (NIS) groups.

♦ Wildcard characters such as * and ? (i.e., *.vdc.local).

♦ An entire IP network (i.e., 172.30.0.0/24).

♦ Options for the share configuration, which can include:

♦ root_squash, which maps the root user to the nobody user and thus prevents root access to the NFS share.

♦ no_root_squash, which does not map the root user to the nobody user and thus provides the root user on the client system with full root privileges on the NFS server.

♦ all_squash, which maps all UIDs and GIDs to the nobody user for enabling a simple anonymous access NFS share.

♦ ro, for read-only access.

♦ rw, for read-write access.

♦ sync, which forces all data to be written to disk before servicing another request.

The configuration of the shared directories on an NFS server is managed through the /etc/exports file on the server. The following example shows a /etc/exports file configured to allow all hosts on the 172.30.0.0/24 network access to a shared directory named NFSShare:

root: # cat /etc/exports
/mnt/NFSShare 172.30.0.0/24(rw,no_root_squash,sync)
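After editing /etc/exports on the NFS server, the export list must be reloaded before ESX hosts can connect. On a standard Linux NFS server (exact service names vary by distribution), the following commands re-export the shares and display what is currently offered:

exportfs -ra

showmount -e localhost

The showmount output should list /mnt/NFSShare along with the 172.30.0.0/24 network that is allowed to mount it.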

The next section explores the configuration requirements for connecting an ESX Server host to a shared directory on an NFS server.

Configuring ESX to Use NAS/NFS Datastores

Before an ESX Server host can be connected to an NFS share, the NFS server must be configured properly to allow the host. Creating an NFS share on a Linux system that allows an ESX Server host to connect requires that you configure the share with the following three parameters:

♦ rw (read-write)

♦ no_root_squash

♦ sync

To connect an ESX Server to an NAS/NFS datastore, you must create a virtual switch with a VMkernel port that has network access to the NFS server. As mentioned in the previous section, it would be ideal for the VMkernel port to be connected to the same physical network (the same IP subnet) as the NAS device. Unlike the iSCSI configuration, creating an NFS datastore does not require that the Service Console also have access to the NFS server. Figure 4.31 details the configuration of an ESX Server host connecting to a NAS device on a dedicated storage network.

Figure 4.31 Connecting an ESX Server to a NAS device with an NFS share requires the creation and configuration of a virtual switch with a VMkernel port.


To create a VMkernel port for connecting an ESX Server to a NAS device, perform these steps:

1. Use the VI Client to connect to VirtualCenter or an ESX Server host.

2. Select the hostname in the inventory panel and then click the Configuration tab.

3. Select Networking from the Hardware menu.

4. Select the virtual switch that is bound to a network adapter that connects to a physical network with access to the NAS device. (Create a new virtual switch if necessary.)

5. Click the Properties link of the virtual switch.

6. In the vSwitch# Properties box, click the Add button.

7. Select the radio button labeled VMkernel, as shown in Figure 4.32, and then click Next.

Figure 4.32 A VMkernel port is used for performing VMotion or communicating with an iSCSI or NFS storage device.


8. As shown in Figure 4.33, type a name for the port in the Network Label text box. Then provide an IP address and subnet mask appropriate for the physical network the virtual switch is bound to.

Figure 4.33 Connecting an ESX Host to an NFS server requires a VMkernel port with a valid IP address and subnet mask for the network on which the virtual switch is configured to communicate.


9. Click Next, review the configuration, and then click Finish.

VMkernel Default Gateway

If you are prompted to enter a default gateway, choose No if one has already been assigned to a Service Console port on the same switch or if the VMkernel port is configured with an IP address on the same subnet as the NAS device. Select the Yes option if the VMkernel port is not on the same subnet as the NAS device.

Unlike fibre channel and iSCSI storage, an NFS datastore cannot be formatted as VMFS. For this reason it is recommended that NFS datastores not be used for the storage of virtual machines in large enterprise environments. In non-business-critical situations such as test environments and small branch offices, or for storing ISO files and templates, NFS datastores are an excellent solution.

Once you've configured the VMkernel port, the next step is to create a new NFS datastore. To create an NFS datastore on an ESX Server host, perform the following steps:

1. Use the VI Client to connect to a VirtualCenter or an ESX Server host.

2. Select the hostname in the inventory panel and then select the Configuration tab.

3. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

4. Click the Add Storage link.

5. Select the radio button labeled Network File System, as shown in Figure 4.34.

Figure 4.34 The option to create an NFS datastore is separated from the disk or LUN option that can be formatted as VMFS.


6. Type the name or IP address of the NFS server in the Server text box.

7. Type the name of the shared directory on the NFS server in the Folder text box. Ensure that the folder listed here matches the entry in the /etc/exports file on the NFS server. If the folder in /etc/exports is listed as /ISOShare, then enter /ISOShare in this text box. If the folder is listed as /mnt/NFSShare, then enter /mnt/NFSShare in the Folder text box of the Add Storage wizard.

8. Type a unique datastore name in the Datastore Name text box and click Next, as shown in Figure 4.35.

Figure 4.35 To create an NFS datastore, you must enter the name or IP address of the NFS server, the name of the directory that has been shared, and a unique datastore name.


9. Click Finish to view the NFS datastore in the list of Storage locations for the ESX Server host.
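The same NFS datastore can also be created from the Service Console with the esxcfg-nas command. This is a minimal sketch that assumes an NFS server at 172.30.0.11 (substitute your server's name or IP), the /mnt/NFSShare export from the earlier example, and a hypothetical datastore label of NFSDatastore:

esxcfg-nas -a -o 172.30.0.11 -s /mnt/NFSShare NFSDatastore

esxcfg-nas -l

The -l switch lists the NFS datastores currently mounted by the host, which is a quick way to confirm that the new datastore was added.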

Creating and Managing VMFS Datastores

Microsoft has NTFS, Linux has EXT3, and so it is only fair that VMware have its own proprietary file system: VMFS. The VMware File System (VMFS) is a high-performance, distributed journaling file system used to house virtual machine files, ISO files, and templates in a VI3 environment. Any fibre channel, iSCSI, or local storage pool can be formatted as VMFS and used by an ESX Server host. Network storage located on NAS devices cannot be formatted as VMFS datastores but still offers some of the same advantages. The VMFS found in the latest version of ESX Server is VMFS-3, which presents a significant upgrade from its predecessor, VMFS-2.

As a journaling file system, VMFS protects against data integrity problems by writing all updates to a log on the disk before committing them to their original location. After a failure, the server replays the log to restore the data to its prefailure state and to recover any unsaved data by writing it to its intended location. Perhaps the most significant enhancement in VMFS-3 is its support for subdirectories, which allows the disk files for each virtual machine to be located in its own folder under the VMFS volume parent label.

A VMFS volume stores all of the files needed by virtual machines, including:

♦ .vmx, the virtual machine configuration file.

♦ .vmx.lck, the virtual machine lock file created when a virtual machine is in a powered-on state.

♦ .nvram, the virtual machine BIOS.

♦ .vmdk, the virtual machine hard disk.

♦ .vmsd, the dictionary file for snapshots and the associated vmdk.

♦ .vmem, the virtual machine memory mapped to a file when the virtual machine is in a powered-on state.

♦ .vmss, the virtual machine suspend file created when a virtual machine is in a suspended state.

♦ -Snapshot#.vmsn, the virtual machine snapshot configuration.

♦ .vmtm, the configuration file for virtual machines in a team.

♦ -flat.vmdk, a pre-allocated disk file that holds virtual machine data.

♦ -f001.vmdk, the first extent of pre-allocated disk files that hold virtual machine data split into 2GB files; additional files increment the numerical value.

♦ -s001.vmdk, the first extent of expandable disk files that are split into 2GB files; additional files increment the numerical value.

♦ -delta.vmdk, the snapshot differences file.

VMFS is a clustered file system that allows multiple physical ESX Server hosts to simultaneously read and write to the same shared storage location. The block size of a VMFS volume can be configured as 1 MB, 2 MB, 4 MB, or 8 MB. Each of the block sizes corresponds to a maximum file size of 256GB, 512GB, 1024GB, and 2048GB, respectively.
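You can confirm the block size and resulting capacity of an existing VMFS volume from the Service Console. The following is a quick check, assuming a datastore with the hypothetical label ISOstore (substitute your own volume name):

vmkfstools -P /vmfs/volumes/ISOstore

The output reports the file system version, the file block size, and the capacity of the volume.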

2TB Limit?

Although the VMFS file system does not allow for files larger than 2048GB or 2TB, don't think of this as a limitation that prevents you from virtualizing specific servers. There might be scenarios in which large enterprise databases consume more than 2TB of space, but keep in mind that good file management practices should not have all of the data confined in a single database file. With the capabilities of Raw Device Mappings (RDMs), it is possible to virtualize even in situations where storage requirements exceed the 2TB limit that exists for a file on a VMFS volume.

To create a VMFS datastore on a fibre channel or iSCSI LUN using the VI Client, follow these steps:

1. Use the VI Client to connect to a VirtualCenter or an ESX Server host.

2. Select the hostname in the inventory panel and then click the Configuration tab.

3. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

4. Click the Add Storage link.

5. As shown in Figure 4.36, select the radio button labeled Disk/LUN and then click Next.

Figure 4.36 The Disk/LUN option in the Add Storage wizard allows you to configure a fibre channel or iSCSI SAN LUN as a VMFS volume.


6. As shown in Figure 4.37, select the appropriate SAN device for the VMFS datastore (i.e., vmhba1:0:21) and click Next.

Figure 4.37 The list of SAN LUNs will include only the non-VMFS LUNs as available candidates for the new VMFS volume.


7. Click Next on the Current Disk Layout page.

8. Type a name for the new datastore in the Datastore Name text box and then click Next.

9. As shown in Figure 4.38, select a maximum file size and block size from the Maximum File Size drop-down list.

Figure 4.38 VMFS volumes can be configured to support a greater block size, which translates into greater file size capacity.


10. (Optional) If desired, though it's not recommended, the new VMFS datastore can be a portion of the full SAN LUN. Deselect the Maximize Capacity checkbox and specify a new size.

11. Review the VMFS volume configuration and then click Finish.

Don't Partition LUNs

When creating datastores, avoid partitioning LUNs. Use the maximum capacity of the LUN for each datastore created.

Although using the VI Client to create VMFS volumes through VirtualCenter is a GUI-oriented and simplified approach, it is also possible to create VMFS volumes from a command line using the fdisk and vmkfstools utilities. Perform these steps to create a VMFS volume from a command line:

1. Log in to a console session or use an application like Putty.exe to establish an SSH session to an ESX Server host.

2. Identify the LUNs available to the host and their corresponding Service Console device names by typing the following command at the # prompt:

esxcfg-vmhbadevs

3. Determine if a valid partition table exists for the respective LUN by typing the following command at the # prompt:

fdisk -l /dev/sd?

where ? is the letter for the respective LUN. For example, /dev/sdf is shown in Figure 4.39.

Figure 4.39 The esxcfg-vmhbadevs command identifies all the available LUNs for an ESX Server host; the fdisk -l /dev/sd? command will identify a valid partition table on a LUN.


4. To create a new partition on a LUN, type the following at a # prompt:

fdisk /dev/sd?

where ? is the letter for the respective LUN.

Figure 4.40 To create a VMFS volume from the command line, you must create a partition on the target LUN.


5. Type n to add a new partition.

6. Type p to create the partition as a primary partition.

7. Type 1 for a partition number of 1.

8. Press the Enter key twice to accept the default values for the first and last cylinders.

9. Type p to view the partition configuration. Steps 5 through 9 are displayed in Figure 4.40.

10. Once you've created the partition, define the partition type. At the Command (m for help) prompt, press the T key to enter the partition type.

11. At the Hex code prompt (type L to list codes), type fb to select the unknown code that corresponds to VMFS.

12. Type w to save the configuration changes. Figure 4.41 shows steps 10 through 12.

Figure 4.41 Once the partition is created, adjust the partition type to reflect the VMFS volume that will be created.


13. As shown in Figure 4.42, type the following command at the # prompt:

vmkfstools -C vmfs3 -S <VMFSNAME> vmhbaw:x:y:z

where <VMFSNAME> is the label to assign to the VMFS volume, w is the HBA to use, x is the target ID for the storage processor, y is the LUN ID, and z is the partition.
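As a concrete illustration of these placeholders, the following commands assume the LUN selected earlier in the chapter (vmhba1:0:21), partition 1 as created with fdisk, a 1MB block size set with the -b switch, and a hypothetical label of FC_LUN21:

vmkfstools -C vmfs3 -b 1m -S FC_LUN21 vmhba1:0:21:1

ls /vmfs/volumes/

Once the command completes, the new volume should appear under /vmfs/volumes by its label.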

Figure 4.42 After the LUN is configured, use vmkfstools to create the VMFS volume.


Aligning VMFS Volumes

Using the VI Client to create VMFS volumes will properly align the volume to achieve optimal performance in virtual environments with significant I/O. However, if you opt to create VMFS volumes using the command line, I recommend that you perform a few extra steps in the fdisk process to align the partition properly.

In the previous exercise insert the following steps between steps 11 and 12.

1. Type x to enter expert mode.

2. Type b to set the starting block number.

3. Type 1 to choose partition 1.

4. Type 128.

Note that the tests to identify the performance boosts were conducted against an EMC CX series storage area network device. The recommendations from VMware on alignment are consistent across fibre solutions and are not relevant for IP-based storage technologies. The tests concluded that proper alignment of the VMFS volume and the virtual machine disks produces increased throughput and reduced latency. Chapter 6 will list the recommended steps for aligning virtual machine file systems.
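To verify that the alignment steps took effect, you can list the partition table in sectors. This check assumes the LUN is still /dev/sdf, as in the earlier example:

fdisk -lu /dev/sdf

The partition should report a starting sector of 128, which corresponds to a 64KB offset with 512-byte sectors.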

Once the VMFS volume is created, the VI Client provides an easy means of managing the various properties of the VMFS volume. The LUN properties offer the ability to add extents as well as to change the datastore name, the active path, and the path selection policy.

Adding extents to a datastore lets you grow a VMFS volume beyond the 2TB limit of a single extent. This does not allow individual file sizes to exceed the 2TB limit; it only increases the size of the VMFS volume. Be careful when adding extents, because the LUN that is added as an extent is wiped of all its data during the process, which can result in unintentional data loss. LUNs that are available as extent candidates are those that are not already formatted with VMFS, leaving empty or NTFS-formatted LUNs as viable candidates. (In other words, ESX will not eat its own!)

Adding VMFS Extents

When adding an extent to a VMFS volume, be sure to check for any existing data on the extent candidate. A simple way to do this is to compare the maximum size of the LUN with its available space. If the two sizes are almost identical, it is safe to say there is no data; for example, an empty 10GB LUN might reflect 9.99GB of free space. Check and double-check the LUNs. Since adding the extent will wipe all data from the extent candidate, you cannot be too careful.

While adding an extent through the VI Client or command line is an easy way to provide more space, it is a better practice to manage VMFS volume size from the storage device. To add an extent to an existing datastore, perform these steps:

1. Use the VI Client to connect to a VirtualCenter or an ESX Server host.

2. Select the hostname from the inventory pane and then click the Configuration tab.

3. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

4. Select the datastore to which the extent will be added.

5. Click the Properties link.

6. Click the Add Extent button, shown in Figure 4.43, on the datastore properties.

Figure 4.43 The properties page of a datastore lets you add an extent to increase the available storage space a datastore offers.


7. Choose an extent candidate on the Extent Device page of the Add Extent wizard, shown in Figure 4.44, and then click Next.

Figure 4.44 When adding an extent, be sure the selected extent candidate does not currently hold any critical data. All data is removed from an extent candidate when added to a datastore.


8. Click Next on the Current Disk Layout page.

9. (Optional) Although it is not recommended, the Maximum Capacity checkbox can be deselected and a custom value can be provided for the amount of space to use from the LUN.

10. Click Next on the Extent Size page.

11. Review the settings for the new extent and then click Finish.

12. Identify the new extent displayed in the Extents list and the new Total Formatted Capacity of the datastore, and then click Close.

13. As shown in Figure 4.45, identify the additional extent in the details pane for the datastore.

Figure 4.45 All extents for a datastore are reflected in the Total Formatted Capacity for the datastore and can be seen in the Extents section of the datastore details.


Once an extent has been added, the datastore maintains the relationship until the datastore is removed from the host. An individual extent cannot be removed from a datastore.

Real World Scenario

Resizing a Virtual Machine's System Volume

The time will come when a critical virtual machine in your Windows environment will run out of space on the system volume. If you've been around Windows long enough, you know that it is not a fun issue to have to deal with. Though adding an extent can make a VMFS volume bigger, it does nothing to help this situation. There are third-party solutions to this problem; for example, use Symantec Ghost to create an image of the virtual machine and then deploy the image back to a new virtual machine with a larger hard drive. The solution described here comes completely at the hand of tools that are already available to you within ESX and Windows and will incur no financial charge.

As a first step in solving this problem, you must increase the size of the VMDK file that corresponds to the C drive. You can do this from the virtual machine's properties or, from the Service Console, with the vmkfstools command (a sample command follows the steps below). For example, to increase the size of a VMDK file named server1.vmdk from 20GB to 60GB:

1. Use the virtual machine's properties to resize the virtual machine disk file to its new size.

2. Mount the server1.vmdk file as a secondary drive in a different virtual machine.

3. Open a command window in the second virtual machine.

4. At the command prompt, type diskpart.exe.

5. To display the existing volumes, type list volume.

6. Type select volume n, where n is the number of the volume to extend.

7. To add the additional 40GB of space to the drive, type extend size=40000.

8. To quit diskpart.exe, type exit.

9. Shut down the second virtual machine to remove server1.vmdk.

10. Turn on the original virtual machine to reveal a new, larger C drive.
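For step 1, if you prefer the Service Console to the VM properties dialog, vmkfstools can grow the disk file directly. This is a minimal sketch that assumes the virtual machine is powered off, has no snapshots, and stores its files in a datastore named Production (substitute your own path):

vmkfstools -X 60G /vmfs/volumes/Production/server1/server1.vmdk

The remaining steps, mounting the disk in a second virtual machine and extending the volume with diskpart.exe, are unchanged.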

Free third-party utilities like QtParted and GParted can resize most types of file systems, including those from Windows and Linux. No matter which tool or procedure you use, be sure to always back up your VMDK before resizing.

If budget is not a concern, you can replace the mounting of the VMDK and use of the diskpart.exe utility with a third-party product like Acronis Disk Director. Disk Director is a graphical utility that simplifies managing volumes, even system volumes, on a Windows computer.

With the release of Windows Server 2008, Microsoft has added the native ability to grow and shrink the system volume, making it even easier to make these same types of adjustments without third-party tools or fancy tricks.

All the financial, human, and time investment put into building a solid virtual infrastructure would be for nothing if ESX Server did not offer a way of ensuring access to VMFS volumes in the face of hardware failure. ESX Server has a native multipathing feature that allows for redundant access to VMFS volumes across the available paths.

ESX Server Multipathing

ESX Server does not require third-party software, like EMC PowerPath, to gain the benefits of understanding and/or identifying redundant paths to network storage devices.

It doesn't seem likely that critical production systems with large financial investments in network storage would be left susceptible to single points of failure — which is why, in most cases, a storage infrastructure built to host critical data is done with redundancy at each level of the deployment. For example, a solid fibre channel infrastructure would include multiple HBAs per ESX host, connected to multiple fibre channel switches, connected to multiple storage processors on the storage device. In a situation where an ESX host has two HBAs and a storage device has two storage processors, there are four (2×2) different paths that can be assembled to provide access to LUNs. This concept, called multipathing, involves the use of redundant storage components to ensure consistent and reliable transfer of data. Figure 4.46 depicts a fibre channel infrastructure with redundant components at each level, which provides for exactly four distinct paths for accessing the available LUNs.

The multipathing capability built into ESX Server offers two different methods for ensuring consistent access: the most recently used (MRU) and the fixed path. As shown in Figure 4.47, the details section of an ESX datastore will identify the current path selection policy as well as the total number of available paths. The default policy, MRU, provides failover when a device is not functional but does not failback to the original device when the device is repaired. As the name suggests, an ESX host configured with an MRU policy will continue to transfer data across the most recently used path until that path is no longer available or is manually adjusted.

Figure 4.46 ESX Server has a native multipathing capability that allows for continued access to storage data in the event of hardware failure. With two HBAs and two storage processors, there exist exactly four paths that can be used to reach LUNs on a fibre channel storage device.


Figure 4.47 The Details of a datastore configured on a fibre channel LUN identifies the current path selection policy.


The second policy, fixed path, requires administrators to define the order of preference for the available paths. The fixed path policy, like the MRU, provides failover in the event that hardware fails, but it also provides failback upon availability of any preferred path as defined.
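You can review the same path information from the Service Console. The following command lists every path known to the host, along with its state and the current policy for each LUN; the exact output format is that of ESX 3.x:

esxcfg-mpath -l

Comparing this output before and after a failover is a quick way to see which path a LUN is actually using under either policy.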

MRU vs. Fixed Path

Virtual infrastructure administrators should strive to spread the I/O loads over all available paths. An optimal configuration would utilize all HBAs and storage processors in an attempt to maximize data transfer efficiency. Once the path selections have been optimized, the decision will have to be made regarding the path selection policy. When comparing the MRU with fixed policies, you will find that each provides a set of advantages and disadvantages.

The MRU policy requires very little, if any, effort on the front end but requires an administrative reaction once failure (failover) has occurred. Once the failed hardware is fixed, the administrator must regain the I/O balance achieved prior to the failure.

The fixed path policy requires significant administrative overhead on the front end by requiring the administrator to define the order of preference. The manual path definition that must occur is a proactive effort for each LUN on each ESX Server host. However, after failover there will be an automatic failback, thereby eliminating any type of reactive steps on the part of the administrator.

Ultimately, it boils down to a "pay me now or pay me later" type of configuration. Since hardware failure is not something we count on, and is certainly something we hope happens infrequently, it seems that the MRU policy would require the least amount of administrative effort over the long haul.

Perform the following steps to edit the path selection policy for a datastore:

1. Use the VI Client to connect to a VirtualCenter or an ESX Server host.

2. Select the hostname in the inventory panel and then click the Configuration tab.

3. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

4. Select a datastore and review the details section.

5. Click the Properties link for the selected datastore.

6. Click the Manage Paths button in the Datastore properties box.

7. Click the Change button in the Policy section of the properties box.

8. As shown in Figure 4.48, select the Fixed radio button.

Figure 4.48 You can edit the path selection policy on a per-LUN basis.


9. Click OK.

10. Click OK.

11. Click Close.

Perform the following steps to change the active path for a LUN:

1. Use the VI Client to connect to a VirtualCenter or an ESX Server host.

2. Select the hostname in the inventory panel and then click the Configuration tab.

3. Select Storage (SCSI, SAN, and NFS) from the Hardware menu.

4. Select a datastore and review the details section.

5. Click the Properties link for the selected datastore.

6. Click the Manage Paths button in the Datastore properties box, shown in Figure 4.49.

Figure 4.49 The Manage Paths detail box identifies the active and standby paths for a LUN and can be used to manually select a new active path.


7. Select the existing Active path and then click the Change button beneath the list of available paths.

8. Click the Disabled radio button to force the path to change to a different available path, shown in Figure 4.50.

Figure 4.50 Disabling the active path of a LUN forces a new active path.


9. Repeat the process until the desired path is shown as the Active path.

Regardless of the LUN design strategy or multipathing policy put in place, virtual infrastructure administrators should take a very active approach to monitoring virtual machines to ensure that their strategies continue to maintain adequate performance levels.

The Bottom Line

Differentiate among the various storage options available to VI3. The storage technologies available for VMware Infrastructure 3 offer a wide range of performance and cost options. From the high-speed, high-cost fibre channel solution to the efficient, cost-effective iSCSI solution, to the slower, yet cheaper NAS/NFS, each solution has a place in any organization on a mission to virtualize.

Master It Identify the characteristics of each storage technology and which VI3 features each supports.

Design a storage area network for VI3. Once you've selected a storage technology, begin with the implementation of a dedicated storage network to optimize the transfer of storage traffic. A dedicated network for an iSCSI or NAS/NFS deployment will isolate the storage traffic from the e-mail, Internet, and file transfer traffic of the standard corporate LAN. From there, the LUN design for a fibre channel or iSCSI storage solution will work itself out in the form of the adaptive approach, predictive approach, or a hybrid of the two.

Master It Identify use cases for the adaptive and predictive LUN design schemes.

Configure and manage Fibre Channel and iSCSI storage networks. Deploying a fibre channel SAN involves the development of zoning and LUN masking strategies that ensure data security across ESX Server hosts while still providing for the needs of VMotion, HA, and DRS. The nodes in the fibre channel infrastructure are identified by 64-bit unique addresses called World Wide Names (WWNs). The iSCSI storage solution continues to use IP and MAC addresses for node identification and communication. ESX Server hosts use a four-part naming structure for accessing pools of storage on a SAN. Communication to an iSCSI storage device requires that both the Service Console and the VMkernel be able to communicate with the device.

Master It Identify the SAN LUNs that have been made available to an ESX Server host.

Configure and manage NAS storage. NAS storage offers a cheap solution for providing a shared storage pool for ESX Server hosts. Since the ESX Server host connects under the context of root, the NFS server must be configured with the no_root_squash parameter. A VMkernel port with access to the NFS server is required for an ESX Server host to use an NFS datastore.

Master It Identify the ESX Server and NFS server requirements for using a NAS/NFS device.

Create and manage VMFS volumes. VMFS is the proprietary, highly efficient file system used by ESX Server hosts for storing virtual machine files, ISO files, and templates. VMFS volumes can be extended to overcome the 2TB limitation, but individual files within the VMFS volume are still limited to a maximum of 2TB. VMFS is managed through the VI Client or from a series of command-line tools, including vmkfstools and esxcfg-vmhbadevs.

Master It Increase the size of a VMFS volume.

Master It Balance the I/O of an ESX Server to use all existing hardware.
