Chapter 2 Planning and Installing ESX Server

Now that you've been introduced to VMware Infrastructure 3 (VI3) and its suite of applications in Chapter 1, you're aware that ESX Server 3 is the foundation of VI3. Deploying, installing, and configuring ESX Server requires adequate planning to end up with a VMware-supported installation.

In this chapter you will learn to:

♦ Understand ESX Server compatibility requirements

♦ Plan an ESX Server deployment

♦ Install ESX Server

♦ Perform postinstallation configuration

♦ Install the Virtual Infrastructure Client (VI Client)

Planning a VMware Infrastructure 3 Deployment

In the world of information technology management, there are many models that reflect the project management lifecycle, and in virtually every one of them you'll find a step that involves planning. Though these models might stress this stage of the lifecycle, the reality is that planning is often rushed, if not skipped altogether. A VI3 project, however, requires careful planning because of the hardware constraints of the ESX Server software. In addition, server planning has a significant financial impact when calculating the return on investment for a VI3 deployment.

VMware ESX Server carries stringent hardware restrictions. Though these restrictions narrow the pool of systems on which a supported virtual infrastructure can be deployed, they also ensure the hardware has been tested and will function as expected as a platform for VMware's VMkernel hypervisor. Although not every vendor or whitebox configuration can play host to ESX Server, the list of supported hardware platforms will continue to change as newer models and more vendors are tested by VMware. The official VMware Systems Compatibility guide can be found on VMware's website at http://www.vmware.com/pdf/vi3_systems_guide.pdf. With a quick glance at the systems compatibility guide, you will notice Dell, HP, and IBM among a dozen or so lesser-known vendors. Within the big three, you will find the different server models that provide a tested and supported platform for ESX Server.

The Right Server for the Job

Selecting the appropriate server is undoubtedly the first step in ensuring a successful VI3 deployment. In addition, it is the only way to ensure VMware will provide any needed support.

A deeper look into a specific vendor, like Dell, will reveal that the compatibility guide identifies server models of all sizes (see Figure 2.1) as valid ESX Server hosts, including:

♦ The 1U PowerEdge 1950

♦ The 2U PowerEdge 2950 and 2970

♦ The 4U PowerEdge R900

♦ The 6U PowerEdge 6850 and 6950

♦ The PowerEdge 1955 Blade Server

Figure 2.1 Servers on the compatibility list come in various sizes and models.


The model selected as the platform has a direct effect on server configuration and scalability, which will in turn influence the return on investment for a virtual infrastructure.

Calculating the Return on Investment

In today's world, every company is anxious and hoping for the opportunity for growth. Expansion is often a sign that a company is fiscally successful and in a position to take on the new challenges that come with an increasing product line or customer base. For the IT managers, expansion means planning and budgeting for human capital, computing power, and spatial constraints.

As many organizations are figuring out, virtualization is a means of reducing the costs and overall headaches involved with either consistent or rapid growth. Virtualization offers solutions that help IT managers address the human, computer, and spatial challenges that accompany corporate demands.

Let's look at a common scenario facing many successful medium-to-large business environments. Take the fictitious company Learn2Virtualize (L2V) Inc. L2V currently has 40 physical servers and an EMC fibre channel storage device in a datacenter in St. Petersburg, Florida. During the coming fiscal year, through acquisitions, new products, and new markets, L2V expects to grow to more than 100 servers. If L2V continues to grow using the traditional information systems model, they will buy close to 100 physical servers during this rapid expansion. Doing so would let them continue minimizing the services on each host in an effort to harden the operating systems, a practice that is not uncommon for many IT shops: as a proven security technique, it is best to minimize the number of services a given server provides in order to reduce its exposure across different services. Deploying physical servers, however, will force L2V to examine their existing and future power and datacenter space consumption, along with the additional personnel that might be required. With physical server implementations, L2V might be looking at expenses of more than $150,000 in hardware costs alone. And while that might be on the low side, consider that power costs will rise and that server CPU utilization, if it is consistent with industry norms, might sit somewhere between 5 and 10 percent. The return on investment just doesn't seem worth it.

Now let's consider the path to virtualization. Let's look at several options L2V might have if they move in the direction of server consolidation using the VI3 platform. Since L2V already owns a storage device, we'll refrain from including that as part of the return on investment (ROI) calculation for their virtual infrastructure. L2V is interested in the enterprise features of VMotion, DRS, and HA, and therefore they are included in each of the ROI calculations.

The Price of Hardware

The prices provided in the ROI calculations were abstracted from the small and medium business section of Dell's website, at http://www.dell.com. The prices should be used only as a sample for showing how to determine the ROI. It is expected that you will work with your preferred hardware vendor on server make, model, and pricing while using the information given here as a guide for establishing the right hardware for your environment and budget.

Each of the following three ROI calculations identifies various levels of availability, including single server failure, two-server failure, or no consideration for failover. All of the required software licenses have been included as part of the calculation; however, annual licensing fees have not been included since there are several options and they are recurring annual charges.


Scenario 1: Quad-Core Three-Server Cluster

3 Dell 2950 III Energy Smart 2U Servers $35,000 ($7,000 × 5)
Two Quad-Core Intel CPUs
16GB of RAM
Two 73GB 10K RPM SAS hard drives in RAID1
Two QLogic 2460 4Gbps fibre channel HBAs
Dell Remote Access Controller (DRAC)
Six network adapters (two onboard, one quad-port card)
3-Year Gold 7×24, 4-hour response support
VMware Midsize Acceleration Kit $21,824
3 VMware Infrastructure 3 Enterprise licenses (6 procs)
Virtual SMP
VirtualCenter Agent
VMFS
VMotion and Storage VMotion
DRS
HA
Update Manager
VCB
1 VirtualCenter 2.5 Foundation license
10 CPU Windows Server 2003 Datacenter Licenses $25,000 ($2,500 × 10)
Hardware and licensing total $71,824
Per virtual machine costs
One server HA failover capacity: Average of 10 1GB VMs per host (30 VMs): $2,394 per VM
Maximum capacity: Average of 14 1GB VMs per host (42 VMs): $1,710 per VM

Scenario 2: Quad-Core Four-Server Cluster

4 Dell R900 Servers $164,000 ($41,000 × 4)
Four Quad-Core Intel processors
128GB of RAM
Two 73GB 10K RPM SAS hard drives in RAID1
Two QLogic 2460 4Gbps fibre channel HBAs
Dell Remote Access Controller (DRAC)
Six network adapters (two onboard, one quad-port card)

3-Year Gold 7×24, 4-hour response support
8 CPU VI3 Enterprise licenses $75,328 ($9,416 × 8)
8 VMware Infrastructure 3 Enterprise licenses (16 processors)
Virtual SMP
VirtualCenter Agent
VMFS
VMotion and Storage VMotion
DRS
HA
Update Manager
VCB
1 VMware VirtualCenter 2.0 license $8,180
16 CPU Windows Server 2003 Datacenter Licenses $40,000 ($2,500 × 16)
Hardware and licensing totals $287,508
Per virtual machine costs
One server HA failover capacity: Average of 80 1GB VMs per host (320 VMs): $898 per VM
Two server HA failover capacity: Average of 60 1GB VMs per host (240 VMs): $1,197 per VM

Although the two scenarios describe different deployments, the consistent theme is that deploying servers as virtual machines on VI3 reduces the cost per server. At the lowest cost, virtual machines come in at $898 each, and even at the highest cost they run $2,394 per machine. These figures do not include the additional savings on power consumption and space requirements, or the extra employees that a larger physical infrastructure would require.
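The per virtual machine figures are straight division of the hardware and licensing total by the number of virtual machines that the capacity level supports. Using Scenario 2 as a worked example: with one server reserved for HA failover the cluster supports 320 VMs, so $287,508 ÷ 320 ≈ $898 per VM; with two servers reserved it supports 240 VMs, so $287,508 ÷ 240 ≈ $1,197 per VM.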

Though your environment may certainly differ from the L2V Inc. example, the concepts and processes of identifying the ROI will be similar. Use these examples to identify the sweet spot for your company based on your existing and future goals.

The Best Server for the Job

With several vendors and even more models to choose from, it is not difficult to find a server that can host a VI3 deployment. However, choosing the best server for the job means understanding the scalability and fiscal implications of each model while meeting current and future needs. The samples provided are simply guidelines. They do not take into consideration virtual machines with high CPU utilization; the assumption in the previous examples is that memory will be the resource under the greatest contention. You can adjust the values as needed to determine the ROI for your own virtual infrastructure.

No matter the vendor or model selected, ESX Server 3.5 has a set of CPU and memory maximums, as shown in Table 2.1.

ESX Server Maximums

Where appropriate, each chapter will include additional ESX Server 3.5 maximums for NICs, storage configuration, virtual machines, and so forth.


Table 2.1: ESX Server 3.5 Maximums

Component Maximum
No. of virtual CPUs per host 128
No. of cores per host 32
No. of logical CPUs (hyperthreading enabled) 32
No. of virtual CPUs per core 8
Amount of RAM per host 128GB

ESX Server Installation

In addition to the choice of server vendor, model, and hardware specification, the planning process involves a decision between using ESX Server 3.5 versus ESXi 3.5. This chapter will cover the installation of ESX Server 3.5, while Chapter 13 will examine the specifics of ESXi 3.5.

ESX Server 3.5 can be installed in a graphical mode or in a text-based mode that reduces the complexity of the screens displayed during the installation. The graphical mode is the more common of the two; the text mode is reserved for remote installation scenarios in which the wide area network lacks the bandwidth to support the graphical installation mode.

ESX Server Disk Partitioning

Before we offer step-by-step instructions for installing ESX Server, it is important to review some of the functional components of the disk architecture upon which ESX Server will be installed. Because of its roots in Red Hat Linux, ESX Server does not use drive letters to represent the partitioning of the physical disks. Instead, like Linux, ESX Server uses mount points to represent the various partitions. A mount point is the association of a directory with a partition on the physical disk. Using mount points for the various directories under the root file system protects the root file system by preventing any single directory from consuming so much space that the root becomes full. Since most folks are familiar with the Microsoft Windows operating system, consider the following example. Suppose you have a server that runs Windows from the standard C: system volume. What happens when the C: drive runs out of space? Without going into detail, let's just leave the answer as a simple one: bad things. Yes, bad things happen when the C: drive of a Windows computer runs out of space. In ESX Server, as noted, there is no C: drive. The root of the operating system file structure is called exactly that: the root. The root is noted with the / character. As in Windows, if the / (root) runs out of space, bad things happen. Figure 2.2 compares the Windows and Linux disk partitioning and notation methods.
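To make mount points concrete, here is the kind of output the df -h command produces in the Service Console of a host partitioned along the lines described later in this chapter. The device names and sizes are illustrative only, not a prescribed layout:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             4.9G  1.1G  3.6G  24% /
/dev/sda1              99M   29M   65M  31% /boot
/dev/sda6             1.9G   81M  1.8G   5% /var/log

Each partition is attached to a directory, so a runaway log file can fill /var/log without ever touching the space available to / (root).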

Figure 2.2 Windows and Linux represent disk partitions in different ways. Windows, by default, uses drive letters, while Linux uses mount points.


In addition, because standard x86 systems use the master boot record (MBR) partitioning scheme, the disk partitioning strategy for ESX Server involves creating three primary partitions and an extended partition that contains multiple logical partitions. The MBR holds only four partition table entries, so a disk that requires more than four partitions is limited to three primary partitions plus a single extended partition.

Allow Me

It is important to understand the disk architecture for ESX Server; however, as you will soon see, the installation wizard provides a selection that creates all the proper partitions automatically.

With that said, the partitions created are enough for ESX Server 3.5 to run properly, but there is room for customizing the defaults. The default partitioning strategy for ESX Server 3.5 is shown in Table 2.2.


Table 2.2: Default ESX Partition Scheme

Mount point name Type Size
/boot Ext3 100MB
/ Ext3 5000MB (5GB)
(none) VMFS3 Varies
(none) Swap 544MB
/var/log Ext3 2000MB (2GB)
(none) vmkcore 100MB

The /boot Partition

The /boot partition, as its name suggests, stores all the files necessary to boot an ESX Server host. The default size of 100MB is ample space for the necessary files. This 100MB, however, is twice the size of the default boot partition created during the installation of the ESX 2 product. It is not uncommon to find recommendations to double it again to 200MB in anticipation of a future increase. By no means is this a requirement; it is just a suggestion. The assumption is that the larger partition leaves an existing installation configured to support the next version of ESX, presumably ESX 4.0.

The / Partition

The / partition is the root of the Service Console operating system. We have already alluded to the importance of the / (root) of the file system, but now we should detail the implications of its configuration. Is 5GB enough for the / of the console operating system? The obvious answer is that 5GB must be enough if that is what VMware chose as the default. The minimum size of the / partition is 2.5GB, so the default is twice the minimum. So why change the size of the / partition? Keep in mind that the / partition is where any third-party applications install by default. This means that six months, eight months, or a year from now, when there are dozens of third-party applications available for ESX Server, all of those applications will likely be installed into the / partition. As you can imagine, 5GB can be consumed rather quickly. One of the last things on any administrator's to-do list is reinstalling each of their ESX Server hosts. Planning for future growth and the opportunity to install third-party programs into the Service Console means creating a / partition with plenty of room to grow. I, as well as many other consultants, often recommend that the / partition be given more than the default 5GB of space. It is not uncommon for virtualization architects to suggest root partition sizes of 20GB to 25GB. The most important factor, however, is to choose a size that fits your comfort level for growth.
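Whatever size you settle on, it is easy to keep an eye on how quickly third-party applications are eating into the root partition. A quick check from the Service Console (output will vary by host) might look like this:

df -h /
du -h --max-depth=1 / 2>/dev/null

The first command reports the free space remaining on / (root); the second summarizes which top-level directories are consuming it.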

The Swap Partition

The swap partition, as the name suggests, is the location of the Service Console swap file. This partition defaults to 544MB. As a general rule, swap files are created with a size equal to two times the memory allocated to the operating system, and the same holds true for ESX Server. The swap partition is 544MB by default because the Service Console is allocated 272MB of RAM by default. By today's standards, 272MB of RAM seems low, but only because we are used to Windows servers requiring more memory for better performance. The Service Console is not as memory intensive as a Windows operating system can be. This is not to say that 272MB is always enough. Continuing with ideas from the previous section, if the future of the ESX Server deployment includes the installation of third-party products into the Service Console, then additional RAM will certainly be warranted. Unlike a full Windows or Linux installation, however, the Service Console is limited to a maximum of 800MB of RAM. The Postinstallation Configuration section of this chapter will show exactly how to make this change, but it is important to plan for the change during the installation so that the swap partition can be sized accordingly. If the Service Console is to be adjusted up to the 800MB maximum, then the swap partition should be increased to 1,600MB (2 × 800MB).

The /var/log Partition

The /var/log partition is created with a default size of 2000MB, or 2GB of space. This is typically a safe value for this partition. However, I recommend a change to this default configuration. ESX Server uses the /var directory during patch management tasks. Since the default partition is mounted at /var/log, the /var directory itself still resides under the / (root) partition; space consumed in /var is space consumed in / (root). For this reason I recommend that you change the mount point to /var instead of /var/log and that you increase the size to a larger value like 10GB or 15GB. This alteration provides ample space for patch management without jeopardizing the / (root) file system, while still providing a dedicated partition for log data.

The vmkcore Partition

The vmkcore partition is the dump partition where ESX Server writes information about a system halt. We are all familiar with the infamous Windows blue screen of death (BSOD) either from experience or the multitude of jokes that arose from the ever-so-frequent occurrences. When an ESX Server crashes, it, like Windows, writes detailed information about the system crash. This information is written to the vmkcore type partition. Unlike Windows, an ESX Server system crash results in a purple screen of death (PSOD) that many administrators have never seen. The size of this partition does not need to be altered.

The vmfs3 Partition

You might have noticed that I skipped over the VMFS3 partition. I did so for a reason. The VMFS3 partition is created, by default, with a size equal to the disk size minus the default sizes of all other partitions. In other words, ESX Server creates all the other partition types and then uses the remaining free space as the local VMFS3 storage. In most VI3 infrastructures, the local VMFS3 storage device will be negligible in light of the dedicated storage devices that will be in place. Fibre channel and iSCSI storage devices that provide the proper infrastructure for VMotion, DRS, and HA reduce the need for large amounts of local VMFS3 storage.
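If you are curious about what local VMFS3 storage the installer did create, VMFS datastores are visible from the Service Console under /vmfs/volumes, where each datastore label is a symbolic link to the datastore's UUID directory:

ls -l /vmfs/volumes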

All That Space and Nothing to Do

Although local disk space is of limited use in the face of a dedicated storage network, there are ways to take advantage of local storage rather than let it go to waste. LeftHand Networks (http://www.lefthandnetworks.com) has developed a virtual storage appliance (VSA) that presents local ESX Server storage space as an iSCSI target. In addition, this space can be combined with local storage on other servers to provide data redundancy. And the best part of presenting local storage as shared virtual storage units is that features like VMotion, DRS, and HA become available.

Table 2.3 provides a customized partitioning strategy that offers strong support for any future needs in an ESX Server installation.


Table 2.3: Custom ESX Partition Scheme

Mount point name Type Size
/boot Ext3 200MB
/ Ext3 25,000MB (25GB)
(none) VMFS3 Varies
(none) Swap 1,600MB (1.6GB)
/var Ext3 12,000MB (12GB)
(none) vmkcore 100MB

Local Disks, Redundant Disks

Just because local VMFS3 storage might not hold much significance in an ESX Server deployment does not mean that all local storage is irrelevant. The availability of the / (root) file system, vmkcore, the Service Console swap, and so forth is critical to a functioning ESX Server. For the safety of the installed Service Console, always install ESX Server on a hardware-based RAID array. Unless you intend to use a product like LeftHand Networks' VSA, there is little need to build a RAID 5 array with three or more large hard drives. A RAID 1 (mirrored) array provides the needed reliability while minimizing the disk requirements.

ESX Server 3.5 offers a CD-based installation and an unattended installation that uses the same kickstart file technology commonly used for unattended Linux installations. We'll begin by looking at a standard CD installation and then transition into the automated ESX Server installation method.

CD-ROM-Based Installation

Readers who have already performed ESX Server installations are probably wondering what there is to talk about in this section, given that the installation can be completed by simply clicking Next until the Finish button shows up. Though this is true, there are some significant decisions to be made during the installation: decisions that affect the future of the ESX Server deployment, and decisions that could cause severe damage to company data. For this reason, it is important for the experienced administrator and the installation newbie alike to read this section carefully and understand how best to install ESX Server to support the current and future demands of the VI3 deployment.

Perform the following steps to install ESX Server 3.5 from a CD:

1. Configure the server to boot from the CD, insert the VMware ESX Server 3.5 CD, and reboot the computer.

2. Select the graphical installation mode by pressing the Enter key at the boot options screen, shown in Figure 2.3.

Figure 2.3 ESX Server 3.5 includes a graphical installation mode, which includes an enhanced GUI and a text-based installation mode better suited for installing over a wide area network.


3. At the CD Media Test screen, shown in Figure 2.4, click the Test button to check the installation media for errors, or click the Skip button to continue directly with the installation.

Figure 2.4 To prevent installation errors due to bad media, the CD can be tested early in the install procedure.


4. Click the Next button on the Welcome to the ESX Server 3.5 Installer screen.

5. Select the U.S. English keyboard layout, or whichever is appropriate for your installation, as shown in Figure 2.5. Then click the Next button.

Figure 2.5 ESX Server 3.5 offers support for numerous keyboard layouts.


6. Select Wheel Mouse (PS/2), shown in Figure 2.6, or, if you prefer to match your mouse model exactly, select the appropriate option.

Figure 2.6 ESX Server 3.5 offers support for numerous models of mouse devices.


7. Select the Yes button to initialize any device that will be used for storing the ESX Server 3.5 installation partitions, as shown in Figure 2.7.

Figure 2.7 Unknown devices must be initialized for ESX Server 3.5 to be installed.


Warning! You Could Lose Data if You Don't Read This…

If SAN storage has already been presented to the server being installed, it is possible to initialize SAN LUNs that contain production data. As a precaution, it is an excellent idea to disconnect the server from the SAN, or to ensure that LUN masking has been performed, so that the server cannot access production LUNs.

Access to the SAN is required during installation only if a boot-from-SAN configuration is desired.

8. As shown in Figure 2.8, select the check box labeled I Accept the Terms of the License Agreement and click the Next button.

Figure 2.8 The ESX Server 3.5 license agreement must be accepted; however, no licenses are configured during the installation wizard.


9. As shown in Figure 2.9, select the Recommended radio button option to allow the installation wizard to automatically partition the local disk. Ensure that the local disk option is selected in the Install ESX Server on the drop-down list. To protect any existing VMFS data, ensure that the Keep Virtual Machines and the VMFS (Virtual Machine File System) That Contains Them option is selected.

Figure 2.9 The ESX Server 3.5 installation wizard offers automatic partitioning of the selected disk and protection for any existing data that resides in a VMFS-formatted partition.


10. Click the Yes button on the partition removal warning, shown in Figure 2.10.

11. Review the partitioning strategy, as shown in Figure 2.11, and click the Next button to continue the installation.

Figure 2.10 The ESX Server 3.5 installation wizard offers a warning before removing all partitions on the selected disk.


Figure 2.11 ESX Server 3.5 default partitioning provides a configuration that offers successful installation and system operation.


Stray from the Norm

As discussed in the previous section, it might be necessary to alter the default partitioning strategy. This does not mean that all partitions must be built from scratch. To change the default partition strategy, select the partition to change and click the Edit button.

Start the partition customization by reducing the space allocated to the local partition with a type of VMFS3. Once this partition is reduced, the other partitions (/boot, swap, and /var/log) can be edited. After these partitions have been reconfigured, any leftover space can be given back to the local VMFS3 partition and the installation can proceed.

12. Ensure that the ESX Server 3.5 installation wizard has selected to boot from the same drive that was selected for partitioning. By default, the selection should be correct and should not be configurable without selecting the option to allow editing. As shown in Figure 2.12, this screen provides a default configuration consistent with the previous installation configuration. This avoids misconfiguration in which the installation is performed on a local disk but the server is booted from a SAN LUN, or vice versa.

Figure 2.12 An ESX Server 3.5 host should be booted from the same device where the installation partitions have been configured.


13. As shown in Figure 2.13, select the network interface card (NIC) through which the Service Console should communicate. Assign a valid IP address, as well as subnet mask, default gateway, DNS servers, and host name for the ESX Server 3.5 host.

Figure 2.13 A NIC must be selected and configured for Service Console communication over the appropriate physical network.


If the Service Console must communicate over a virtual LAN (VLAN), enter the appropriate VLAN ID in the VLAN Settings text box.

If virtual machines must communicate over the same physical subnet as the Service Console, leave the Create a Default Network for Virtual Machines option selected. The outcome of this option can always be modified during postinstallation configuration. Once the Network Configuration page is configured correctly, click the Next button.

Do I Have to Memorize the PCI Addresses of My NICs?

Although the Service Console configuration screen is not very user friendly with respect to identifying the physical NICs in the computer, it is not a big deal to fix the NIC association should the wrong NIC be selected. In the Postinstallation Configuration section of this chapter, we will detail how to recover if the wrong NIC is selected during the installation wizard.

The bright side is that if your hardware remains consistent, the PCI addresses will also remain consistent. Therefore, company policy can document the PCI address to select during any new ESX Server deployment.

Keep in mind that if the wrong NIC was selected, access to the server via SSH, web page, or VI Client will fail. The fix, detailed later in the chapter, requires direct access to the console or an out-of-band management tool like Dell's Remote Access Controller, which provides console access from a dedicated Ethernet port.
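If you want to record those PCI addresses as part of your build documentation, the same Service Console command used later in this chapter to troubleshoot the NIC association will list them:

esxcfg-nics -l

The output includes each adapter's name, PCI address, driver, link state, speed, and duplex.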

14. Select the appropriate time zone for the ESX Server host and then click the Next button, as shown in Figure 2.14.

Figure 2.14 ESX Server 3.5 can be configured with one of many time zones from around the world.


15. Set and confirm a root password, as shown in Figure 2.15.

Figure 2.15 Each ESX Server 3.5 host maintains its own root user and password configuration. The password must be at least six characters.


16. Review the installation configuration parameters, as shown in Figure 2.16.

Figure 2.16 The installation review offers a final chance to double-check the server configuration.


If everything looks correct, click the Next button to begin the installation procedure.

17. As shown in Figure 2.17, the installation will begin.

18. As shown in Figure 2.18, click the Finish button to reboot the computer once the installation is complete.

19. During the reboot, the GRUB boot loader will show the boot options, as shown in Figure 2.19. Ensure that the VMware ESX Server option is selected and press the Enter key (or select nothing and the option will be selected by default).

20. Upon completion of the server reboot, the console session will display the information for accessing the server from a remote computer, as shown in Figure 2.20.

Figure 2.17 Installing ESX Server 3.5


Figure 2.18 The new ESX Server 3.5 host must be rebooted to finalize the installation.


Despite the ease with which ESX Server 3.5 can be installed, it is still not preferable to perform manual, attended installations of a large number of servers, nor is it preferable in environments that are rapidly deploying new ESX Server hosts. To support large numbers of hosts or rapid-deployment scenarios, ESX Server 3.5 can be installed in an unattended fashion.

Unattended ESX Server Installation

Installing an ESX Server 3.5 host in an unattended fashion can be done using third-party imaging tools or using the native VMware tools. Using the native tools requires several network-accessible components, including:

♦ An existing ESX Server 3.5 installation

♦ An NFS server accessible by the host to be installed

♦ A copy of the ESX Server 3.5 installation media

♦ An installation script with the appropriate configuration parameters

Figure 2.19 ESX Server 3.5 uses the GRUB boot loader.


Figure 2.20 After a reboot, the console offers the data necessary for accessing the server from a remote computer.



Figure 2.21 details the infrastructure components needed to complete an unattended ESX Server 3.5 installation using the tools built into ESX Server 3.5.

The unattended installation procedure involves booting the computer, accessing the installation files, and reading the unattended installation script. The destination host can be booted from CD, floppy disk, or PXE and then directed to the location of the installation files and the answer file. The installation files and/or answer script can be stored on, and accessed from, any of the following locations:

♦ An HTTP URL

♦ A shared NFS directory

♦ An FTP directory

♦ A CD (install files only)

♦ A floppy disk (answer files only)

Table 2.4 outlines the various methods and the boot options required for each option set. The boot option is typed at the boot prompt on the ESX Server 3.5 graphical/text mode selection screen.

Figure 2.21 Performing an unattended ESX Server 3.5 installation requires the proper network servers and services.


Table 2.4: Unattended Installation Methods

Boots from: PXE | Media stored on: URL | Answer file stored on: URL
Boot option: esx ks=<URL> method=<URL> ksdevice=<device>

Boots from: CD | Media stored on: CD | Answer file stored on: URL
Boot option: esx ks=<URL> ksdevice=<device>

Boots from: CD | Media stored on: CD | Answer file stored on: Floppy
Boot option: esx ks=<floppy>

Boots from: Floppy | Media stored on: URL | Answer file stored on: Floppy
Boot option: esx ks=<floppy>
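As a concrete illustration, a PXE-booted installation that pulls both the installation media and the kickstart file from a web server might use a boot line like the following, where the server address, paths, and NIC device are hypothetical values for your environment:

esx ks=http://172.30.0.2/ks.cfg method=http://172.30.0.2/esx35/ ksdevice=eth0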

The kickstart answer file is created from a web-based wizard accessible from the default home page of an ESX Server host, as shown in Figure 2.22.

By default, an ESX Server 3.5 host is not configured to allow access to the scripted installer. An error, shown in Figure 2.23, identifies clearly that a given host has not been configured.

Figure 2.22 The home page for an ESX Server host provides access to the Scripted Installer, which generates an answer file through a web-based wizard.


Figure 2.23 Access to the scripted installer must be enabled on an ESX Server 3.5 host.


Perform the following steps to enable the Scripted Installer on an ESX Server 3.5 host and to create a kickstart file:

1. Establish a console session with an ESX Server 3.5 host.

2. Type the following command:

cd /usr/lib/vmware/webAccess/tomcat/apache-tomcat-5.5.17/webapps/ui/WEB-INF

3. Type the following command to get a list of all files and folders in the current directory:

ls

4. Type the following command:

nano -w struts-config.xml

5. Comment out the line indicated in Figure 2.24 by adding <!-- to the beginning of the line and --> to the end of the line.

Figure 2.24 Enabling the Scripted Installer requires minor editing of the struts-config.xml file.


6. Uncomment the series of lines indicated in Figure 2.24 by removing the <!-- and --> that precede and conclude the block.

7. Type the following command:

service vmware-webAccess restart

8. Return to the ESX Server 3.5 home page and click the link labeled Log In to the Scripted Installer.

9. The Scripted Install web-based wizard will begin. Select the appropriate options for the unattended installation, as shown in Figure 2.25, and then click the Next button. The options include:

♦ Installation Type: Initial Installation | Upgrade

♦ Installation Method: CD-ROM | Remote | NFS

♦ This selection identifies where the installation files are located.

♦ Remote Server URL:

♦ Network Method: DHCP | Static IP

♦ Static will be the more common selection.

♦ Create a default network for VMs: Yes | No

♦ This option is negligible. If Yes is selected, the VM network can be deleted later. If No is selected, the VM network can be created later.

♦ VLAN:

♦ Provide a VLAN ID for Service Console only if you know that a VLAN is configured on the physical switch to which the network adapter is connected.

♦ Time Zone

♦ Reboot After Installation: Yes | No

♦ Root password

Figure 2.25 The Scripted Installer wizard defines the installation type and method as well as the Service Console configuration information.


10. As shown in Figure 2.26, set the hostname and IP address information of the server to be installed with the answer file and then click the Next button.

11. Select the check box labeled I Have Read and Accept the Terms of the License Agreement and then click the Next button.

12. As shown in Figure 2.27, configure the disk partitioning strategy and licensing mode for the target server and then click the Next button. Apply any necessary customizations to the partitions. Licensing options include: Post Install | Use License Server | Use Host License File.

13. Configure the licensing options, as shown in Figure 2.28, and then click the Next button.

Figure 2.26 The Scripted Installer defines the hostname and IP address configuration for the target ESX Server.


Figure 2.27 The Scripted Installer allows disk partitioning customizations and licensing mode.


14. Click the Download Kickstart File from the final page of the Scripted Installer, as shown in Figure 2.29.

Since a Windows file share is not an option for the location of a kickstart file, the file must be copied to an appropriate location, most commonly an NFS directory. Use WordPad to review the kickstart file created by the Scripted Installer wizard. A sample default file is shown in Figure 2.30.

Figure 2.28 The Scripted Installer automates the configuration of pointing an ESX Server to a license server.


Figure 2.29 The finished kickstart file can be saved to the local computer accessing the Scripted Installer web-based wizard.


Using free tools like Veeam FastSCP (http://www.veeam.com) or WinSCP (http://winscp.net), the kickstart file can be copied to an NFS directory that is accessible to the target ESX Server. Once the file is in place in the NFS directory, the unattended installation can be launched from the target server.
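The same copy can be made from any machine with a command-line SCP client; the server address and export path here are hypothetical:

scp ks.cfg root@172.30.0.100:/exports/kickstart/esx01-ks.cfg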

Perform the following steps to perform an unattended installation using a CD for the installation files and a remote NFS directory for the kickstart file:

1. Boot the target computer from the ESX Server 3.5 CD.

2. At the installation mode selection screen, type the following command, as shown in Figure 2.31:

esx ks=nfs:<NFS server>:/<path to kickstart file> ksdevice=<device>

3. The installation will begin and continue until the final reboot.

Kickstart files can be edited directly in WordPad so that the wizard does not have to be executed for each new installation. Unfortunately, the kickstart file does not provide a way of generating unique information for each installation, so each install requires a manually created (or adjusted) kickstart file specific to that installation, particularly for static configuration that must be unique, like the IP address and hostname.
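A common workaround is to keep a template kickstart file with tokens in place of the host-specific values and stamp out per-host copies with a short shell script. The token names, template file, and host list format below are assumptions for the sake of illustration:

#!/bin/bash
# Generate one kickstart file per host from a template that contains
# the placeholder tokens @HOSTNAME@ and @IP@ (hypothetical names).
# hosts.txt holds one "hostname ip" pair per line.
while read hostname ip; do
  sed -e "s/@HOSTNAME@/${hostname}/g" \
      -e "s/@IP@/${ip}/g" \
      ks-template.cfg > "ks-${hostname}.cfg"
done < hosts.txt

Each generated file can then be copied to the NFS directory and referenced from the boot prompt as shown earlier.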

Figure 2.30 A sample kickstart file created by the Scripted Installer wizard, viewed through WordPad


Figure 2.31 Using a local installation media with a kickstart file on an NFS directory


Kickstart Customizations

You may have noticed that the kickstart file creation wizard did not allow for configuration of the Service Console NIC or any virtual networking or storage configuration. By default, it doesn't. The lack of Service Console NIC configuration can cause access problems because the kickstart installation automatically selects the NIC with the lowest PCI address. If your Service Console is not to be associated with the NIC that has the lowest PCI address, a postinstallation configuration will be required to unlink the current NIC and link the correct one. We will cover how to do this in the next section of this chapter.

Kickstart files can be edited to perform postinstallation configuration. These configurations can include Service Console NIC corrections, creation of virtual switches and port groups, storage configuration, and even customization of Service Console config files, such as setting up external time servers.

The command-line syntax for virtual networking and storage will be covered in Chapters 3 and 4. The following kickstart file makes many postinstallation changes:

# Advanced Kickstart file with postinstallation configuration.

# Installation Method 

cdrom

# root Password

rootpw --iscrypted a6fh$/hkQQrCaeuc0mAe38$.captvmyeT4

# Authconfig

auth --enableshadow --enablemd5

# BootLoader (Grub has to be the default boot loader for ESX Server 3) 

bootloader --driveorder=sda --location=mbr

# Timezone (set this time zone to fit your company policy) 

timezone --utc UTC

# Do not install the X windowing System 

skipx

# Clean Installation or upgrade an existing installation 

install

# Text Mode 

text

# Network install type (this server will have a static IP address of 172.30.0.105,
# a subnet mask of 255.255.255.0, a gateway of 172.30.0.1, a DNS server
# of 172.30.0.2, and a hostname of silo3505.vdc.local. It will not be on a VLAN.)

network --bootproto static --ip 172.30.0.105 --netmask 255.255.255.0 --gateway 172.30.0.1 --nameserver 172.30.0.2 --hostname silo3505.vdc.local --vlanid=0

# Language 

lang en_US

# Language Support 

langsupport --default en_US

# Keyboard 

keyboard us

# Mouse 

mouse none

# Force a reboot after the install 

reboot

# Firewall settings 

firewall --disabled

# Clear all Partitions on the local disk sda 

clearpart --all --initlabel --drives=sda

# Partitioning strategy for ESX Server host 

part /boot --fstype ext3 --size 200 --ondisk sda 

part / --fstype ext3 --size 25000 --ondisk sda 

part swap --size 1600 --ondisk sda

part None --fstype vmfs3 --size 1 --grow --ondisk sda

part None --fstype vmkcore --size 100 --ondisk sda

part /var --fstype ext3 --size 12000 --ondisk sda

part /tmp --fstype ext3 --size 2000 --ondisk sda

# VMware-specific commands for accepting the license agreement and configuring a
# license server at 172.30.0.2 on port 27000 with a full license

vmaccepteula

vmlicense --mode=server --server=27000@172.30.0.2 --edition=esxFull

%packages

@base

@everything

%post

# Create a new file named S11PostInstallConfig that will become an executable
# run during the first boot of the ESX Server

cat > /etc/rc.d/rc3.d/S11PostInstallConfig << EOF
#!/bin/bash

# Overwrite the resolv.conf file to create primary and secondary DNS entries 

cat > /etc/resolv.conf << DNS
nameserver 172.30.0.2
nameserver 172.30.0.3
DNS

# Link vSwitch0, used for Service Console communication, to vmnic1 if vmnic0
# was not the correct NIC

/usr/sbin/esxcfg-vswitch -U vmnic0 vSwitch0
/usr/sbin/esxcfg-vswitch -L vmnic1 vSwitch0

# Add a VMkernel port for NAS access named NFSAccess, with an IP address of
# 172.30.0.101 and a default gateway of 172.30.0.1 (if required for routing)

/usr/sbin/esxcfg-vswitch -A NFSAccess vSwitch0
/usr/sbin/esxcfg-vmknic -a -i 172.30.0.101 -n 255.255.255.0 NFSAccess
/usr/sbin/esxcfg-route 172.30.0.1

# Add an NFS datastore named NFSDatastore01 with an NFS server at 172.30.0.100
# and a shared directory of ISOImages

/usr/sbin/esxcfg-nas -a -o 172.30.0.100 -s /ISOImages NFSDatastore01

# Enable the Service Console firewall to allow ntp and iSCSI client firewall ports 

/usr/sbin/esxcfg-firewall -e ntpClient

/usr/sbin/esxcfg-firewall -e swISCSIClient

# Add a VMkernel port named VMotion on a virtual switch named vSwitch1. The VMkernel
# port will have an IP address of 172.29.0.105

# and a subnet mask of 255.255.255.0 

/usr/sbin/esxcfg-vswitch -a vSwitch1 

/usr/sbin/esxcfg-vswitch -A VMotion vSwitch1 

/usr/sbin/esxcfg-vswitch -L vmnic0 vSwitch1 

/usr/sbin/esxcfg-vmknic -a -i 172.29.0.105 -n 255.255.255.0 VMotion

# Add a vswitch named vSwitch2 with a virtual machine port group named ProductionLAN 

/usr/sbin/esxcfg-vswitch -a vSwitch2

/usr/sbin/esxcfg-vswitch -L vmnic2 vSwitch2
/usr/sbin/esxcfg-vswitch -A ProductionLAN vSwitch2

# Set up time synchronization for ESX Server 

cat > /etc/ntp.conf << NTP

restrict default kod nomodify notrap noquery nopeer
restrict 172.30.0.111 mask 255.255.255.255 nomodify notrap noquery
server 172.30.0.111

fudge 127.127.1.0 stratum 10

driftfile /etc/ntp/drift

broadcastdelay 0.008

authenticate yes

keys /etc/ntp/keys

NTP

cat > /etc/ntp/step-tickers << STEP

172.30.0.111

STEP

/sbin/service ntpd start

/sbin/chkconfig --level 3 ntpd on

# Update system clock 

/sbin/hwclock --systohc --utc

# The --utc setting in the "timezone" command above eliminates the need for updating
# the clock file

#cat > /etc/sysconfig/clock << CLOCK

#ZONE="UTC"

#UTC=true

#ARC=false

#CLOCK

# Allow incoming/outgoing communications on the Service Console via SSH. 

esxcfg-firewall -e sshServer

esxcfg-firewall -e sshClient

# Rename the S11PostInstallConfig file to S11PostInstallComplete after its first
# execution. Since the file name will no longer match, it will not be triggered
# in subsequent ESX Server boot sequences. EOF marks the end of the file.

mv /etc/rc.d/rc3.d/S11PostInstallConfig /etc/rc.d/rc3.d/S11PostInstallComplete

EOF

# Make the S11PostInstallConfig file an executable

/bin/chmod +x /etc/rc.d/rc3.d/S11PostInstallConfig

Postinstallation Configuration

Once the installation of ESX Server is complete, there are several postinstallation changes that either must be made or are strongly recommended. Among these configurations are adjusting the amount of RAM allocated to the Service Console, changing the physical NIC used by the Service Console, and configuring the ESX Server host to synchronize with an external Network Time Protocol (NTP) server.

Service Console NIC

During the installation of ESX Server, the NIC selection screen creates a virtual switch bound to the selected physical NIC. The tricky part, as noted earlier, is choosing the correct PCI address that corresponds to the physical NIC connected to the physical switch that makes up the logical IP subnet from which the ESX Server will be managed. The problem often arises when the wrong PCI address is selected, resulting in the inability to access the Service Console. Figure 2.32 shows the structure of the virtual networking when the wrong NIC is selected and when the correct NIC is selected.

Figure 2.32 The virtual switch used by the Service Console must be associated with the physical switch that makes up the logical subnet from which the Service Console will be managed.


Should the incorrect PCI address be selected, the result is an inability to reach the ESX Server Web Access page after the installation is complete. The simplest fix for this problem is to unplug the network cable from the current Ethernet port and continue trying the remaining ports until the web page is accessible. The problem with this solution is that it puts a quick end to any type of documented standard that dictates the physical connectivity of the ESX Server hosts in a virtual environment.

So what then is the better fix? Is a reinstallation in order? If you like installations, go for it, but there is something much better. A quick visit to the command line and this problem is solved:

Sometimes It's All About the Case

Remember that ESX Server holds its roots in Linux and therefore any type of command-line management or configuration will always be case sensitive.

1. Log in to the console of the ESX Server using the root user account.

2. Review the PCI addresses of the physical NICs in the server by typing the following command:

esxcfg-nics -l

3. The results, as shown in Figure 2.33, will list identifying information for each NIC. Note the PCI addresses and names of each adapter.

Figure 2.33 The esxcfg-nics command provides detailed information about each adapter in an ESX Server host.


4. Review the existing Service Console configuration by typing the following command:

esxcfg-vswitch -l

5. The results, as shown in Figure 2.34, will display the current configuration of the Service Console port association.

Figure 2.34 The esxcfg-vswitch command provides information about the current Service Console configuration.


6. To change the NIC association, the existing NIC must be unlinked by typing the following command:

esxcfg-vswitch -U vmnic# vSwitch#

In this example the appropriate command would be:

esxcfg-vswitch -U vmnic0 vSwitch0

7. Use the following command to associate a new NIC with the vSwitch0 used by the Service Console:

esxcfg-vswitch -L vmnic# vSwitch#

If still unsure of the correct NIC, try each NIC listed in the output from step 2. For this example, to associate vmnic1 with a PCI address of 08:07:00, the appropriate command would be:

esxcfg-vswitch -L vmnic1 vSwitch0

8. Repeat steps 6 and 7 until a successful connection is made to the Web Access page of the ESX Server host.
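After each relink, you can confirm which uplink vSwitch0 is currently using by rerunning the listing command from step 4 before testing web access again:

esxcfg-vswitch -l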

Service Console Memory

Adjusting the amount of memory given to the Service Console is not mandatory, but it is strongly recommended if you have to install third-party applications into the console operating system, since those applications will consume memory available to the Service Console. As noted earlier, the Service Console is granted only 272MB of RAM by default, as shown in Figure 2.35, with a hard-coded maximum of 800MB.

Figure 2.35 The Service Console is allocated 272 MB of RAM by default.


The difference of 528MB is, and should be, negligible relative to the total amount of memory in an ESX Server host. An ESX Server host in a production network would rarely have less than 8GB of memory, and even that is the low end. Giving an extra 528MB of memory to the Service Console therefore does not place a significant restriction on the number of virtual machines a host is capable of running for lack of available memory.

Perform the following steps to increase the amount of memory allocated to the Service Console:

1. Use the VI Client to connect to an ESX Server host or VirtualCenter Server installation.

2. Select the appropriate host from the inventory tree on the left and then select the Configuration tab from the details pane on the right.

3. Select Memory from the Hardware menu.

4. Click the Properties link.

5. As shown in Figure 2.36, enter the amount of memory to be allocated to the Service Console in the text box and then click the OK button. The value entered must be between 256 and 800.

Figure 2.36 The amount of memory allocated to the Service Console can be increased to a maximum of 800 MB.


6. Reboot the ESX Server host. As shown in Figure 2.37, the Configuration tab now reflects the current memory allocated and the new amount of memory to be allocated after a reboot.

Figure 2.37 Altering the amount of memory allocated to the Service Console requires a reboot of the ESX Server host.

Time Synchronization

Time synchronization in ESX Server is an important configuration because the ramifications of incorrect time run deep. Time synchronization issues can affect things like performance charting, SSH key expirations, NFS access, backup jobs, authentication, and more. After the installation of ESX Server (or in a kickstart script), the host should be configured to perform time synchronization with a reliable time source. This source could be another server on your network or an Internet time source. For the sake of managing time synchronization, it is easiest to synchronize all your servers against one reliable internal time server and then synchronize the internal time server with a reliable Internet time server.

Configuring time synchronization for an ESX Server requires several steps, including Service Console firewall configuration and edits to several configuration files.

Perform the following steps to enable the NTP Client in the Service Console firewall:

1. Use the VI Client to connect directly to the ESX Server host or to a VirtualCenter installation.

2. Select the hostname from the inventory tree on the left and then click the Configuration tab in the details pane on the right.

3. Select Security Profile from the Software menu.

4. As shown in Figure 2.38, enable the NTP Client option in the Firewall Properties dialog box.

5. Alternatively, the NTP Client can be enabled using the following command:

esxcfg-firewall -e ntpClient

Type the following command to apply the changes made to the Service Console firewall:

service mgmt-vmware restart
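To confirm the change, you can also query the firewall for the status of the service:

esxcfg-firewall -q ntpClient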

Perform the following steps to configure the ntp.conf and step-tickers files for NTP time synchronization on an ESX Server host:

1. Log in to a console or SSH session with root privileges. If SSH has not been enabled for the host, log in with a standard user account and use the su - command to elevate to the root user privileges and environment.

Figure 2.38 The NTP Client can be enabled through the Security Profile of an ESX Server host configuration.


2. Create a copy of the ntp.conf file by typing the following command:

cp /etc/ntp.conf /etc/old.ntpconf

3. Type the following command to use the nano editor to open the ntp.conf file:

nano -w /etc/ntp.conf

4. Replace the following line:

restrict default ignore 

with this line:

restrict default kod nomodify notrap noquery nopeer

5. Uncomment the following line:

#restrict mytrustedtimeserverip mask 255.255.255.255 nomodify notrap noquery

Edit the line to include the IP address of the new time server. For example, if the time server's IP address is 172.30.0.111, the line would read:

restrict 172.30.0.111 mask 255.255.255.255 nomodify notrap noquery

6. Uncomment the following line:

#server mytrustedtimeserverip

Edit the line to include the IP address of the new time server. For example, if the time server's IP address is 172.30.0.111, the line would read:

server 172.30.0.111

Save the file by pressing Ctrl+X and then pressing Y to confirm the save.

7. Create a backup of the step-tickers file by typing the following command:

cp /etc/ntp/step-tickers /etc/ntp/backup.step-tickers

8. Type the following command to open the step-tickers file:

nano -w /etc/ntp/step-tickers

9. Type the IP address of the new time server. For example, if the time server's IP address is 172.30.0.111, the single entry in the step-tickers would read:

172.30.0.111

Save the file by pressing Ctrl+X and then pressing Y to confirm the save.
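The configuration files by themselves don't start time synchronization. Consistent with the kickstart example earlier in this chapter, start the NTP daemon, configure it to start at boot, and then verify that the host can see its time source:

service ntpd restart
chkconfig --level 3 ntpd on
ntpq -p

The ntpq -p output lists the configured peers and shows whether the host is synchronizing against them.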

Windows as a Reliable Time Server

An existing Windows Server can be configured as a reliable time server by performing these steps:

1. Use the Group Policy Object editor to navigate to Administrative Templates→System→Windows Time Service→Time Providers.

2. Enable the Enable Windows NTP Server Group Policy option.

3. Navigate to Administrative Templates→System→Windows Time Service.

4. Double-click on the Global Configuration Settings option and select the Enabled radio button.

5. Set the AnnounceFlags option to 4.

6. Click the OK button.

Installing the Virtual Infrastructure Client

The VI Client is a Windows-only application that allows for connecting directly to an ESX Server host or to a VirtualCenter installation. The tool is the same in both cases; the difference is that connecting directly to an ESX Server requires authentication with a user account that exists in the Service Console, while connecting to a VirtualCenter installation relies on Windows users for authentication. The VI Client can be installed as part of a VirtualCenter installation or from the VirtualCenter installation media. However, the easiest installation method is to connect to the Web Access page of an ESX Server host or VirtualCenter server and install the application right from the web page.

Perform the following steps to install the VI Client from an ESX Server Web Access home page:

1. Open an Internet browser (Internet Explorer or Firefox).

2. Type in the IP address or fully qualified domain name of the ESX Server host from which the VI Client should be installed.

3. From the ESX Server host or VirtualCenter home page, click the link labeled Download the Virtual Infrastructure Client.

4. The application can be saved to the local system by clicking the Save button, or if the remote computer is trusted, it can be run directly from the remote computer by clicking the Run button.

5. Click the Run button in the Security Warning box that identifies an unverified publisher, as shown in Figure 2.39.

Figure 2.39 The VI Client might issue a warning about an unverified publisher.


6. Click the Next button on the welcome page of the Virtual Infrastructure Client installation wizard.

7. Click the radio button labeled I Accept the Terms in the License Agreement and then click the Next button.

8. Specify a username and organization name and then click the Next button.

9. Configure the destination folder and then click the Next button.

10. Click the Install button to begin the installation.

11. Click the Finish button to complete the installation.

No Bits for 64 Bits

As of the writing of this book, the latest VI Client (version 2.5) could not be installed on 64-bit operating systems.

The Bottom Line

Understand ESX Server compatibility requirements. ESX Server has tight restrictions with regard to supported hardware. VMware is the only company that provides hardware drivers for the VMware-supported hardware. The compatibility lists provided by VMware are living documents that will continue to change as new hardware is approved.

Master It You want to reconfigure an existing physical server as an ESX Server host.

Plan an ESX Server deployment. A great deal of detailed planning and projecting is required to deploy a scalable virtual infrastructure.

Master It Your company wants to achieve the greatest ROI while maintaining high performance and availability levels. You need to produce a report that details the virtual infrastructure hardware specifications and costs.

Install ESX Server. ESX Server is a fairly straightforward installation process with only one or two details to pay close attention to.

Master It You need to reinstall ESX Server and want to be sure that inadvertent data loss cannot occur. The ESX Server will boot from local disks.

Perform postinstallation configuration. Once the installation of ESX Server is complete the configuration can be tweaked to meet the needs of the organization.

Master It After installing ESX Server, the web-based management page is returning a “page not found” error.

Master It Your department heads have defined a company policy mandating the installation of antivirus software into the Service Console. Additional software might be installed at a later date.

Install the Virtual Infrastructure Client (VI Client). The Virtual Infrastructure Client is a flexible management tool that allows management of an ESX Server host directly or by connecting to a VirtualCenter installation.

Master It You want to manage the ESX Server hosts from your administrative workstation.
