Chapter 13 Configuring and Managing ESXi

ESXi is a revolutionary advancement to the architecture of ESX. In this chapter, we'll explain the new ESXi architecture and how to configure and manage it. In this chapter, you will learn to:

♦ Understand the architecture of ESXi

♦ Deploy ESXi Installable

♦ Deploy ESXi Embedded

♦ Manage ESXi

Understanding ESXi Architecture

In earlier chapters of this book, we introduced you to the architecture of the ESX Server product designed by VMware. As a review, remember that when you install ESX Server 3.5 you are installing two components, VMkernel and Service Console, that are used for virtualization and management, respectively. The VMkernel is the hypervisor that provides resource allocation to virtual machines, and the Service Console is the Linux-based operating system that manages the VMkernel and the virtual machines it services.

ESXi presents a revolutionary alteration to the architecture of ESX. How? The ESXi product is a hypervisor that no longer relies on the Service Console operating system. In fact, the Service Console operating system that ESX Server 3.5 relied on is completely removed in ESXi. Figure 13.1 shows a comparison of the two architectures.

Figure 13.1 The elimination of the Service Console in ESXi presents a dramatic change in the VMware-based virtualization architecture.


ESXi is a hypervisor-only deployment in the form of the VMkernel, a 32 MB footprint containing all the logic needed to perform its primary function of managing virtual machines' access to physical resources. Because only a 32 MB footprint is dedicated to providing resource management, the system can be deployed with less concern for local security. In other versions of VMware's hypervisor, including ESX 3.5, the Service Console is the one element of the system that is vulnerable to security issues; thus it needs security patches and updates from the manufacturer. Invariably, when an organization shifts to a new product, especially one as profound as VI3, there will be hesitation because of the unknown risks and security vulnerabilities. However, ESXi arms enterprise architects with a tool that minimizes the installation architecture, thus making the shift to VI3 an easier sell to those responsible for ensuring the security and safety of a company's information systems.

The Future of Virtualization

I am hoping that you have picked up on my excitement about ESXi. I am excited that ESXi will significantly reduce the security profiling to be performed in all my future virtualization consultations and endeavors. However, I am even more excited at the thought of ESXi as the beginning of a revolution. I see the hypervisor becoming a commodity in IT space as many companies try to bring hypervisors to market. As this commoditization takes place, it will be the tools around the hypervisor that will become the differentiating factor. And in today's market, despite the fact that other companies may market the release of a hypervisor, unquestionably the company making and breaking all the rules in the virtualization marketplace is and will continue to be VMware.

ESXi will be available in two different formats:

♦ ESXi Installable

♦ ESXi Embedded

With such a small footprint, ESXi is easy to install. In fact, with ESXi Embedded, you can order server systems preconfigured with ESXi. This enables you to receive, rack, power on, and configure the system within minutes. The ESXi Installable will require slightly more effort, but in both cases you will save a lot of time deploying new hardware into the infrastructure. In fact, once the server is racked, cabled, and loaded, you can break the deployment into four easy steps:

1. Power on the server and boot to the thin virtualization of ESXi.

2. Reconfigure the root password.

3. (If necessary) Configure a static IP address, subnet mask, default gateway, and DNS server.

4. Add the new hypervisor into VirtualCenter and add virtual switches.

If the new system running ESXi is added to an existing DRS cluster inside of VirtualCenter, it will automatically become a target hypervisor for the workload distribution of the cluster. In effect, this concept of building thin virtualization directly into the hardware platform so it is accessible right out of the box is creating a plug-and-play virtual infrastructure where hardware can easily be added and removed as required.

Regardless of the format deployed, the feature sets will be the same. Although ESXi can be installed and managed as its own server at a less expensive licensing cost, it will lack the enterprise functionality mentioned throughout the book — features like VMotion, High Availability (HA), and Distributed Resource Scheduler (DRS). At the time of this writing, in looking at the VMware product line including VI3, you will find the following:

♦ ESXi includes:

♦ Hypervisor functionality (VMkernel)

♦ Virtual Machine File System (VMFS)

♦ Virtual SMP

♦ VI Foundation includes:

♦ Hypervisor functionality (VMkernel) with ESX 3.5 or 3i

♦ Virtual Machine File System (VMFS)

♦ Virtual SMP

♦ VirtualCenter Agent

♦ VMware Update Manager

♦ VMware Consolidated Backup

♦ VI Standard includes:

♦ Hypervisor functionality (VMkernel) with ESX 3.5 or 3i

♦ Virtual Machine File System (VMFS)

♦ Virtual SMP

♦ VirtualCenter Agent

♦ VMware Update Manager

♦ VMware Consolidated Backup (VCB)

♦ VMware High Availability (HA)

♦ VI Enterprise includes:

♦ Hypervisor functionality (VMkernel) with ESX 3.5 or 3i

♦ Virtual Machine File System (VMFS)

♦ Virtual SMP

♦ VirtualCenter Agent

♦ VMware Update Manager

♦ VMware Consolidated Backup (VCB)

♦ VMware High Availability (HA)

♦ VMotion

♦ Storage VMotion

♦ Distributed Resource Scheduler (DRS)

Real World Scenario

ESXi and the VI Products

While any of the license versions will support ESXi, VirtualCenter is a mandatory component for the implementation of VMotion, DRS, and HA. You may find documentation that states ESXi does not support these features. That documentation is true only in the situation where ESXi is deployed without the VirtualCenter component. As noted in Chapter 2, there is a cost for the VirtualCenter license as well. I recommend including either the Gold or Platinum support plans from VMware.

In addition to the necessary processor and memory hardware, VMware suggests the following minimum hardware requirements to install and configure ESXi:

♦ At least one Broadcom 570x or Intel Pro/1000 Ethernet adapter

♦ A compatible SCSI adapter, fibre channel adapter, iSCSI host bus adapter, or internal RAID controller

♦ Access to a local disk or shared storage for virtual machines

These are, of course, minimum requirements, and much can be done to enhance the performance of the ESXi host, including:

Increasing Memory Greater amounts of memory provide for larger capacity for virtualization.

Increasing the Number of Network Adapters The more Gigabit Ethernet adapters available in a server, the more flexible and robust the virtual networking architecture can be.

Adding Multiple Multicore Processors Multicore processors provide enhanced virtualization capability without incurring additional licensing costs.

ESXi and the HCL

As noted several times throughout this book, you should always consult the VMware compatibility guides to identify hardware compatibility for your version of VMware. ESXi is no different. Check the compatibility guides at http://www.vmware.com before buying any components to add to your virtual infrastructure.

Deploying ESXi Installable

The installation of ESXi Installable begins by ensuring that the computer system is configured to boot from the CD-ROM drive. To do this, insert the ESXi Installable installation CD into the drive and power on the system. The installation files can be downloaded from VMware's website at http://www.vmware.com/downloads. The installation files for ESXi are listed separately from ESX Server 3.5. Once the server is powered on and boots from the CD, the VMware VMvisor Boot Menu will display, as shown in Figure 13.2. To make changes to the installation parameters, press the Tab key. The default parameters will show beneath the boot menu.

Figure 13.2 Installing ESXi Installable to local drives requires downloading the appropriate disk image (.iso) from VMware.


Once you accept the license agreement, you will have the opportunity to select the hard drive onto which you wish to install ESXi. The available logical disks will be listed as shown in Figure 13.3. The ESXi Installable requires local hard drives to be available for the installation. The local hard drives can be Serial ATA (SATA), SCSI, or Serial Attached SCSI (SAS) as long as they are connected to a controller that is listed on the ESXi compatibility guide. The size of the hard drives is irrelevant since enterprise deployments of VI3 will most commonly place all virtual machines, templates, and ISOs on a shared storage device. Keep that in mind when you are in the process of identifying hardware specifications for new servers that you intend to use as thin virtualization clients with ESXi Installable. Do not incur the expenses of large disk arrays for the local storage on ESXi hosts. The smallest hard drives available in a RAID1 configuration will provide ample room and redundancy for the installation of ESXi.

Figure 13.3 ESXi can be installed on SATA, SCSI, or SAS drives.


If the disk you select for the installation has existing data, you will receive a warning message about the data being overwritten with the new installation, as shown in Figure 13.4. Always be sure that answering yes to this prompt does not erase any critical data.

Once the installation process begins, it takes only a few minutes to load the thin hypervisor. Upon completion, the server will require a reboot and is configured by default to obtain an IP address via DHCP. Later in this chapter we'll discuss how to configure and manage ESXi.

Figure 13.4 Disks with existing data will be overwritten during the ESXi installation procedure.


Perform the following steps to install ESXi:

1. Insert the ESXi Installable installation CD into the physical CD-ROM drive.

2. Boot the computer from the installation CD.

3. Allow the three-second automatic boot timer to expire to begin the installation with the ThinESX Installer option selected on the VMware VMvisor Boot Menu.

4. The setup process will load the VMware ISO and VMkernel components, as shown in Figure 13.5.

Figure 13.5 The VMware ISO will load the VMkernel components to begin the installation.


5. Once the components are loaded and the Welcome to the VMware ESXi 3.5.0 Installation screen is displayed, as shown in Figure 13.6, press Enter to perform the installation.

6. Press the F11 key to accept the license agreement and to continue the installation.

7. Select the appropriate disk onto which you will install ESXi and press the Enter key to continue.

8. If you receive a warning about existing data, press Enter to continue only after verifying that the data loss will not be of concern.

9. Press the F11 key to install.

Figure 13.6 Installing ESXi on a local disk.

Deploying ESXi Embedded

ESXi Embedded refers to the original equipment manufacturer (OEM) installation of ESXi onto a persistent storage device inside the qualified hardware. This is an exciting option that will save administrators the time of performing any type of installation. The embedded hypervisor truly creates a plug-and-play hardware environment. You can see that major server manufacturers are banking on this idea because their server designs include an internal USB port. Perhaps eventually the ESXi hypervisor will move from a USB flash drive on an internal port to some type of flash memory built right onto the motherboard.

ESXi on Internal USB

At the time of this writing, there were no manufacturers selling ESXi Embedded. However, there were reports of agreements with major vendors like Dell and HP that each company would have several products available in Q2 of 2008. In my work with the good folks at Dell, they assured me it was coming soon but that further tests and design work had to be completed to ensure the security of the USB flash device on the internal port. Dell, like other manufacturers, puts all server designs through rigorous tests, including earthquake tests. Until they are confident that the products will withstand these rigorous tests, they have opted not to simply place a USB flash disk inside the system without some type of locking mechanism to ensure its placement. Perhaps a solution will have been devised by the time this book is published.

The installation files for ESXi Embedded are not available for public download; only ESXi Installable can be downloaded. It is suggested that only the OEMs will have access to the files necessary for building the persistent storage devices with the thin hypervisor installed. Our best estimation is that OEMs will be provided with a .dd file, an image file that can be extracted to the storage devices. For those who have been given access to the .dd file, it is possible to use dd.exe or WinImage to extract the installation files to a USB flash drive. The result is a bootable image of ESXi embedded on the device.
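The dd extraction just described can be sketched as follows. This is a hedged illustration: the image file name is hypothetical, and a scratch file stands in for the USB device, since on a real system the target would be a device node such as /dev/sdX, and dd overwrites its target without confirmation.

```shell
# Illustration of extracting a .dd image with dd. The file name
# esxi-image.dd is hypothetical; usb-device.img stands in for the real
# USB device node (e.g., /dev/sdX), which dd would overwrite in place.
printf 'ESXi boot image contents' > esxi-image.dd   # stand-in for the OEM image
dd if=esxi-image.dd of=usb-device.img bs=1M 2>/dev/null
cmp -s esxi-image.dd usb-device.img && echo "byte-for-byte copy complete"
```

Because dd performs a raw, byte-for-byte copy, the result on a real USB flash drive is a bootable device, not merely a file copied onto an existing filesystem.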

When you purchase a system with ESXi Embedded, you only need to rack the server, connect the networking cables, and power on. The ESXi embedded on the persistent storage will obtain an IP address from a DHCP server to provide immediate access via console, VI Client, or VirtualCenter.

The server set to run ESXi Embedded must be configured to boot from the appropriate device. Take, for example, a Dell server with a USB flash drive with ESXi Embedded connected to an internal (or external) USB port. To run the thin hypervisor, the server must be configured to boot from the USB device. Figure 13.7 shows the BIOS of a Dell PowerEdge server.

Figure 13.7 To run the thin hypervisor of ESXi Embedded, the server must be configured to boot from the persistent storage device.


While this is just an example of using ESXi Embedded, the ideas and principles will be the same even when the system manufacturers finally get around to shipping the product.

Dell and ESXi

Based on my conversations and work with Dell in writing this chapter, Dell will add an internal USB connection in the next generation of the Dell PowerEdge server line.

At the time of this writing, ESXi Embedded was still not an option for purchase from the Dell website. Again, according to Dell the release of the product as an embedded feature will not occur until Dell has completed a period of rigorous testing that ensures system functionality and performance in even the harshest of conditions.

Managing ESXi

ESXi can be managed in several ways, including from the local console, VirtualCenter, the VI Client, or a remote command-line interface. VirtualCenter will be the most common choice because it offers the opportunity to manage a mixture of ESXi and ESX Server 3.5 hosts from the same interface.

ESXi Console

Once the ESXi installation is complete or if your new ESXi Embedded server has just arrived, you can use the local console to perform some limited configuration of the host. Each of the following sections details the ESXi management tasks available from the console.

Configure Root Password

By pressing the F2 key, you'll be offered the Customize System screen where configuration takes place. The first option in the list is the Configure Root Password option, shown in Figure 13.8. Pressing the Enter key will open the Configure Root Password box, shown in Figure 13.9.

Figure 13.8 ESXi provides a simple-to-use interface for customizing the installation.


Figure 13.9 Change the root password upon first boot of the ESXi thin hypervisor.


Configure Lockdown Mode

ESXi includes a configuration option that allows administrators to prevent direct access to an ESXi host via the VI Client application under the context of the root user. Direct access to the system using the VI Client is still permissible with nonroot user accounts. If no accounts other than root exist, then the only way to manage the server is remotely from VirtualCenter or the Remote Command-Line Interface (RCLI). Figures 13.10 and 13.11 show the Configure Lockdown Mode option and the enabling of the option.

Figure 13.10 Lockdown mode prevents root user access to the system via the VI Client.


Figure 13.11 Lockdown mode can be enabled and disabled as needed to support server management.


Configure Management Network

Selecting the Configure Management Network option, shown in Figure 13.12, offers a set of options for configuring network communications such as physical NIC assignment, IP address, subnet mask, DNS, and so forth.

Figure 13.12 The network communication parameters of an ESXi host can be altered as part of the post-installation configuration.


The Configure Management Network option provides a submenu for each of the following:

Network Adapters As shown in Figure 13.13, the Network Adapters screen allows you to select which network adapter in the computer should be used for host management. Multiple adapters can be selected for providing redundancy and load balancing to the host management traffic.

Figure 13.13 One or more network adapters can be selected for the host's default management network.


VLAN As shown in Figure 13.14, VLAN configuration is possible on ESXi. VLANs are used to segment off the management traffic when a single physical network is used for multiple data transmission types. The configuration window accepts VLAN IDs in the range of 1 to 4094, or use 4095 for access to all VLANs.

Figure 13.14 VLANs can provide traffic segmentation for the host's management network.


IP Configuration As shown in Figure 13.15, IP Configuration allows for the configuration of a static IP address, a subnet mask, and a default gateway. The default is for the host to obtain an IP address via DHCP. It is always best to provide enterprise servers, especially ESX Server hosts, with a static IP address.

DNS Configuration As shown in Figure 13.16, DNS Configuration allows for the configuration of primary and alternate DNS servers as well as the hostname for the ESXi host. As with ESX Server 3.5, hostname resolution is an important part of the host's functionality. Therefore, a corresponding Host (A) record should be created in DNS with the name of each ESXi host referencing the IP address as assigned in the IP Configuration page.

Figure 13.15 ESXi hosts can be configured with static IP address information or remain set to the default of obtaining an IP address from DHCP.


Figure 13.16 ESXi hosts can be configured with multiple DNS servers and a unique hostname to be referenced by other servers in the VirtualCenter inventory.


Custom DNS Suffixes As shown in Figure 13.17, Custom DNS Suffixes allows for the configuration of DNS suffixes that are appended when a host uses short, unqualified names. Multiple suffixes can be listed by separating each suffix with a space. For example, if the DNS suffixes include vdc.local, learn.vmw, and vdc.vmw, and the host references the unqualified name of Silo108, the suffixes will be appended until a name can be resolved. The first name tried will be silo108.vdc.local, followed by silo108.learn.vmw if no response is returned from the first suffix attempt, followed by silo108.vdc.vmw if no response is returned from either of the first two suffix attempts.
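The suffix search order described above can be sketched with a short shell function. This is illustrative only; suffix_candidates is not a real ESXi utility, it simply prints the fully qualified names a resolver would try, in order.

```shell
# Print the fully qualified names a resolver would try, in order, for an
# unqualified name followed by a list of DNS suffixes.
suffix_candidates() {
  name=$1; shift
  for suffix in "$@"; do
    printf '%s.%s\n' "$name" "$suffix"
  done
}

suffix_candidates silo108 vdc.local learn.vmw vdc.vmw
```

Run against the example above, the function prints silo108.vdc.local, silo108.learn.vmw, and silo108.vdc.vmw, matching the order in which the host attempts resolution.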

Figure 13.17 DNS suffixes allow hosts to reference other systems using short, unqualified names.


ESXi and DHCP

If an ESXi host is connected to a physical network that does not have a DHCP server to deliver an IP address, the host will assign itself the IP address 169.254.0.1 with a 255.255.0.0 subnet mask.
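As a quick sanity check, a generic shell sketch (not an ESXi command) can classify whether an address falls in the 169.254.0.0/16 link-local range implied by that subnet mask:

```shell
# Classify an IPv4 address as link-local (169.254.0.0/16) or not.
classify_ip() {
  case $1 in
    169.254.*) echo "link-local (no DHCP lease obtained)" ;;
    *)         echo "routable" ;;
  esac
}

classify_ip 169.254.0.1
```

Seeing an address beginning with 169.254 on a host is therefore a strong hint that the DHCP exchange failed rather than that a lease was granted.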

Restart Management Network

The option to restart the management network forces a DHCP lease renewal. If the ESXi host is configured to obtain an IP address via DHCP, this option could force a change in the IP address, resulting in loss of access to the host because the DNS entry for the host would retain the old IP address configuration. For this reason alone, it is a good idea, and strongly suggested, that ESX Server hosts be manually configured with static IP address information. Since ESX Servers do not perform a dynamic update of DNS records, administrators will need to carefully manage the Host (A) records of each ESX Server to ensure they are accurate in the event of an IP address change.

Test Management Network

The Test Management Network option in the Customize System menu list is an excellent troubleshooting utility. It can be used to test IP connectivity and name resolution. The option uses the ping utility against the default gateway, the primary DNS server, and the alternate DNS server. In addition, it will attempt to resolve your hostname. Figures 13.18 and 13.19 show the test configuration and the test operation.

Figure 13.18 ESXi has a built-in utility for troubleshooting IP connectivity and name resolution.


Figure 13.19 Using the Test Management Network can be extremely helpful in determining if problems stem from host configuration issues or external configuration issues.


This tool is helpful when you are having problems connecting to an ESXi host because it allows you to rule out internal configuration issues. The configuration of ESXi is minimal to begin with, so this tool helps you separate local configuration issues from external ones. If all tests come back with an OK status, any connectivity problem most likely stems from an external configuration problem, not from the ESXi host on which the test was performed. Such results identify that IP connectivity is good, the DNS servers are responding, and name resolution for the local hostname is working.

Configure Keyboard

As the name suggests, this option allows for the configuration of the keyboard. The options include:

♦ Default

♦ French

♦ German

♦ Japanese

♦ Russian

View Support Information

The View Support Information option provides information about:

♦ Serial number

♦ License serial number

♦ SSL Thumbprint (SHA1)

View System Logs

The View System Logs option provides a look at several logs, including:

♦ Messages (press 1)

♦ Config (press 2)

♦ Management agent (hostd) (press 3)

♦ VirtualCenter agent (vpxa) (press 4)

Once you have selected a particular log to view, press the H key to get help on how to navigate through the logs. As shown in Figure 13.20, the logs (hostd is shown) are plain text and human-readable logs but certainly require some experience in going through them. The value in these logs is in knowing how to get to them and how to navigate them to troubleshoot when problems arise. It is unlikely that you will regularly visit these logs on a voluntary basis.
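When you do need to dig through these logs, standard text tools help. The sketch below filters a hostd-style log for warnings; the entries shown are a simplified, fabricated stand-in, and the actual log format on an ESXi host will differ.

```shell
# Create a small sample in the spirit of a hostd log, then filter it.
cat > hostd-sample.log <<'EOF'
[2008-03-01 10:00:01 'App' info] Connection accepted
[2008-03-01 10:00:05 'App' warning] Task failed to complete
[2008-03-01 10:00:09 'App' info] Connection closed
EOF
grep warning hostd-sample.log    # show only the warning entries
```

Filtering by severity or timestamp this way is usually faster than paging through an entire log when you are chasing a specific failure window.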

Restart Management Agents

This option, as the name suggests, allows for restarting the management agents (hostd) that govern the remote management and control of the ESXi thin hypervisor. Executing a restart of hostd will cause a temporary loss of access to the hypervisor.

Reset Customized Settings

This option reverts all settings back to the defaults that exist just after the installation of ESXi. This includes resetting the IP configuration and the root password and unregistering all virtual machines.

Figure 13.20 Logs on ESXi are accessible and can be reviewed for troubleshooting or maintenance.

VI Client

Once your initial configuration from the console is completed, your management tasks will shift to using the more user-friendly VI Client. The VI Client can be used to connect directly to an ESXi host, as shown in Figure 13.21, but the more common method will be to connect through VirtualCenter (coming up next).

Figure 13.21 The VI Client can be used to connect directly to an ESXi host.


Notice in Figure 13.21 that the host is presented with a configuration issue stating that the ESX Server does not have persistent storage. If a server is set up to run the ESXi Embedded version but still contains local hard drives, remember that the local drives are not used during the installation procedure and remain empty. The server in Figure 13.21 is running ESXi Embedded. In Figure 13.22, the VI Client is connected to a server running ESXi Installable and does not show that error. You can see in the figure that a local storage device is configured. Remember in the previous section, on deploying ESXi Installable, that the installation process required the selection of a hard disk. Therefore, a storage device is already configured.

Figure 13.22 ESXi Installable hosts have storage devices configured as part of the installation process.


VI Client to Host or VI Client to VirtualCenter

While the VI Client will be used predominantly for connecting to VirtualCenter and managing hosts from a centralized interface, it is important to note that connecting through VirtualCenter does not provide the Users & Groups tab that allows for the creation of local users and groups on the ESX host.

VirtualCenter 2.5

As noted in the previous section, VI Client connections to VirtualCenter will be the most common means of managing an ESXi host. All in all, your management of an ESXi host should be no different than the management of an ESX Server 3.5 host. The architectures may differ, but together in the same datacenters and clusters they will act no differently. Figure 13.23 displays the Hosts tab of a datacenter in the VirtualCenter inventory. With the exception of the naming scheme, it is not possible to tell which server is running the ESXi product as opposed to ESX Server 3.5.

One distinct difference that can be seen when managing an ESXi host is the existence of the Health Status option from the Hardware menu on the Configuration tab. Figure 13.24 shows the hardware monitoring data that is discovered as part of the health status of an ESXi host.

Once added to a VirtualCenter inventory, an ESXi host is capable of all the same enterprise features of VMotion, DRS, and HA. These thin hypervisor hosts are subject to the same feature constraints as any other host; networking requirements, storage requirements, and so forth must all be met to support these features.

Figure 13.23 ESXi hosts are managed right alongside the ESX Server 3.5 hosts in VirtualCenter.


Figure 13.24 ESXi hosts have an additional menu item for monitoring the hardware health status.

Remote Command-Line Interface (RCLI)

With no console operating system to connect to, ESXi would seem to be limited in its management capability. However, for those environments with many servers whose administrators refuse to perform tasks repeatedly through the VI Client, VMware now provides a remote command-line interface for host management. This tool is available in two formats:

♦ Remote CLI installable package for installation on Windows or Linux

♦ Remote CLI virtual machine appliance

Both tools can be downloaded from the VMware website. The Remote CLIs support a long list of commands for managing ESX Server hosts. The tools are based on the VMware Infrastructure (VI) Perl Toolkit, which relies on Perl and a few other libraries.

CLI and Scripting

Before we go any further with the command-line tools and scripting, I want to be sure that you are aware of the various security concerns and implications. All of these commands require the submission of a username and password. Depending on how you implement a command, the password might be presented on the screen in clear text or stored in a file in clear text. As you look at configuring ESX Server hosts using scripts and command-line tools, ensure that you have adopted the appropriate security methods to prevent improper uses, such as the discovery of passwords to high-level accounts. For example, if you create configuration files, limit the permissions (even read permissions) to ensure that unauthorized users cannot read the files and thereby discover passwords to elevated accounts.
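One way to follow that advice is to keep credentials in a configuration file readable only by its owner. The sketch below uses a scratch file; the VI_SERVER/VI_USERNAME/VI_PASSWORD variable names are assumptions based on VI Perl Toolkit conventions, and the exact file location and syntax may vary with your RCLI installation.

```shell
# Store RCLI connection settings in a file and lock down its permissions,
# since the password sits in the file in clear text. The variable names
# are assumptions; the file itself is a scratch file for illustration.
cfg=./visdkrc-example
cat > "$cfg" <<'EOF'
VI_SERVER = silo107.vdc.local
VI_USERNAME = root
VI_PASSWORD = Password1
EOF
chmod 600 "$cfg"      # owner read/write only; group and world get nothing
ls -l "$cfg"
```

A command could then reference the file with, for example, `vicfg-nas --config ./visdkrc-example -l`, keeping the password off the command line and out of your shell history.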

In addition to these security concerns, let me address another issue for those who might be new to the idea of scripting. You do not have to do this. While it is certainly true that scripting can save time, keep in mind that it is only true for those who have achieved a high level of proficiency in the scripting methods and only in those environments with a significant number of hosts to manage. The remaining portions of this chapter will most likely come across as a time-consuming effort for those unfamiliar with scripting technologies. So maybe you are asking yourself, “When should I use scripting?” The answer varies, but you can apply some common sense to the situation by considering the following:

♦ What would be the length of your learning curve to perform scripting versus just performing the task through the VI Client? If you only have a handful of hosts, it might be quicker to use the VI Client than it would be to figure out how to script from the ground up.

♦ How often will you be performing this task? If you have ten hosts with six network adapters that are all used in virtual switches and there is no intention of adding more adapters and configuring more virtual switches, then creating a script to add virtual switches to the ten hosts would be a waste of time. On top of that, if you are only adding one or two new ESX Server hosts per year, it might be less administratively cumbersome to manually configure each new server with the necessary virtual switches. On the other hand, if you were adding one or two new ESX Server hosts per week, then generating a script to create virtual switches might make the learning curve worth it.

In conclusion, I know that many of you who are getting into virtualization management and have long been Windows administrators may not be experts in the realm of scripting, especially in Perl. So understand that scripting is an option that is beneficial in some, but not necessarily all, cases. It is up to you to decide whether to learn to script or whether to use the graphical tools, even if they do incur extra time. VMware has done an excellent job in structuring the VI3 suite of products around the idea that scripting is an option, not a mandate.

For each of the remote CLI commands, the following options are available as part of the command execution:

--config Specifies the location of the configuration file to be used, which must be a location accessible from the current directory. The equivalent variable used in a configuration file is VI_CONFIG.

--password Specifies the password for use in combination with the --username parameter. If the username and password are not specified in the execution string, you will be prompted for them. The equivalent variable used in a configuration file is VI_PASSWORD.

--portnumber Used to specify the port used to connect to the ESX Server host. The default port is 443. The equivalent variable used in a configuration file is VI_PORTNUMBER.

--protocol Used to specify the protocol used to connect to the ESX Server host. The default protocol is HTTPS. The equivalent variable used in a configuration file is VI_PROTOCOL.

--server Used to identify the server against which the command should be run. The default is the localhost. The equivalent variable used in a configuration file is VI_SERVER.

--servicepath Used to identify the service path to connect to the ESX Server host. The default is /sdk/webService. The equivalent variable used in a configuration file is VI_SERVICEPATH.

--sessionfile Used to reference a saved session file. The equivalent variable used in a configuration file is VI_SESSIONFILE.

--url Used to connect to the VI SDK specified in the URL. The equivalent variable used in a configuration file is VI_URL.

--username Used to specify the username for the authentication context. If the username and password are not specified, you will be prompted for them. The equivalent variable used in a configuration file is VI_USERNAME.

--verbose Used to provide more detail in the debugging information. The equivalent variable used in a configuration file is VI_VERBOSE.

--version Used to display version information.
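Taken together, these connection options form a common prefix for nearly every remote CLI invocation. The following sketch assembles that prefix once so it can be reused across commands; the host name, username, and port are hypothetical values, and the command is echoed rather than executed so no live host is required:

```shell
#!/bin/sh
# Hypothetical connection settings -- substitute your own host and account.
VI_SERVER=silo101.vdc.local
VI_USERNAME=root
VI_PORTNUMBER=443

# Build the shared connection arguments once; the password is deliberately
# omitted so the remote CLI tool prompts for it at run time.
CONN="--server $VI_SERVER --username $VI_USERNAME --portnumber $VI_PORTNUMBER"

# Echo the full command that would list the host's physical NICs.
echo "vicfg-nics $CONN --list"
```

The same $CONN string can then be prepended to any of the commands described in the list that follows.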

Using --help

For any of the commands listed below, you can type the command followed by --help to get a look at the parameters that can be passed in.

Here is a list of commands with explanations and samples:

resxtop This command provides real-time monitoring of ESX CPU, memory, disk, and network adapter utilization. To run resxtop on an ESX host with an IP address of 172.30.0.105, use the following command:

resxtop --server 172.30.0.105 --username root

Once the tool is running, use the C, M, D, and N keys to switch between CPU, memory, disk, and network counters, respectively. Figures 13.25 and 13.26 show the resxtop output for CPU and memory.

Figure 13.25 The resxtop Remote CLI command provides CPU utilization data.


svmotion This command performs the storage migration process of moving the disk files of a virtual machine to a different LUN.

vicfg-advcfg This option is not recommended for customer use. This command is typically used only under the guidance of VMware technical support.

Figure 13.26 Pressing the M key makes resxtop display data regarding memory utilization.


vicfg-cfgbackup This tool allows for the backup and restore of the configuration data of an ESXi host.

vicfg-dumppart This tool allows for querying, setting, and scanning the diagnostic data "dumps" of an ESX Server host.

vicfg-mpath This tool allows for the configuration of multipathing settings for fibre channel or iSCSI LUNs.

vicfg-nas This tool provides settings and parameters for managing NFS access for your ESX Server host. For example, the following syntax adds a NAS datastore named NFSDS to a server named silo107.vdc.local; because no password is supplied, you will be prompted for one. The NFS server is named nfs1.vdc.local and shares a directory named /iso:

vicfg-nas --server silo107.vdc.local --username root -a -o nfs1.vdc.local -s /iso NFSDS

vicfg-nics This tool provides management of physical network adapters. Figure 13.27 shows a simple listing of all network adapters on a server with an IP address of 172.30.0.105, produced using the following syntax:

vicfg-nics --server 172.30.0.105 --username root --list

Figure 13.27 The vicfg-nics tool can identify details about network adapters.


vicfg-ntp This command allows for the configuration of NTP servers for an ESXi host.

vicfg-rescan This command executes a remote HBA rescan.

vicfg-route This command allows for setting the IP address of the default gateway for the VMkernel.

vicfg-snmp This command allows for the configuration of SNMP for ESX Server hosts.

vicfg-syslog This command allows for the specification of the remote syslog server for an ESX Server host.

vicfg-vmhbadevs This command provides information regarding the LUNs available to an ESX Server host.

vicfg-vmknic This command allows for the configuration of VMkernel network adapters.

vicfg-vswitch This command allows for adding, removing, and modifying virtual switches and virtual switch properties. Review the following examples. Keep in mind that the parameters of --server and --username (and then a password) would be required to run the commands against a remote host.

To add a new virtual switch named vSwitch3:

vicfg-vswitch --add vSwitch3

To add a new port group named TestVMs to vSwitch3:

vicfg-vswitch --add-pg="TestVMs" vSwitch3

To list all the virtual switches:

vicfg-vswitch -l
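The examples above can be chained into a complete build-out of a new switch. The sketch below extends the sequence with the --vlan/--pg and --link options to tag the port group and attach a physical uplink; the VLAN ID (105) and uplink name (vmnic1) are illustrative values, and the commands are echoed rather than executed so the sequence can be reviewed before pointing it at a real host:

```shell
#!/bin/sh
# Echo the sequence of vicfg-vswitch calls that would create a switch,
# add a port group, tag it with a VLAN, and attach a physical uplink.
STEPS=0
for args in \
    "--add vSwitch3" \
    "--add-pg=TestVMs vSwitch3" \
    "--vlan 105 --pg TestVMs vSwitch3" \
    "--link vmnic1 vSwitch3"
do
    echo "vicfg-vswitch $args"
    STEPS=$((STEPS + 1))
done
```

Against a remote host, each of these lines would also carry the --server and --username connection parameters described earlier.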

vihostupdate This command allows for the management of software updates on an ESXi host. It provides the necessary parameters for applying software updates and monitoring installed updates.

ESX Updates

While the command-line tool might be attractive to the hard-core command-line junkie types, software updates and patch monitoring are best performed using the new VMware Update Manager that is built into VirtualCenter 2.5.

vifs This command allows for copying, removing, getting, and putting files and directories.

vmkfstools This command allows for the creation and manipulation of file systems (VMFS), virtual disks (VMDK), logical volumes, and physical storage devices.

All of these commands can be combined with the use of a configuration file to simplify the scripting syntax. A configuration file is a file accessible by the system where the remote command line is being generated (Windows host, Linux host, or virtual appliance) that allows for commonly used parameters to be stored in a file and then referenced as part of the CLI command execution. A configuration file for a host named silo101.vdc.local would include information as shown here:

VI_SERVER=silo101.vdc.local

VI_USERNAME=root

VI_PASSWORD=learnvmware

This configuration file is then referenced during the execution of the command, as shown in the following syntax. This command would create an NFS datastore named NFS_ISOs on the host silo101.vdc.local that points to an /iso directory on an NFS server named nfsserver.learn.vmw:

vicfg-nas --config <path to configuration file> -a -o nfsserver.learn.vmw -s /iso NFS_ISOs
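Because the connection details can live in a configuration file, scaling a change out to many hosts reduces to a simple loop that swaps the --server value. The following sketch uses hypothetical host names and echoes each command instead of running it:

```shell
#!/bin/sh
# Hypothetical host list; in practice this might be read from a file.
HOSTS="silo101.vdc.local silo102.vdc.local silo103.vdc.local"

DONE=0
for host in $HOSTS; do
    # --server on the command line overrides the VI_SERVER value stored
    # in the configuration file, so one file can drive every host.
    echo "vicfg-vswitch --server $host --username root --add vSwitch3"
    DONE=$((DONE + 1))
done
echo "$DONE hosts processed"
```

Replacing the echo with the real invocation turns this into a one-pass rollout of a configuration change across the whole farm.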

The Bottom Line

Understand the architecture of ESXi. ESXi presents a radical change not just to the virtualization world but to the system manufacturers that want to be part of virtualization evolution. By removing the local management component, the Service Console, ESXi presents a thin yet highly functional hypervisor on which virtual machines can run. But don't mistake thin for meaning not as feature rich. ESXi supports all the same enterprise features of VMotion, DRS, and HA that have made VMware ESX the number-one choice for the foundation of virtualization platforms around the world.

Master It You manage a datacenter that experiences rapid growth. You need to identify a way to introduce new hardware resources into the virtual infrastructure with minimal administrative effort and maximum security.

Deploy ESXi Installable. ESXi Installable provides existing VI3 licensees with the ability to shift their infrastructures to the new thin hypervisor architecture. The installation files can be downloaded as part of the existing license agreement without any penalty or additional cost. ESXi Installable installs onto local disk drives.

Master It You manage a datacenter with five existing ESX Server 3.5 hosts. You wish to restructure the datacenter to use the thin hypervisor architecture of ESXi.

Deploy ESXi Embedded. ESXi Embedded, like ESXi Installable, is a thin hypervisor architecture with no reliance on a console operating system; however, the hypervisor runs from an embedded storage module on the host. System manufacturers like Dell offer next-generation products that include internal storage functionality for running ESXi Embedded.

Master It You want to construct a virtual infrastructure on physical servers without local storage devices. You want the CPU and memory of each server to be allocated to a VMware cluster for supporting HA and DRS.

Manage ESXi. Managing ESXi can be done using the console of the system, the VI Client connected directly to the server, the VI Client connected to VirtualCenter, or from a command line using the remote CLI tools. The remote CLI tools can be deployed on a Windows host, Linux host, or from within a downloadable virtual appliance. All are available from the VMware website.

Master It You have deployed four servers running ESXi. You need to configure them into a cluster that supports DRS and HA.

Master It You have 30 ESXi hosts to which you need to add a new virtual switch. Your administrative desktop runs Windows XP Professional.
