The VMware vSphere Command-Line Interface provides a set of commands that you use to manage, configure, and automate administrative activities for ESXi and your vSphere virtual environment.

ESXCLI commands
Managing Virtual Machines
vmware-cmd Overview
Connection Options for vmware-cmd
Format for Specifying Virtual Machines
Listing and Registering Virtual Machines
Retrieving Virtual Machine Attributes
Managing Virtual Machine Snapshots with vmware-cmd
Taking Virtual Machine Snapshots
Reverting and Removing Snapshots
Powering Virtual Machines On and Off
Connecting and Disconnecting Virtual Devices
Forcibly Stopping Virtual Machines with ESXCLI
Managing vSphere Networking
Introduction to vSphere Networking
Networking Using vSphere Standard Switches
Networking Using vSphere Distributed Switches
Retrieving Basic Networking Information
Network Troubleshooting
Setting Up vSphere Networking with vSphere Standard Switches
Setting Up Virtual Switches and Associating a Switch with a Network Interface
Retrieving Information About Virtual Switches
Retrieving Information about Virtual Switches with esxcli
List all virtual switches and associated port groups
List the network policy settings (security, traffic shaping, failover) for the virtual switch
Adding and Deleting Virtual Switches
Adding and Deleting Virtual Switches with ESXCLI
Add a virtual switch
Delete a virtual switch
Setting Switch Attributes with esxcli network vswitch standard
Set the MTU for a vSwitch
Set the CDP value for a vSwitch
Retrieving Hardware Information
Checking, Adding, and Removing Port Groups
Managing Port Groups with ESXCLI
Managing Uplinks and Port Groups
Connecting and Disconnecting Uplink Adapters and Port Groups with ESXCLI
Setting the Port Group VLAN ID
Setting the Port Group VLAN ID with ESXCLI
Managing Uplink Adapters
Managing Uplink Adapters with esxcli network nic
Specifying Multiple Uplinks with ESXCLI
Linking and Unlinking Uplink Adapters with ESXCLI
Adding and Modifying VMkernel Network Interfaces
Managing VMkernel Network Interfaces with ESXCLI
To add and configure a VMkernel Network Interface for IPv4
To add and configure a VMkernel Network Interface for IPv6
Setting the DNS Configuration with ESXCLI
Adding and Starting an NTP Server
Managing the IP Gateway
Managing Diagnostic Partitions
Configuring ESXi Syslog Services
esxcli system syslog Examples
Managing Local Core Dumps with ESXCLI
Managing Core Dumps with ESXi Dump Collector
Managing Modules with esxcli system module
Examining LUNs with esxcli storage core
Managing NMP with esxcli storage nmp
Path Claiming with esxcli storage core claiming
Device Management with esxcli storage nmp device
Listing Paths with esxcli storage nmp path
Managing Path Selection Policy Plugins with esxcli storage nmp psp
esxcli esxcli Commands
esxcli fcoe Commands
esxcli graphics Commands
esxcli hardware Commands
esxcli iscsi Commands
esxcli network Commands
esxcli sched Commands
esxcli software Commands
esxcli storage Commands
esxcli system Commands
esxcli vm Commands
esxcli vsan Commands
Managing the ESXi Firewall
Setting Up IPsec
Using IPsec with ESXi
Managing Security Associations
Managing Security Policies

vSphere Command-Line Interface Reference

vicfg Commands
Managing Users with vicfg-user
vicfg-user Command Syntax
Retrieving Information about Virtual Switches with vicfg-vswitch
Check whether vSwitch1 exists
List all virtual switches and associated port groups
Retrieve the current CDP setting for this virtual switch
Adding and Deleting Virtual Switches with vicfg-vswitch
Add a virtual switch
Delete a virtual switch
Setting Switch Attributes with vicfg-vswitch
Set the MTU for a vSwitch
Set the CDP value for a vSwitch
Managing Port Groups with vicfg-vswitch
Connecting and Disconnecting Uplinks and Port Groups with vicfg-vswitch
Setting the Port Group VLAN ID with vicfg-vswitch
Managing Uplink Adapters with vicfg-nics
Linking and Unlinking Uplink Adapters with vicfg-vswitch
Managing VMkernel Network Interfaces with vicfg-vmknic
To add and configure an IPv4 VMkernel Network Interface with vicfg-vmknic
Setting the DNS Configuration with vicfg-dns
Setting Up vSphere Networking with vSphere Distributed Switch
Backing Up Configuration Data
Managing ESXi SNMP Agents
Configuring the SNMP Agent to Send Traps
To configure the SNMP agent for polling
Managing Core Dumps with vicfg-dumppart
About This Book
The vSphere Command-Line Interface Concepts and Examples documentation explains how to use the VMware vSphere® Command-Line Interface (vCLI) and includes command overviews and examples.
Intended Audience
This book is for experienced Windows or Linux system administrators who are familiar with vSphere administration tasks and datacenter operations and know how to use commands in scripts.
VMware Technical Publications Glossary
VMware® Technical Publications provides a glossary of terms that might be unfamiliar to you. For definitions of terms as they are used in VMware technical documentation, go to http://www.vmware.com/support/pubs.
Document Feedback
VMware welcomes your suggestions for improving our documentation. If you have comments, send your feedback to docfeedback@vmware.com.
Related Documentation
The vSphere Command-Line Interface Reference, available in the vSphere Documentation Center, includes reference information for vicfg- commands and ESXCLI commands.
Getting Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling the ESXi Shell, and installing and running vCLI commands. An appendix supplies the ESXCLI namespace and command hierarchies.
Command-Line Management in vSphere 5 for Service Console Users is for customers who currently use the ESX Service Console.
The vSphere SDK for Perl documentation explains how you can use the vSphere SDK for Perl and related utility applications to manage your vSphere environment. The documentation includes an Installation Guide, a Programming Guide, and a reference to the vSphere SDK for Perl Utility Applications.
Background information for the tasks discussed in this manual is available in the vSphere documentation set. The vSphere documentation consists of the combined vCenter Server and ESXi documentation and includes information about managing storage, networking virtual machines, and more.
Technical Support and Education Resources
The following sections describe the technical support resources available to you. To access the current version of this book and other books, go to http://www.vmware.com/support/pubs.
Online and Telephone Support
To use online support to submit technical support requests, view your product and contract information, and register your products, go to http://www.vmware.com/support.
Customers with appropriate support contracts should use telephone support for the fastest response on priority 1 issues. Go to http://www.vmware.com/support/phone_support.
Support Offerings
To find out how VMware support offerings can help meet your business needs, go to http://www.vmware.com/support/services.
VMware Professional Services
VMware Education Services courses offer extensive hands-on labs, case study examples, and course materials designed to be used as on-the-job reference tools. Courses are available onsite, in the classroom, and live online. For onsite pilot programs and implementation best practices, VMware Consulting Services provides offerings to help you assess, plan, build, and manage your virtual environment. To access information about education classes, certification programs, and consulting services, go to http://www.vmware.com/services.
 
vSphere CLI Command Overviews
This chapter introduces the command set, presents supported commands for different versions of vSphere, lists connection options, and discusses vCLI and lockdown mode.
This chapter includes the following topics:
Introduction
The vSphere CLI command set, available since ESX/ESXi 3.5, allows you to perform vSphere configuration tasks using a vCLI package installed on supported platforms, or using vMA. The package consists of several command sets.
ESXCLI commands
Comprehensive set of commands for managing most aspects of vSphere. In vSphere 5.0, this command set has been unified. Eventually, ESXCLI commands will replace other commands in the vCLI set.
vicfg- commands
Set of commands for many aspects of vSphere. In vSphere 5.0, only minor changes were made to this command set. Eventually, these commands will be replaced by ESXCLI commands.
A set of esxcfg- commands that precisely mirrors the vicfg- commands is also included in the vCLI package.
Other commands (vmware-cmd, vifs, vmkfstools)
Commands implemented in Perl that do not have a vicfg- prefix. All vCLI commands are scheduled to be replaced by ESXCLI commands.
You can install the vSphere CLI command set on a supported Linux or Windows system. See Getting Started with vSphere Command-Line Interfaces. You can also deploy the vSphere Management Assistant (vMA) to an ESXi system of your choice. Manage ESXi hosts from the Linux or Windows system or from vMA by running vCLI commands with connection options such as the target host, user, and password or a configuration file. See Connection Options.
Documentation
Getting Started with vSphere Command-Line Interfaces includes information about available CLIs, enabling the ESXi Shell, and installing and running vCLI commands. An appendix supplies the namespace and command hierarchies for ESXCLI.
Reference information for vCLI commands is available on the vCLI documentation page http://www.vmware.com/support/developer/vcli/.
vSphere Command-Line Interface Reference is a reference to vicfg- and related vCLI commands and includes reference information for ESXCLI commands. All reference information is generated from the help.
A reference to esxtop and resxtop is included in the Resource Management documentation.
Command-Line Help
Available command-line help differs for the different commands.
vicfg- commands
Run <vicfg-cmd> --help for an overview of the command's options.
This output corresponds to the information available in the vSphere Command-Line Interface Reference.
ESXCLI commands
Run --help at any level of the hierarchy for information about both commands and namespaces available from that level.
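For example, to see which namespaces and commands are available below the storage namespace (the server name is a placeholder):
esxcli --server <esxi_host> storage --help
esxcli --server <esxi_host> storage filesystem --help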
List of Available Commands
vCLI and ESXCLI Commands lists all ESX/ESXi 4.1 vCLI commands in alphabetical order and the corresponding ESXCLI command if available. No new vicfg- commands were added in vSphere 5.0. Many new namespaces were added to ESXCLI in vSphere 5.0. An additional set of new commands and namespaces was added in vSphere 5.1.
vCLI and ESXCLI Commands 
esxcli (new syntax)
All vCLI 4.1 commands have been renamed. Significant additions have been made to ESXCLI. Many tasks previously performed with a vicfg- command are now performed with ESXCLI.
resxtop (No ESXCLI equivalent)
See Using resxtop for Performance Monitoring. See the vSphere Resource Management documentation for a detailed reference.
svmotion (No ESXCLI equivalent)
Must run against a vCenter Server system.
esxcli system settings advanced
The advanced settings are a set of VMkernel options. These options are typically in place for specific workarounds or debugging.
vicfg-authconfig (No ESXCLI equivalent).
vicfg-cfgbackup (No ESXCLI equivalent). Cannot run against a vCenter Server system.
Sets both the partition (esxcli system coredump partition) and the network (esxcli system coredump network) to use for core dumps. Use this command to set up ESXi Dump Collector.
esxcli system maintenancemode
Sets up IPsec (Internet Protocol Security), which secures IP communications coming from and arriving at ESXi hosts. ESXi hosts support IPsec using IPv6.
vicfg-ntp (No ESXCLI equivalent)
esxcli storage core adapter rescan
Manages the SNMP agent. See Managing ESXi SNMP Agents. Using SNMP in a vSphere environment is discussed in detail in the vSphere Monitoring and Performance documentation.
The vCenter Server and Host Management documentation explains how to set up system logs using the vSphere Web Client.
vicfg-user (No ESXCLI equivalent)
The vSphere Security documentation discusses security implications of user management and custom roles.
esxcli storage filesystem
vifs (No ESXCLI equivalent)
Run esxcli software vib against ESXi 5.0 and later.
Run vihostupdate against ESX/ESXi 4.x.
Run vihostupdate35 against ESX/ESXi 3.5.
You cannot run vihostupdate against ESXi 5.0 and later hosts.
vmkfstools (No ESXCLI equivalent)
vmware-cmd (No ESXCLI equivalent)
Performs virtual machine operations remotely. This includes, for example, creating a snapshot, powering the virtual machine on or off, and getting information about the virtual machine. See Managing Virtual Machines.
Supported Protocols and Platforms for Commands
The resxtop command requires an HTTPS connection. All other commands support HTTP and HTTPS.
Most vCLI commands can run against an ESXi system or against vCenter Server. vCenter Server support means that you can connect to a vCenter Server system and use --vihost to specify the ESXi host to run the command against. The only exception is svmotion, which you can run against vCenter Server systems, but not against ESXi systems.
The following commands must have an ESXi system, not a vCenter Server system target.
You cannot run the vihostupdate command against an ESXi 5.0 or later system.
You cannot run the vihostupdate and vicfg-mpath commands that are in a vCLI 4.0 or later installation against ESX/ESXi 3.5 or vCenter 2.5 systems. Instead, run vihostupdate35 and vicfg-mpath35, included in the vCLI 4.x installation, against those systems. vihostupdate35 is supported for ESXi, but not for ESX.
You cannot run vicfg-syslog --setserver or vicfg-syslog --setport with an ESXi 5.0 or later target.
See the VMware Infrastructure Remote Command-Line Interface Installation and Reference Guide for ESX/ESXi 3.5 Update 2 for a list of supported options. To access that document, select Resources > Documentation from the VMware web site. Find the vSphere documentation set and open the archive. A few vCLI 4.x options are supported against hosts running ESX/ESXi 3.5 Update 2 or later even though they were not supported in RCLI version 3.5.
Run a vCLI 4.x command with --help for information about option support with ESX/ESXi 3.5 Update 2, or see the VMware knowledge base article at http://kb.vmware.com/kb/1008940 for more detail.
Platform Support for vCLI 5.x Commands lists platform support for the different vCLI 5.x commands. These commands have not been tested against VirtualCenter 2.5 Update 2 systems. You can, however, connect to a vCenter Server 4.x system and target ESX/ESXi 3.5 Update 2 hosts.
Use vicfg-mpath35 instead.
Use esxcli software vib instead.
Use vihostupdate35 instead.
Running ESXCLI Commands Against ESXi 4.x Hosts
When you run an ESXCLI vCLI command, you must know the commands supported on the target host.
You specify the target host with --server or set up a vMA target.
Some commands or command outputs are determined by the host type. In addition, VMware partners might develop custom ESXCLI commands that you can run on hosts where the partner VIB has been installed.
Run esxcli --server <target> --help for a list of namespaces supported on the target. You can drill down into the namespaces for additional help.
Important ESXCLI on ESX 4.x hosts does not support targeting a vCenter Server system. Therefore, you cannot run ESXCLI commands with --server pointing to a vCenter Server system even if you install vCLI 5.0.
Commands with an esxcfg Prefix
For many of the vCLI commands, you might have used scripts with corresponding service console commands starting with an esxcfg prefix to manage ESX 3.x hosts. To facilitate easy migration from ESX/ESXi 3.x to later versions of ESXi, the vCLI package includes a copy of each vicfg- command that uses an esxcfg- prefix.
Important VMware recommends that you use ESXCLI or the vCLI commands with the vicfg prefix. Commands with the esxcfg prefix are available mainly for compatibility reasons and might become obsolete.
vCLI esxcfg- commands are equivalent to vicfg- commands, but not completely equivalent to the deprecated esxcfg- service console commands.
Commands with an esxcfg Prefix lists all vCLI commands for which a vCLI command with an esxcfg prefix is available.
Commands with an esxcfg Prefix 
Using ESXCLI Output
Many ESXCLI commands generate output you might want to use in your application. You can run esxcli with the --formatter dispatcher option and send the resulting output as input to a parser.
The --formatter option supports three values (csv, xml, and keyvalue) and is used before any namespace.
esxcli --formatter=csv storage filesystem list
Lists all file system information in CSV format.
You can pipe the output to a file.
esxcli --formatter=keyvalue storage filesystem list > myfilesystemlist.txt
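Similarly, you can request XML output and feed it to an XML parser:
esxcli --formatter=xml storage filesystem list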
Connection Options
vCLI Connection Options lists options that are available for all vCLI commands in alphabetical order. Examples in this book use <conn_options> to indicate the position of connection options.
For example, esxcli <conn_options> storage nfs list means that you could use a configuration file, a session file, or just specify a target server and respond with a user name and password when prompted.
The table includes options for use on the command line and variables for use in configuration files.
Important For connections, vCLI supports only the IPv4 protocol, not the IPv6 protocol. You can, however, configure IPv6 on the target host with several of the networking commands.
See the Getting Started with vSphere Command-Line Interfaces documentation for additional information and examples.
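For example, the following two invocations are equivalent, assuming that the file /root/myconfig sets VI_SERVER, VI_USERNAME, and VI_PASSWORD (all names here are placeholders):
esxcli --server <my_host> --username root --password <pswd> storage filesystem list
esxcli --config /root/myconfig storage filesystem list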
--cacertsfile <certsfile>
Used to specify the CA (Certificate Authority) certificate file, in PEM format, to verify the identity of the vCenter Server system or ESXi system to run the command on. Can be used, for example, to prevent man-in-the-middle attacks.
--config <cfg_file_full_path>
--credstore <credstore>
Name of a credential store file. Defaults to <HOME>/.vmware/credstore/vicredentials.xml on Linux and <APPDATA>/VMware/credstore/vicredentials.xml on Windows. Commands for setting up the credential store are included in the vSphere SDK for Perl, which is installed with vCLI. The vSphere SDK for Perl Programming Guide explains how to manage the credential store.
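For example, after you have added a server's credentials to the credential store, you can omit --username and --password (a sketch; the path and server name are placeholders):
esxcli --server <my_host> --credstore /home/user1/.vmware/credstore/vicredentials.xml storage filesystem list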
--encoding <encoding>
Specifies the encoding to be used. The following encodings are supported.
cp936 (Simplified Chinese)
shiftjis (Japanese)
cp850 (German and French).
You can use --encoding to specify the encoding vCLI should map to when it is run on a foreign language system.
--passthroughauth
If you specify this option, the system uses the Microsoft Windows Security Support Provider Interface (SSPI) for authentication. Trusted users are not prompted for a user name and password. See the Microsoft Web site for a detailed discussion of SSPI.
--passthroughauthpackage <package>
Use this option with --passthroughauth to specify a domain-level authentication protocol to be used by Windows. By default, SSPI uses the Negotiate protocol, which means that client and server try to negotiate a protocol that both support.
If the vCenter Server system to which you are connecting is configured to use a specific protocol, you can specify that protocol using this option.
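For example, a minimal sketch that authenticates the logged-in Windows user against a vCenter Server system without a password prompt (server names are placeholders):
esxcli --server <my_vc> --passthroughauth --vihost <my_esxi> storage filesystem list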
--password <passwd>
Uses the specified password (used with --username) to log in to the server.
If --server specifies a vCenter Server system, the user name and password apply to that server. If you can log in to the vCenter Server system, you need no additional authentication to run commands on the ESXi hosts that server manages.
If --server specifies an ESXi host, the user name and password apply to that server.
Use the empty string ('' on Linux and "" on Windows) to indicate no password.
If you do not specify a user name and password on the command line, the system prompts you and does not echo your input to the screen.
--portnumber <number>
--protocol <HTTP|HTTPS>
--savesessionfile <file>
--server <server>
If --server points to a vCenter Server system, you use the --vihost option to specify the ESXi host on which you want to run the command. A command is supported for vCenter Server if the --vihost option is defined.
--servicepath <path>
--sessionfile <file>
--url <url>
--username <u_name>
If --server specifies a vCenter Server system, the user name and password apply to that server. If you can log in to the vCenter Server system, you need no additional authentication to run commands on the ESXi hosts that server manages.
If --server specifies an ESXi system, the user name and password apply to that system.
If you do not specify a user name and password on the command line, the system prompts you and does not echo your input to the screen.
--vihost <host>
When you run a vSphere CLI command with the --server option pointing to a vCenter Server system, use --vihost to specify the ESXi host to run the command against.
Note: This option is not supported for all commands. If supported, the option is included in the individual command option list.
vCLI and Lockdown Mode
For additional security, an administrator can place one or more hosts managed by a vCenter Server system in lockdown mode. Lockdown mode affects login privileges for the ESXi host.
You can disable lockdown mode as follows.
The root user can always log in to the ESXi host's direct console to disable lockdown mode. If the direct console is disabled, the administrator on the vCenter Server system can disable lockdown mode. If the host is not managed by a vCenter Server system or if the host is unreachable, you must reinstall ESXi.
To make changes to ESXi systems in lockdown mode, you must go through a vCenter Server system that manages the ESXi system as the user vpxuser.
esxcli --server MyVC --vihost MyESXi storage filesystem list
The command prompts for the vCenter Server system user name and password.
You can use the vSphere Web Client or vCLI commands that support the --vihost option. The following commands cannot run against vCenter Server systems and are therefore not available in lockdown mode:
If you have problems running a command on an ESXi host directly (without specifying a vCenter Server target), check whether lockdown mode is enabled on that host.
The vSphere Security documentation discusses lockdown mode in detail.
Managing Hosts
Host management commands can stop and reboot ESXi hosts, back up configuration information, and manage host updates. You can also use a host management command to make your host join an Active Directory domain or exit from a domain.
The chapter includes the following topics:
For information on updating ESXi 5.0 hosts with the esxcli software command and on changing the host acceptance level to match the level of a VIB that you might want to use for an update, see the vSphere Upgrade documentation.
Stopping, Rebooting, and Examining Hosts
You can stop, reboot, and examine hosts with ESXCLI or with vicfg-hostops.
Stopping and Rebooting Hosts with ESXCLI
You can shut down or reboot an ESXi host using the vSphere Web Client or vCLI commands (ESXCLI or vicfg-hostops).
Shutting down a managed host disconnects it from the vCenter Server system, but does not remove the host from the inventory. You can shut down a single host or all hosts in a datacenter or cluster. Specify one of the options listed in Connection Options in place of <conn_options>.
To shut down a host, run esxcli system shutdown poweroff. You must specify the --reason option and supply a reason for the shutdown. A --delay option allows you to specify a delay interval, in seconds.
To reboot a host, run esxcli system shutdown reboot. You must specify the --reason option and supply a reason for the reboot. A --delay option allows you to specify a delay interval, in seconds.
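For example, assuming a 60-second delay and free-form reason strings of your choice:
esxcli <conn_options> system shutdown poweroff --reason "hardware maintenance" --delay 60
esxcli <conn_options> system shutdown reboot --reason "apply updates"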
Stopping, Rebooting, and Examining Hosts with vicfg-hostops
You can shut down or reboot an ESXi host using the vSphere Web Client, or the ESXCLI or vicfg-hostops vCLI commands.
Shutting down a managed host disconnects it from the vCenter Server system, but does not remove the host from the inventory. You can shut down a single host or all hosts in a datacenter or cluster. Specify one of the options listed in Connection Options in place of <conn_options>.
Single host. Run vicfg-hostops with --operation shutdown.
vicfg-hostops <conn_options> --operation shutdown
If the host is not in maintenance mode, use --force to shut down the host and all running virtual machines.
vicfg-hostops <conn_options> --operation shutdown --force
All hosts in datacenter or cluster. To shut down all hosts in a cluster or datacenter, specify --cluster or --datacenter.
vicfg-hostops <conn_options> --operation shutdown --cluster <my_cluster>
vicfg-hostops <conn_options> --operation shutdown --datacenter <my_datacenter>
You can reboot a single host or all hosts in a datacenter or cluster.
Single host. Run vicfg-hostops with --operation reboot.
vicfg-hostops <conn_options> --operation reboot
If the host is not in maintenance mode, use --force to shut down the host and all running virtual machines.
vicfg-hostops <conn_options> --operation reboot --force
All hosts in datacenter or cluster. You can specify --cluster or --datacenter to reboot all hosts in a cluster or datacenter.
vicfg-hostops <conn_options> --operation reboot --cluster <my_cluster>
vicfg-hostops <conn_options> --operation reboot --datacenter <my_datacenter>
You can display information about a host by running vicfg-hostops with --operation info.
vicfg-hostops <conn_options> --operation info
The command returns the host name, manufacturer, model, processor type, CPU cores, memory capacity, and boot time. The command also returns whether vMotion is enabled and whether the host is in maintenance mode.
Entering and Exiting Maintenance Mode
You can instruct your host to enter or exit maintenance mode with ESXCLI or with vicfg-hostops.
Entering and Exiting Maintenance Mode with ESXCLI
You place a host in maintenance mode to service it, for example, to install more memory. A host enters or leaves maintenance mode only as the result of a user request.
esxcli system maintenanceMode set allows you to enable or disable maintenance mode.
When you run the esxcli command, you can specify one of the options listed in Connection Options in place of <conn_options>.
To enter and exit maintenance mode
1. Run esxcli <conn_options> system maintenanceMode set --enable true to enter maintenance mode.
After all virtual machines on the host have been suspended or migrated, the host enters maintenance mode. You cannot deploy or power on a virtual machine on hosts in maintenance mode.
2. Run esxcli <conn_options> system maintenanceMode set --enable false to have the host exit maintenance mode.
If you attempt to exit maintenance mode when the host is no longer in maintenance mode, an error informs you that maintenance mode is already disabled.
Entering and Exiting Maintenance Mode with vicfg-hostops
You place a host in maintenance mode to service it, for example, to install more memory. A host enters or leaves maintenance mode only as the result of a user request.
vicfg-hostops suspends virtual machines by default, or powers off the virtual machine if you run vicfg-hostops --action poweroff.
Note vicfg-hostops does not work with VMware DRS. Virtual machines are always suspended.
The host is in a state of Entering Maintenance Mode until all running virtual machines are suspended or migrated. When a host is entering maintenance mode, you cannot power on virtual machines on it or migrate virtual machines to it.
When you run the vicfg-hostops vCLI command, you can specify one of the options listed in Connection Options in place of <conn_options>.
To enter maintenance mode
1. Run vicfg-hostops <conn_options> --operation enter to enter maintenance mode.
2. Run vicfg-hostops <conn_options> --operation info to check whether the host is in maintenance mode or in the Entering Maintenance Mode state.
After all virtual machines on the host have been suspended or migrated, the host enters maintenance mode. You cannot deploy or power on a virtual machine on hosts in maintenance mode.
You can put all hosts in a cluster or datacenter in maintenance mode by using the --cluster or --datacenter option. Do not use those options unless it is acceptable to suspend all virtual machines in that cluster or datacenter.
You can later run vicfg-hostops <conn_options> --operation exit to exit maintenance mode.
Backing Up Configuration Information with vicfg-cfgbackup
After you configure an ESXi host, you can back up the host configuration data. Always back up your host configuration after you change the configuration or upgrade the ESXi image.
Important The vicfg-cfgbackup command is available only for ESXi hosts. The command is not available through a vCenter Server system connection. No equivalent ESXCLI command is supported.
Backup Tasks
During a configuration backup, the serial number is backed up with the configuration. The number is restored when you restore the configuration. The number is not preserved when you run the Recovery CD (ESXi Embedded) or perform a repair operation (ESXi Installable).
You can back up and restore configuration information as described in Backing Up Configuration Data and Restoring Configuration Data.
When you restore a configuration, you must make sure that all virtual machines on the host are stopped.
Backing Up Configuration Data
You can back up configuration data by running vicfg-cfgbackup with the -s option.
vicfg-cfgbackup <conn_options> -s /tmp/ESXi_181842_backup.txt
For the backup filename, include the number of the build that is running on the host that you are backing up. If you are running vCLI on vMA, the backup file is saved locally on vMA. Backup files can safely be stored locally because virtual appliances are stored in the /vmfs/volumes/<datastore> directory on the host, which is separate from the ESXi image and configuration files.
Restoring Configuration Data
If you have created a backup, you can later restore ESXi configuration data. When you restore configuration data, the number of the build running on the host must be the same as the number of the build that was running when you created the backup file. To override this requirement, include the -f (force) option.
To restore ESXi configuration data
Run vicfg-cfgbackup with the -l flag to load the host configuration from the specified backup file. Specify one of the options listed in Connection Options in place of <conn_options>.
vicfg-cfgbackup <conn_options> -l /tmp/ESXi_181842_backup.tgz
vicfg-cfgbackup <conn_options> -l /tmp/ESXi_181842_backup.tgz -q
To restore the host to factory settings, run vicfg-cfgbackup with the -r option:
vicfg-cfgbackup <conn_options> -r
Using vicfg-cfgbackup from vMA
To back up a host configuration, you can run vicfg-cfgbackup from a vMA instance. The vMA instance can run on the target host (the host that you are backing up or restoring), or on a remote host.
To restore a host configuration, you must run vicfg-cfgbackup from a vMA instance running on a remote host. The host must be in maintenance mode, which means all virtual machines (including vMA) must be suspended on the target host.
For example, a backup operation for two ESXi hosts (host1 and host2) with vMA deployed on both hosts works as follows:
To back up one of the host’s configuration (host1 or host2), run vicfg-cfgbackup from the vMA appliance running on either host1 or host2. Use the --server option to specify the host for which you want backup information. The information is stored on vMA.
To restore the host1 configuration, run vicfg-cfgbackup from the vMA appliance running on host2. Use the --server option to point to host1 to restore the configuration to that host.
To restore the host2 configuration, run vicfg-cfgbackup from the vMA appliance running on host1. Use the --server option to point to host2 to restore the configuration to that host.
Managing VMkernel Modules
The esxcli system module and vicfg-module commands support setting and retrieving VMkernel module options.
vicfg-module and esxcli system module commands are implementations of the deprecated esxcfg-module service console command. The two commands support most of the options esxcfg-module supports. vicfg-module and esxcli system module are commonly used when VMware Technical Support, a Knowledge Base article, or VMware documentation instruct you to do so.
Managing Modules with esxcli system module
Not all VMkernel modules have settable module options. The following example illustrates how to examine and enable a VMkernel module. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To examine, enable, and set VMkernel module options
1. List the parameters of a module:
esxcli <conn_options> system module parameters list --module=module_name
The system returns the name, type, value, and description of the module.
2. List enabled or loaded modules:
esxcli <conn_options> system module list --enabled=true
esxcli <conn_options> system module list --loaded=true
3. Enable a module:
esxcli <conn_options> system module set --module=module_name --enabled=true
4. Set the parameter string for a module:
esxcli system module parameters set --module module_name --parameter-string="parameter_string"
5. Verify the parameter settings:
esxcli <conn_options> system module parameters list --module=module_name
Managing Modules with vicfg-module
Not all VMkernel modules have settable module options. The following example illustrates how to examine and enable a VMkernel module. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To examine and set VMkernel module options
1. Run vicfg-module --list to list the modules on the host.
vicfg-module <conn_options> --list
2. Run vicfg-module --set-options with connection options, the option string to be passed to a module, and the module name. For example:
vicfg-module <conn_options> --set-options 'parameter_name=value' module_name
To retrieve the option string that is configured to be passed to a module when the module is loaded, run vicfg-module --get-options. This string is not necessarily the option string currently in use by the module.
vicfg-module <conn_options> --get-options module_name
Verifies that a module is configured.
Using vicfg-authconfig for Active Directory Configuration
vSphere 5.0 is tightly integrated with Active Directory. Active Directory provides authentication for all local services and for remote access through the vSphere Web Services SDK, vSphere Web Client, PowerCLI, and vSphere CLI. You can configure Active Directory settings with the vSphere Web Client, as discussed in the vCenter Server and Host Management documentation, or use vicfg-authconfig.
vicfg-authconfig allows you to remotely configure Active Directory settings on ESXi hosts. You can list supported and active authentication mechanisms, list the current domain, and join or leave an Active Directory domain. Before you run the command on an ESXi host, you must prepare the host.
To prepare ESXi hosts for Active Directory Integration
The ESXi system’s time zone is always set to UTC.
You can run vicfg-authconfig to add the host to the domain. A user who runs vicfg-authconfig to configure Active Directory settings must have the appropriate Active Directory permissions, and must have administrative privileges on the ESXi host. You can run the command directly against the host or against a vCenter Server system, specifying the host with --vihost.
To set up Active Directory
1. Install the ESXi host, as explained in the vSphere Installation and Setup documentation.
2. Install Windows Active Directory on a Windows Server that runs Windows 2000, Windows 2003, or Windows 2008. See the Microsoft Web site for instructions and best practices.
3. Verify that the ESXi host name can be resolved, for example:
ping <ESX_hostname>
4. Run vicfg-authconfig to add the host to the Active Directory domain.
vicfg-authconfig --server=<ESXi Server IP Address>
--username=<ESXi Server Admin Username>
--password=<ESXi Server Admin User's Password>
--authscheme AD --joindomain <AD Domain Name>
--adusername=<Active Directory Administrator User Name>
--adpassword=<Active Directory Administrator User's Password>
The system prompts for user names and passwords if you do not specify them on the command line. Passwords are not echoed to the screen.
5. Check that a Successfully Joined <Domain Name> message appears.
6. Verify that the host has joined the intended domain:
vicfg-authconfig --server XXX.XXX.XXX.XXX --authscheme AD -c
You are prompted for a user name and password for the ESXi system.
Updating Hosts
When you add custom drivers or patches to a host, the process is called an update.
Update ESXi 4.0 and ESXi 4.1 hosts with the vihostupdate command, as discussed in the vSphere Command-Line Interface Installation and Reference Guide included in the vSphere 4.1 documentation set.
Update ESXi 5.0 hosts with esxcli software vib commands discussed in the vSphere Upgrade documentation included in the vSphere 5.0 documentation set. You cannot run the vihostupdate command against an ESXi 5.0 host.
 
Managing Files
The vSphere CLI includes two commands for file manipulation. vmkfstools allows you to manipulate VMFS (Virtual Machine File System) and virtual disks. vifs supports remote interaction with files on your ESXi host.
Note See Managing Storage for information about storage manipulation commands.
This chapter includes the following topics:
Introduction to Virtual Machine File Management
You can use the vSphere Web Client or vCLI commands to access different types of storage devices that your ESXi host discovers and to deploy datastores on those devices.
Note Datastores are logical containers, analogous to file systems, that hide specifics of each storage device and provide a uniform model for storing virtual machine files. Datastores can be used for storing ISO images, virtual machine templates, and floppy images. The vSphere Web Client uses the term datastore exclusively. This manual uses the terms datastore and VMFS (or NFS) volume to refer to the same logical container on the physical device.
Depending on the type of storage you use, datastores can be backed by the following file system formats:
Virtual Machine File System (VMFS). High-performance file system that is optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment. As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy.
You manage VMFS and RDMs with the vSphere Web Client, or the vmkfstools command.
Network File System (NFS). File system on a NAS storage device. ESXi supports NFS version 3 over TCP/IP. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
You manage NAS storage devices with the esxcli storage nfs command.
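For example, to list the NFS volumes known to a host (a representative invocation):
esxcli <conn_options> storage nfs list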
Virtual Machines Accessing Different Types of Storage
Managing the Virtual Machine File System with vmkfstools
VMFS datastores primarily serve as repositories for virtual machines. You can store multiple virtual machines on the same VMFS volume. Each virtual machine, encapsulated in a set of files, occupies a separate single directory. For the operating system inside the virtual machine, VMFS preserves the internal file system semantics.
In addition, you can use the VMFS datastores to store other files, such as virtual machine templates and ISO images. VMFS supports file and block sizes that enable virtual machines to run data-intensive applications, including databases, ERP, and CRM, in virtual machines. See the vSphere Storage documentation.
You use the vmkfstools vCLI command to create and manipulate virtual disks, file systems, logical volumes, and physical storage devices on an ESXi host. You can use vmkfstools to create and manage a virtual machine file system (VMFS) on a physical partition of a disk and to manipulate files, such as virtual disks, stored on VMFS-3 and NFS. You can also use vmkfstools to set up and manage raw device mappings (RDMs).
Important The vmkfstools vCLI supports most but not all of the options that the vmkfstools ESXi Shell command supports. See VMware Knowledge Base article 1008194.
You cannot run vmkfstools with --server pointing to a vCenter Server system.
The vSphere Storage documentation includes a complete reference to the vmkfstools command that you can use in the ESXi Shell. You can use most of the same options with the vmkfstools vCLI command. Specify one of the connection options listed in Connection Options in place of <conn_options>.
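For example, a minimal sketch that creates a 4 GB virtual disk (the datastore path and size are placeholders):
vmkfstools <conn_options> -c 4g /vmfs/volumes/Storage1/testvm/testdisk.vmdk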
The following options supported by the vmkfstools ESXi Shell command are not supported by the vmkfstools vCLI command.
Upgrading VMFS3 Volumes to VMFS5
vSphere 5.0 supports VMFS5 volumes, which have improved scalability and performance. You can upgrade from VMFS3 to VMFS5 by using the vSphere Web Client, the vmkfstools ESXi Shell command, or the esxcli storage vmfs upgrade command. Pass the volume label or the volume UUID to the ESXCLI command.
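For example, assuming a volume labeled Storage1:
esxcli <conn_options> storage vmfs upgrade --volume-label=Storage1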
Important You cannot upgrade VMFS3 volumes to VMFS5 with the vmkfstools command included in vSphere CLI.
Managing VMFS Volumes
Different commands are available for listing, mounting, and unmounting VMFS volumes and for listing, mounting, and unmounting VMFS snapshot volumes.
esxcli storage filesystem list shows all volumes, mounted and unmounted, that are resolved, that is, that are not snapshot volumes.
esxcli storage filesystem unmount unmounts a currently mounted filesystem. Use this command for snapshot volumes or resolved volumes.
esxcli storage vmfs snapshot commands can be used for listing, mounting, and resignaturing snapshot volumes. See Mounting Datastores with Existing Signatures and Resignaturing VMFS Copies.
Managing Duplicate VMFS Datastores
Each VMFS datastore created in a LUN has a unique UUID that is stored in the file system superblock. When the LUN is replicated or when a snapshot is made, the resulting LUN copy is identical, byte-for-byte, to the original LUN. As a result, if the original LUN contains a VMFS datastore with UUID X, the LUN copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with the same UUID X.
ESXi hosts can determine whether a LUN contains the VMFS datastore copy, and either mount the datastore copy with its original UUID or change the UUID to resignature the datastore.
When a LUN contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature. The vSphere Storage documentation discusses volume resignaturing in detail.
Mounting Datastores with Existing Signatures
You can mount a VMFS datastore copy without changing its signature if the original is not mounted. For example, you can maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you can mount the datastore copy and power on the virtual machines at the secondary site.
When you mount the VMFS datastore, ESXi allows both read and write operations to the datastore that resides on the LUN copy. The LUN copy must be writable. The datastore mounts are persistent and valid across system reboots.
You can mount a datastore with vicfg-volume (see To mount a datastore with vicfg-volume) or with ESXCLI (see To mount a datastore with ESXCLI).
Mounting and Unmounting with ESXCLI
The esxcli storage filesystem commands support mounting and unmounting volumes. You can also specify whether to persist the mounted volumes across reboots by using the --no-persist option.
Use the esxcli storage filesystem command to list mounted volumes, mount new volumes, and unmount a volume. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To mount a datastore with ESXCLI
1. List the volumes:
esxcli <conn_options> storage filesystem list
2. Run esxcli storage filesystem mount with the volume label or volume UUID.
By default, the volume is mounted persistently; use --no-persist to mount non-persistently.
esxcli <conn_options> storage filesystem mount --volume-label=<label>|--volume-uuid=<VMFS-UUID>
This command fails if the original copy is online.
You can later run esxcli storage filesystem unmount to unmount the snapshot volume.
esxcli <conn_options> storage filesystem unmount --volume-label=<label>|--volume-uuid=<VMFS-UUID>
Mounting and Unmounting with vicfg-volume
Use the vicfg-volume command to list mounted volumes, mount new volumes, and unmount a volume. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To mount a datastore with vicfg-volume
1. List the volumes:
vicfg-volume <conn_options> --list
2. Run vicfg-volume --persistent-mount with the VMFS-UUID or label as an argument to mount a volume.
vicfg-volume <conn_options> --persistent-mount <VMFS-UUID|label>
This command fails if the original copy is online.
You can later run vicfg-volume --unmount to unmount the snapshot or replica volume.
vicfg-volume <conn_options> --unmount <VMFS-UUID|label>
The vicfg-volume command supports resignaturing a snapshot volume and mounting and unmounting the volume. You can also make the mounted volume persistent across reboots and query a list of snapshot volumes and original volumes.
Resignaturing VMFS Copies
Use datastore resignaturing to retain the data stored on the VMFS datastore copy. When resignaturing a VMFS copy, the ESXi host assigns a new UUID and a new label to the copy, and mounts the copy as a datastore distinct from the original. Because ESXi prevents you from resignaturing the mounted datastore, unmount the datastore before resignaturing.
The default format of the new label assigned to the datastore is snap-<snapID>-<oldLabel>, where <snapID> is an integer and <oldLabel> is the label of the original datastore.
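For example, a copy of a datastore originally labeled Storage1 might be relabeled snap-2-Storage1.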
When you perform datastore resignaturing, consider the following points:
You can mount the new VMFS datastore without a risk of its UUID conflicting with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.
You can resignature a VMFS copy with ESXCLI (see Resignaturing a VMFS Copy with ESXCLI) or with vicfg-volume (see Resignaturing a VMFS Copy with vicfg-volume).
Resignaturing a VMFS Copy with ESXCLI
The esxcli storage vmfs snapshot commands support resignaturing a snapshot volume. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To resignature a VMFS copy with ESXCLI
1. List the volumes that have been detected as snapshots:
esxcli <conn_options> storage vmfs snapshot list
2. Unmount the copy:
esxcli <conn_options> storage filesystem unmount
3. Run the resignature command.
esxcli <conn_options> storage vmfs snapshot resignature --volume-label=<label>|--volume-uuid=<id>
The command returns to the prompt or signals an error.
After resignaturing, you might have to do the following:
Resignaturing a VMFS Copy with vicfg-volume
You can use vicfg-volume to mount, unmount, and resignature VMFS volumes.
To resignature a VMFS copy with vicfg-volume
Run vicfg-volume with the --resignature option.
vicfg-volume <conn_options> --resignature <VMFS-UUID|label>
The command returns to the prompt or signals an error.
Detaching Devices and Removing a LUN
Before you can remove a LUN, you must detach the corresponding device by using the vSphere Web Client or the esxcli storage core device set command. Detaching a device brings the device offline. Detaching a device does not impact path states. If the LUN is still visible, the path state is not set to dead.
To detach a device and remove a LUN
1. Migrate virtual machines from the device you plan to detach. For information on migrating virtual machines, see the vCenter Server and Host Management documentation.
2. Unmount the datastore deployed on the device. If the unmount fails, ESXCLI returns an error. If you ignore that error, you will get an error in step 4 when you attempt to detach a device with a VMFS partition still in use.
3. Determine whether the device is in use:
esxcli storage core device world list -d <device>
If a VMFS volume is using the device indirectly, the world name includes the string idle0. If a virtual machine uses the device as an RDM, the virtual machine process name is displayed. If any other process is using the raw device, the information is displayed.
4. Detach the device:
esxcli storage core device set -d naa.xxx... --state=off
Detach is persistent across reboots and device unregistration. Any device that is detached remains detached until a manual attach operation. Rescan does not bring persistently detached devices back online. A persistently detached device comes back in the off state.
ESXi maintains the persistent information about the device’s offline state even if the device is unregistered. You can remove the device information by running esxcli storage core device detached remove -d naa.12.
5. Verify that the device is detached:
esxcli storage core device detached list
6. Rescan all adapters:
esxcli <conn_options> storage core adapter rescan
When you have completed storage reconfiguration, you can reattach the storage device, mount the datastore, and restart the virtual machines.
To reattach the device
1. List the devices that are persistently detached:
esxcli storage core device detached list
2. Reattach the device:
esxcli storage core device set -d naa.XXX --state=on
Working with Permanent Device Loss
With earlier ESX/ESXi releases, an APD (All Paths Down) event results when the LUN becomes unavailable. The event is difficult for administrators because they do not have enough information about the state of the LUN to know which corrective action is appropriate.
In ESXi 5.0, the ESXi host can determine whether the cause of an All Paths Down (APD) event is temporary, or whether the cause is permanent device loss (PDL). A PDL status occurs when the storage array returns SCSI sense codes indicating that the LUN is no longer available or that a severe, unrecoverable hardware problem exists with it. ESXi has an improved infrastructure that can speed up operations of upper-layer applications in a device loss scenario.
Important Do not rely on APD/PDL events when, for example, you want to upgrade your hardware. Instead, perform an orderly removal of LUNs from your ESXi server, as described in Detaching Devices and Removing a LUN, perform the operation, and add the LUN back.
To Remove a PDL LUN
How you remove a PDL LUN depends on whether it was in use.
To Reattach a PDL LUN
You cannot bring a device back without removing active users. The ESXi host cannot know whether the device that was added back has changed. ESXi must be able to treat the device similarly to a new device being discovered.
Using vifs to Manipulate Files on Remote ESXi Hosts
In most cases, vmkfstools and other commands are used to manipulate virtual machine files. In some cases, you might have to view and manipulate files on remote ESXi hosts directly.
Caution If you manipulate files directly, your vSphere setup might end up in an inconsistent state. Use the vSphere Web Client or one of the other vCLI commands to manipulate virtual machine configuration files and virtual disks.
The vifs command performs common operations such as copy, remove, get, and put on ESXi files and directories. The command is supported against ESXi hosts but not against vCenter Server systems.
Some similarities between vifs and DOS or UNIX/Linux file system management utilities exist, but there are many differences. For example, vifs does not support wildcard characters or current directories and, as a result, relative pathnames. Use vifs only as documented.
Instead of using the vifs command, you can browse datastore contents and host files by using a Web browser. Connect to the following location:
http://ESX_host_IP_Address/host
http://ESX_host_IP_Address/folder
You can view datacenter and datastore directories from this root URL. For example:
http://<ESXi_addr>/folder?dcPath=ha-datacenter
http://<ESXi_host_name>/folder?dcPath=ha-datacenter
The ESXi host prompts for a user name and password.
The vifs command supports different operations for the following groups of files and directories. Different operations are available for each group, and you specify locations with a different syntax. The behavior differs for vSphere 4.x and vSphere 5.0.
Host configuration files. You must specify the file’s unique name identifier.
The /tmp directory and files in that directory.
Specify temp locations by using the /tmp/dir/subdir syntax.
Datastore prefix style: '[ds_name] relative_path'. For example: '[myStorage1] testvms/VM1/VM1.vmx'.
URL style: /folder/dir/subdir/file?dsName=<name>. For example: /folder/testvms/VM1/VM1.vmx?dsName=myStorage1.
To avoid problems with directory names that use special characters or spaces, enclose the path in quotes for both operating systems.
When you run vifs, you can specify the operation name and argument and one of the standard connection options. Use aliases, symbolic links, or wrapper scripts to simplify the invocation syntax.
Options
vifs command-specific options allow you to retrieve and upload files from the remote host and perform a number of other operations. All vifs options work on datastore files or directories. Some options also work on host files and files in the temp directory. You must also specify connection options.
--copy
-c <source> <target>
Copies a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory.
The --force option replaces existing destination files.
copy src_file_path dst_directory_path [--force]
copy src_file_path dst_file_path [--force]
--dir
Lists the contents of a datastore directory.
dir datastore_directory_path
--force
Forces the operation, for example, to replace an existing destination file or remove a directory that is not empty.
copy src_file_path dst_file_path [--force]
--get
-g <remote_path> <local_path>
Downloads a file from the ESXi host to the machine on which you run vCLI. This operation uses HTTP GET.
get src_dstore_file_path dst_local_file_path
get src_dstore_dir_path dst_local_file_path
--listdc
Lists the datacenter paths available on a vCenter Server system.
--listds
Lists the datastore names on the ESXi system. When multiple data centers are available, use the --dc (-Z) argument to specify the name of the datacenter from which you want to list the datastore.
vifs --listds
--mkdir
Creates a directory in a datastore. This operation fails if the parent directory of dst_datastore_file_path does not exist.
--move
-m <source> <target>
Moves a file in a datastore to another location in a datastore. The <source> must be a remote source path, the <target> a remote target path or directory.
The --force option replaces existing destination files.
move src_file_path dst_directory_path [--force]
move src_file_path dst_file_path [--force]
--put
Uploads a file from the machine on which you run vCLI to the ESXi host. This operation uses HTTP PUT. The option works on datastore files, host files, and temp files.
put src_local_file_path dst_directory_path
--rm
Deletes the specified datastore file.
--rmdir
Deletes a datastore directory. This operation fails if the directory is not empty.
Examples
You can use vifs to interact with the remote ESXi or vCenter Server system in a variety of ways. Specify one of the connection options listed in Connection Options in place of <conn_options>. The examples illustrate use on a Linux system; use double quotes instead of single quotes on a Windows system.
Listing Remote Information
List all data centers on a vCenter Server system with --listdc, using --server to point to the vCenter Server system.
vifs --server <my_vc> --username administrator --password <pswd> --listdc
List all datastores in a datacenter of a vCenter Server system with --listds and the --dc argument.
vifs --server <my_vc> --username administrator --password <pswd> --dc kw-dev --listds
List all datastores on an ESXi host with --listds.
vifs --server <my_ESXi> --username root --password <pswd> --listds
The command lists the names of all datastores on the specified server.
You can use each name that has been returned to refer to datastore paths by using square bracket notation, as follows:
'[my_datastore] dir/subdir/file'
List a directory in a datastore with --dir.
vifs --server <my_ESXi> --username root --password <pswd> --dir '[Storage1]'
vifs --server <my_ESXi> --username root --password <pswd> --dir '[Storage1] WindowsXP'
The command lists the directory content. In this example, the command lists the contents of a virtual machine directory.
Content Listing
_________________
vmware-37.log
vmware-38.log
...
vmware.log
...
winxpPro-sp2.vmdk
winxpPro-sp2.vmx
winxpPro-sp2.vmxf
...
vifs <conn_options> --dir '[osdc-cx700-02]'
The command lists the complete contents of the datastore.
Working with Directories and Files on the Remote Server
Create a directory with --mkdir <remote_dir>.
vifs --server <my_ESXi> --username root --password <pswd> --mkdir '[Storage1] test'
Remove a directory with --rmdir <remote_dir>.
vifs --server <my_ESXi> --username root --password <pswd> --rmdir '[Storage1] test'
Forcibly remove a directory with --rmdir --force <remote_dir>.
vifs --server <my_ESXi> --username root --password <pswd> --rmdir '[Storage1] test2' --force
Update a file on the remote server with --put <local_path> <remote_path>.
vifs --server <my_ESXi> --username root --password <pswd>
--put /tmp/testfile '[Storage1] test/testfile'
Retrieve a file from the remote server with --get <remote_path> <local_path>|<local_dir>. The command overwrites the local file if it exists. If you do not specify a file name, the filename of the remote file is used.
vifs --server <my_ESXi> --username root --password <pswd> --get '[Storage1] test/testfile' /tmp/tfile
vifs --server <my_ESXi> --username root --password <pswd> --get '[Storage1] test/testfile' /tmp
Remove a file with --rm <remote_path>, or forcibly remove it with --rm --force.
vifs --server <my_ESXi> --username root --password <pswd> --rm '[Storage1] test2/testfile'
vifs --server <my_ESXi> --username root --password <pswd> --rm '[Storage1] test2/testfile2' --force
Move a file from one location on the remote server to another location with --move <remote_source_path> <remote_target_path>. If you specify a file name, the file is moved and renamed at the same time.
vifs --server <my_ESXi> --username root --password <pswd> --move '[Storage1] test/tfile' '[Storage1] newfile'
If the target file already exists on the remote server, the command fails unless you use --force.
vifs --server <my_ESXi> --username root --password <pswd> --move '[Storage1] test/tfile2' '[Storage1] test2/tfile' --force
Copy a file to another location on the remote server with --copy <remote_source_path> <remote_target_path>.
vifs --server <my_ESXi> --username root --password <pswd> --copy '[Storage1] test/tfile' '[Storage1] test/tfile2'
If the target file already exists on the remote server, the command fails unless you use --force.
vifs --server <my_ESXi> --username root --password <pswd> --copy '[Storage1] test/tfile' '[Storage1] test/tfile2' --force
Example Scenario
The following example scenario illustrates other uses of vifs. Specify one of the connection options listed in Connection Options in place of <conn_options>.
To manage files and directories on the remote ESXi system
1
Create a directory in the datastore.
vifs <conn_options> --mkdir '[osdc-cx700-03] vcli_test'
You must specify the precise path; there is no concept of a relative path.
2
Place a file that is on the local system into the new directory.
vifs <conn_options> --put /tmp/test_doc '[osdc-cx700-03] vcli_test/test_doc'
3
Move the file into a virtual machine's directory.
vifs <conn_options> --move '[osdc-cx700-03] vcli_test/test_doc'
'[osdc-cx700-03] winxpPro-sp2/test_doc'
A message indicates success or failure.
4
The following example retrieves a log file for analysis.
vifs <conn_options> --get '[osdc-cx700-03] winxpPro-sp2/vmware.log' ~user1/vmware.log
5
Clean up by removing the file and the directory.
vifs <conn_options> --rm '[osdc-cx700-03] vcli_test/test_doc'
vifs <conn_options> --rmdir '[osdc-cx700-03] vcli_test'
 
Managing Storage
A virtual machine uses a virtual disk to store its operating system, program files, and other data associated with its activities. A virtual disk is a large physical file, or a set of files, that can be copied, moved, archived, and backed up.
To store virtual disk files and manipulate the files, a host requires dedicated storage space. ESXi storage is storage space on a variety of physical storage systems, local or networked, that a host uses to store virtual machine disks.
This chapter includes the following topics:
Managing iSCSI Storage discusses iSCSI storage management.
Managing Third-Party Storage Arrays explains how to manage the Pluggable Storage Architecture, including Path Selection Plugin (PSP) and Storage Array Type Plugin (SATP) configuration.
For information on masking and unmasking paths with ESXCLI, see the vSphere Storage documentation.
Introduction to Storage
Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware vSphere to meet different datacenter storage needs. The storage arrays are connected to and shared between groups of servers through storage area networks. This arrangement allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
vSphere Datacenter Physical Topology
How Virtual Machines Access Storage
A virtual disk hides the physical storage layer from the virtual machine’s operating system. Regardless of the type of storage device that your host uses, the virtual disk always appears to the virtual machine as a mounted SCSI device. As a result, you can run operating systems that are not certified for specific storage equipment, such as SAN, in the virtual machine.
When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to a storage device.
Virtual Machines Accessing Different Types of Storage depicts five virtual machines that use different types of storage to illustrate the differences between each type.
Virtual Machines Accessing Different Types of Storage
You can use vCLI commands to manage the virtual machine file system and storage devices.
VMFS. Use vmkfstools to create, modify, and manage VMFS virtual disks and raw device mappings. See Managing the Virtual Machine File System with vmkfstools for an introduction and the vSphere Storage documentation for a detailed reference.
Datastores. Several commands allow you to manage datastores and are useful for multiple protocols.
LUNs. Use esxcli storage core or vicfg-scsidevs commands to display available LUNs and mappings for each VMFS volume to its corresponding partition. See Examining LUNs.
Path management. Use esxcli storage core or vicfg-mpath commands to list information about Fibre Channel or iSCSI LUNs and to change a path’s state. See Managing Paths. Use the ESXCLI command to view and modify path policies. See Managing Path Policies.
Rescan. Use esxcli storage core or vicfg-rescan adapter rescan to perform a rescan operation each time you reconfigure your storage setup. See Scanning Storage Adapters.
Storage devices. Several commands manage only specific storage devices.
NFS storage. Use esxcli storage nfs or vicfg-nas to manage NAS storage devices. See Managing NFS/NAS Datastores.
iSCSI storage. Use esxcli iscsi or vicfg-iscsi to manage both hardware and software iSCSI. See Managing iSCSI Storage.
Datastores
ESXi hosts use storage space on a variety of physical storage systems, including internal and external devices and networked storage. A host can discover storage devices to which it has access and format them as datastores. Each datastore is a special logical container, analogous to a file system on a logical volume, where the host places virtual disk files and other virtual machine files. Datastores hide specifics of each storage product and provide a uniform model for storing virtual machine files.
Depending on the type of storage you use, datastores can be backed by the following file system formats:
Virtual Machine File System (VMFS). High-performance file system optimized for storing virtual machines. Your host can deploy a VMFS datastore on any SCSI-based local or networked storage device, including Fibre Channel and iSCSI SAN equipment.
As an alternative to using the VMFS datastore, your virtual machine can have direct access to raw devices and use a mapping file (RDM) as a proxy. See Managing the Virtual Machine File System with vmkfstools.
Network File System (NFS). File system on a NAS storage device. ESXi supports NFS version 3 over TCP/IP. The host can access a designated NFS volume located on an NFS server, mount the volume, and use it for any storage needs.
Storage Device Naming
Each storage device, or LUN, is identified by several names.
Name. A friendly name that the ESXi host assigns to a device based on the storage type and manufacturer, for example, DGC Fibre Channel Disk. This name is visible in the vSphere Web Client.
Device UID. A universally unique identifier assigned to a device. The type of storage determines the algorithm used to create the identifier. The identifier is persistent across reboots and is the same for all hosts sharing the device. The format is often naa.xxxxxxx or eui.xxxxxxxx.
VML Name. A legacy SCSI device name specific to VMware. Use the device UID instead.
The runtime name of the first path to the device is a path identifier and not a reliable identifier for the device. Runtime names are created by the host, and are not persistent. The runtime name has the format vmhba#:C#:T#:L#. You can view the runtime name using the vSphere Web Client.
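For example, a runtime name of vmhba2:C0:T3:L1 identifies LUN 1 on target 3, reached through channel 0 of adapter vmhba2. The numbers here are purely illustrative, and the same device might receive a different runtime name after a reboot.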
Examining LUNs
A LUN (Logical Unit Number) is an identifier for a disk volume in a storage array target.
Target and Device Representation
In the ESXi context, the term target identifies a single storage unit that a host can access. The terms device and LUN describe a logical volume that represents storage space on a target; both terms refer to a SCSI volume presented to the host from a storage target.
Different storage vendors present their storage systems to ESXi hosts in different ways. Some vendors present a single target with multiple LUNs on it. Other vendors, especially iSCSI vendors, present multiple targets with one LUN each.
Target and LUN Representations
In Target and LUN Representations, three LUNs are available in each configuration. On the left, the host sees one target, but that target has three LUNs that can be used. Each LUN represents an individual storage volume. On the right, the host sees three different targets, each having one LUN.
Examining LUNs with esxcli storage core
Use esxcli storage core to display information about available LUNs on ESXi 5.0. For ESX/ESXi 4.x hosts, use vicfg-scsidevs. For ESX/ESXi 3.5 systems, the corresponding command is vicfg-vmhbadevs.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in Connection Options in place of <conn_options>.
esxcli <conn_options> storage core device list
The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device.
naa.5000c50006ee9cc7
Display Name: Local SEAGATE Disk (naa.5000c50006ee9cc7)
Has Settable Display Name: true
Size: 286102
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.5000c50006ee9cc7
Vendor: SEAGATE
Model: ST3300555SS
Revision: T211
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: true
Is Removable: false
Is SSD: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
VAAI Plugin Name:
Other UIDs: vml.02000000005000c50006ee9cc7535433333030
mpx.vmhba0:C0:T0:L0
   ...
   Attached Filters:
VAAI Status: unsupported
VAAI Plugin Name:
Other UIDs: vml.0005000000766d686261303a303a30
esxcli <conn_options> storage core device list -d mpx.vmhba32:C0:T1:L0
esxcli <conn_options> storage core device list
The command lists the primary UID for each device (naa.xxx or other primary name) and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
esxcli <conn_options> storage filesystem list
esxcli <conn_options> storage core adapter list
The return value includes adapter and UID information.
esxcli <conn_options> storage core path list
Examining LUNs with vicfg-scsidevs
Use vicfg-scsidevs to display information about available LUNs on ESX/ESXi 4.x hosts. For ESX/ESXi 3.5 systems, the corresponding command is vicfg-vmhbadevs.
Important You can run vicfg-scsidevs --query and vicfg-scsidevs --vmfs against ESX/ESXi version 3.5. The other options are supported only against ESX/ESXi version 4.0 and later.
You can run one of the following commands to examine LUNs. Specify one of the connection options listed in Connection Options in place of <conn_options>.
vicfg-scsidevs <conn_options> --list
The command lists device information for all logical devices on this system. The information includes the name (UUID), device type, display name, and multipathing plugin. Specify the --device option to only list information about a specific device. The following example shows output for two devices; the actual listing might include multiple devices and the precise format differs between releases.
mpx.vmhba2:C0:T1:L0
Device Type: cdrom
Size: 0 MB
Display Name: Local HL-DT-ST (mpx.vmhba2:C0:T1:L0)
Plugin: NMP
Console Device: /vmfs/devices/cdrom/mpx.vmhba2:C0:T1:L0
Devfs Path: /vmfs/devices/cdrom/mpx.vmhba2:C0:T1:L0
Vendor: SONY Model: DVD-ROM GDRXX8XX Revis: 3.00
SCSI Level: 5 Is Pseudo: Status:
Is RDM Capable: Is Removable:
Other Names:
vml.000N000000XXXdXXXXXXXXaXXXaXX
VAAI Status: nnnn
 
naa.60060...
Device Type: disk
Size: 614400 MB
Display Name: DGC Fibre Channel Disk (naa.60060...)
...
vicfg-scsidevs <conn_options> --compact-list
The information includes the device ID, device type, size, plugin, and device display name.
vicfg-scsidevs <conn_options> --uids
The command lists the primary UID for each device (naa.xxx or other primary name) and any other UIDs for each UID (VML name). You can specify --device to only list information for a specific device.
vicfg-scsidevs <conn_options> -l -d mpx.vmhba32:C0:T1:L0
vicfg-scsidevs <conn_options> --vmfs
vicfg-scsidevs <conn_options> --hbas
The return value includes the adapter ID, driver ID, adapter UID, PCI, vendor, and model.
vicfg-scsidevs <conn_options> --hba-device-list
Managing Paths
To maintain a constant connection between an ESXi host and its storage, ESXi supports multipathing. With multipathing you can use more than one physical path for transferring data between the ESXi host and the external storage device.
In case of failure of an element in the SAN network, such as an HBA, switch, or cable, the ESXi host can fail over to another physical path. On some devices, multipathing also offers load balancing, which redistributes I/O loads between multiple paths to reduce or eliminate potential bottlenecks.
The storage architecture in vSphere 4.0 and later supports a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). You can manage PSA using ESXCLI commands. See Managing Third-Party Storage Arrays. This section assumes you are using only PSA plugins included in vSphere by default.
Multipathing with Local Storage and FC SANs
In a simple multipathing local storage topology, you can use one ESXi host with two HBAs. The ESXi host connects to a dual-port local storage system through two cables. This configuration ensures fault tolerance if one of the connection elements between the ESXi host and the local storage system fails.
To support path switching with FC SAN, the ESXi host typically has two HBAs available from which the storage array can be reached through one or more switches. Alternatively, the setup can include one HBA and two storage processors so that the HBA can use a different path to reach the disk array.
In FC Multipathing, multiple paths connect each host with the storage device. For example, if HBA1 or the link between HBA1 and the switch fails, HBA2 takes over and provides the connection between the server and the switch. The process of one HBA taking over for another is called HBA failover.
FC Multipathing
If SP1 or the link between SP1 and the switch breaks, SP2 takes over and provides the connection between the switch and the storage device. This process is called SP failover. ESXi multipathing supports HBA and SP failover.
After you have set up your hardware to support multipathing, you can use the vSphere Web Client or vCLI commands to list and manage paths. You can perform the following tasks.
List path information with vicfg-mpath or esxcli storage core path. See Listing Path Information.
Change path state with vicfg-mpath or esxcli storage core path. See Changing the State of a Path.
Important Use ESXCLI for ESXi 5.0. Use vicfg-mpath for ESX/ESXi 4.0 or later. Use vicfg-mpath35 for ESX/ESXi 3.5.
Listing Path Information
You can list path information with ESXCLI or with vicfg-mpath.
Listing Path Information with ESXCLI
You can run esxcli storage core path to display information about Fibre Channel or iSCSI LUNs.
Important Use industry-standard device names, with format eui.xxx or naa.xxx to ensure consistency. Do not use VML LUN names unless device names are not available.
You can display information about paths by running esxcli storage core path. Specify one of the options listed in Connection Options in place of <conn_options>.
esxcli <conn_options> storage core path list
esxcli <conn_options> storage core path list --path <path>
esxcli <conn_options> storage core path list --device <device>
esxcli <conn_options> storage core path stats get
esxcli <conn_options> storage core path stats get --path <path>
esxcli <conn_options> storage core path list -d <naa.xxxxxx>
esxcli <conn_options> storage core adapter list
esxcli <conn_options> storage core adapter rescan
Listing Path Information with vicfg-mpath
You can run vicfg-mpath to list information about Fibre Channel or iSCSI LUNs.
Important Use industry-standard device names, with format eui.xxx or naa.xxx to ensure consistency. Do not use VML LUN names unless device names are not available.
You can display information about paths by running vicfg-mpath with one of the following options. Specify one of the options listed in Connection Options in place of <conn_options>.
vicfg-mpath <conn_options> --list-paths
vicfg-mpath <conn_options> --list-compact
vicfg-mpath <conn_options> --list-map
vicfg-mpath <conn_options> --list -P sas.5001c231c79c4a00-sas.1221000001000000-naa.5000c5000289c61b
vicfg-mpath <conn_options> -l -P vmhba32:C0:T0:L0
The return information includes the runtime name, device, device display name, adapter, adapter identifier, target identifier, plugin, state, transport, and adapter and target transport details.
vicfg-mpath <conn_options> -l -d mpx.vmhba32:C0:T1:L0
vicfg-mpath <conn_options> --list --device naa.60060...
Changing the State of a Path
You can change the state of a path with ESXCLI or with vicfg-mpath.
Changing Path State with ESXCLI
You can temporarily disable paths for maintenance or other reasons, and enable the path when you need it again. You can disable paths with ESXCLI. Specify one of the options listed in Connection Options in place of <conn_options>.
If I/O is active on the path when you change its state, the change operation fails. Reissue the command. At least one I/O operation must occur before the change takes effect.
To disable a path with ESXCLI
1
esxcli <conn_options> storage core path list
The display includes information about each path’s state.
2
esxcli <conn_options> storage core path set --state off --path vmhba32:C0:T1:L0
When you are ready, set the path state to active again.
esxcli <conn_options> storage core path set --state active --path vmhba32:C0:T1:L0
Changing Path State with vicfg-mpath
You can disable paths with vicfg-mpath. Specify one of the options listed in Connection Options in place of <conn_options>.
If I/O is active on the path when you change its state, the change operation fails. Reissue the command. At least one I/O operation must occur before the change takes effect.
To disable a path with vicfg-mpath
1
vicfg-mpath <conn_options> --list-paths
The display includes information about each path’s state.
2
vicfg-mpath <conn_options> --state off --path vmhba32:C0:T1:L0
When you are ready, set the path state to active again.
vicfg-mpath <conn_options> --state active --path vmhba32:C0:T1:L0
Managing Path Policies
For each storage device managed by NMP (not PowerPath), an ESXi host uses a path selection policy. If you have a third-party PSP installed on your host, its policy also appears on the list. The following path policies are supported by default.
Fixed (VMW_PSP_FIXED). The host uses the designated preferred path, if it has been configured. Otherwise, the host selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it through the vSphere Web Client, or by using esxcli storage nmp psp fixed deviceconfig set. See Changing Path Policies.
Most Recently Used (VMW_PSP_MRU). The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices.
Round Robin (VMW_PSP_RR). The host uses an automatic path selection algorithm that rotates through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. Automatic path selection implements load balancing across the physical paths available to your host. Load balancing is the process of spreading I/O requests across the paths. The goal is to optimize throughput performance such as I/O per second, megabytes per second, or response times.
VMW_PSP_RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.
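Before you change a policy, you can confirm which path selection policy a device currently uses by listing its NMP information. The following is a minimal sketch; the device name is a placeholder:
esxcli <conn_options> storage nmp device list --device naa.xxx
The output includes a Path Selection Policy line for the device.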
The type of array and the path policy determine the behavior of the host.
Active-active array with the Fixed policy: the VMkernel resumes using the preferred path when connectivity is restored.
Active-passive array with the Fixed policy: the VMkernel attempts to resume by using the preferred path. This action can cause path thrashing or failure when another SP now owns the LUN.
Multipathing Considerations
The following considerations help you with multipathing:
When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport rules are searched. If no match occurs, NMP selects a default SATP for the device.
If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device's transport type.
The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. The VMW_PSP_MRU selects an active/optimized path as reported by the VMW_SATP_ALUA, or an active/unoptimized path if there is no active/optimized path. This path is used until a better path is available (MRU). For example, if the VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, the VMW_PSP_MRU will switch the current path to the active/optimized one.
While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor. When using VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it will remain preferred no matter what its status is.
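Because claiming starts with the SATP rule search described in the first consideration above, it can be useful to inspect those rules directly. The following sketch lists the SATP rules that the system evaluates:
esxcli <conn_options> storage nmp satp rule list
The output shows, for each rule, the SATP name and its driver, vendor/model, or transport match criteria.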
Changing Path Policies
You can change path policies with ESXCLI or with vicfg-mpath.
Changing Path Policies with ESXCLI
You can change the path policy with ESXCLI. Specify one of the options listed in Connection Options in place of <conn_options>.
To change the path policy with ESXCLI
1
esxcli <conn_options> storage nmp device list
2
esxcli <conn_options> storage core plugin registration list --plugin-class="PSP"
3
esxcli <conn_options> storage nmp device set --device naa.xxx --psp VMW_PSP_RR
See Supported Path Policies.
4
(Optional) If you specified the VMW_PSP_FIXED policy, you must make sure the preferred path is set correctly.
a
esxcli <conn_options> storage nmp psp fixed deviceconfig get --device naa.xxx
b
esxcli <conn_options> storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3
The command sets the preferred path to vmhba3:C0:T5:L3. Run the command with --default to clear the preferred path selection.
Changing Path Policies with vicfg-mpath
You can change the path policy with vicfg-mpath. Specify one of the options listed in Connection Options in place of <conn_options>.
To change the path policy with vicfg-mpath
1
vicfg-mpath <conn_options> --list-plugins
At a minimum, this command returns NMP (Native Multipathing Plugin) and MASK_PATH. If other MPP plugins have been loaded, they are listed as well.
2
esxcli <conn_options> storage nmp device set --device naa.xxx --psp VMW_PSP_RR
See Supported Path Policies.
3
(Optional) If you specified the VMW_PSP_FIXED policy, you must make sure the preferred path is set correctly.
a
esxcli <conn_options> storage nmp psp fixed deviceconfig get -d naa.xxxx
b
esxcli <conn_options> storage nmp psp fixed deviceconfig set --device naa.xxx --path vmhba3:C0:T5:L3
The command sets the preferred path to vmhba3:C0:T5:L3.
Setting Policy Details for Devices that Use Round Robin
ESXi hosts can use multipathing for failover. With certain storage devices, ESXi hosts can also use multipathing for load balancing. To achieve better load balancing across paths, administrators can specify that the ESXi host should switch paths under certain circumstances. Different settable options determine when the ESXi host switches paths and what paths are chosen. Only a limited number of storage arrays support round robin.
You can use esxcli storage nmp psp roundrobin to retrieve and set round robin path options on a device controlled by the roundrobin PSP. Specify one of the options listed in Connection Options in place of <conn_options>.
No vicfg- command exists for performing these operations. The ESXCLI commands for setting round robin path options have changed. The commands supported in ESX/ESXi 4.x are no longer supported.
To view and manipulate round robin path selection settings with ESXCLI
1
esxcli <conn_options> storage nmp psp roundrobin deviceconfig get --device naa.xxx
2
Use --bytes or --iops to specify when the path should change, as in the following examples:
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type "bytes" -B 12345 --device naa.xxx
Sets the device specified by --device to switch to the next path each time 12345 bytes have been sent along the current path.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=iops --iops 4200 --device naa.xxx
Sets the device specified by --device to switch after 4200 I/O operations have been performed on a path.
Use --useano to specify that the round robin PSP should include paths in the active, unoptimized state in the round robin set (1), or that the PSP should use active, unoptimized paths only if no active, optimized paths are available (0). If you do not include this option, the PSP includes only active, optimized paths in the round robin path set.
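For example, the following sketch includes active, unoptimized paths in the round robin set for one device; the device name is a placeholder, and --type=default leaves the path-switching trigger at its default value:
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=default --useano=1 --device naa.xxx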
Managing NFS/NAS Datastores
ESXi hosts can access a designated NFS volume located on a NAS (Network Attached Storage) server, can mount the volume, and can use it for their storage needs. You can use NFS volumes to store and boot virtual machines in the same way that you use VMFS datastores.
Capabilities Supported by NFS/NAS
ESXi hosts support the following shared storage capabilities on NFS volumes:
NAS stores virtual machine files on remote file servers that are accessed over a standard TCP/IP network. The NFS client built into the ESXi system uses NFS version 3 to communicate with NAS/NFS servers. For network connectivity, the host requires a standard network adapter.
In addition to storing virtual disks on NFS datastores, you can also use NFS as a central repository for ISO images, virtual machine templates, and so on.
To use NFS as a shared repository, you create a directory on the NFS server and then mount the directory as a datastore on all hosts. If you use the datastore for ISO images, you can connect the virtual machine's CD-ROM device to an ISO file on the datastore and install a guest operating system from the ISO file.
Adding and Deleting NAS File Systems
You can list, add, and delete a NAS file system with ESXCLI or with vicfg-nas.
Managing NAS File Systems with ESXCLI
You can use ESXCLI as a vCLI command with connection options (see Connection Options) or in the ESXi shell.
To manage a NAS file system
1
esxcli <conn_options> storage nfs list
For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted.
If no NAS file systems are available, the command produces no output and returns to the command prompt.
2
Add a new NAS file system to the ESXi host. Specify the NAS server with --host, the volume to use for the mount with --volume-name, and the share name on the remote system to use for this NAS mount point with --share.
esxcli <conn_options> storage nfs add --host=dir42.eng.vmware.com --share=/<mount_dir> --volume-name=nfsstore-dir42
This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name, share name, and volume name for the new NAS file system.
3
(Optional) Add a NAS file system that is mounted as read only.
esxcli <conn_options> storage nfs add --host=dir42.eng.vmware.com --share=/home --volume-name=FileServerHome2 --readonly
4
esxcli <conn_options> storage nfs remove --volume-name=FileServerHome2
This command unmounts the NAS file system and removes it from the list of known file systems.
Managing NAS File Systems with vicfg-nas
You can use vicfg-nas as a vCLI command with connection options. See Connection Options.
To manage a NAS file system
1
vicfg-nas <conn_options> -l
For each NAS file system, the command lists the mount name, share name, and host name and whether the file system is mounted. If no NAS file systems are available, the system returns the following message:
No NAS datastore found
2
vicfg-nas <conn_options> --add --nasserver dir42.eng.vmware.com -s /<mount_dir> nfsstore-dir42
This command adds an entry to the known NAS file system list and supplies the share name of the new NAS file system. You must supply the host name and the share name for the new NAS file system.
3
(Optional) Add a NAS file system that is mounted as read only.
vicfg-nas <conn_options> -a -y -n esx42nas2 -s /home FileServerHome2
4
vicfg-nas <conn_options> -d FileServerHome1
This command unmounts the NAS file system and removes it from the list of known file systems.
Monitoring and Managing SAN Storage
The esxcli storage san commands help administrators troubleshoot issues with I/O devices and the fabric, and include Fibre Channel, FCoE, iSCSI, and SAS protocol statistics. The commands allow you to retrieve device information and I/O statistics from those devices. You can also issue Loop Initialization Primitives (LIPs) to FC/FCoE devices, and you can reset SAS devices.
For FC and FCoE devices, you can retrieve FC events such as RSCN, LINKUP, LINKDOWN, frame drop, and FCoE CVL. The commands log a warning in the VMkernel log if they encounter excessive link toggling or frame drops.
To retrieve and clear events with the IO Device Management module
1
esxcli storage san fc events get
2
esxcli storage san fc events clear --adapter <adapter>
Migrating Virtual Machines with svmotion
Storage vMotion moves a virtual machine’s configuration file, and, optionally, its disks, while the virtual machine is running. You can perform Storage vMotion tasks from the vSphere Web Client or with the svmotion command.
You can place the virtual machine and all of its disks in a single location, or choose separate locations for the virtual machine configuration file and each virtual disk. You cannot change the virtual machine’s execution host during a migration with svmotion.
Storage vMotion Uses
Storage vMotion has several uses in administering your vSphere environment.
Perform storage maintenance and reconfiguration. You can use Storage vMotion to move virtual machines off a storage device to allow maintenance or reconfiguration of the storage device without virtual machine downtime.
Redistribute storage load. You can use Storage vMotion to manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.
Storage vMotion Requirements and Limitations
You can migrate virtual machine disks with Storage vMotion if the virtual machine and its host meet the following resource and configuration requirements:
Virtual machine disks must be in persistent mode or be raw device mappings (RDMs). For physical and virtual compatibility mode RDMs, you can migrate the mapping file only. For virtual compatibility mode RDMs, you can use the vSphere Web Client to convert to thick-provisioned or thin-provisioned disks during migration as long as the destination is not an NFS datastore. You cannot use the svmotion command to perform this conversion.
ESX/ESXi 3.5 hosts must be licensed and configured for vMotion. ESX/ESXi 4.0 and later hosts do not require vMotion configuration to perform migration with Storage vMotion.
A particular host can be involved in up to four migrations with vMotion or Storage vMotion at one time. See “Limits on Simultaneous Migrations” in the vCenter Server and Host Management documentation for details.
If you use the vSphere Web Client for the migration, the system performs several compatibility checks. These checks are not performed by the svmotion vCLI command.
Running svmotion in Interactive Mode
You can run svmotion in interactive mode using the --interactive option. The command prompts you for the information it needs to complete the storage migration.
svmotion <conn_options> --interactive
When you use --interactive, all other options are ignored.
Running svmotion in Noninteractive Mode
Important When you run svmotion, --server must point to a vCenter Server system.
In noninteractive mode, the svmotion command uses the following syntax:
svmotion [standard vCLI options] --datacenter=<datacenter_name>
--vm <VM config datastore path>:<new datastore>
[--disks <virtual disk datastore path>:<new datastore>,
<virtual disk datastore path>:<new datastore>]
Square brackets indicate optional elements, not datastores.
The --vm option specifies the virtual machine and its destination. By default, all virtual disks are relocated to the same datastore as the virtual machine. This option requires the current virtual machine configuration file location. See To determine the path to the virtual machine configuration file and disk file.
The --disks option relocates individual virtual disks to different datastores. The --disks option requires the current virtual disk datastore path as an option. See To determine the path to the virtual machine configuration file and disk file.
To determine the path to the virtual machine configuration file and disk file
1
Run vmware-cmd -l to list all virtual machine configuration files (VMX files).
vmware-cmd -H <vc_server> -U <login_user> -P <login_password> -h <esx_host> -l
2
By default, the virtual disk file has the same name as the VMX file but has a .vmdk extension.
3
(Optional) Use vifs to verify that you are using the correct VMDK file.
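For step 3, a hypothetical check might list the virtual machine's datastore directory and confirm that the expected VMDK file is present; the datastore and directory names are placeholders:
vifs <conn_options> --dir '[myDatastore] myvm'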
To relocate a virtual machine’s storage (including disks)
1
Determine the path to the virtual machine configuration file. See To determine the path to the virtual machine configuration file and disk file.
2
Run svmotion:
svmotion
--url=https://myvc.mycorp.com/sdk --datacenter=DC1
--vm="[storage1] myvm/myvm.vmx:new_datastore"
The example is for Windows. Use single quotes on Linux.
To relocate a virtual machine’s configuration file, but leave virtual disks
1
Determine the path to the virtual machine configuration file and to each virtual disk. See To determine the path to the virtual machine configuration file and disk file.
2
Run svmotion, for example:
svmotion
<conn_options>
--datacenter='My DC'
--vm='[old_datastore] myvm/myvm.vmx:new_datastore'
--disks='[old_datastore] myvm/myvm_1.vmdk:old_datastore, [old_datastore] myvm/myvm_2.vmdk:old_datastore'
This command relocates the virtual machine's configuration file to new_datastore, but leaves the two disks (myvm_1.vmdk and myvm_2.vmdk) in old_datastore. The example is for Linux. Use double quotes on Windows. The square brackets surround the datastore name and do not indicate an optional element.
Configuring FCoE Adapters
ESXi can use Fibre Channel over Ethernet (FCoE) adapters to access Fibre Channel storage.
The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.
To use FCoE, you need to install FCoE adapters. The adapters that VMware supports generally fall into two categories: hardware FCoE adapters and software FCoE adapters.
Hardware FCoE Adapters. Hardware FCoE adapters include completely offloaded specialized Converged Network Adapters (CNAs) that contain network and Fibre Channel functionalities on the same card. When such an adapter is installed, your host detects and can use both CNA components. In the vSphere Web Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as a FCoE adapter (vmhba). You do not have to configure a hardware FCoE adapter to be able to use it.
Software FCoE Adapters. A software FCoE adapter is software code that performs some of the FCoE processing. The adapter can be used with a number of NICs that support partial FCoE offload. Unlike the hardware FCoE adapter, the software adapter must be activated.
Scanning Storage Adapters
You must perform a rescan operation each time you reconfigure your storage setup. You can scan by using the vSphere Web Client, the vicfg-rescan vCLI command, or the esxcli storage core adapter rescan command.
esxcli storage core adapter rescan supports the following additional options:
-a|--all or -A|--adapter=<string> – Scan all adapters or a specified adapter.
-S|--skip-claim – Skip claiming of new devices by the appropriate multipath plugin.
-F|--skip-fs-scan – Skip the filesystem scan.
-t|--type – Specify the type of scan to perform. The command either scans for all changes (all) or for added, deleted, or updated adapters (add, delete, update).
vicfg-rescan supports only a simple rescan operation on a specified adapter.
To rescan a storage adapter with vicfg-rescan
Run vicfg-rescan, specifying the adapter name.
vicfg-rescan <conn_options> vmhba1
The command returns an indication of success or failure, but no detailed information.
To rescan a storage adapter with ESXCLI
The following command scans a specific adapter and skips the filesystem scan that is performed by default.
esxcli <conn_options> storage core adapter rescan --adapter=vmhba33 --skip-claim
The command returns an indication of success or failure, but no detailed information.
Retrieving SMART Information
You can use ESXCLI to retrieve information related to SMART. SMART is a monitoring system for computer hard disks that reports information about the disks.
esxcli storage core device smart get -d <device>
What the command returns depends on the level of SMART information that the device supports. If no information is available for a parameter, the output displays N/A, as in the following sample output.
Parameter Value Threshold Worst
-----------------------------------------------------
Health Status OK N/A N/A
Media Wearout Indicator N/A N/A N/A
Write Error Count N/A N/A N/A
Read Error Count 119 6 74
Power-on Hours 57 0 57
Power Cycle Count 100 20 100
Reallocated Sector Count 100 36 100
Raw Read Error Rate 119 6 74
Drive Temperature 38 0 49
Drive Rated Max Temperature 62 45 51
Write Sectors TOT Count 200 0 200
Read Sectors TOT Count 100 0 253
Initial Bad Block Count N/A N/A N/A
 
Managing iSCSI Storage
ESXi systems include iSCSI technology to access remote storage using an IP network. You can use the vSphere Web Client, commands in the esxcli iscsi namespace, or the vicfg-iscsi command to configure both hardware and software iSCSI storage for your ESXi system.
This chapter includes the following topics:
See the vSphere Storage documentation for additional information.
iSCSI Storage Overview
With iSCSI, SCSI storage commands that your virtual machine issues to its virtual disk are converted into TCP/IP protocol packets and transmitted to a remote device, or target, on which the virtual disk is located. To the virtual machine, the device appears as a locally attached SCSI drive.
To access remote targets, the ESXi host uses iSCSI initiators. Initiators transport SCSI requests and responses between ESXi and the target storage device on the IP network. ESXi supports these types of initiators:
Software iSCSI adapter. VMware code built into the VMkernel. Allows an ESXi host to connect to the iSCSI storage device through standard network adapters. The software initiator handles iSCSI processing while communicating with the network adapter.
Hardware iSCSI adapter. Offloads all iSCSI and network processing from your host. Hardware iSCSI adapters are broken into two types.
Dependent hardware iSCSI adapter. Leverages the VMware iSCSI management and configuration interfaces.
Independent hardware iSCSI adapter. Leverages its own iSCSI management and configuration interfaces.
See the vSphere Storage documentation for details on setup and failover scenarios.
You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
iSCSI Storage depicts hosts that use different types of iSCSI initiators.
Dependent hardware iSCSI can be implemented in different ways and is not shown. iSCSI storage devices from the storage system become available to the host. You can access the storage devices and create VMFS datastores for your storage needs.
iSCSI Storage
Discovery Sessions
A discovery session is part of the iSCSI protocol. The discovery session returns the set of targets that you can access on an iSCSI storage system. ESXi systems support dynamic and static discovery.
Dynamic discovery. Also known as Send Targets discovery. Each time the ESXi host contacts a specified iSCSI storage server, it sends a Send Targets request to the server. In response, the iSCSI storage server supplies a list of available targets to the ESXi host. Monitor and manage with esxcli iscsi adapter discovery sendtarget or vicfg-iscsi commands.
Static discovery. The ESXi host does not have to perform discovery. Instead, the ESXi host uses the IP addresses or domain names and iSCSI target names (IQN or EUI format names) to communicate with the iSCSI target. Monitor and manage with esxcli iscsi adapter discovery statictarget or vicfg-iscsi commands.
For either case, you set up target discovery addresses so that the initiator can determine which storage resource on the network is available for access. You can do this setup with dynamic discovery or static discovery. With dynamic discovery, all targets associated with an IP address or host name and the iSCSI name are discovered. With static discovery, you must specify the IP address or host name and the iSCSI name of the target you want to access. The iSCSI HBA must be in the same VLAN as both ports of the iSCSI array.
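For example, to review which dynamic discovery addresses and static targets are already configured for an adapter, you can run the corresponding list commands; the adapter name is a placeholder:
esxcli <conn_options> iscsi adapter discovery sendtarget list --adapter=vmhba33
esxcli <conn_options> iscsi adapter discovery statictarget list --adapter=vmhba33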
Discovery Target Names
The target name is either an IQN name or an EUI name.
iqn.yyyy-mm.{reversed domain name}:id_string
For example: iqn.2007-05.com.mydomain:storage.tape.sys3.abc
The ESXi host generates an IQN name for software iSCSI and dependent hardware iSCSI adapters. You can change that default IQN name.
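For example, assuming a software iSCSI adapter named vmhba33, a minimal sketch of setting a custom IQN with ESXCLI looks like this (the adapter name and IQN are placeholders):
esxcli <conn_options> iscsi adapter set --adapter=vmhba33 --name=iqn.2012-01.com.mydomain:host1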
The IEEE Registration Authority provides a service for assigning globally unique identifiers (EUI). The EUI-64 format is used to build a global identifier in other network protocols. For example, Fibre Channel defines a method of encoding it into a WorldWideName.
The format is eui. followed by an EUI-64 identifier (16 ASCII-encoded hexadecimal digits).
For example, in the name eui.02004567A425678D, the eui. prefix indicates the type and 02004567A425678D is the EUI-64 identifier (16 ASCII-encoded hexadecimal digits).
The IEEE EUI-64 iSCSI name format can be used when a manufacturer is registered with the IEEE Registration Authority and uses EUI-64 formatted worldwide unique names for its products.
Check in the UI of the storage array whether an array uses an IQN name or an EUI name.
Protecting an iSCSI SAN
Your iSCSI configuration is only as secure as your IP network. By enforcing good security standards when you set up your network, you help safeguard your iSCSI storage.
Protecting Transmitted Data
A primary security risk in iSCSI SANs is that an attacker might sniff transmitted storage data. Neither the iSCSI adapter nor the ESXi host iSCSI initiator encrypts the data that it transmits to and from the targets, making the data vulnerable to sniffing attacks. You must therefore take additional measures to prevent attackers from easily seeing iSCSI data.
Allowing your virtual machines to share virtual switches and VLANs with your iSCSI configuration potentially exposes iSCSI traffic to misuse by a virtual machine attacker. To help ensure that intruders cannot listen to iSCSI transmissions, make sure that none of your virtual machines can see the iSCSI storage network.
Protect your system by giving the iSCSI SAN a dedicated virtual switch.
If you use an independent hardware iSCSI adapter, make sure that the iSCSI adapter and ESXi physical network adapter are not inadvertently connected outside the host. Such a connection might result from sharing a switch.
If you use a dependent hardware or software iSCSI adapter, which uses ESXi networking, configure iSCSI storage through a different virtual switch than the one used by your virtual machines.
You can also configure your iSCSI SAN on its own VLAN to improve performance and security. Placing your iSCSI configuration on a separate VLAN ensures that no devices other than the iSCSI adapter can see transmissions within the iSCSI SAN. With a dedicated VLAN, network congestion from other sources cannot interfere with iSCSI traffic.
Securing iSCSI Ports
When you run iSCSI devices, the ESXi host does not open ports that listen for network connections. This measure reduces the chances that an intruder can break into the ESXi host through spare ports and gain control over the host. Therefore, running iSCSI does not present an additional security risk at the ESXi host end of the connection.
An iSCSI target device must have one or more open TCP ports to listen for iSCSI connections. If security vulnerabilities exist in the iSCSI device software, your data can be at risk through no fault of the ESXi system. To lower this risk, install all security patches that your storage equipment manufacturer provides and limit the devices connected to the iSCSI network.
Setting iSCSI CHAP
iSCSI storage systems authenticate an initiator using a name and key pair. ESXi systems support Challenge Handshake Authentication Protocol (CHAP), which VMware recommends for your SAN implementation. The ESXi host and the iSCSI storage system must have CHAP enabled and must have common credentials. During iSCSI login, the iSCSI storage system exchanges its credentials with the ESXi system and checks them.
You can set up iSCSI authentication by using the vSphere Web Client, as discussed in the vSphere Storage documentation or by using the esxcli command, discussed in Enabling iSCSI Authentication. To use CHAP authentication, you must enable CHAP on both the initiator side and the storage system side. After authentication is enabled, it applies for targets to which no connection has been established, but does not apply to targets to which a connection is established. After the discovery address is set, the new volumes to which you add a connection are exposed and can be used.
For software iSCSI and dependent hardware iSCSI, ESXi hosts support per-discovery and per-target CHAP credentials. For independent hardware iSCSI, ESXi hosts support only one set of CHAP credentials per initiator. You cannot assign different CHAP credentials for different targets.
When you configure independent hardware iSCSI initiators, ensure that the CHAP configuration matches your iSCSI storage. If CHAP is enabled on the storage array, it must be enabled on the initiator. If CHAP is enabled, you must set up the CHAP authentication credentials on the ESXi host to match the credentials on the iSCSI storage.
Supported CHAP Levels
To set CHAP levels with esxcli iscsi adapter setauth or vicfg-iscsi, specify one of the values in Supported Levels for CHAP for <level>. Only two levels are supported for independent hardware iSCSI.
Mutual CHAP is supported for software iSCSI and for dependent hardware iSCSI, but not for independent hardware iSCSI.
Important Ensure that CHAP is set to chapRequired before you set mutual CHAP, and use compatible levels for CHAP and mutual CHAP. Use different passwords for CHAP and mutual CHAP to avoid security risks.
chapProhibited. The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
Returning Authentication to Default Inheritance
The values of iSCSI authentication settings associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
If you use vicfg-iscsi, the value you set overrides the inherited value.
If you use esxcli iscsi commands, the value you set overrides the inherited value. You can set CHAP at the adapter level, at the discovery address (send target) level, and at the static target level.
Inheritance is relevant only if you want to return a dynamic discovery address or a static discovery target to its inherited value. In that case, use one of the following commands:
Dynamic discovery: esxcli iscsi adapter discovery sendtarget auth chap set --inherit
Static discovery: esxcli iscsi adapter target portal auth chap set --inherit
Note You can set target-level CHAP authentication properties to be inherited from the send target level and set send target level CHAP authentication properties to be inherited from the adapter level. Resetting adapter-level properties is not supported.
Command Syntax for esxcli iscsi and vicfg-iscsi
In vSphere 5.0, you can manage iSCSI storage by using either esxcli iscsi commands or vicfg-iscsi options. See the vSphere Command-Line Interface Reference. esxcli iscsi Command Syntax and vicfg-iscsi Command Syntax provide an overview.
esxcli iscsi Command Syntax
The esxcli iscsi command includes a number of nested namespaces. The following list illustrates the namespace hierarchy; the commands available at each level are shown in brackets. Many namespaces include both commands and namespaces.
adapter [get|list|set]
    chap [set|get]
    discovery [rediscover]
        sendtarget [add|list|remove]
            chap [get|set]
            param [get|set]
        statictarget [add|list|remove]
            chap [get|set]
            param [get|set]
    firmware [get|set]
    param [get|set]
networkportal [add|list|remove]
    ipconfig [get|set]
    param [get|set]
session [add|list|remove]
ibftboot [get|import]
software [get|set]
Key to esxcli iscsi Short Options
ESXCLI commands for iSCSI management consistently use the same short options. For several options, the associated full option depends on the command.
Short Options for iSCSI ESXCLI Command Options
vicfg-iscsi Command Syntax
vicfg-iscsi supports a comprehensive set of options, listed in Options for vicfg-iscsi.
-A --authentication
[-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]]
--level <level>
--method <auth_method> --mutual
--mchap_username <ma_username>
--mchap_password <ma_password>
[--ip <stor_ip_addr|stor_hostname> [:<portnum>]
[--name <iscsi_name>]] <adapter_name>
Enables mutual authentication. You must enable authentication before you can enable mutual authentication.
-A --authentication
[-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]] <adapter_name>
--level <level>
--method <auth_method>
--chap_username <auth_u_name>
--chap_password <chap_password>
[--ip <stor_ip_addr|stor_hostname> [:<portnum>]
[--name <iscsi_name>]] <adapter_name>
-A --authentication
--list <adapter_name>
-D --discovery
--add --ip <stor_ip_addr|stor_hostname> [:<portnum>] <adapter_name>
-D --discovery
--list <adapter_name>
-D --discovery
--remove --ip <stor_ip_addr|stor_hostname> [:<portnum>] <adapter_name>
--list [<adapter_name>]
-L --lun
--list <adapter_name>
-L --lun
--list --target_id <target_id> <adapter_name>
-N --network (Independent hardware iSCSI only)
--list <adapter_name>
-N --network (Independent hardware iSCSI only)
--ip <ip_addr> <vmhba>
-N --network (Independent hardware iSCSI only)
--subnetmask <subnet_mask> <adapter_name>
-N --network (Independent hardware iSCSI only)
--gateway <default_gateway> <adapter_name>
Sets the HBA gateway to default_gateway.
-N --network (Independent hardware iSCSI only)
--ip <ip_addr> --subnetmask <subnet_mask>
--gateway <default_gateway> <adapter_name>
Sets the IP address, subnet mask, and default gateway in one command.
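For example, a hypothetical invocation that configures all three values for an independent hardware iSCSI adapter might look like this (the addresses and adapter name are placeholders):
vicfg-iscsi <conn_options> -N --ip 192.168.100.20 --subnetmask 255.255.255.0 --gateway 192.168.100.1 vmhba33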
-p --pnp (Independent hardware iSCSI only)
--list <adapter_name>
-p --pnp (Independent hardware iSCSI only)
--mtu <mtu-size> <adapter_name>
-I --iscsiname
--alias <alias_name> <adapter_name>
-I --iscsiname
--name <iscsi_name> <adapter_name>
-I --iscsiname
--list <adapter_name>
--pnp --mtu <mtu-size> <adapter_name>
-S --static
--list <adapter_name>
-S --static
--remove --ip <stor_ip_addr|stor_hostname> [:<portnum>] --name <target_name> <adapter_name>
-S --static
--add --ip <stor_ip_addr|stor_hostname> [:<portnum>]
--name <target_name> <adapter_name>
-P --phba
--list <adapter_name>
Lists external, vendor-specific properties of an iSCSI adapter.
-T --target
--list <adapter_name>
-W --parameter
-l [-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]] <adapter_name>
--list [--ip <stor_ip_addr|stor_hostname> [:<portnum>]
[--name <iscsi_name>]] <adapter_name>
-W --parameter
-l -k [-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]] <adapter_name>
--list --detail
[--ip <stor_ip_addr|stor_hostname> [:<portnum>] [--name <iscsi_name>]] <adapter_name>
-W --parameter
--set <name>=<value>
[--ip <stor_ip_addr|stor_hostname> [:<portnum>]
[--name <iscsi_name>]] <adapter_name>
-W --parameter
-o <param_name>
--reset <param_name>
[--ip <stor_ip_addr|stor_hostname> [:<portnum>] [--name <iscsi_name>]] <adapter_name>
Returns parameters in a discovery target or send target to default inheritance behavior.
[-i <stor_ip_addr|stor_hostname> [:<portnum>] [-n <iscsi_name>]]
--method <auth_method>
[--ip <stor_ip_addr|stor_hostname> [:<portnum>]
[--name <iscsi_name>]] <adapter_name>
Resets target-level authentication properties to be inherited from the adapter level. Used with the --authentication option.
iSCSI Storage Setup with ESXCLI
You can set up iSCSI storage using vSphere Web Client, commands in the esxcli iscsi namespace, or vicfg-iscsi commands (see iSCSI Storage Setup with vicfg-iscsi).
Setting Up Software iSCSI with ESXCLI
Software iSCSI setup requires several tasks. For each task, see the discussion of the corresponding command in this chapter or the reference information available from esxcli iscsi --help and the VMware Documentation Center. Specify one of the options listed in Connection Options in place of <conn_options>.
1
esxcli <conn_options> iscsi software set --enabled=true
2
esxcli <conn_options> iscsi adapter list
3
If no adapter exists, add one. Software iSCSI does not require port binding, but requires that at least one VMkernel NIC is available and can be used as an iSCSI NIC. You can name the adapter as you add it.
esxcli <conn_options> iscsi networkportal add -n <portal_name> -A <vmhba>
4
esxcli <conn_options> iscsi software get
The system prints true if software iSCSI is enabled, or false if it is not enabled.
5
(Optional) Set the iSCSI name and alias for the software iSCSI adapter.
esxcli <conn_options> iscsi adapter set --adapter=<iscsi_adapter> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter=<iscsi_adapter> --alias=<alias>
6
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
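For example, assuming a storage array at 192.168.0.50 (hypothetical address) and adapter vmhba33, you might add a send target as follows:
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=192.168.0.50:3260 --adapter=vmhba33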
7
(Optional) Set the authentication information for CHAP (see Setting iSCSI CHAP and Enabling iSCSI Authentication). You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
Supported Levels for CHAP lists what each supported level does.
For example:
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
8
(Optional) Set the authentication information for mutual CHAP by running esxcli iscsi adapter auth chap set again with --direction set to mutual and a different authentication user name and secret.
esxcli iscsi adapter auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba>
esxcli iscsi adapter discovery sendtarget auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --address=<sendtarget_address>
esxcli iscsi adapter target portal auth chap set --direction=mutual --mchap_username=<name2> --mchap_password=<pwd2> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --name=<iscsi_iqn_name>
Important You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
9
(Optional) Set iSCSI parameters.
Adapter-level parameters
esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-address-level parameters
esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters
esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
See Listing and Setting iSCSI Parameters.
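For example, to lengthen the NOP-Out interval to 30 seconds at the adapter level (hypothetical adapter name and value):
esxcli iscsi adapter param set --adapter=vmhba33 --key=NoopOutInterval --value=30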
10
After setup is complete, rediscover targets and rescan storage devices.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36
11
(Optional) To apply new settings to an existing session, log out of the session and add it back.
a
Run esxcli iscsi session remove to log out.
b
Run esxcli iscsi session add or rescan the adapter to add the session back.
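For example, assuming adapter vmhba33 (hypothetical name), you might recreate its sessions as follows:
esxcli <conn_options> iscsi session remove --adapter=vmhba33
esxcli <conn_options> iscsi session add --adapter=vmhba33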
Setting Up Dependent Hardware iSCSI with ESXCLI
Dependent hardware iSCSI setup requires several high-level tasks. For each task, see the discussion of the corresponding command in this chapter or the reference information available from esxcli iscsi --help and the VMware Documentation Center. Specify one of the options listed in Connection Options in place of <conn_options>.
1
List available iSCSI adapters to find the dependent hardware iSCSI adapter.
esxcli <conn_options> iscsi adapter list
2
(Optional) Set the iSCSI name and alias for the adapter.
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
3
Set up port binding.
a
List the logical network portals for the adapter.
esxcli <conn_options> iscsi logicalnetworkportal list --adapter=<adapter_name>
b
Bind a VMkernel NIC to the adapter.
esxcli <conn_options> iscsi networkportal add --nic=<bound_vmknic> --adapter=<iscsi_adapter>
c
List the physical network portals for the adapter.
esxcli <conn_options> iscsi physicalnetworkportal list --adapter=<adapter_name>
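For example, with VMkernel NIC vmk2 bound to a dependent hardware adapter vmhba34 (hypothetical names):
esxcli <conn_options> iscsi networkportal add --nic=vmk2 --adapter=vmhba34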
4
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5
(Optional) Set the authentication information for CHAP (see Setting iSCSI CHAP and Enabling iSCSI Authentication). You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
Supported Levels for CHAP lists what each supported level does.
For example:
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
6
(Optional) Set the authentication information for mutual CHAP by running esxcli iscsi adapter auth chap set again with --direction set to mutual and a different authentication user name and secret.
esxcli iscsi adapter auth chap set --direction=mutual --mchap_username=<name> --mchap_password=<pwd> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba>
esxcli iscsi adapter discovery sendtarget auth chap set --direction=mutual --mchap_username=<name> --mchap_password=<pwd> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --address=<sendtarget_address>
esxcli iscsi adapter target portal auth chap set --direction=mutual --mchap_username=<name> --mchap_password=<pwd> --level=[prohibited, required] --secret=<string2> --adapter=<vmhba> --name=<iscsi_iqn_name>
Important You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
7
(Optional) Set iSCSI parameters.
Adapter-level parameters
esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-address-level parameters
esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters
esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
See Listing and Setting iSCSI Parameters.
8
After setup is complete, rediscover targets and rescan storage devices.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36
9
(Optional) To apply new settings to an existing session, log out of the session and add it back.
a
Run esxcli iscsi session remove to log out.
b
Run esxcli iscsi session add or rescan the adapter to add the session back.
Setting Up Independent Hardware iSCSI with ESXCLI
With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. For each task, see the discussion of the corresponding command-line option in this chapter or the reference information. Specify one of the options listed in Connection Options in place of <conn_options>.
1
List available iSCSI adapters to find the independent hardware iSCSI adapter.
esxcli <conn_options> iscsi adapter list
2
Configure the hardware initiator (HBA) by running esxcli iscsi networkportal ipconfig with the appropriate options. Run esxcli iscsi networkportal ipconfig --help for the list of options.
3
(Optional) Set the iSCSI name and alias for the adapter.
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --name=<name>
esxcli <conn_options> iscsi adapter set --adapter <adapter_name> --alias=<alias>
4
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
esxcli <conn_options> iscsi adapter discovery sendtarget add --address=<ip/dns[:port]> --adapter=<adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
esxcli <conn_options> iscsi adapter discovery statictarget add --address=<ip/dns[:port]> --adapter=<adapter_name> --name=<target_name>
5
(Optional) Set the authentication information for CHAP (see Setting iSCSI CHAP and Enabling iSCSI Authentication). You can set per-target CHAP for static targets, per-adapter CHAP, or apply the command to the discovery address.
esxcli iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba>
esxcli iscsi adapter discovery sendtarget auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --address=<sendtarget_address>
esxcli iscsi adapter target portal auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=[prohibited, discouraged, preferred, required] --secret=<string> --adapter=<vmhba> --name=<iscsi_iqn_name>
Supported Levels for CHAP lists what each supported level does.
For example:
esxcli <conn_options> iscsi adapter auth chap set --direction=uni --chap_username=<name> --chap_password=<pwd> --level=preferred --secret=uni_secret --adapter=vmhba33
Mutual CHAP is not supported for independent hardware iSCSI storage.
6
(Optional) Set iSCSI parameters.
Adapter-level parameters
esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-address-level parameters
esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<sendtarget_address>
Target-level parameters
esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
See Listing and Setting iSCSI Parameters.
7
After setup is complete, run esxcli storage core adapter rescan --adapter=<iscsi_adapter> to rescan all storage devices.
8
If the target configuration changes later, rediscover targets and rescan storage devices again.
esxcli <conn_options> iscsi adapter discovery rediscover
esxcli <conn_options> storage core adapter rescan --adapter=vmhba36
iSCSI Storage Setup with vicfg-iscsi
You can set up iSCSI storage using the vSphere Web Client, commands in the esxcli iscsi namespace (see iSCSI Storage Setup with ESXCLI), or the vicfg-iscsi command.
Setting Up Software iSCSI with vicfg-iscsi
Software iSCSI setup requires a number of high-level tasks. For each task, see the discussion of the corresponding command-line option in this chapter or the reference information. Specify one of the options listed in Connection Options in place of <conn_options>.
1
List available iSCSI adapters.
vicfg-iscsi <conn_options> --adapter --list
2
Enable software iSCSI.
vicfg-iscsi <conn_options> --swiscsi --enable
3
Verify that software iSCSI is enabled.
vicfg-iscsi <conn_options> --swiscsi --list
The system prints Software iSCSI is enabled or Software iSCSI is not enabled.
4
(Optional) Set the iSCSI name and alias for the adapter.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
5
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr | domain_name> <adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr | domain_name> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
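For example, the following commands (with a hypothetical discovery address of 192.168.0.50, a hypothetical target name, and adapter vmhba33) add a dynamic and a static target:
vicfg-iscsi <conn_options> --discovery --add --ip 192.168.0.50 vmhba33
vicfg-iscsi <conn_options> --static --add --ip 192.168.0.50 --name iqn.1998-01.com.example:storage1 vmhba33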
6
(Optional) Set the authentication information for CHAP. See Setting iSCSI CHAP and Enabling iSCSI Authentication.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password>
[-i <stor_ip_addr|stor_hostname>[:<portnum>]] [-n <iscsi_name>] <adapter_name>
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method>
--chap_username <auth_u_name> --chap_password <chap_password>
[--ip <stor_ip_addr|stor_hostname>[:<portnum>]] [--name <iscsi_name>]
<adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
-i and -n – The command applies to per-target CHAP for static targets.
-i only – The command applies to the discovery address.
Neither -i nor -n – The command applies to per-adapter CHAP.
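For example, the following command (with hypothetical credentials) sets per-adapter CHAP on vmhba33, because neither -i nor -n is specified:
vicfg-iscsi <conn_options> -A -c chapRequired -m CHAP -u chapuser -w chapsecret23 vmhba33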
7
(Optional) Set the authentication information for mutual CHAP by running vicfg-iscsi -A again with the -b option and a different authentication user name and password.
For <level>, specify chapProhibited or chapRequired.
chapProhibited – The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
chapRequired – The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
For <auth_method>, CHAP is the only valid value.
Important You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
8
(Optional) Set iSCSI parameters. See Listing and Setting iSCSI Parameters.
9
After setup is complete, run vicfg-rescan to rescan all storage devices.
Setting Up Dependent Hardware iSCSI with vicfg-iscsi
Dependent hardware iSCSI setup requires a number of high-level tasks. For each task, see the discussion of the corresponding command-line option in this chapter, or the reference information. Specify one of the options listed in Connection Options in place of <conn_options>.
1
List available iSCSI adapters.
vicfg-iscsi <conn_options> --adapter --list
2
(Optional) Set the iSCSI name and alias for the adapter.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
3
Set up port binding.
a
List the VMkernel NICs compatible with the adapter.
esxcli <conn_options> swiscsi vmknic list -d <vmhba>
b
Bind a VMkernel NIC to the adapter.
esxcli <conn_options> swiscsi nic add -n <port_name> -d <vmhba>
c
Verify the binding.
esxcli <conn_options> swiscsi nic list -d <vmhba>
d
Rescan the adapter.
vicfg-rescan <conn_options> <vmhba>
4
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr | domain_name> <adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr | domain_name> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can add the discovery address and rescan to display the correct parent for the static targets.
5
(Optional) Set the authentication information for CHAP. See Setting iSCSI CHAP and Enabling iSCSI Authentication.
vicfg-iscsi <conn_options> -A -c <level> -m <auth_method> -u <auth_u_name> -w <chap_password>
[-i <stor_ip_addr|stor_hostname>[:<portnum>]] [-n <iscsi_name>] <adapter_name>
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method>
--chap_username <auth_u_name> --chap_password <chap_password>
[--ip <stor_ip_addr|stor_hostname>[:<portnum>]] [--name <iscsi_name>]
<adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
-i and -n – The command applies to per-target CHAP for static targets.
-i only – The command applies to the discovery address.
Neither -i nor -n – The command applies to per-adapter CHAP.
6
(Optional) Set the authentication information for mutual CHAP by running vicfg-iscsi -A again with the -b option and a different authentication user name and password.
For <level>, specify chapProhibited or chapRequired.
chapProhibited – The host does not use CHAP authentication. If authentication is enabled, specify chapProhibited to disable it.
chapRequired – The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails. You can set this value for mutual CHAP only if CHAP is set to chapRequired.
For <auth_method>, CHAP is the only valid value.
Important You are responsible for making sure that CHAP is set before you set mutual CHAP, and for using compatible levels for CHAP and mutual CHAP.
7
(Optional) Set iSCSI parameters. See Listing and Setting iSCSI Parameters.
8
After setup is complete, run vicfg-rescan to rescan all storage devices.
Setting Up Independent Hardware iSCSI with vicfg-iscsi
With independent hardware-based iSCSI storage, you use a specialized third-party adapter capable of accessing iSCSI storage over TCP/IP. This iSCSI initiator handles all iSCSI and network processing and management for your ESXi system.
You must install and configure the independent hardware iSCSI adapter for your host before you can access the iSCSI storage device. For installation information, see vendor documentation.
Hardware iSCSI setup requires a number of high-level tasks. For each task, see the discussion of the corresponding command-line option in this chapter, the manpage (Linux), or the reference information. Specify one of the options listed in Connection Options in place of <conn_options>.
1
List available iSCSI adapters.
vicfg-iscsi <conn_options> --adapter --list
2
Configure the hardware initiator (HBA) by running vicfg-iscsi -N with one or more of the following options.
--list – List network properties.
--ip <ip_addr> – Set HBA IPv4 address.
--subnetmask <subnet_mask> – Set HBA network mask.
--gateway <default_gateway> – Set HBA gateway.
--set ARP=true|false – Enable or disable ARP redirect.
You can also set the HBA IPv4 address, network mask, and default gateway in one command.
vicfg-iscsi <conn_options> --network --ip <ip_addr> --subnetmask <subnet_mask> --gateway <default_gateway> <adapter_name>
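For example, the following command (with hypothetical addresses and adapter name) configures the HBA in one step:
vicfg-iscsi <conn_options> --network --ip 10.10.10.101 --subnetmask 255.255.255.0 --gateway 10.10.10.1 vmhba40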
3
(Optional) Set the iSCSI name and alias for the adapter.
vicfg-iscsi <conn_options> -I -n <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --name <iscsi_name> <adapter_name>
vicfg-iscsi <conn_options> -I -a <alias_name> <adapter_name>
vicfg-iscsi <conn_options> --iscsiname --alias <alias_name> <adapter_name>
4
Set up target discovery addresses. The two types of target differ as follows:
Dynamic discovery (send targets) – The initiator sends a SendTargets request and the target responds with a list of available targets.
vicfg-iscsi <conn_options> --discovery --add --ip <ip_addr> <adapter_name>
Static discovery – You specify the target address and iSCSI name yourself.
vicfg-iscsi <conn_options> --static --add --ip <ip_addr> --name <iscsi_name> <adapter_name>
When you later remove a discovery address, it might still be displayed as the parent of a static target. You can later add the discovery address and rescan to display the correct parent for the static targets.
5
You can set the information for per-adapter, per-discovery, and per-target CHAP. See Setting iSCSI CHAP and Enabling iSCSI Authentication.
vicfg-iscsi <conn_options> --authentication --level <level> --method <auth_method>
--chap_username <auth_u_name> --chap_password <chap_password>
[--ip <stor_ip_addr|stor_hostname>[:<portnum>]] [--name <iscsi_name>]
<adapter_name>
The target (-i) and name (-n) options determine what the command applies to.
-i and -n – The command applies to per-target CHAP for static targets.
-i only – The command applies to the discovery address.
Neither -i nor -n – The command applies to per-adapter CHAP.
Mutual CHAP is not supported for independent hardware iSCSI storage.
6
(Optional) Set iSCSI parameters. See Listing and Setting iSCSI Parameters.
7
After setup is complete, call vicfg-rescan to rescan all storage devices.
Listing and Setting iSCSI Options
You can list and set iSCSI options with ESXCLI or with vicfg-iscsi. You can also manage parameters. See Listing and Setting iSCSI Parameters.
Listing iSCSI Options with ESXCLI
Use esxcli iscsi information retrieval commands to list external HBA properties, information about targets, and LUNs. Specify one of the options listed in Connection Options in place of <conn_options>.
Run esxcli iscsi adapter firmware to list or upload the firmware for the iSCSI adapter.
esxcli <conn_options> iscsi adapter firmware get --adapter=<adapter_name>
esxcli <conn_options> iscsi adapter firmware set --file=<firmware_file_path>
The system returns information about the vendor, model, description, and serial number of the HBA.
Run commands in the esxcli iscsi adapter target namespace.
esxcli iscsi adapter target portal lists and sets authentication and portal parameters.
esxcli iscsi adapter target list lists LUN information.
Setting MTU with ESXCLI
If you want to change the MTU used for your iSCSI storage, you must make the change in two places.
Run esxcli network vswitch standard set to change the MTU of the virtual switch.
Run esxcli network ip interface set to change the MTU of the network interface.
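For example, to use jumbo frames with a hypothetical switch vSwitch1 and VMkernel interface vmk1, you might set both MTU values to 9000:
esxcli <conn_options> network vswitch standard set --mtu=9000 --vswitch-name=vSwitch1
esxcli <conn_options> network ip interface set --mtu=9000 --interface-name=vmk1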
Listing and Setting iSCSI Options with vicfg-iscsi
Use vicfg-iscsi information retrieval options to list external HBA properties, information about targets, and LUNs. You can use the following vicfg-iscsi options to list iSCSI parameters. Specify one of the options listed in Connection Options in place of <conn_options>.
Run vicfg-iscsi -P|--phba to list external (vendor-specific) properties of an iSCSI adapter.
vicfg-iscsi <conn_options> -P -l <adapter_name>
vicfg-iscsi <conn_options> --phba --list <adapter_name>
The system returns information about the vendor, model, description, and serial number of the HBA.
Run vicfg-iscsi -T | --target to list target information.
vicfg-iscsi <conn_options> -T -l <adapter_name>
vicfg-iscsi <conn_options> --target --list <adapter_name>
The system returns information about targets for the specified adapter, including the iSCSI name (IQN or EUI format) and alias. See Discovery Target Names.
Run vicfg-iscsi -L|--lun to list LUN information.
vicfg-iscsi <conn_options> -L -l <adapter_name>
vicfg-iscsi <conn_options> --lun --list <adapter_name>
The command returns the operating system device name, bus number, target ID, LUN ID, and LUN size for the LUN.
Run vicfg-iscsi -L with -t to list only LUNs on a specified target.
vicfg-iscsi <conn_options> -L -l -t <target_ID> <adapter_name>
vicfg-iscsi <conn_options> --lun --list --target_id <target_id> <adapter_name>
The system returns the LUNs on the specified target and the corresponding device name, device number, LUN ID, and LUN size.
Run vicfg-iscsi -p|--pnp to list physical network portal information for independent hardware iSCSI devices. You also use this option with --mtu.
vicfg-iscsi <conn_options> -p -l <adapter_name>
vicfg-iscsi <conn_options> --pnp --list <adapter_name>
The system returns information about the MAC address, MTU, and current transfer rate.
Run vicfg-iscsi -I -l to list information about the iSCSI initiator. ESXi systems use a software-based iSCSI initiator in the VMkernel to connect to storage. The command returns the iSCSI name, alias name, and alias settable bit for the initiator.
vicfg-iscsi <conn_options> -I -l vmhba42
Run vicfg-iscsi -p -M to set the MTU for the adapter. You specify the size and adapter name.
vicfg-iscsi <conn_options> -p -M <mtu_size> <adapter_name>
vicfg-iscsi <conn_options> --pnp --mtu <mtu_size> <adapter_name>
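For example, to set an MTU of 9000 on a hypothetical adapter vmhba40:
vicfg-iscsi <conn_options> --pnp --mtu 9000 vmhba40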
Listing and Setting iSCSI Parameters
You can list and set iSCSI parameters for software iSCSI and for dependent hardware iSCSI with ESXCLI or with vicfg-iscsi.
Listing and Setting iSCSI Parameters with ESXCLI
You can retrieve and set iSCSI parameters by running one of the following commands.
Adapter-level parameters
esxcli iscsi adapter param set --adapter=<vmhba> --key=<key> --value=<value>
Discovery-address-level parameters
esxcli iscsi adapter discovery sendtarget param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address>
Target-level parameters
esxcli iscsi adapter target portal param set --adapter=<vmhba> --key=<key> --value=<value> --address=<address> --name=<iqn.name>
Settable iSCSI Parameters lists all settable parameters. These parameters are also described in IETF RFC 3720. You can run esxcli iscsi adapter param get to determine whether a parameter is settable.
The parameters in Settable iSCSI Parameters apply to software iSCSI and dependent hardware iSCSI.
DataDigest – Increases data integrity. When data digest is enabled, the system performs a checksum over each PDU's data part and verifies it using the CRC32C algorithm.
Note: Systems that use Intel Nehalem processors offload the iSCSI digest calculations for software iSCSI, thus reducing the impact on performance.
Valid values are digestProhibited, digestDiscouraged, digestPreferred, or digestRequired.
HeaderDigest – Increases data integrity. When header digest is enabled, the system performs a checksum over the header part of each iSCSI Protocol Data Unit (PDU) and verifies it using the CRC32C algorithm.
NoopOutInterval – Time interval, in seconds, between NOP-Out requests sent from your iSCSI initiator to an iSCSI target. The NOP-Out requests serve as the ping mechanism to verify that a connection between the iSCSI initiator and the iSCSI target is active.
NoopTimeout – Amount of time, in seconds, that can lapse before your host receives a NOP-In message. The message is sent by the iSCSI target in response to the NOP-Out request. When the NoopTimeout limit is exceeded, the initiator terminates the current session and starts a new one.
You can use the following ESXCLI commands to list parameter options.
Run esxcli iscsi adapter param get to list parameter options for the iSCSI adapter.
Run esxcli iscsi adapter discovery sendtarget param get or esxcli iscsi adapter target portal param get to retrieve information about iSCSI parameters and whether they are settable.
Run esxcli iscsi adapter discovery sendtarget param set or esxcli iscsi adapter target portal param set to set iSCSI parameter options.
If special characters are in the <name>=<value> sequence, for example, if you add a space, you must surround the sequence with double quotes ("<name> = <value>").
Returning Parameters to Default Inheritance
The values of iSCSI parameters associated with a dynamic discovery address or a static discovery target are inherited from the corresponding settings of the parent. For the dynamic discovery address, the parent is the adapter. For the static target, the parent is the adapter or discovery address.
If you use the vSphere Web Client to modify authentication settings, you must deselect the Inherit from Parent check box before you can make a change to the discovery address or discovery target.
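For example, using the vicfg-iscsi --reset syntax shown earlier, the following command (with a hypothetical parameter name, discovery address, and adapter) returns a parameter on a discovery address to inheritance from the adapter:
vicfg-iscsi <conn_options> --parameter --reset FirstBurstLength --ip 192.168.0.50 vmhba33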