Source: https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-701-release-notes.html
VMWARE ESXI 7.0 UPDATE 1 RELEASE NOTES

Updated on June 24, 2021

ESXi 7.0 Update 1 | 06 OCT 2020 | ISO Build 16850804

Check for additions and updates to these release notes.

WHAT'S IN THE RELEASE NOTES

The release notes cover the following topics:

* What's New
* Earlier Releases of ESXi 7.0
* Patches Contained in this Release
* Product Support Notices
* Resolved Issues
* Known Issues

WHAT'S NEW

* ESXi 7.0 Update 1 supports vSphere Quick Boot on the following servers:
  * HPE ProLiant BL460c Gen9
  * HPE ProLiant DL325 Gen10 Plus
  * HPE ProLiant DL360 Gen9
  * HPE ProLiant DL385 Gen10 Plus
  * HPE ProLiant XL225n Gen10 Plus
  * HPE Synergy 480 Gen9
* Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: ESXi 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. The pre-checks automatically trigger after certain change events, such as modification of the cluster desired image or addition of a new ESXi host in vSAN environments.
Also, the hardware compatibility framework automatically polls the Hardware Compatibility List database at predefined intervals for changes that trigger pre-checks as necessary.
* Increased number of vSphere Lifecycle Manager concurrent operations on clusters: With ESXi 7.0 Update 1, if you initiate remediation at a data center level, the number of clusters on which you can run remediation in parallel increases from 15 to 64.
* vSphere Lifecycle Manager support for coordinated updates between availability zones: With ESXi 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
* Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): ESXi 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
* Improved control of VMware Tools time synchronization: With ESXi 7.0 Update 1, you can select a VMware Tools time synchronization mode from the vSphere Client instead of using the command prompt. When you navigate to VM Options > VMware Tools > Synchronize Time with Host, you can select Synchronize at startup and resume (recommended), Synchronize time periodically, or, if no option is selected, prevent synchronization.
* Increased Multi-Processor Fault Tolerance (SMP-FT) maximums: With ESXi 7.0 Update 1, you can configure more SMP-FT VMs, and more total SMP-FT vCPUs, in an ESXi host or a cluster, depending on your workloads and capacity planning.
* Virtual hardware version 18: ESXi 7.0 Update 1 introduces virtual hardware version 18 to enable support for virtual machines with higher resource maximums, and for:
  * Secure Encrypted Virtualization - Encrypted State (SEV-ES)
  * Virtual remote direct memory access (vRDMA) native endpoints
  * EVC Graphics Mode (vSGA)
* Increased resource maximums for virtual machines and performance enhancements:
  * With ESXi 7.0 Update 1, you can create virtual machines with three times more virtual CPUs and four times more memory, enabling applications with larger memory and CPU footprints to scale almost linearly, comparable with bare metal. Virtual machine resource maximums increase to 768 vCPUs from 256 vCPUs, and to 24 TB of virtual RAM from 6 TB. Still, not over-committing memory remains a best practice. Only virtual machines with hardware version 18 and operating systems that support such large configurations can be set up with these resource maximums.
  * Performance enhancements in ESXi that support the larger scale of virtual machines include widening of the physical address, address space optimizations, better NUMA awareness for guest virtual machines, and more scalable synchronization techniques. vSphere vMotion is also optimized to work with the larger virtual machine configurations.
  * ESXi hosts with AMD processors can support virtual machines with twice as many vCPUs (256) and up to 8 TB of RAM.
  * Persistent memory (PMEM) support doubles to 12 TB from 6 TB for both Memory Mode and App Direct Mode.

EARLIER RELEASES OF ESXI 7.0

Features, resolved issues, and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 7.0 are:

* VMware ESXi 7.0, Patch Release ESXi 7.0b

For internationalization, compatibility, and open source components, see the VMware vSphere 7.0 Release Notes.
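Existing virtual machines must be upgraded to virtual hardware version 18 before they can use the new resource maximums described above. The vSphere Client upgrade workflow is the supported path; as a hedged sketch only, the same upgrade can be driven from the ESXi shell with vim-cmd (the VM ID 42 below is an example, not from this document):

```shell
# Sketch only -- run in the ESXi shell or over SSH. Upgrading virtual
# hardware is one-way; back up or snapshot the VM before upgrading.

# List VMs to find the numeric VM ID (first column of the output).
vim-cmd vmsvc/getallvms

# "42" is an example VM ID; replace it with an ID from the listing above.
# Power the VM off, upgrade it to hardware version 18, then power it on.
vim-cmd vmsvc/power.off 42
vim-cmd vmsvc/upgrade 42 vmx-18
vim-cmd vmsvc/power.on 42
```

The guest operating system must also support the larger configuration; upgrading the hardware version alone does not raise guest-side limits.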
PATCHES CONTAINED IN THIS RELEASE

This release of ESXi 7.0 Update 1 delivers the following patches:

Build Details

Download Filename: VMware-ESXi-7.0U1-16850804-depot
Build: 16850804
Download Size: 360.6 MB
md5sum: 3c12872658250d3bd12ed91de0d83109
sha1checksum: 7cc4e669e3dddd0834487ebc7f90031ae265746c
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes

IMPORTANT:

* Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline, or include the rollup bulletin in the baseline, to avoid failure during host patching.
* In the Lifecycle Manager plug-in of the vSphere Client, the release date for the ESXi 7.0.1 base image, profiles, and components is 2020-09-04. This is expected. To ensure you can use correct filters by release date, only the release date of the rollup bulletin is 2020-10-06.
* The name of the no-tools Image Profile bulletin is ESXi-7.0.1-16850804-no-tools4611675547841277300; this is not a typo.

Rollup Bulletin

This rollup bulletin contains the latest VIBs with all the fixes after the initial release of ESXi 7.0.

Bulletin ID: ESXi70U1-16850804
Category: Bugfix
Severity: Critical

Image Profiles

VMware patch and update releases contain general and critical image profiles. Apply the general release image profile to pick up new bug fixes.

Image Profile Name:
* ESXi-7.0.1-16850804-standard-5818809527488818992
* ESXi-7.0.1-16850804-no-tools4611675547841277300

ESXi Image

Name and Version: ESXi_7.0.1-0.0.16850804
Release Date: 10/06/2020
Category: General
Detail: Bugfix image

For information about the individual components and bulletins, see the Product Patches page and the Resolved Issues section.

PATCH DOWNLOAD AND INSTALLATION

In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in.
Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.

The typical way to apply patches to ESXi 7.x hosts is by using vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.

You can also update ESXi hosts without using the Lifecycle Manager plug-in by using an image profile instead. To do this, manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page, and use the esxcli software profile update command. For more information, see Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.

PRODUCT SUPPORT NOTICES

* VMware Tools 9.10.x and 10.0.x have reached End of General Support. For more details, refer to VMware Tools listed under the VMware Product Lifecycle Matrix.
* Intent to deprecate SHA-1: The SHA-1 cryptographic hashing algorithm will be deprecated in a future release of vSphere. SHA-1 and the already-deprecated MD5 have known weaknesses, and practical attacks against them have been demonstrated.

RESOLVED ISSUES

The resolved issues are grouped as follows.
* ESXi_7.0.1-0.0.16850804
* esx-update_7.0.1-0.0.16850804
* VMware-nvmxnet3-ens_2.0.0.22-1vmw.701.0.0.16850804
* Mellanox-nmlx4_3.19.16.8-2vmw.701.0.0.16850804
* Broadcom-elxiscsi_12.0.1200.0-2vmw.701.0.0.16850804
* Cisco-nfnic_4.0.0.44-2vmw.701.0.0.16850804
* MRVL-E3-Ethernet_1.1.0.11-1vmw.701.0.0.16850804
* VMware-icen_1.0.0.9-1vmw.701.0.0.16850804
* Intel-ne1000_0.8.4-11vmw.701.0.0.16850804
* Intel-Volume-Mgmt-Device_2.0.0.1055-5vmw.701.0.0.16850804
* Broadcom-ELX-brcmnvmefc_12.6.278.10-3vmw.701.0.0.16850804
* Broadcom-ntg3_4.1.5.0-0vmw.701.0.0.16850804
* HPE-hpv2-hpsa-plugin_1.0.0-3vmw.701.0.0.16850804
* Broadcom-lsi-msgpt3_17.00.10.00-1vmw.701.0.0.16850804
* Intel-SCU-rste_2.0.2.0088-7vmw.701.0.0.16850804
* Intel-ixgben_1.7.1.28-1vmw.701.0.0.16850804
* Intel-NVMe-Vol-Mgmt-Dev-Plugin_1.0.0-2vmw.701.0.0.16850804
* VMware-iser_1.1.0.1-1vmw.701.0.0.16850804
* Broadcom-lpnic_11.4.62.0-1vmw.701.0.0.16850804
* VMware-NVMe-PCIe_1.2.3.9-2vmw.701.0.0.16850804
* Microchip-smartpqi_70.4000.0.100-3vmw.701.0.0.16850804
* VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804
* VMware-oem-lenovo-plugin_1.0.0-1vmw.701.0.0.16850804
* Intel-igbn_0.1.1.0-7vmw.701.0.0.16850804
* VMware-vmkata_0.1-1vmw.701.0.0.16850804
* VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804
* VMware-oem-hp-plugin_1.0.0-1vmw.701.0.0.16850804
* Cisco-nenic_1.0.29.0-2vmw.701.0.0.16850804
* Broadcom-ELX-brcmfcoe_12.0.1500.0-1vmw.701.0.0.16850804
* HPE-nhpsa_70.0050.0.100-1vmw.701.0.0.16850804
* Mellanox-nmlx5_4.19.16.8-2vmw.701.0.0.16850804
* Broadcom-ELX-IMA-plugin_12.0.1200.0-3vmw.701.0.0.16850804
* Microchip-smartpqiv2-plugin_1.0.0-4vmw.701.0.0.16850804
* VMware-nvme-pcie-plugin_1.0.0-1vmw.701.0.0.16850804
* MRVL-E4-CNA-Driver-Bundle_1.0.0.0-1vmw.701.0.0.16850804
* VMware-oem-dell-plugin_1.0.0-1vmw.701.0.0.16850804
* VMware-nvmxnet3_2.0.0.30-1vmw.701.0.0.16850804
* VMware-ahci_2.0.5-1vmw.701.0.0.16850804
* Intel-i40iwn_1.1.2.6-1vmw.701.0.0.16850804
* Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.701.0.0.16850804
* Broadcom-lsiv2-drivers-plugin_1.0.0-4vmw.701.0.0.16850804
* Micron-mtip32xx-native_3.9.8-1vmw.701.0.0.16850804
* VMware-nvme-plugin_1.2.0.38-1vmw.701.0.0.16850804
* MRVL-QLogic-FC_4.0.3.0-17vmw.701.0.0.16850804
* VMware-vmkusb_0.1-1vmw.701.0.0.16850804
* VMware-VM-Tools_11.1.1.16303738-16850804
* Solarflare-NIC_2.4.0.0010-15vmw.701.0.0.16850804
* Intel-i40en_1.8.1.123-1vmw.701.0.0.16850804
* Broadcom-ELX-lpfc_12.6.278.10-8vmw.701.0.0.16850804
* MRVL-E3-Ethernet-iSCSI-FCoE_1.0.0.0-1vmw.701.0.0.16850804
* Broadcom-lsi-msgpt35_13.00.13.00-1vmw.701.0.0.16850804
* VMware-pvscsi_0.1-2vmw.701.0.0.16850804
* Broadcom-lsi-msgpt2_20.00.06.00-2vmw.701.0.0.16850804
* Broadcom-elxnet_12.0.1250.0-5vmw.701.0.0.16850804
* Broadcom-lsi-mr3_7.712.51.00-1vmw.701.0.0.16850804

ESXi_7.0.1-0.0.16850804

Patch Category: General. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included:
* VMware_bootbank_esx-xserver_7.0.1-0.0.16850804
* VMware_bootbank_cpu-microcode_7.0.1-0.0.16850804
* VMware_bootbank_esx-dvfilter-generic-fastpath_7.0.1-0.0.16850804
* VMware_bootbank_esx-base_7.0.1-0.0.16850804
* VMware_bootbank_vsan_7.0.1-0.0.16850804
* VMware_bootbank_esx-ui_1.34.4-0.0.16850804
* VMware_bootbank_crx_7.0.1-0.0.16850804
* VMware_bootbank_vsanhealth_7.0.1-0.0.16850804
* VMware_bootbank_native-misc-drivers_7.0.1-0.0.16850804
* VMware_bootbank_vdfs_7.0.1-0.0.16850804
* VMware_bootbank_gc_7.0.1-0.0.16850804
PRs Fixed: 2086530, 2226245, 2495261, 2156103
CVE numbers: N/A

The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
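When you patch with ESXCLI instead of a vSphere Lifecycle Manager baseline, this bulletin dependency does not arise if you apply the complete image profile, because the profile carries both the ESXi and esx-update bulletins. A minimal sketch, assuming the offline bundle has already been uploaded to a datastore (the datastore path is an example, not from this document; confirm the exact profile name from the bundle itself):

```shell
# Sketch only -- run in the ESXi shell (or over SSH) on the host being
# patched. Adjust DEPOT to wherever you uploaded the offline bundle ZIP.
DEPOT=/vmfs/volumes/datastore1/VMware-ESXi-7.0U1-16850804-depot.zip

# List the image profiles carried by the bundle to confirm the exact
# profile name before applying it.
esxcli software sources profile list -d "$DEPOT"

# Enter maintenance mode, apply the standard profile, and reboot
# (a host reboot is required for this release).
esxcli system maintenanceMode set --enable true
esxcli software profile update -d "$DEPOT" -p ESXi-7.0.1-16850804-standard
reboot
```

Using `esxcli software profile update` (rather than `install`) preserves VIBs on the host that are not part of the profile, such as separately installed third-party drivers.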
Updates the esx-dvfilter-generic-fastpath, vsanhealth, esx-ui, vdfs, vsan, esx-base, crx, native-misc-drivers, esx-xserver, gc, and cpu-microcode VIBs to resolve the following issues:

* NEW: Heap memory issue in VMFS6 datastores causes various problems with virtual machines

  In certain workflows, VMFS6 datastores might allocate memory but not free it, which leads to VMFS heap memory exhaustion. This issue might cause the following problems:
  * VMFS6 datastores display as "Not consumed" on ESXi hosts.
  * vSphere vMotion operations with virtual machines fail.
  * Virtual machines become orphaned when powered off.
  * Snapshot-based backups fail.
  * Creating or consolidating snapshots in your vCenter Server system or ESXi host fails with an error such as: Consolidation failed for disk node 'scsi0:1': 12 (Cannot allocate memory).

  In the vmkwarning.* files, you see errors such as:
  vmkwarning.0:2020-06-16T13:28:23.291Z cpu48:3479102)WARNING: Heap: 3651: Heap vmfs3 already at its maximum size. Cannot expand.
  In the vmkernel.* logs, you see errors such as:
  2020-06-29T14:59:36.351Z cpu21:5630454)WARNING: HBX: 2439: Failed to initialize VMFS distributed locking on volume 5eb9e8f1-f4aeef84-4256-1c34da50d370: Out of memory
  2020-06-29T14:59:36.351Z cpu21:5630454)Vol3: 4202: Failed to get object 28 type 1 uuid 5eb9e8f1-f4aeef84-4256-1c34da50d370 FD 0 gen 0 :Out of memory
  2020-06-29T14:59:36.351Z cpu21:5630454)Vol3: 4202: Failed to get object 28 type 2 uuid 5eb9e8f1-f4aeef84-4256-1c34da50d370 FD 4 gen 1 :Out of memory
  2020-06-29T14:59:36.356Z cpu21:5630454)WARNING: HBX: 2439: Failed to initialize VMFS distributed locking on volume 5eb9e8f1-f4aeef84-4256-1c34da50d370: Out of memory
  2020-06-29T14:59:36.356Z cpu21:5630454)Vol3: 4202: Failed to get object 28 type 1 uuid 5eb9e8f1-f4aeef84-4256-1c34da50d370 FD 0 gen 0 :Out of memory
  2020-06-29T14:59:36.356Z cpu21:5630454)Vol3: 4202: Failed to get object 28 type 2 uuid 5eb9e8f1-f4aeef84-4256-1c34da50d370 FD 4 gen 1 :Out of memory

  This issue is resolved in this release.

* PR 2086530: Setting the log level for the nvme_pcie driver fails with an error

  When you set the log level for the nvme_pcie driver with the command esxcli nvme driver loglevel set -l <log level>, the action fails with the error message: Failed to set log level 0x2. This command was retained for compatibility with the NVMe driver, but it is not supported for the nvme_pcie driver.

  This issue is resolved in this release. You can modify the log level by using the VMkernel system information shell.

* PR 2226245: When a host profile is copied from an ESXi host or a host profile is edited, the user input values are lost

  Some of the host profile keys are generated from a hash calculation even when explicit rules for key generation are provided. As a result, when you copy settings from a host or edit a host profile, the user input values in the answer file are lost.

  This issue is resolved in this release.
* PR 2495261: Checking the compliance state of an ESXi 7.0 host against a host profile with version 6.5 or 6.7 results in an error for vmhba and vmrdma devices

  When checking the compliance of an ESXi 7.0 host that uses the nmlx5_core or nvme_pcie driver against a host profile with version 6.5 or 6.7, you might observe the following errors, where address1 and address2 are specific to the affected system:
  * A vmhba device with bus type logical, address1 is not present on your host.
  * A vmrdma device with bus type logical, address2 is not present on your host.
  The errors are due to a mismatch between the device addresses generated by the nmlx5_core or nvme_pcie driver in ESXi version 7.0 and earlier versions.

  This issue is resolved in this release.

* PR 2156103: SNMP dynamic firewall ruleset is modified by Host Profiles during a remediation process

  The SNMP firewall ruleset is a dynamic state that is handled during runtime. When a host profile is applied, the configuration of the ruleset is managed simultaneously by Host Profiles and SNMP, which can modify the firewall settings unexpectedly.

  This issue is resolved in this release.

esx-update_7.0.1-0.0.16850804

Patch Category: General. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_loadesx_7.0.1-0.0.16850804, VMware_bootbank_esx-update_7.0.1-0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the loadesx and esx-update VIBs.

VMware-nvmxnet3-ens_2.0.0.22-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nvmxnet3-ens_2.0.0.22-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nvmxnet3-ens VIB.
Mellanox-nmlx4_3.19.16.8-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nmlx4-core_3.19.16.8-2vmw.701.0.0.16850804, VMW_bootbank_nmlx4-rdma_3.19.16.8-2vmw.701.0.0.16850804, VMW_bootbank_nmlx4-en_3.19.16.8-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nmlx4-core, nmlx4-rdma, and nmlx4-en VIBs.

Broadcom-elxiscsi_12.0.1200.0-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_elxiscsi_12.0.1200.0-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the elxiscsi VIB.

Cisco-nfnic_4.0.0.44-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nfnic_4.0.0.44-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nfnic VIB.

MRVL-E3-Ethernet_1.1.0.11-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_qflge_1.1.0.11-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the qflge VIB.

VMware-icen_1.0.0.9-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_icen_1.0.0.9-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the icen VIB.
Intel-ne1000_0.8.4-11vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_ne1000_0.8.4-11vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the ne1000 VIB.

Intel-Volume-Mgmt-Device_2.0.0.1055-5vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_iavmd_2.0.0.1055-5vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the iavmd VIB.

Broadcom-ELX-brcmnvmefc_12.6.278.10-3vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_brcmnvmefc_12.6.278.10-3vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the brcmnvmefc VIB.

Broadcom-ntg3_4.1.5.0-0vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_ntg3_4.1.5.0-0vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the ntg3 VIB.

HPE-hpv2-hpsa-plugin_1.0.0-3vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-hpv2-hpsa-plugin_1.0.0-3vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-hpv2-hpsa-plugin VIB.
Broadcom-lsi-msgpt3_17.00.10.00-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_lsi-msgpt3_17.00.10.00-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsi-msgpt3 VIB.

Intel-SCU-rste_2.0.2.0088-7vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_rste_2.0.2.0088-7vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the rste VIB.

Intel-ixgben_1.7.1.28-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_ixgben_1.7.1.28-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the ixgben VIB.

Intel-NVMe-Vol-Mgmt-Dev-Plugin_1.0.0-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-intelv2-nvme-vmd-plugin_1.0.0-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-intelv2-nvme-vmd-plugin VIB.

VMware-iser_1.1.0.1-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_iser_1.1.0.1-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the iser VIB.
Broadcom-lpnic_11.4.62.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_lpnic_11.4.62.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lpnic VIB.

VMware-NVMe-PCIe_1.2.3.9-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nvme-pcie_1.2.3.9-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nvme-pcie VIB.

Microchip-smartpqi_70.4000.0.100-3vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_smartpqi_70.4000.0.100-3vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the smartpqi VIB.

VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nvmerdma_1.0.1.2-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nvmerdma VIB.

VMware-oem-lenovo-plugin_1.0.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-oem-lenovo-plugin_1.0.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-oem-lenovo-plugin VIB.
Intel-igbn_0.1.1.0-7vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_igbn_0.1.1.0-7vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the igbn VIB.

VMware-vmkata_0.1-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_vmkata_0.1-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the vmkata VIB.

VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the vmkfcoe VIB.

VMware-oem-hp-plugin_1.0.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-oem-hp-plugin_1.0.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-oem-hp-plugin VIB.

Cisco-nenic_1.0.29.0-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nenic_1.0.29.0-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nenic VIB.
Broadcom-ELX-brcmfcoe_12.0.1500.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_brcmfcoe_12.0.1500.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the brcmfcoe VIB.

HPE-nhpsa_70.0050.0.100-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nhpsa_70.0050.0.100-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nhpsa VIB.

Mellanox-nmlx5_4.19.16.8-2vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nmlx5-rdma_4.19.16.8-2vmw.701.0.0.16850804, VMW_bootbank_nmlx5-core_4.19.16.8-2vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nmlx5-rdma and nmlx5-core VIBs.

Broadcom-ELX-IMA-plugin_12.0.1200.0-3vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_elx-esx-libelxima.so_12.0.1200.0-3vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the elx-esx-libelxima.so VIB.

Microchip-smartpqiv2-plugin_1.0.0-4vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-smartpqiv2-plugin_1.0.0-4vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-smartpqiv2-plugin VIB.
VMware-nvme-pcie-plugin_1.0.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-nvme-pcie-plugin_1.0.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-nvme-pcie-plugin VIB.

MRVL-E4-CNA-Driver-Bundle_1.0.0.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_qedentv_3.40.3.0-12vmw.701.0.0.16850804, VMW_bootbank_qedrntv_3.40.4.0-12vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the qedentv and qedrntv VIBs.

VMware-oem-dell-plugin_1.0.0-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: No. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMware_bootbank_lsuv2-oem-dell-plugin_1.0.0-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the lsuv2-oem-dell-plugin VIB.

VMware-nvmxnet3_2.0.0.30-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_nvmxnet3_2.0.0.30-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the nvmxnet3 VIB.

VMware-ahci_2.0.5-1vmw.701.0.0.16850804

Patch Category: Enhancement. Patch Severity: Important. Host Reboot Required: Yes. Virtual Machine Migration or Shutdown Required: Yes. Affected Hardware: N/A. Affected Software: N/A.
VIBs Included: VMW_bootbank_vmw-ahci_2.0.5-1vmw.701.0.0.16850804.
PRs Fixed: N/A. CVE numbers: N/A.

Updates the vmw-ahci VIB.
Intel-i40iwn_1.1.2.6-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_i40iwn_1.1.2.6-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the i40iwn VIB.

Broadcom-bnxt-Net-RoCE_216.0.0.0-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included:
  * VMW_bootbank_bnxtnet_216.0.50.0-16vmw.701.0.0.16850804
  * VMW_bootbank_bnxtroce_216.0.58.0-7vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the bnxtnet and bnxtroce VIBs.

Broadcom-lsiv2-drivers-plugin_1.0.0-4vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: No
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMware_bootbank_lsuv2-lsiv2-drivers-plugin_1.0.0-4vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the lsuv2-lsiv2-drivers-plugin VIB.

Micron-mtip32xx-native_3.9.8-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_mtip32xx-native_3.9.8-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the mtip32xx-native VIB.

VMware-nvme-plugin_1.2.0.38-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: No
* Virtual Machine Migration or Shutdown Required: No
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.38-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the vmware-esx-esxcli-nvme-plugin VIB.
MRVL-QLogic-FC_4.0.3.0-17vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMware_bootbank_qlnativefc_4.0.3.0-17vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the qlnativefc VIB.

VMware-vmkusb_0.1-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_vmkusb_0.1-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the vmkusb VIB.

VMware-VM-Tools_11.1.1.16303738-16850804

* Patch Category: General
* Patch Severity: Important
* Host Reboot Required: No
* Virtual Machine Migration or Shutdown Required: No
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMware_locker_tools-light_11.1.1.16303738-16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the tools-light VIB.

Solarflare-NIC_2.4.0.0010-15vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: No
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_sfvmk_2.4.0.0010-15vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the sfvmk VIB.

Intel-i40en_1.8.1.123-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_i40en_1.8.1.123-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the i40en VIB.
Broadcom-ELX-lpfc_12.6.278.10-8vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_lpfc_12.6.278.10-8vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the lpfc VIB.

MRVL-E3-Ethernet-iSCSI-FCoE_1.0.0.0-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included:
  * VMW_bootbank_qcnic_1.0.15.0-10vmw.701.0.0.16850804
  * VMW_bootbank_qfle3f_1.0.51.0-14vmw.701.0.0.16850804
  * VMW_bootbank_qfle3i_1.0.15.0-9vmw.701.0.0.16850804
  * VMW_bootbank_qfle3_1.0.67.0-9vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the qcnic, qfle3f, qfle3i, and qfle3 VIBs.

Broadcom-lsi-msgpt35_13.00.13.00-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_lsi-msgpt35_13.00.13.00-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the lsi-msgpt35 VIB.

VMware-pvscsi_0.1-2vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_pvscsi_0.1-2vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the pvscsi VIB.

Broadcom-lsi-msgpt2_20.00.06.00-2vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_lsi-msgpt2_20.00.06.00-2vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the lsi-msgpt2 VIB.
Broadcom-elxnet_12.0.1250.0-5vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_elxnet_12.0.1250.0-5vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the elxnet VIB.

Broadcom-lsi-mr3_7.712.51.00-1vmw.701.0.0.16850804

* Patch Category: Enhancement
* Patch Severity: Important
* Host Reboot Required: Yes
* Virtual Machine Migration or Shutdown Required: Yes
* Affected Hardware: N/A
* Affected Software: N/A
* VIBs Included: VMW_bootbank_lsi-mr3_7.712.51.00-1vmw.701.0.0.16850804
* PRs Fixed: N/A
* CVE numbers: N/A

Updates the lsi-mr3 VIB.

KNOWN ISSUES

The known issues are grouped as follows.

* Installation, Upgrade and Migration Issues
* Networking Issues
* Miscellaneous Issues
* Storage Issues
* Virtual Machines Management Issues
* Auto Deploy Issues
* Known Issues from Earlier Releases

Installation, Upgrade and Migration Issues

* NEW: If a vCenter Server system is of version 7.0, ESXi host upgrades to a later version by using the vSphere Lifecycle Manager and an ISO image fail

  If you use an ISO image to upgrade ESXi hosts to a version later than 7.0 by using the vSphere Lifecycle Manager, and the vCenter Server system is still on version 7.0, the upgrade fails. In the vSphere Client, you see the error Upgrade is not supported for host.

  Workaround: First upgrade your vCenter Server system to the 7.0.x version to which you plan to upgrade the ESXi hosts, and then retry the host upgrade by using the vSphere Lifecycle Manager and the ISO image. Alternatively, use another upgrade path, such as an interactive upgrade from a CD, DVD, or USB, a scripted upgrade, or ESXCLI, instead of the vSphere Lifecycle Manager and an ISO image.

* Installation of 7.0 Update 1 drivers on ESXi 7.0 hosts might fail

  You cannot install drivers applicable to ESXi 7.0 Update 1 on hosts that run ESXi 7.0 or 7.0b.
  The operation fails with an error such as:

  VMW_bootbank_qedrntv_3.40.4.0-12vmw.701.0.0.xxxxxxx requires vmkapi_2_7_0_0, but the requirement cannot be satisfied within the ImageProfile. Please refer to the log file for more details.

  Workaround: Update the ESXi host to 7.0 Update 1. Retry the driver installation.

Networking Issues

* One or more I/O devices do not generate interrupts when the AMD IOMMU is in use

  If the I/O devices on your ESXi host provide more than a total of 512 distinct interrupt sources, some sources are erroneously assigned an interrupt-remapping table entry (IRTE) index in the AMD IOMMU that is greater than the maximum value. Interrupts from such a source are lost, so the corresponding I/O device behaves as if interrupts are disabled.

  Workaround: Use the ESXCLI command esxcli system settings kernel set -s iovDisableIR -v true to disable the AMD IOMMU interrupt remapper. Reboot the ESXi host so that the command takes effect.

Miscellaneous Issues

* If you run the ESXCLI command to unload the firewall module, the hostd service fails and ESXi hosts lose connectivity

  If you automate the firewall configuration in an environment that includes multiple ESXi hosts, and run the ESXCLI command esxcli network firewall unload that destroys filters and unloads the firewall module, the hostd service fails and ESXi hosts lose connectivity.

  Workaround: Unloading the firewall module is not recommended at any time. If you must unload the firewall module, use the following steps:
  1. Stop the hostd service by using the command: /etc/init.d/hostd stop.
  2. Unload the firewall module by using the command: esxcli network firewall unload.
  3. Perform the required operations.
  4. Load the firewall module by using the command: esxcli network firewall load.
  5. Start the hostd service by using the command: /etc/init.d/hostd start.
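The five-step sequence above can be wrapped in a single script. The following is a minimal, non-official sketch: the command paths come straight from the workaround, while the DRY_RUN guard is an addition here so the sequence can be reviewed before running it on an actual ESXi host.

```shell
#!/bin/sh
# Sketch of the firewall-unload workaround above. With DRY_RUN=1 (the
# default), each command is only printed; set DRY_RUN=0 on an actual
# ESXi host to execute the sequence.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run /etc/init.d/hostd stop           # 1. stop hostd before touching the firewall
run esxcli network firewall unload   # 2. unload the firewall module
                                     # 3. perform the required operations here
run esxcli network firewall load     # 4. reload the firewall module
run /etc/init.d/hostd start          # 5. restart hostd last
```

Keeping hostd stopped for the whole window is the point of the ordering: it is the unload while hostd runs that makes the service fail.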
* vSphere Storage vMotion operations might fail in a vSAN environment due to an unauthenticated session of the Network File Copy (NFC) manager

  Migrations to a vSAN datastore by using vSphere Storage vMotion of virtual machines that have at least one snapshot and more than one virtual disk with different storage policy might fail. The issue occurs due to an unauthenticated session of the NFC manager because the Simple Object Access Protocol (SOAP) body exceeds the allowed size.

  Workaround: First migrate the VM home namespace and just one of the virtual disks. After the operation completes, perform a disk-only migration of the remaining 2 disks.

* Changes in the properties and attributes of the devices and storage on an ESXi host might not persist after a reboot

  If the device discovery routine during a reboot of an ESXi host times out, the jumpstart plug-in might not receive all configuration changes of the devices and storage from all the registered devices on the host. As a result, the process might restore the properties of some devices or storage to the default values after the reboot.

  Workaround: Manually restore the changes in the properties of the affected device or storage.

* If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations

  If you use a beta build of ESXi 7.0, ESXi hosts might fail with a purple diagnostic screen during some lifecycle operations, such as unloading a driver or switching between ENS mode and native driver mode. For example, if you try to change the ENS mode, in the backtrace you see an error message similar to:

  case ENS::INTERRUPT::NoVM_DeviceStateWithGracefulRemove hit BlueScreen: ASSERT bora/vmkernel/main/dlmalloc.c:2733

  This issue is specific to beta builds and does not affect release builds such as ESXi 7.0.

  Workaround: Update to ESXi 7.0 GA.
Storage Issues

* A VMFS datastore backed by an NVMe over Fabrics namespace or device might become permanently inaccessible after recovering from an APD or PDL failure

  If a VMFS datastore on an ESXi host is backed by an NVMe over Fabrics namespace or device, in case of an all paths down (APD) or permanent device loss (PDL) failure, the datastore might be inaccessible even after recovery. You cannot access the datastore from either the ESXi host or the vCenter Server system.

  Workaround: To recover from this state, perform a rescan on a host or cluster level. For more information, see Perform Storage Rescan.

Virtual Machines Management Issues

* Virtual machines with enabled AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) cannot create Virtual Machine Communication Interface (VMCI) sockets

  Performance and functionality of features that require VMCI might be affected on virtual machines with enabled AMD SEV-ES, because such virtual machines cannot create VMCI sockets.

  Workaround: None.

* smpboot fails for Linux virtual machines with enabled SEV-ES

  When a Linux virtual machine with multiple virtual CPUs and enabled SEV-ES boots, all CPUs except CPU0 are offline. You cannot bring the remaining CPUs online. The dmesg command returns an error such as smpboot: do_boot_cpu failed(-1) to wakeup CPU#1.

  Workaround: None.

Auto Deploy Issues

* You cannot PXE boot an ESXi host by using vSphere Auto Deploy due to a network error

  For ESXi hosts with Emulex and QLogic host bus adapters (HBA), attempts to PXE boot the host by using vSphere Auto Deploy might fail due to a network error. For some Emulex adapters, in the PXE Boot console you see a message such as:

  Could not open net0: Input/output error (http://ipxe.org/1d6a4a98)
  Network error encountered while PXE booting.
  Scanning the local disk for cached image. If no image is found, the system will reboot in 20 seconds ......
  Could not boot.
  No such device (http://ipxe.org/2c048087)

  Emulex HBA adapters that persistently face the issue are:
  * HPE StoreFabric CN1200E-T 10Gb Converged Network Adapter
  * HPE StoreFabric CN1200E 10Gb Converged Network Adapter
  * HP FlexFabric 20Gb 2-port 650FLB Adapter
  * HP FlexFabric 20Gb 2-port 650M Adapter

  For ESXi hosts with QLogic HBAs, attempts to PXE boot the host by using vSphere Auto Deploy do not always fail. If the ESXi host encounters the issue, in the PXE Boot console you see a message such as:

  Configuring (net0 f4:03:43:b4:88:d0)......
  No configuration methods succeeded (http://ipxe.org/040ee186)
  Network error encountered while PXE booting.

  The affected QLogic HBA adapter is HP Ethernet 10Gb 2-port 530T.

  Workaround: None.

KNOWN ISSUES FROM EARLIER RELEASES

To view a list of previous known issues, click here. The earlier known issues are grouped as follows.

* Installation, Upgrade, and Migration Issues
* Security Features Issues
* Networking Issues
* Storage Issues
* vCenter Server and vSphere Client Issues
* Virtual Machine Management Issues
* vSphere HA and Fault Tolerance Issues
* vSphere Lifecycle Manager Issues
* Miscellaneous Issues

Installation, Upgrade, and Migration Issues

* The vCenter Upgrade/Migration pre-checks fail with "Unexpected error 87"

  The vCenter Server Upgrade/Migration pre-checks fail when the Security Token Service (STS) certificate does not contain a Subject Alternative Name (SAN) field. This situation occurs when you have replaced the vCenter 5.5 Single Sign-On certificate with a custom certificate that has no SAN field, and you attempt to upgrade to vCenter Server 7.0. The upgrade considers the STS certificate invalid and the pre-checks prevent the upgrade process from continuing.

  Workaround: Replace the STS certificate with a valid certificate that contains a SAN field, and then proceed with the vCenter Server 7.0 Upgrade/Migration.
* Problems upgrading to vSphere 7.0 with pre-existing CIM providers

  After upgrade, previously installed 32-bit CIM providers stop working because ESXi requires 64-bit CIM providers. Customers may lose management API functions related to CIMPDK, NDDK (native DDK), HEXDK, VAIODK (IO filters), and see errors related to uwglibc dependency. The syslog reports module missing, "32 bit shared libraries not loaded."

  Workaround: There is no workaround. The fix is to download new 64-bit CIM providers from your vendor.

* Smart Card and RSA SecurID authentication might stop working after upgrading to vCenter Server 7.0

  If you have configured vCenter Server for either Smart Card or RSA SecurID authentication, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057 before starting the vSphere 7.0 upgrade process. If you do not perform the workaround as described in the KB, you might see the following error messages and Smart Card or RSA SecurID authentication does not work.

  "Smart card authentication may stop working. Smart card settings may not be preserved, and smart card authentication may stop working."

  or

  "RSA SecurID authentication may stop working. RSA SecurID settings may not be preserved, and RSA SecurID authentication may stop working."

  Workaround: Before upgrading to vSphere 7.0, see the VMware knowledge base article at https://kb.vmware.com/s/article/78057.

* Upgrading a vCenter Server with an external Platform Services Controller from 6.7u3 to 7.0 fails with VMAFD error

  When you upgrade a vCenter Server deployment using an external Platform Services Controller, you converge the Platform Services Controller into a vCenter Server appliance. If the upgrade fails with the error install.vmafd.vmdir_vdcpromo_error_21, the VMAFD firstboot process has failed. The VMAFD firstboot process copies the VMware Directory Service Database (data.mdb) from the source Platform Services Controller and replication partner vCenter Server appliance.
  Workaround: Disable TCP Segmentation Offload (TSO) and Generic Segmentation Offload (GSO) on the Ethernet adapter of the source Platform Services Controller or replication partner vCenter Server appliance before upgrading a vCenter Server with an external Platform Services Controller. See Knowledge Base article: https://kb.vmware.com/s/article/74678

* Upgrading vCenter Server using the CLI incorrectly preserves the Transport Layer Security (TLS) configuration for the vSphere Authentication Proxy service

  If the vSphere Authentication Proxy service (vmcam) is configured to use a particular TLS protocol other than the default TLS 1.2 protocol, this configuration is preserved during the CLI upgrade process. By default, vSphere supports the TLS 1.2 encryption protocol. If you must use the TLS 1.0 and TLS 1.1 protocols to support products or services that do not support TLS 1.2, use the TLS Configurator Utility to enable or disable different TLS protocol versions.

  Workaround: Use the TLS Configurator Utility to configure the vmcam port. To learn how to manage TLS protocol configuration and use the TLS Configurator Utility, see the VMware Security documentation.

* Smart card and RSA SecurID settings may not be preserved during vCenter Server upgrade

  Authentication using RSA SecurID will not work after upgrading to vCenter Server 7.0. An error message will alert you to this issue when attempting to log in using your RSA SecurID login.

  Workaround: Reconfigure the smart card or RSA SecurID.

* Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with network error message

  Migration of vCenter Server for Windows to vCenter Server appliance 7.0 fails with the error message IP already exists in the network. This prevents the migration process from configuring the network parameters on the new vCenter Server appliance. For more information, examine the log file: /var/log/vmware/upgrade/UpgradeRunner.log

  Workaround:
  1.
  Verify that all Windows Updates have been completed on the source vCenter Server for Windows instance, or disable automatic Windows Updates until after the migration finishes.
  2. Retry the migration of vCenter Server for Windows to vCenter Server appliance 7.0.

* When you configure the number of virtual functions for an SR-IOV device by using the max_vfs module parameter, the changes might not take effect

  In vSphere 7.0, you can configure the number of virtual functions for an SR-IOV device by using the Virtual Infrastructure Management (VIM) API, for example, through the vSphere Client. The task does not require a reboot of the ESXi host. After you use the VIM API configuration, if you try to configure the number of SR-IOV virtual functions by using the max_vfs module parameter, the changes might not take effect because they are overridden by the VIM API configuration.

  Workaround: None. To configure the number of virtual functions for an SR-IOV device, use the same method every time. Use the VIM API, or use the max_vfs module parameter and reboot the ESXi host.

* Upgraded vCenter Server appliance instance does not retain all the secondary networks (NICs) from the source instance

  During a major upgrade, if the source instance of the vCenter Server appliance is configured with multiple secondary networks other than the VCHA NIC, the target vCenter Server instance will not retain secondary networks other than the VCHA NIC. If the source instance is configured with multiple NICs that are part of DVS port groups, the NIC configuration will not be preserved during the upgrade. Configurations for vCenter Server appliance instances that are part of the standard port group will be preserved.

  Workaround: None. Manually configure the secondary network in the target vCenter Server appliance instance.
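For reference, the max_vfs path described in the SR-IOV issue above can be sketched as follows. This is a hedged example, not text from this document: the driver name (i40en) and the per-port VF count of 8 are illustrative assumptions, and the DRY_RUN guard is added so the commands can be reviewed before running them on a host.

```shell
#!/bin/sh
# Sketch: set the number of SR-IOV virtual functions through the
# max_vfs module parameter, then reboot so the change takes effect.
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# The driver name i40en and "8,8" (8 VFs on each of two ports) are
# hypothetical examples; substitute your NIC driver and counts.
run esxcli system module parameters set -m i40en -p "max_vfs=8,8"
run reboot   # module parameters are read at boot
```

As the issue notes, pick one method, the VIM API or max_vfs, and use it consistently, since a later VIM API configuration overrides the module parameter.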
* After upgrading or migrating a vCenter Server with an external Platform Services Controller, users authenticating using Active Directory lose access to the newly upgraded vCenter Server instance

  After upgrading or migrating a vCenter Server with an external Platform Services Controller, if the newly upgraded vCenter Server is not joined to an Active Directory domain, users authenticating using Active Directory will lose access to the vCenter Server instance.

  Workaround: Verify that the new vCenter Server instance has been joined to an Active Directory domain. See Knowledge Base article: https://kb.vmware.com/s/article/2118543

* Migrating a vCenter Server for Windows with an external Platform Services Controller using an Oracle database fails

  If there are non-ASCII strings in the Oracle events and tasks table, the migration can fail when exporting events and tasks data. The following error message is provided: UnicodeDecodeError

  Workaround: None.

* After an ESXi host upgrade, a Host Profile compliance check shows non-compliant status while host remediation tasks fail

  The non-compliant status indicates an inconsistency between the profile and the host. This inconsistency might occur because ESXi 7.0 does not allow duplicate claim rules, but the profile you use contains duplicate rules. For example, if you attempt to use the Host Profile that you extracted from the host before upgrading ESXi 6.5 or ESXi 6.7 to version 7.0, and the Host Profile contains any duplicate claim rules of system default rules, you might experience the problems.

  Workaround:
  1. Remove any duplicate claim rules of the system default rules from the Host Profile document.
  2. Check the compliance status.
  3. Remediate the host.
  4. If the previous steps do not help, reboot the host.
* Error message displays in the vCenter Server Management Interface

  After installing or upgrading to vCenter Server 7.0, when you navigate to the Update panel within the vCenter Server Management Interface, the error message "Check the URL and try again" displays. The error message does not prevent you from using the functions within the Update panel, and you can view, stage, and install any available updates.

  Workaround: None.

Security Features Issues

* Encrypted virtual machine fails to power on when HA-enabled Trusted Cluster contains an unattested host

  In VMware® vSphere Trust Authority™, if you have enabled HA on the Trusted Cluster and one or more hosts in the cluster fails attestation, an encrypted virtual machine cannot power on.

  Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

* Encrypted virtual machine fails to power on when DRS-enabled Trusted Cluster contains an unattested host

  In VMware® vSphere Trust Authority™, if you have enabled DRS on the Trusted Cluster and one or more hosts in the cluster fails attestation, DRS might try to power on an encrypted virtual machine on an unattested host in the cluster. This operation puts the virtual machine in a locked state.

  Workaround: Either remove or remediate all hosts that failed attestation from the Trusted Cluster.

* Migrating or cloning encrypted virtual machines across vCenter Server instances fails when attempting to do so using the vSphere Client

  If you try to migrate or clone an encrypted virtual machine across vCenter Server instances using the vSphere Client, the operation fails with the following error message: "The operation is not allowed in the current state."

  Workaround: You must use the vSphere APIs to migrate or clone encrypted virtual machines across vCenter Server instances.
Networking Issues

* Reduced throughput in networking performance on Intel 82599/X540/X550 NICs

  The new queue-pair feature added to the ixgben driver to improve networking performance on Intel 82599EB/X540/X550 series NICs might reduce throughput under some workloads in vSphere 7.0 as compared to vSphere 6.7.

  Workaround: To achieve the same networking performance as vSphere 6.7, you can disable the queue-pair feature with a module parameter. To disable the queue-pair feature, run the command:

  # esxcli system module parameters set -p "QPair=0,0,0,0..." -m ixgben

  After running the command, reboot.

* High throughput virtual machines may experience degradation in network performance when Network I/O Control (NetIOC) is enabled

  Virtual machines requiring high network throughput can experience throughput degradation when upgrading from vSphere 6.7 to vSphere 7.0 with NetIOC enabled.

  Workaround: Adjust the ethernetx.ctxPerDev setting to enable multiple worlds.

* IPv6 traffic fails to pass through VMkernel ports using IPsec

  When you migrate VMkernel ports from one port group to another, IPv6 traffic does not pass through VMkernel ports using IPsec.

  Workaround: Remove the IPsec security association (SA) from the affected server, and then reapply the SA. To learn how to set and remove an IPsec SA, see the vSphere Security documentation.

* Higher ESX network performance with a portion of CPU usage increase

  ESX network performance may increase with a portion of CPU usage.

  Workaround: Remove and add the network interface with only 1 rx dispatch queue. For example:

  esxcli network ip interface remove --interface-name=vmk1
  esxcli network ip interface add --interface-name=vmk1 --num-rxqueue=1

* VM might lose Ethernet traffic after hot-add, hot-remove or storage vMotion

  A VM might stop receiving Ethernet traffic after a hot-add, hot-remove or storage vMotion. This issue affects VMs where the uplink of the VNIC has SR-IOV enabled.
  A PVRDMA virtual NIC exhibits this issue when the uplink of the virtual network is a Mellanox RDMA-capable NIC and RDMA namespaces are configured.

  Workaround: You can hot-remove and hot-add the affected Ethernet NICs of the VM to restore traffic. On Linux guest operating systems, restarting the network might also resolve the issue. If these workarounds have no effect, you can reboot the VM to restore network connectivity.

* Change of IP address for a VCSA deployed with static IP address requires that you create the DNS records in advance

  With the introduction of DDNS, the DNS record update only works for a VCSA deployed with DHCP-configured networking. While changing the IP address of the vCenter Server via VAMI, the following error is displayed: The specified IP address does not resolve to the specified hostname.

  Workaround: There are two possible workarounds.
  1. Create an additional DNS entry with the same FQDN and desired IP address. Log in to the VAMI and follow the steps to change the IP address.
  2. Log in to the VCSA using SSH. Execute the following script:

     /opt/vmware/share/vami/vami_config_net

     Use option 6 to change the IP address of eth0. Once changed, execute the following script:

     /opt/likewise/bin/lw-update-dns

     Restart all the services on the VCSA to update the IP information on the DNS server.

* It may take several seconds for the NSX Distributed Virtual Port Group (NSX DVPG) to be removed after deleting the corresponding logical switch in NSX Manager

  As the number of logical switches increases, it may take more time for the NSX DVPG in vCenter Server to be removed after deleting the corresponding logical switch in NSX Manager. In an environment with 12000 logical switches, it takes approximately 10 seconds for an NSX DVPG to be deleted from vCenter Server.

  Workaround: None.

* Hostd runs out of memory and fails if a large number of NSX Distributed Virtual port groups are created.
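The second VCSA IP-change workaround above can be sketched as a short session on the appliance. This is a non-authoritative sketch: the two script paths are taken from the workaround itself, while the service-control commands are one common way to restart all VCSA services (an assumption here, since the workaround only says to restart them), and the DRY_RUN guard is an addition for safe review.

```shell
#!/bin/sh
# Sketch of workaround 2: change the VCSA IP over SSH, update DNS,
# and restart services. DRY_RUN=1 (default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Interactive menu; choose option 6 to change the IP address of eth0.
run /opt/vmware/share/vami/vami_config_net
# Push the new address to the DNS server.
run /opt/likewise/bin/lw-update-dns
# Restart all services so they pick up the new IP (assumed commands).
run service-control --stop --all
run service-control --start --all
```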
  In vSphere 7.0, NSX Distributed Virtual port groups consume significantly larger amounts of memory than opaque networks. For this reason, NSX Distributed Virtual port groups cannot support the same scale as an opaque network given the same amount of memory.

  Workaround: To support the use of NSX Distributed Virtual port groups, increase the amount of memory in your ESXi hosts. If you verify that your system has adequate memory to support your VMs, you can directly increase the memory of hostd using the following command:

  localcli --plugin-dir /usr/lib/vmware/esxcli/int/ sched group setmemconfig --group-path host/vim/vmvisor/hostd --units mb --min 2048 --max 2048

  Note that this will cause hostd to use memory normally reserved for your environment's VMs. This may have the effect of reducing the number of VMs your ESXi host can support.

* DRS may incorrectly launch vMotion if the network reservation is configured on a VM

  If the network reservation is configured on a VM, it is expected that DRS only migrates the VM to a host that meets the specified requirements. In a cluster with NSX transport nodes, if some of the transport nodes join the transport zone by NSX-T Virtual Distributed Switch (N-VDS), and others by vSphere Distributed Switch (VDS) 7.0, DRS may incorrectly launch vMotion. You might encounter this issue when:
  * The VM connects to an NSX logical switch configured with a network reservation.
  * Some transport nodes join the transport zone using N-VDS and others by VDS 7.0, or transport nodes join the transport zone through different VDS 7.0 instances.

  Workaround: Make all transport nodes join the transport zone by N-VDS or the same VDS 7.0 instance.

* When adding a VMkernel NIC (vmknic) to an NSX portgroup, vCenter Server reports the error "Connecting VMKernel adapter to a NSX Portgroup on a Stateless host is not a supported operation. Please use Distributed Port Group instead."
  * For stateless ESXi on a Distributed Virtual Switch (DVS), the vmknic on an NSX port group is blocked. You must instead use a Distributed Port Group.
  * For stateful ESXi on DVS, the vmknic on an NSX port group is supported, but vSAN may have an issue if it is using a vmknic on an NSX port group.

  Workaround: Use a Distributed Port Group on the same DVS.

* Enabling SR-IOV from vCenter for QLogic 4x10GE QL41164HFCU CNA might fail

  If you navigate to the Edit Settings dialog for physical network adapters and attempt to enable SR-IOV, the operation might fail when using the QLogic 4x10GE QL41164HFCU CNA. Attempting to enable SR-IOV might lead to a network outage of the ESXi host.

  Workaround: Use the following command on the ESXi host to enable SR-IOV: esxcfg-module

* New vCenter Server fails if the hosts in a cluster using Distributed Resource Scheduler (DRS) join NSX-T networking by a different Virtual Distributed Switch (VDS) or combination of NSX-T Virtual Distributed Switch (NVDS) and VDS

  In vSphere 7.0, when using NSX-T networking on vSphere VDS with a DRS cluster, if the hosts do not join the NSX transport zone by the same VDS or NVDS, it can cause vCenter Server to fail.

  Workaround: Have hosts in a DRS cluster join the NSX transport zone using the same VDS or NVDS.

Storage Issues

* VMFS datastores are not mounted automatically after disk hot remove and hot insert on HPE Gen10 servers with SmartPQI controllers

  When SATA disks on HPE Gen10 servers with SmartPQI controllers without expanders are hot removed and hot inserted back to a different disk bay of the same machine, or when multiple disks are hot removed and hot inserted back in a different order, sometimes a new local name is assigned to the disk. The VMFS datastore on that disk appears as a snapshot and will not be mounted back automatically because the device name has changed.

  Workaround: None. The SmartPQI controller does not support unordered hot remove and hot insert operations.
* ESXi might terminate I/O to NVMeOF devices due to errors on all active paths

  Occasionally, all active paths to an NVMeOF device register I/O errors due to link issues or controller state. If the status of one of the paths changes to Dead, the High Performance Plug-in (HPP) might not select another path if it shows a high volume of errors. As a result, the I/O fails.

  Workaround: Disable the configuration option /Misc/HppManageDegradedPaths to unblock the I/O.

* VOMA check on NVMe based VMFS datastores fails with an error

  VOMA check is not supported for NVMe based VMFS datastores and fails with the error:
  ERROR: Failed to reserve device. Function not implemented

  Example:
  # voma -m vmfs -f check -d /vmfs/devices/disks/:<partition#>
  Running VMFS Checker version 2.1 in check mode
  Initializing LVM metadata, Basic Checks will be done
  Checking for filesystem activity
  Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).
  ERROR: Failed to reserve device. Function not implemented
  Aborting VMFS volume check
  VOMA failed to check device : General Error

  Workaround: None. If you need to analyze VMFS metadata, collect it using the -l option and pass it to VMware customer support. The command for collecting the dump is:
  voma -l -f dump -d /vmfs/devices/disks/:<partition#>

* Using the VM reconfigure API to attach an encrypted First Class Disk to an encrypted virtual machine might fail with an error

  If an FCD and a VM are encrypted with different crypto keys, your attempts to attach the encrypted FCD to the encrypted VM using the VM reconfigure API might fail with the error message: Cannot decrypt disk because key or password is incorrect.

  Workaround: Use the attachDisk API rather than the VM reconfigure API to attach an encrypted FCD to an encrypted VM.
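The two command-line workarounds above can be sketched as a short host-side session. These commands run only in the ESXi Shell or over SSH to the host; the esxcli syntax for toggling the /Misc/HppManageDegradedPaths advanced option is an assumption based on standard esxcli usage, so verify it against your ESXi build before use.

```shell
# Run on the ESXi host only (ESXi Shell or SSH).

# Unblock I/O after NVMeOF all-path errors by disabling HPP degraded-path
# handling (option name from the release note; set syntax is an assumption).
esxcli system settings advanced set -o /Misc/HppManageDegradedPaths -i 0

# VOMA checks are unsupported on NVMe-backed VMFS; instead collect a metadata
# dump for VMware customer support. Substitute your device and partition.
voma -l -f dump -d /vmfs/devices/disks/:<partition#>
```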
* ESXi host might get in a non-responding state if a non-head extent of its spanned VMFS datastore enters the Permanent Device Loss (PDL) state

  This problem does not occur when a non-head extent of the spanned VMFS datastore fails along with the head extent. In that case, the entire datastore becomes inaccessible and no longer allows I/Os. In contrast, when only a non-head extent fails but the head extent remains accessible, the datastore heartbeat appears to be normal, and the I/Os between the host and the datastore continue. However, any I/Os that depend on the failed non-head extent start failing as well. Other I/O transactions might accumulate while waiting for the failing I/Os to resolve and cause the host to enter the non-responding state.

  Workaround: Fix the PDL condition of the non-head extent to resolve this issue.

* After recovering from APD or PDL conditions, a VMFS datastore with enabled support for clustered virtual disks might remain inaccessible

  You can encounter this problem only on datastores where the clustered virtual disk support is enabled. When the datastore recovers from an All Paths Down (APD) or Permanent Device Loss (PDL) condition, it remains inaccessible. The VMkernel log might show multiple SCSI3 reservation conflict messages similar to the following:

  2020-02-18T07:41:10.273Z cpu22:1001391219)ScsiDeviceIO: vm 1001391219: SCSIDeviceCmdCompleteCB:2972: Reservation conflict retries 544 for command 0x45ba814b8340 (op: 0x89) to device "naa.624a9370b97601e346f64ba900024d53"

  The problem can occur because the ESXi host participating in the cluster loses SCSI reservations for the datastore and cannot always reacquire them automatically after the datastore recovers.

  Workaround: Manually register the reservation using the following command:
  vmkfstools -L registerkey /vmfs/devices/disks/<device name>
  where <device name> is the name of the device on which the datastore is created.
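As a sketch of the reservation workaround above: first identify the device backing the datastore (matching the naa.* identifier from the reservation-conflict log lines), then re-register the key. Both commands run on the ESXi host only; the device-list step is an addition for locating the device name, while the vmkfstools line comes verbatim from the workaround.

```shell
# Run on the affected ESXi host after the datastore recovers from APD/PDL.

# List storage devices to find the naa.* identifier seen in the
# "Reservation conflict" VMkernel log messages.
esxcli storage core device list

# Re-register the SCSI-3 reservation key on that device.
vmkfstools -L registerkey /vmfs/devices/disks/<device name>
```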
* Virtual NVMe Controller is the default disk controller for Windows 10 guest operating systems

  The Virtual NVMe Controller is the default disk controller for the following guest operating systems when using Hardware Version 15 or later:
  * Windows 10
  * Windows Server 2016
  * Windows Server 2019

  Some features might not be available when using a Virtual NVMe Controller. For more information, see https://kb.vmware.com/s/article/2147714

  Note: Some clients use the previous default of LSI Logic SAS. This includes the ESXi host client and PowerCLI.

  Workaround: If you need features not available on Virtual NVMe, switch to VMware Paravirtual SCSI (PVSCSI) or LSI Logic SAS. For information on using VMware Paravirtual SCSI (PVSCSI), see https://kb.vmware.com/s/article/1010398

* After an ESXi host upgrade to vSphere 7.0, the presence of duplicate core claim rules might cause unexpected behavior

  Claim rules determine which multipathing plugin, such as NMP or HPP, owns paths to a particular storage device. ESXi 7.0 does not support duplicate claim rules. However, the ESXi 7.0 host does not alert you if you add duplicate rules to the existing claim rules inherited through an upgrade from a legacy release. As a result of using duplicate rules, storage devices might be claimed by unintended plugins, which can cause an unexpected outcome.

  Workaround: Do not use duplicate core claim rules. Before adding a new claim rule, delete any existing matching claim rule.

* A CNS query with the compliance status filter set might take an unusually long time to complete

  The CNS QueryVolume API enables you to obtain information about the CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.

  Workaround: Avoid using bulk queries.
  When you need to get compliance status, query one volume at a time or limit the number of volumes in the query API to 20 or fewer. While using the query, avoid running other CNS operations to get the best performance.

* New Deleted CNS volumes might temporarily appear as existing in the CNS UI

  After you delete an FCD disk that backs a CNS volume, the volume might still show up as existing in the CNS UI. However, your attempts to delete the volume fail. You might see an error message similar to the following: The object or item referred to could not be found.

  Workaround: The next full synchronization resolves the inconsistency and correctly updates the CNS UI.

* New Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error

  When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while the other volume mounts fail.

  Workaround: After Kubernetes retries the failed operation, the operation succeeds if a controller slot is available on the node VM.

* New Under certain circumstances, when a CNS operation fails, the task status appears as successful in the vSphere Client

  This might occur when, for example, you use a noncompliant storage policy to create a CNS volume. The operation fails, while the vSphere Client shows the task status as successful.

  Workaround: The successful task status in the vSphere Client does not guarantee that the CNS operation succeeded. To make sure the operation succeeded, verify its results.

* New Unsuccessful delete operation for a CNS persistent volume might leave the volume undeleted on the vSphere datastore

  This issue might occur when the CNS Delete API attempts to delete a persistent volume that is still attached to a pod. For example, when you delete the Kubernetes namespace where the pod runs.
  As a result, the volume gets cleared from CNS and the CNS query operation does not return the volume. However, the volume continues to reside on the datastore and cannot be deleted through repeated CNS Delete API operations.

  Workaround: None.

vCenter Server and vSphere Client Issues

* Vendor providers go offline after a PNID change

  When you change the vCenter IP address (PNID change), the registered vendor providers go offline.

  Workaround: Re-register the vendor providers.

* Cross vCenter migration of a virtual machine fails with an error

  When you use cross vCenter vMotion to move a VM's storage and host to a different vCenter Server instance, you might receive the error The operation is not allowed in the current state. This error appears in the UI wizard after the Host Selection step and before the Datastore Selection step, in cases where the VM has an assigned storage policy containing host-based rules, such as encryption or any other I/O filter rule.

  Workaround: Assign the VM and its disks to a storage policy without host-based rules. You might need to decrypt the VM if the source VM is encrypted. Then retry the cross vCenter vMotion action.

* Storage Sensors information in the Hardware Health tab shows incorrect values on the vCenter UI, host UI, and MOB

  When you navigate to Host > Monitor > Hardware Health > Storage Sensors on the vCenter UI, the storage information displays either incorrect or unknown values. The same issue is observed on the host UI and the MOB path "runtime.hardwareStatusInfo.storageStatusInfo" as well.

  Workaround: None.

* vSphere UI host advanced settings shows the current product locker location as empty with an empty default

  vSphere UI host advanced settings shows the current product locker location as empty with an empty default. This is inconsistent, because the actual product locker location symlink is created and valid, which causes confusion. The default cannot be corrected from the UI.
  Workaround: Use the esxcli command on the host to correct the current product locker location default as follows.

  1. Remove the existing product locker location setting:
     esxcli system settings advanced remove -o ProductLockerLocation
  2. Re-add the product locker location setting with the appropriate default:
     a. If the ESXi is a full installation, the default value is /locker/packages/vmtoolsRepo:
        export PRODUCT_LOCKER_DEFAULT="/locker/packages/vmtoolsRepo"
     b. If the ESXi is a PXEboot configuration such as autodeploy, the default value is /vmtoolsRepo:
        export PRODUCT_LOCKER_DEFAULT="/vmtoolsRepo"
     Alternatively, run the following command to determine the location automatically:
        export PRODUCT_LOCKER_DEFAULT=`readlink /productLocker`
     Then add the setting:
        esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s $PRODUCT_LOCKER_DEFAULT

  You can combine all the steps above in step 2 by issuing the single command:
  esxcli system settings advanced add -d "Path to VMware Tools repository" -o ProductLockerLocation -t string -s `readlink /productLocker`

* Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC

  When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server and linked to an SDDC, the SDDC vCenter Server appears in the on-premises vSphere Client. This is unexpected behavior, and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.

  Workaround: None.
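The product locker steps above can be condensed into a short script. The `product_locker_default` helper is an illustrative addition that works on any POSIX shell; the esxcli lines, shown as comments because they run only on an ESXi host, come verbatim from the workaround.

```shell
#!/bin/sh
# Resolve the ProductLockerLocation default by reading the existing symlink.
# Accepts an optional link path so the helper can be exercised off-host.
product_locker_default() {
  readlink "${1:-/productLocker}"
}

# On the ESXi host, remove the stale setting and re-add it with the resolved
# default (commands from the workaround above):
#   esxcli system settings advanced remove -o ProductLockerLocation
#   esxcli system settings advanced add -d "Path to VMware Tools repository" \
#       -o ProductLockerLocation -t string -s "$(product_locker_default)"
```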
Virtual Machine Management Issues

* The postcustomization section of the customization script runs before the guest customization

  When you run the guest customization script for a Linux guest operating system, the precustomization section of the customization script that is defined in the customization specification runs before the guest customization, and the postcustomization section runs after it. If you enable Cloud-Init in the guest operating system of a virtual machine, the postcustomization section runs before the customization due to a known issue in Cloud-Init.

  Workaround: Disable Cloud-Init and use the standard guest customization.

* Group migration operations in vSphere vMotion, Storage vMotion, and vMotion without shared storage fail with an error

  When you perform group migration operations on VMs with multiple disks and multi-level snapshots, the operations might fail with the error: com.vmware.vc.GenericVmConfigFault Failed waiting for data. Error 195887167. Connection closed by remote host, possibly due to timeout.

  Workaround: Retry the migration operation on the failed VMs one at a time.

* Deploying an OVF or OVA template from a URL fails with a 403 Forbidden error

  URLs that contain an HTTP query parameter are not supported. For example, http://webaddress.com?file=abc.ovf or the Amazon pre-signed S3 URLs.

  Workaround: Download the files and deploy them from your local file system.

* Importing or deploying local OVF files containing non-ASCII characters in their name might fail with an error

  When you import local .ovf files containing non-ASCII characters in their name, you might receive a 400 Bad Request Error. When you use such .ovf files to deploy a virtual machine in the vSphere Client, the deployment process stops at 0%. As a result, you might receive a 400 Bad Request Error or a 500 Internal Server Error.

  Workaround:
  1. Remove the non-ASCII characters from the .ovf and .vmdk file names.
     * To edit the .ovf file, open it with a text editor.
     * Search for the non-ASCII .vmdk file name and change it to ASCII.
  2. Import or deploy the saved files again.

* New The third level of nested objects in a virtual machine folder is not visible

  Perform the following steps:
  1. Navigate to a data center and create a virtual machine folder.
  2. In the virtual machine folder, create a nested virtual machine folder.
  3. In the second folder, create another nested virtual machine, virtual machine folder, vApp, or VM template.

  As a result, from the VMs and Templates inventory tree you cannot see the objects in the third nested folder.

  Workaround: To see the objects in the third nested folder, navigate to the second nested folder and select the VMs tab.

vSphere HA and Fault Tolerance Issues

* VMs in a cluster might be orphaned after recovering from storage inaccessibility, such as a cluster-wide APD

  Some VMs might be in an orphaned state after a cluster-wide APD recovers, even if HA and VMCP are enabled on the cluster. This issue might be encountered when the following conditions occur simultaneously:
  * All hosts in the cluster experience APD and do not recover until the VMCP timeout is reached.
  * The HA primary initiates failover due to APD on a host.
  * The power-on API during HA failover fails due to one of the following:
    * APD across the same host
    * Cascading APD across the entire cluster
    * Storage issues
    * Resource unavailability
  * FDM unregistration and vCenter Server's steal-VM logic might initiate during a window where FDM has not unregistered the failed VM and vCenter Server's host synchronization responds that multiple hosts are reporting the same VM. Both FDM and vCenter Server unregister the different registered copies of the same VM from different hosts, causing the VM to be orphaned.

  Workaround: You must unregister and reregister the orphaned VMs manually within the cluster after the APD recovers. If you do not manually reregister the orphaned VMs, HA attempts failover of the orphaned VMs, but it might take between 5 and 10 hours depending on when APD recovers.
  The overall functionality of the cluster is not affected in these cases, and HA continues to protect the VMs. This is an anomaly in what gets displayed in vCenter Server for the duration of the problem.

vSphere Lifecycle Manager Issues

* You cannot enable NSX-T on a cluster that is already enabled for managing image setup and updates on all hosts collectively

  NSX-T is not compatible with the vSphere Lifecycle Manager functionality for image management. When you enable a cluster for image setup and updates on all hosts in the cluster collectively, you cannot enable NSX-T on that cluster. However, you can deploy NSX Edges to this cluster.

  Workaround: Move the hosts to a new cluster that you can manage with baselines and enable NSX-T on that new cluster.

* vSphere Lifecycle Manager and vSAN File Services cannot be simultaneously enabled on a vSAN cluster in the vSphere 7.0 release

  If vSphere Lifecycle Manager is enabled on a cluster, vSAN File Services cannot be enabled on the same cluster, and vice versa. To enable vSphere Lifecycle Manager on a cluster that already has vSAN File Services enabled, first disable vSAN File Services and retry the operation. Note that if you transition to a cluster that is managed by a single image, vSphere Lifecycle Manager cannot be disabled on that cluster.

  Workaround: None.

* ESXi 7.0 hosts cannot be added to a cluster that you manage with a single image by using vSphere Auto Deploy

  Attempting to add ESXi hosts to a cluster that you manage with a single image by using the "Add to Inventory" workflow in vSphere Auto Deploy fails. The failure occurs because no patterns are matched in an existing Auto Deploy ruleset. The task fails silently and the hosts remain in the Discovered Hosts tab.

  Workaround:
  1. Remove the ESXi hosts that did not match the ruleset from the Discovered Hosts tab.
  2. Create a rule or edit an existing Auto Deploy rule, where the host target location is a cluster managed by an image.
  3. Reboot the hosts.
  The hosts are added to the cluster that you manage by an image in vSphere Lifecycle Manager.

* When a hardware support manager is unavailable, vSphere High Availability (HA) functionality is impacted

  If a hardware support manager is unavailable for a cluster that you manage with a single image, where a firmware and drivers addon is selected and vSphere HA is enabled, the vSphere HA functionality is impacted. You may experience the following errors:
  * Configuring vSphere HA on a cluster fails.
  * Cannot complete the configuration of the vSphere HA agent on a host: Applying HA VIBs on the cluster encountered a failure.
  * Remediating vSphere HA fails: A general system error occurred: Failed to get Effective Component map.
  * Disabling vSphere HA fails: Delete Solution task failed. A general system error occurred: Cannot find hardware support package from depot or hardware support manager.

  Workaround:
  * If the hardware support manager is temporarily unavailable, perform the following steps:
    1. Reconnect the hardware support manager to vCenter Server.
    2. Select a cluster from the Hosts and Cluster menu.
    3. Select the Configure tab.
    4. Under Services, click vSphere Availability.
    5. Re-enable vSphere HA.
  * If the hardware support manager is permanently unavailable, perform the following steps:
    1. Remove the hardware support manager and the hardware support package from the image specification.
    2. Select a cluster from the Hosts and Cluster menu.
    3. Select the Updates tab.
    4. Click Edit.
    5. Remove the firmware and drivers addon and click Save.
    6. Select the Configure tab.
    7. Under Services, click vSphere Availability.
    8. Re-enable vSphere HA.

* I/OFilter is not removed from a cluster after a remediation process in vSphere Lifecycle Manager

  Removing an I/OFilter from a cluster by remediating the cluster in vSphere Lifecycle Manager fails with the following error message: iofilter XXX already exists. The iofilter remains listed as installed.

  Workaround:
  1.
  Call the IOFilter API UninstallIoFilter_Task from the vCenter Server managed object (IoFilterManager).
  2. Remediate the cluster in vSphere Lifecycle Manager.
  3. Call the IOFilter API ResolveInstallationErrorsOnCluster_Task from the vCenter Server managed object (IoFilterManager) to update the database.

* While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, adding hosts causes a vSphere HA error state

  Adding one or multiple ESXi hosts during a remediation process of a vSphere HA enabled cluster results in the following error message: Applying HA VIBs on the cluster encountered a failure.

  Workaround: After the cluster remediation operation has finished, perform one of the following tasks:
  * Right-click the failed ESXi host and select Reconfigure for vSphere HA.
  * Disable and re-enable vSphere HA for the cluster.

* While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, disabling and re-enabling vSphere HA causes a vSphere HA error state

  Disabling and re-enabling vSphere HA during the remediation process of a cluster may fail the remediation process, because vSphere HA health checks report that hosts do not have vSphere HA VIBs installed. You may see the following error message: Setting desired image spec for cluster failed.

  Workaround: After the cluster remediation operation has finished, disable and re-enable vSphere HA for the cluster.

* Checking for recommended images in vSphere Lifecycle Manager has slow performance in large clusters

  In large clusters with more than 16 hosts, the recommendation generation task could take more than an hour to finish or may appear to hang. The completion time for the recommendation task depends on the number of devices configured on each host and the number of image candidates from the depot that vSphere Lifecycle Manager needs to process before obtaining a valid image to recommend.

  Workaround: None.
* Checking for hardware compatibility in vSphere Lifecycle Manager has slow performance in large clusters

  In large clusters with more than 16 hosts, the validation report generation task could take up to 30 minutes to finish or may appear to hang. The completion time depends on the number of devices configured on each host and the number of hosts configured in the cluster.

  Workaround: None.

* Incomplete error messages in non-English languages are displayed when remediating a cluster in vSphere Lifecycle Manager

  You can encounter incomplete error messages for localized languages in the vCenter Server user interface. The messages are displayed after a cluster remediation process in vSphere Lifecycle Manager fails. For example, you can observe the following error message.

  The error message in English:
  Virtual machine 'VMC on DELL EMC -FileServer' that runs on cluster 'Cluster-1' reported an issue which prevents entering maintenance mode: Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

  The error message in French:
  La VM « VMC on DELL EMC -FileServer », située sur le cluster « {Cluster-1} », a signalé un problème empêchant le passage en mode de maintenance : Unable to access the virtual machine configuration: Unable to access file[local-0] VMC on Dell EMC - FileServer/VMC on Dell EMC - FileServer.vmx

  Workaround: None.

* Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove the image elements of the existing image

  Only the ESXi base image is replaced with the one from the imported image.

  Workaround: After the import process finishes, edit the image and, if needed, remove the vendor addon, components, and firmware and drivers addon.
* When you convert a cluster that uses baselines to a cluster that uses a single image, a warning is displayed that vSphere HA VIBs will be removed

  Converting a vSphere HA enabled cluster that uses baselines to a cluster that uses a single image may result in a warning message that the vmware-fdm component will be removed.

  Workaround: This message can be ignored. The conversion process installs the vmware-fdm component.

* If vSphere Update Manager is configured to download patch updates from the Internet through a proxy server, after an upgrade to vSphere 7.0 that converts Update Manager to vSphere Lifecycle Manager, downloading patches from the VMware patch repository might fail

  In earlier releases of vCenter Server, you could configure independent proxy settings for vCenter Server and vSphere Update Manager. After an upgrade to vSphere 7.0, the vSphere Update Manager service becomes part of the vSphere Lifecycle Manager service. For the vSphere Lifecycle Manager service, the proxy settings are configured from the vCenter Server appliance settings. If you had configured Update Manager to download patch updates from the Internet through a proxy server, but the vCenter Server appliance had no proxy setting configuration, after a vCenter Server upgrade to version 7.0, vSphere Lifecycle Manager fails to connect to the VMware depot and is unable to download patches or updates.

  Workaround: Log in to the vCenter Server Appliance Management Interface, https://vcenter-server-appliance-FQDN-or-IP-address:5480, to configure proxy settings for the vCenter Server appliance and enable vSphere Lifecycle Manager to use the proxy.

Miscellaneous Issues

* When applying a host profile with version 6.5 to an ESXi host with version 7.0, the compliance check fails

  Applying a host profile with version 6.5 to an ESXi host with version 7.0 results in the Coredump file profile being reported as not compliant with the host.

  Workaround: There are two possible workarounds:
  1.
  When you create a host profile with version 6.5, set the advanced configuration option VMkernel.Boot.autoCreateDumpFile to false on the ESXi host.
  2. When you apply an existing host profile with version 6.5, add the advanced configuration option VMkernel.Boot.autoCreateDumpFile in the host profile, configure the option to a fixed policy, and set the value to false.

* The Actions drop-down menu does not contain any items when your browser is set to a language different from English

  When your browser is set to a language different from English and you click the Switch to New View button from the virtual machine Summary tab of the vSphere Client inventory, the Actions drop-down menu in the Guest OS panel does not contain any items.

  Workaround: Select the Actions drop-down menu at the top of the virtual machine page.

* Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit minor throughput degradation when the Dynamic Receive Side Scaling (DYN_RSS) or Generic RSS (GEN_RSS) feature is turned on

  Mellanox ConnectX-4 or ConnectX-5 native ESXi drivers might exhibit less than 5 percent throughput degradation when the DYN_RSS and GEN_RSS features are turned on, which is unlikely to impact normal workloads.

  Workaround: You can disable the DYN_RSS and GEN_RSS features with the following commands:
  # esxcli system module parameters set -m nmlx5_core -p "DYN_RSS=0 GEN_RSS=0"
  # reboot

* RDMA traffic between two VMs on the same host might fail in a PVRDMA environment

  In a vSphere 7.0 implementation of a PVRDMA environment, VMs pass traffic through the HCA for local communication if an HCA is present. However, loopback of RDMA traffic does not work on the qedrntv driver. For instance, RDMA Queue Pairs running on VMs that are configured under the same uplink port cannot communicate with each other. In vSphere 6.7 and earlier, the HCA was used for local RDMA traffic if SRQ was enabled. vSphere 7.0 uses HCA loopback with VMs using versions of PVRDMA that have SRQ enabled with a minimum of HW v14 using RoCE v2.
  The current version of the Marvell FastLinQ adapter firmware does not support loopback traffic between QPs of the same PF or port.

  Workaround: Required support is being added in the out-of-box driver certified for vSphere 7.0. If you are using the inbox qedrntv driver, you must use a 3-host configuration and migrate VMs to the third host.

* Unreliable Datagram traffic QP limitations in the qedrntv driver

  There are limitations with the Marvell FastLinQ qedrntv RoCE driver and Unreliable Datagram (UD) traffic. UD applications involving bulk traffic might fail with the qedrntv driver. Additionally, UD QPs can only work with DMA Memory Regions (MR). Physical MRs or FRMR are not supported. Applications attempting to use physical MR or FRMR along with UD QP fail to pass traffic when used with the qedrntv driver. Known examples of such test applications are ibv_ud_pingpong and ib_send_bw. Standard RoCE and RoCEv2 use cases in a VMware ESXi environment, such as iSER, NVMe-oF (RoCE), and PVRDMA, are not impacted by this issue. Use cases for UD traffic are limited, and this issue impacts a small set of applications requiring bulk UD traffic.

  Marvell FastLinQ hardware does not support RDMA UD traffic offload. To meet the VMware PVRDMA requirement to support GSI QP, a restricted software-only implementation of UD QP support was added to the qedrntv driver. The goal of the implementation is to provide support for control path GSI communication; it is not a complete implementation of UD QP supporting bulk traffic and advanced features. Because UD support is implemented in software, the implementation might not keep up with heavy traffic and packets might be dropped. This can result in failures with bulk UD traffic.

  Workaround: Bulk UD QP traffic is not supported with the qedrntv driver, and there is no workaround at this time. VMware ESXi RDMA (RoCE) use cases such as iSER, NVMe-oF (RoCE), and PVRDMA are unaffected by this issue.
* Servers equipped with a QLogic 578xx NIC might fail when frequently connecting or disconnecting iSCSI LUNs

  If you trigger QLogic 578xx NIC iSCSI connections or disconnections frequently in a short time, the server might fail due to an issue with the qfle3 driver. This is caused by a known defect in the device's firmware.

  Workaround: None.

* ESXi might fail during a driver unload or controller disconnect operation in a Broadcom NVMe over FC environment

  In a Broadcom NVMe over FC environment, ESXi might fail during a driver unload or controller disconnect operation and display an error message such as: @BlueScreen: #PF Exception 14 in world 2098707:vmknvmeGener IP 0x4200225021cc addr 0x19

  Workaround: None.

* ESXi does not display the OEM firmware version number of i350/X550 NICs on some Dell servers

  The inbox ixgben driver only recognizes the firmware data version or signature for i350/X550 NICs. On some Dell servers, the OEM firmware version number is programmed into the OEM package version region, and the inbox ixgben driver does not read this information. Only the 8-digit firmware signature is displayed.

  Workaround: To display the OEM firmware version number, install async ixgben driver version 1.7.15 or later.

* X710 or XL710 NICs might fail in ESXi

  When you initiate certain destructive operations on X710 or XL710 NICs, such as resetting the NIC or manipulating the VMkernel's internal device tree, the NIC hardware might read data from non-packet memory.

  Workaround: Do not reset the NIC or manipulate the VMkernel's internal device state.

* NVMe-oF does not guarantee a persistent VMHBA name after a system reboot

  NVMe-oF is a new feature in vSphere 7.0. If your server has a USB storage installation that uses vmhba30+ and also has an NVMe over RDMA configuration, the VMHBA name might change after a system reboot. This is because the VMHBA name assignment for NVMe over RDMA differs from that for PCIe devices; ESXi does not guarantee persistence.

  Workaround: None.
* Backup fails for a vCenter database size of 300 GB or greater

  If the vCenter database size is 300 GB or greater, the file-based backup fails with a timeout. The following error message is displayed: Timeout! Failed to complete in 72000 seconds

  Workaround: None.

* A restore of vCenter Server 7.0 that is upgraded from vCenter Server 6.x with an External Platform Services Controller to vCenter Server 7.0 might fail

  When you restore a vCenter Server 7.0 that is upgraded from 6.x with an External Platform Services Controller to vCenter Server 7.0, the restore might fail and display the following error: Failed to retrieve appliance storage list

  Workaround: During the first stage of the restore process, increase the storage level of the vCenter Server 7.0. For example, if the vCenter Server 6.7 External Platform Services Controller setup storage type is small, select storage type large for the restore process.

* Enabled SSL protocols configuration parameter is not configured during a host profile remediation process

  The Enabled SSL protocols configuration parameter is not configured during a host profile remediation, and only the system default protocol tlsv1.2 is enabled. This behavior is observed for a host profile with version 7.0 and earlier in a vCenter Server 7.0 environment.

  Workaround: To enable the TLSV 1.0 or TLSV 1.1 SSL protocols for SFCB, log in to an ESXi host by using SSH and run the following ESXCLI command: esxcli system wbem -P <protocol_name>

* Unable to configure Lockdown Mode settings by using Host Profiles

  Lockdown Mode cannot be configured by using a security host profile and cannot be applied to multiple ESXi hosts at once. You must manually configure each host.

  Workaround: In vCenter Server 7.0, you can configure Lockdown Mode and manage the Lockdown Mode exception user list by using a security host profile.
* When a host profile is applied to a cluster, Enhanced vMotion Compatibility (EVC) settings are missing from the ESXi hosts
  Some settings in the VMware config file /etc/vmware/config are not managed by Host Profiles and are blocked when the config file is modified. As a result, when the host profile is applied to a cluster, the EVC settings are lost, which causes loss of EVC functionality. For example, unmasked CPUs can be exposed to workloads.
  Workaround: Reconfigure the relevant EVC baseline on the cluster to recover the EVC settings.
* Using a host profile that defines a core dump partition in vCenter Server 7.0 results in an error
  In vCenter Server 7.0, configuring and managing a core dump partition in a host profile is not available. Attempting to apply a host profile that defines a core dump partition results in the following error:
  No valid coredump partition found.
  Workaround: None. In vCenter Server 7.0, Host Profiles supports only file-based core dumps.
* HTTP requests from certain libraries to vSphere might be rejected
  The HTTP reverse proxy in vSphere 7.0 enforces stricter standards compliance than previous releases. This might expose pre-existing problems in some third-party libraries that applications use for SOAP calls to vSphere. If you develop vSphere applications that use such libraries, or include applications that rely on such libraries in your vSphere stack, you might experience connection issues when these libraries send HTTP requests to VMOMI. For example, HTTP requests issued from vijava libraries can take the following form:
  POST /sdk HTTP/1.1
  SOAPAction
  Content-Type: text/xml; charset=utf-8
  User-Agent: Java/1.8.0_221
  The syntax in this example violates an HTTP protocol header-field requirement that mandates a colon after SOAPAction. Hence, the request is rejected in flight.
  Workaround: Developers leveraging noncompliant libraries in their applications can consider using a library that follows HTTP standards instead.
  For example, developers who use the vijava library can consider using the latest version of the yavijava library instead.
* Editing an advanced options parameter in a host profile and setting a value to false results in setting the value to true
  When you attempt to set a value to false for an advanced option parameter in a host profile, the user interface creates a non-empty string value. Values that are not empty are interpreted as true, so the advanced option parameter receives a true value in the host profile.
  Workaround: There are two possible workarounds.
  * Set the advanced option parameter to false on a reference ESXi host and copy the settings from this host in Host Profiles. Note: The host must be compliant with the host profile before you modify the advanced option parameter on the host.
  * Set the advanced option parameter to false on a reference ESXi host and create a host profile from this host. Then copy the host profile settings from the new host profile to the existing host profile.
* You might see a dump file when using the Broadcom lsi_msgpt3, lsi_msgpt35, and lsi_mr3 drivers
  When using the lsi_msgpt3, lsi_msgpt35, and lsi_mr3 controllers, you might see the dump file lsuv2-lsi-drivers-plugin-util-zdump. The issue occurs when exiting the storelib used by this plugin utility. There is no impact on ESXi operations, and you can ignore the dump file.
  Workaround: You can safely ignore this message. You can remove the lsuv2-lsi-drivers-plugin with the following command:
  esxcli software vib remove -n lsuv2-lsiv2-drivers-plugin
* You might see that a reboot is not required after configuring SR-IOV for a PCI device in vCenter, but device configurations made by third-party extensions might be lost and require a reboot to be re-applied
  In ESXi 7.0, the SR-IOV configuration is applied without a reboot and the device driver is reloaded. ESXi hosts might have third-party extensions that perform device configurations which need to run after the device driver is loaded during boot.
  A reboot is required for those third-party extensions to re-apply the device configuration.
  Workaround: You must reboot the ESXi host after configuring SR-IOV to apply the third-party device configurations.
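The header-field requirement behind the rejected SOAPAction requests described above can be illustrated with a minimal validator. This is a sketch of the RFC 7230 "field-name: field-value" syntax check, not the actual vSphere reverse-proxy implementation, and the SOAPAction value shown is a placeholder.

```python
import re

# RFC 7230 header field: token field-name, a colon, then the field value.
# Token characters per the RFC; no whitespace is allowed before the colon.
_HEADER_RE = re.compile(r"^[!#$%&'*+.^_`|~0-9A-Za-z-]+:[ \t]*[^\r\n]*$")

def is_valid_header_line(line: str) -> bool:
    """Return True if the line is a syntactically valid HTTP/1.1 header field."""
    return bool(_HEADER_RE.match(line))

# The bare "SOAPAction" line from the example request lacks the required colon:
print(is_valid_header_line("SOAPAction"))                   # False
print(is_valid_header_line('SOAPAction: "urn:vim25/7.0"'))  # True (placeholder value)
```

A strict proxy rejects the first form outright, which is why upgrading to a library that emits the second form resolves the connection issue.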