### Human: Can you tell about Move: Online upgrades from newly deployed Move 4.1.0 appliances are not working. ### Assistant: This article describes an issue where online upgrades from newly deployed Move 4.1.0 VMs are not working. Online upgrades from newly deployed Move 4.1.0 VMs are not working. Troubleshooting: SSH to the Move VM (see Accessing Move VM with SSH) and run the /opt/xtract-vm/scripts/check_upgrade_connectivity.sh script. It shows the error "curl: no URL specified!" For example: admin@move on ~ $ sh /opt/xtract-vm/scripts/check_upgrade_connectivity.sh. This issue is fixed in Move 4.1.2. Use Move 4.1.2 or later. Workaround: Perform the following steps: 1. SSH to the Move VM. 2. Type the rs command and enter the password. 3. Open the check_upgrade_connectivity.sh file in an editor and delete the contents of the file: admin@move on ~ $ vi /opt/xtract-vm/scripts/check_upgrade_connectivity.sh 4. Copy and paste the replacement script content into the same file: admin@move on ~ $ vi /opt/xtract-vm/scripts/check_upgrade_connectivity.sh 5. Retry Upgrade Software from the Move UI. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
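Supplementary note for the Move upgrade-connectivity article above. The official replacement contents of check_upgrade_connectivity.sh are not reproduced here; as an illustration only, basic upgrade connectivity from the Move VM can be verified manually with a hedged sketch like the following (download.nutanix.com is an assumed endpoint and may differ by Move version):

admin@move on ~ $ rs                                          # switch to the root shell (enter the admin password)
root@move on ~ $ ping -c 3 download.nutanix.com               # basic DNS resolution and reachability check
root@move on ~ $ curl -sSI https://download.nutanix.com       # expect HTTP response headers; "curl: no URL specified!" points to the broken script, not the network

If the manual curl succeeds while the script still fails, the issue is the script content itself, as described in the article.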
### Human: Can you tell about Performance benchmarking with CrystalDiskMark on Nutanix. ### Assistant: Specifics about using the CrystalDiskMark on Nutanix. CrystalDiskMark is an open-source disk drive benchmark tool for Microsoft Windows. Based on Microsoft MIT-licensed diskspd tool, this graphical benchmark is commonly used to test storage performance. It generates a workload with a varying number of queues and threads. The detailed information about using the parent tool diskspd can be found in this Nutanix KB article here. If there is any help needed feel free to open a support case to help you with CrystalDiskMark. . CrystalDiskMark default settingsCrystalDiskMark benchmark is very simple to use and provides a one-click experience for storage benchmarking. However, the default settings are not a summary for testing enterprise-level storage but rather built for PC/laptop local disk testing. Please use the latest version of CrystalDiskMark for testing.Let's take a closer look at the settings of the CrystalDiskMark:Figure 1: Default CrystalDiskMark settings Working SetBy default, CrystalDiskMark is set to a 1GiB test file size. It is recommended to test with a realistic working set size (the "hot" data which is frequently read and written) of the production application or at least get closer to the size that simulates the real life behaviour. Nutanix as a distributed architecture uses DRAM as read cache as well as Oplog as a write cache for random data and low parallel sequential write patterns. Hence the 1 GiB data will get into Oplog first and later into DRAM while parts of the test data will potentially get read from Oplog. This is unlikely to happen in real life. While customers often compare CrystalDiskMark results on Nutanix with existing SAN based storage systems and local storage solutions, the 1 GiB dataset will come 100% from the SAN cache or Raid Controller cache for both reads and writes and will show how this extremely small dataset with a short run time would behave differently.The below example illustrates the 100% reads from DRAM on a Nutanix based system:Figure 2: Reads with DRAM cachingFor this reason, it is important to understand what the working set would be before doing the benchmarking. Typically, the application owners can provide such information. For spot testing, we recommend using at least a 16 GiB file to bypass the controller cache for both Nutanix and SAN based storage to see realistic synthetic storage performance benchmarking results. Queues and ThreadsCrystalDiskMark allows setting queues and threads. All four of the default tests are single-threaded, which does not show a realistic I/O profile for the vast majority of workloads as most modern applications are multi-threaded. With Database products like Microsoft SQL, the default of parallel execution is set to a given amount of CPUs configured in the VM. More information about MAXDOP (max degree of parallelism) can be found here. It is highly recommended to understand the application as best as possible and identify whether it uses single-threaded or multi-threaded processing before doing the benchmarking. The test should be as close to the production workload as possible to best understand what the performance for a workload will be. The amount of queues is the amount of I/O that each thread is generating at once (OIO - Outstanding IO). Both of the default random I/O tests are set to 4K block size. 
While 4K is a block size that can be seen at times, it is usually not a common block size in most workloads today. When possible the block size in the test should match the application's block size. The common block size for both Oracle and MS SQL is 8 KB as well as 64 KB. While these applications have variable block sizes dependent on the work happening, 8 KB and 64 KB are very commonly used block sizes in the field.CrystalDiskMark tests settings can be adjusted in the "Settings" menu. CrystalDiskMark allows a user to specify the following parameters: Type (sequential or random) Block Size Queues Threads   Suggested example for testing Nutanix and other enterprise storageAlways create a new vDisk for the VM before running the tests. The main reason for that is the old disk might have some data from the previous tests or other workloads and it could have been down migrated to the HDD tier (in case of a hybrid cluster). The new disk will guarantee the correct data path of the I/O.Create a 50 GiB or 100 GiB disk. The disk then should be initialized and formatted with the default settings. The disk will get formatted and will get a letter assigned. Make sure that the new disk is selected in the CrystalDiskMark for the tests.NOTE: Nutanix scales performance per node and with the number of vDisks. The testing with CrystalDiskMark is based on a single vDisk on a single VM running on a single Node. While testing is important to understand a specific use case for a single vDisk, it should be clear that CrystalDiskMark does not replace a distributed performance test like Nutanix X-Ray to understand the full capability of a Nutanix cluster.The following settings are recommended as they provide a good mix of block sizes as well as parallel IO vs. single threaded IO:Figure 3: Recommended CrystalDiskMark settingsThe below table explains the different configuration details and differences to the default settings: Default tests: Recommended tests: Explanation: Sequential reads and writes of: Block size: 1 MB Queue depth: 8 Threads: 1 Sequential reads and writes of: Block size: 1 MB Queue depth: 8 Threads: 8 The test is very similar to the default sequential 1MB test, the only adjustment is the amount of threads has been changed from 1 to 8. Such a test would be similar to what happens during a large file copy or during a backup job. Sequential reads and writes of: Block size: 1 MB Queue depth: 1 Threads: 1 Sequential reads and writes of: Block size: 64 KB Queue depth: 16 Threads: 8 As the default single-threaded 1MB test won't give much more insight with a queue depth of 1 Nutanix recommends replacing this test with a 64KB block size sequential test as it is more common for applications to use that block size. The queue depth of 16 and 8 threads will ensure that the test generates enough load on the storage subsystem. Random reads and writes of: Block size: 4 KB Queue depth: 32 Threads: 1 Random reads and writes of: Block size: 8 KB Queue depth: 32 Threads: 8 The default 4K block size tests do not reflect the real-life usage of the application. However, 8K block size is very common and is used by most databases.  The queue depth of 32 and 8 threads will ensure that the test generates enough load on the storage subsystem. Random reads and writes of: Block size: 4 KB Queue depth: 1 Threads: 1 Random reads and writes of: Block size: 8 KB Queue depth: 1 Threads: 1 The default 4K block size tests do not reflect the real-life usage of the application. 
However, 8K block size is very common and is used by most databases. This test is left with a single thread and a queue depth of 1 just to show the difference between the multi-threaded test above and the single-threaded result. Table1: Explanation of default vs. recommended CrystalDiskMark settingsA good Measure Time recommendation is 30 seconds. The default setting of 5 seconds won't generate enough load for that short time frame.The good Interval Time recommendation is 60 seconds. That is to make sure that there is a quiet time between write tests. Nutanix starts flushing the write cache after 30 seconds of inactivity, so a default interval of 5 seconds is not a good setting. The test wants to ensure that the write cache is empty before running the next test to achieve fair results. While this is Nutanix specific, an application would rarely read the same data after it has been written. These settings reflect a realistic ramp up time which can be similarly used on other Enterprise Storage Systems.A good recommendation for a testing file size: 16 GiBThat way the file is big enough to bypass read/write caches and test Storage performance rather than a cache.Example of test results with recommended settingsThe results will greatly vary depending on the underlying hardware configuration and software versions. The below results were achieved on the following setup: VM details: Windows Server 2019 Datacenter Build 17763vCPUs: 8Memory: 8GbDisk size: 100Gb Hardware details:3-node cluster NX-3061 (G3).2xSSDs 800Gb and 4 HDDs 1Tb per node Nutanix AOS version: 5.19.1.5Hypervisor: AHV 20201105.12Storage container settings:Inline compression - onDeduplication - offErasure Coding (EC-x) - off .Figure 4: CrystalDiskMark results. For throughput equivalents of the results, simply multiply the IOPS by the block size (i.e. 8k, 64k, 1M). For example, 1629 IOPS x 1M = 1629MB/sec. Data randomness. Data randomness is quite important because the storage engine of Nutanix will be compressing the file internally if compression is enabled on the storage container level. CrystalDiskMark has two options for laying the file: random (default) All 0x00 (0 Fill)  This can be configured in Settings - Test Data. It is generally recommended to use the random pattern to get realistic results because Nutanix will optimize the NULL byte files and testing on NULL byte files will not reflect the performance of real applications.Figure 5: CrystalDiskMark data pattern The difference can be demonstrated by running a test in each of the modes and then compressing the file into a .zip archive using the built-in Windows compression tool. Even though the compression algorithm is different on the Nutanix storage level, it will still show the difference. 
So, if the test file is 1Gb in size, we see a 1Gb file in Windows Explorer:Figure 6: The test file sizeIf the file is created with the default random pattern, compressing the file into .zip will result in the same file size:Figure 7: The test file compressed into .zip size with a random data patternHowever, if the file is created with the All 0x00 pattern and then compressed into .zip, the archive will be taking ~1Mb of storage space instead of 1Gb, because the 0s will be mostly eliminated by the compression algorithm:Figure 8: The test file compressed into .zip size with All 0x00 data patternA similar compression happens on the storage side and without true data randomness, the test file will utilise a lot less of the storage space and will likely fit more into the cache, which will result in an unfair test.Unlike the diskspd tool, CrystalDiskMark creates a file with a random data pattern by default. There are no tweaks needed to achieve randomness.  File system software cachingUnlike the default settings of diskspd, CrystalDiksMark is disabling the software caching in its tests. This will show the same amount of IOPS/Throughput on the Nutanix system side as we see in the CrystalDiskMark test.Test example in CrystalDiskMark:Figure 9: CrystalDiskMark vs. Nutanix Storage PerformanceThe results from CrystalDiskMark vs. the Nutanix storage are very close. The reason for this difference is below example shows a single iteration at the moment of the screenshot capture and the CrystalDiskMark shows the average of the whole test.Reads:Figure 10: Nutanix Storage Performance read Writes:Figure 11: Nutanix Storage Performance write NOTE: CrystalDiskMark is an easy to use benchmarking tool which is generally a simplified version of diskspd with GUI. It lacks some control and settings compared to diskspd, so there is less tuning possible. If the settings are correctly set, the results will be realistic.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
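Supplementary note for the CrystalDiskMark article above. Because CrystalDiskMark is a front end for Microsoft diskspd, the recommended multi-threaded 8K random test can also be approximated directly with diskspd from an elevated Windows command prompt. This is a hedged sketch only; the drive letter, file name and 50/50 read/write split are assumptions, and flag behaviour should be confirmed against your diskspd version:

C:\> diskspd.exe -c16G -b8K -d30 -W10 -o32 -t8 -r -w50 -Sh -L D:\testfile.dat

Here -c16G creates a 16 GiB test file, -b8K sets an 8 KB block size, -d30 runs for 30 seconds after a 10-second warm-up (-W10), -o32 and -t8 give 32 outstanding I/Os on each of 8 threads, -r -w50 produces a 50/50 random read/write mix, -Sh disables software and hardware write caching, and -L records latency statistics.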
### Human: Can you tell about NCC Health Check: pcvm_disk_usage_check. ### Assistant: The NCC health check pcvm_disk_usage_check verifies that the amount of disk space used in the Prism Central VM is within limits.. The NCC health check pcvm_disk_usage_check verifies that the amount of disk or system partition usage in the Prism Central (PC) VM is within limits. This check has the following parts: Checking the individual data disk usage (added in NCC 3.5.1): If usage is more than 75% for several hours, a WARNING is returned to identify the disk.If usage is more than 90% for several hours, a FAIL is returned to identify the disk. Checking the overall data disk usage (added in NCC 3.10.1): If overall usage is more than 90% for several hours, a WARNING is returned. Checking the Prism Central VM system root partition usage (added in NCC 3.9.4). Returns only FAIL message if the partition usage exceeds 95%. Checking the Prism Central VM home partition usage (added in NCC 3.9.4): If the usage is more than 75%, a WARNING is returned.If the usage is more than 90%, a FAIL is returned. Checking the Prism Central VM CMSP partition usage (added in NCC 3.10.1): If usage is more than 75%, a WARNING is returned.If the usage is more than 90%, a FAIL is returned. Checking the Prism Central VM Upgrade disk partition usage (added in NCC 4.6.0): If the usage is more than 70%, a FAIL is returned.This check runs every 5 mins.If there are more than 5 failures (30 mins), a critical alert is raised. Please note: If you are running LCM-2.6 or LCM-2.6.0.1, LCM log collection fills up /home directory please refer KB 14671 for workaround. Running the NCC checkRun the NCC check as part of the complete NCC health checks. nutanix@pcvm$ ncc health_checks run_all Or run the pcvm_disk_usage_check check separately. nutanix@pcvm$ ncc health_checks run_all You can also run the checks from the Prism Web Console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 5 minutes, by default. This check will generate an alert after 5 consecutive failures across scheduled intervals. Sample OutputsFor status: PASS nutanix@pcvm$ ncc health_checks run_all For Status: WARN (on Prism Central VM data disk, e.g. /dev/sdc1) nutanix@pcvm$ ncc health_checks run_all For Status: WARN (on Prism Central VM overall MultiVDisk) nutanix@pcvm$ ncc health_checks run_all For Status: FAIL (on Prism Central VM data disk, e.g. /dev/sdc1) nutanix@pcvm$ ncc health_checks run_all For Status: FAIL (on root partition, i.e. /) nutanix@pcvm$ ncc health_checks run_all For Status: WARN (on Prism Central VM home partition, i.e. /home) nutanix@pcvm$ ncc health_checks run_all For Status: FAIL (on Prism Central VM home partition, i.e. /home) nutanix@pcvm$ ncc health_checks run_all For Status: WARN (on Prism Central VM CMSP partition, i.e. /dev/sde) nutanix@pcvm$ ncc health_checks run_all For Status: FAIL (on Prism Central VM CMSP partition, i.e. /dev/sde) nutanix@pcvm$ ncc health_checks run_all For Status: FAIL (on Prism Central VM upgrade disk partition, i.e. /home/nutanix/upgrade) nutanix@pcvm$ ncc health_checks run_all Check Disk Usage in PC VMFollowing is an example of how to check disk usage on a PC VM. 
nutanix@pcvm$ ncc health_checks run_all Partition for upgrade disk would show up similar to:  nutanix@pcvm$ ncc health_checks run_all   Output messaging Check ID101059DescriptionCheck that disk space usage on the Prism Central VM is within limits.Causes of failureHigh disk usage in the Prism Central VM.ResolutionsRefer to KB 5228 for further details.ImpactPrism Central VM may run out of storage space to store data.Alert IDA101059Alert TitlePrism Central VM disk usage highAlert MessagePrism Central VM IP disk usage exceeds warning limit x % for disks: disks. Prism Central VM IP overall MultiVDisk usage exceeds warning limit of  2321329924 KB Prism Central VM IP disk usage exceeds critical limit x % for disks: disks. Check ID200316DescriptionChecks if Prism Central system root partition usage is within the threshold to ensure uninterrupted operations.Causes of failureIncreased Prism Central VM system root partition usage due to excessive logging or incomplete maintenance operation.ResolutionsReduce Prism Central VM system root partition usage by removing any known temporary or unneeded files. Refer to KB 5228 for further details.ImpactIf the Prism Central VM system root partition is highly utilised, certain maintenance operations, such as upgrades, may be impacted. If the Prism Central VM system root partition is 100% utilized, services may stop and impact Prism Central cluster management functions.Alert IDA200316Alert TitlePrism Central VM System Root Partition Space Usage HighAlert MessageDisk space usage for root partition mount_path on entity-ip_address has exceeded threshold%. Check ID200317DescriptionChecks if Prism Central home partition usage is within the threshold to ensure uninterrupted operations.Causes of failureIncreased Prism Central VM home partition usage due to excessive logging or incomplete maintenance operation.ResolutionsReduce Prism Central VM home partition usage by removing any known temporary or unneeded files. Refer to KB 5228 for further details.ImpactIf the Prism Central VM home partition is highly utilised, then certain maintenance operations, such as upgrades, may be impacted. If Prism Central VM home partition is 100% utilised, then services may stop and impact cluster storage availability.Alert IDA200317Alert TitlePrism Central VM home partition disk usage highAlert MessagePrism Central VM IP home partition disk usage exceeds warning limit x %. Prism Central VM IP home partition disk usage exceeds critical limit x %. Check ID200328DescriptionCheck that platform disk space usage on the Prism Central VM is within limits.Causes of failureHigh disk usage in the Prism Central VM.ResolutionsRefer to KB 5228 for further details.ImpactPrism Central VM may run out of storage space to store data.Alert IDA200328Alert TitlePrism Central VM platform disk space usage highAlert Smart TitlePrism Central VM svm_ip platform disk space usage highAlert MessagePlatform disk space usage in Prism Central VM svm_ip exceeds percentage_exceed% for disk(s): disk_paths. Check ID200334DescriptionChecks if Prism Central upgrade disk usage is within the threshold to ensure uninterrupted upgrade operations.Causes of failureIncreased Prism Central VM upgrade disk usage due to the presence of multiple Prism Central Installer filesResolutionsReduce Prism Central VM upgrade partition usage by removing the Prism Central Installer files which are not needed. 
Refer to KB 5228 for further details.ImpactIf the Prism Central VM upgrade disk is highly utilised, then the Prism Central Upgrade would fail due to lack of space in the upgrade disk.Alert IDA200334Alert TitlePrism Central VM upgrade disk usageAlert MessagePrism Central VM <IP> upgrade disk usage exceeds critical limit x% Scenarios that trigger pcvm_disk_usage check Warn/Fail on /home partition Scenario 1The /home directory on long-running PC instances might reach close to its maximum limit as older Prism code is not cleaned up: nutanix@pcvm$ ncc health_checks run_all Nutanix Engineering is aware of the issue and is working on a fix in a future release to further clean up older Prism codes. For the workaround, refer to the corresponding solution section.   Removing older sysstats logsIf you have checked all the usual places but still need to clean up space, you can get the customer’s permission to remove older sysstats logs. Double-check that there are no open support cases with pending RCAs before proceeding, as this data may not yet have been collected. nutanix@pcvm$ ncc health_checks run_all Scenario 2Hyperkube logs (kublet logs) are not being cleaned up after enabling CMSP / microservices on Prism Central. If Cluster Maintenance Utilities (CMU) has been updated to version 2.0.3 using LCM, the included Scavenger version is missing the capability to clean up certain logs related to CMSP microservices. As a result of this issue, users may find that Prism Central services are not starting or that they cannot log in to PC UI. This issue will first fill up the /home/nutanix/data/sys-storage/NFS_.../ directory, after which the kubelet logs will start filling up the root partition in the /tmp folder. nutanix@pcvm$ ncc health_checks run_all We may find hyperkube.ntnx* logs getting generated and not rotated correctly when CMSP is enabled. Check if the directory /home/nutanix/data/sys-storage/NFS.../kubelet/ is excessively filled with these logs.  nutanix@pcvm$ ncc health_checks run_all When working as intended, we should see less than 10G used in this kubelet folder, and the file count for kubelet* would be less than ~15 We may additionally see similar kubelet logs filling up space in /tmp, after space in the /home/nutanix/data/sys-storage/NFS... directory has been exhausted.  nutanix@pcvm$ ncc health_checks run_all Scenario 3Starting PC.2022.6 - a dedicated 30 GB disk is created and mounted for PC upgrades. This Vdisk will be used for downloading and extracting upgrade binaries from consecutive upgrades. If the Prism Central VM upgrade disk is highly utilised, then the Prism Central Upgrade would fail due to lack of space in the upgrade disk. nutanix@pcvm$ ncc health_checks run_all Note: In case several services are enabled on Prism Central, such as msp, karbon, calm, flow, and objects, we would see /home usage to be high as each of the services will generate several logs and configuration files. Nutanix engineers are constantly working towards improving the /home usage. If none of the above scenarios matches and this affects Prism Central upgrade, engage Nutanix Support team to help manually cleanup /home by trimming the logs. Scenario 4Due to the log file rotation issue, Adonis logs directory usage is high. This log directory usage should not be over than 1G. For Scale-Out Prism Central deployments, check the file usage on each of the Prism Central VMs: nutanix@pcvm$ ncc health_checks run_all Scenario 5In certain cases, catalina.out may consume a large amount of space on the Prism Central VM. 
SSH to the Prism Central and check if /home/nutanix/data/logs/catalina.out is consuming a huge amount of space: nutanix@pcvm$ ncc health_checks run_all. If the check reports a WARN or FAIL status, disk usage is above the threshold and needs investigation. Generally, space utilization can be queried using df -h. The output below shows the mount points as follows: /dev/sdb1 is root partition/dev/sdb3 is home partition/dev/sdc1 is data disk partition nutanix@pcvm$ df -h Data disk usage (/dev/sdXX) or overall multivdisk usage: Verify the number of VMs supported for the particular Prism Central size is within the limit (consult the Prism Central Guide for your version from the Support Portal for the limits). Contact Nutanix Support. While opening a support case, attach the output of the following commands to the case. nutanix@pcvm$ df -h Prism Central VM home partition (/home): Inspect the NCC output to determine which Prism Central VM has high usage, then perform the following: Log in to the Prism Central VM.Use the cd command to change the location to the /home partition.List the contents of the directory by size using the command below: nutanix@pcvm$ df -h Examine the output for any large unused files that can be deleted. Run the du command below to list the usage of each file and sub-directory: nutanix@pcvm$ df -h Examine the output of large sub-directories. You can run the du command for each sub-directory in question to further identify large unused files that can be deleted. Below are some common sub-directories of /home where large unused files are likely to exist: /home/nutanix/software_downloads/ - delete any old versions other than the versions you are currently upgrading./home/nutanix/software_uncompressed/ - delete any old versions other than the versions you are currently upgrading./home/nutanix/data/cores - delete old stack traces that are no longer needed./home/nutanix/data/log_collector/ - delete old NCC Logs with NCC-logs-2018-07-20-11111111111111-1032057545.tar format./home/nutanix/foundation/isos/ - old ISOs./home/nutanix/foundation/tmp/ - temporary files that can be deleted. If the above steps do not resolve the issue or if the issue matches one of the scenarios presented earlier in this article, follow the solution steps outlined below. Prism Central VM root system partition (/):Consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case: nutanix@pcvm$ df -h Prism Central VM CMSP partition (/dev/sde):Consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case: nutanix@pcvm$ df -h   Scenario 1As a workaround, remove the directories that do not have registered clusters with the corresponding version. Find the AOS versions of all the registered PEs. You can do so from Prism Central > Hardware > Clusters > AOS version column.List the PE apache console directories at /home/apache/www/console/ nutanix@pcvm$ df -h If any PE apache console directories that do not correspond to registered PE versions are present, they should be safe to clean up. If you require further assistance with the cleanup, consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case: nutanix@pcvm$ df -h   Removing older sysstats logsIf you have checked in all the usual places but still need to clean up space, you can get the customer’s permission to remove older sysstats logs. 
Please double-check that there are no open support cases with pending RCAs before proceeding, as this data may not yet have been collected. nutanix@pcvm$ df -h You can use the following command to remove gzipped sysstats logs older than a certain date. In the example below, you will remove sysstats logs from all PCVMs that are older than 3 days (or 4320 minutes). nutanix@pcvm$ df -h Scenario 2If your Prism Central instance matches this scenario, refer to KB-12707 Scenario #2 and open a case with Nutanix Support for assistance with in recovering from the issue.Scenario 3The increase in Prism Central VM, upgrade disk usage, is due to multiple Prism Central Installer files. Reduce Prism Central VM upgrade partition usage by removing the Prism Central Installer files that are not needed.This partition /home/nutanix/upgrade is designed to hold upgrade-related files. So, any other file within this directory can be deleted.The partition usage as listed in df -h: nutanix@pcvm$ df -h To check the contents of the disk: nutanix@pcvm$ df -h To remove the unwanted files, use the below command: nutanix@pcvm$ df -h Scenario 4Nutanix is aware of the issue. The fix for this issue will be made available in a future PC release. For a workaround, engage Nutanix Support. Scenario 5 If you see catalina.out log file is consuming a lot of space, use the following command to restart the prism service on the PCVM.  nutanix@pcvm$ df -h. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
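Supplementary note for the pcvm_disk_usage_check article above. Before deleting anything, the largest consumers under /home can be reviewed on every Prism Central VM with a sketch like the following (paths are the common ones listed in the article; adjust depth and the number of results as needed):

nutanix@pcvm$ allssh "df -h /home"                                                          # current /home usage on each PCVM
nutanix@pcvm$ sudo du -xh --max-depth=2 /home 2>/dev/null | sort -h | tail -20              # largest sub-directories on this PCVM
nutanix@pcvm$ ls -lh /home/nutanix/software_downloads/ /home/nutanix/software_uncompressed/ # old upgrade bundles that are usually safe to remove

Only remove files you have confirmed are unneeded; engage Nutanix Support if in doubt.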
### Human: Can you tell about MOVE: Nutanix Move Basic Troubleshooting Guide. ### Assistant: This article describes basic troubleshooting for Nutanix Move (version 3.x & 4.x).. With the release of Nutanix Move 3.0, the Move services are dockerised and all the Move agents now run as Docker Containers. This is a major milestone that allows add/update features without much service disruption as well as provides flexibility to run Move anywhere. If you are running an older version of Move, upgrade to Move 4.x and later. To upgrade, find the latest version and procedure below: Latest Move bundle: here Upgrade Procedure: here Note: If you face issues in booting the VM after migration to Nutanix AHV, please collect logs on AHV cluster using Logbay.. Overview Move File locationsWhich user do we use to SSH into the Move VMLog files locations for troubleshootingBasic CommandsHow to generate Move support bundle from cliHow to configure static IP on Move VMFirewall Ports requirements for MoveTesting Connectivity between Move and ESXi/vCenter/Hyper-VCommon user permissions required on the Windows OS if using the domain administrator or local administrator accountsMove is stuck at 0% while "Seeding Data"Move VM migration failure due to missing snapshotWhat happens if Move upgrade is stuckAre there any logs created on the VM by MoveIf there is a backup schedule, will it affect Move Move File locationsMove will use the below host directories. All of them are created under /opt/xtract-vm/. Files LocationDescription/opt/xtract-vm/certsPath to the certificate key-pairs used by Move/opt/xtract-vm/keyPath to the SSH keys generated and used by Move/opt/xtract-vm/kvstoreKVStore directories used by srcagent and diskwriter/opt/xtract-vm/logsPath to Move service logs Which user do we use to SSH into the Move VMWhen logging in to the Move VM using SSH, use: Username : admin Note: The admin user on Move does not really have all the major rights, so best way is to change the user to root. To do that, on the Move VM, run: Username : admin Enter the password for admin. You will be able to change the mode to root user and have control over the Move VM.  Log files locations for troubleshootingBelow are the log files which will be important for troubleshooting. Logs File LocationDescription of the Log File/opt/xtract-vm/logs/diskreader.logUses NFC mechanism to take a copy of the disk and ship it to the AHV side using "pipe"./opt/xtract-vm/logs/diskwriter.logThis is the receiver of the copy disk through pipe and it writes it to the container mounted on the Move VM./opt/xtract-vm/logs/mgmtserver.logOrchestrator service, exposes REST APIs to source, target side. If this service is working fine, then the UI will load perfectly.opt/xtract-vm/logs/srcagent.logDealing with source side, it prepares the migration by enabling CBT (Changed Block Tracking), shutting down the VM and shipping the last snapshot before the VM finally boots up on AHV side./opt/xtract-vm/logs/tgtagent.logDealing more on the AHV side. Collects cluster information and mounts the needed container to Move during migration and power on the VM on the AHV side. Basic CommandsTo check the status of the services, run the command svcchk as below. Note that the last column will give you the name of the service (for example, bin_srcagent_1, bin_tgtagent_1, bin_diskreader_1). Username : admin Note: If restarting any service/Docker with an active Migration Plan, the Migration Plan will turn into failed state and the target vdisk can get corrupted. 
In this case, a manual cleanup on AHV and a new Migration Plan are needed.To restart any single service, run the command "docker restart <service name>" as shown below. You can get the individual service names from the svcchk command. Username : admin To restart all the services, run the command svcrestart. It will ask you to confirm. Enter "y" (yes) to continue: Username : admin You can also use svcstop and svcstart commands to stop and start all the container services, respectively. How to generate Move support bundle from cliTo generate the Move support bundle from CLI, run the command "root@move on ~ $ /opt/xtract-vm/bin/support-bundle [--dump-path <directory>]" as shown below. This example dumps the support bundle under the location /opt/xtract-vm/supportdumps. You can select your own directory. If you leave it blank, it will generate the bundle in the /root directory. Username : admin How to configure static IP on Move VMIn case you deleted the Move NIC or want to give a static IP again to the Move VM, follow the procedure of Assigning a Static IP Address to Nutanix Move.  Firewall Ports requirements for MoveVerify the ports between Move VM and ESX and Target AHV is open. If the environment has different subnets, refer to Port Requirements of Move.  Testing connectivity between Move and ESXi/vCenter/Hyper-V Move must be able to communicate with vCenter Server on port 443, ESXi hosts on 902 and 443, and AHV on 9440. You can run the following commands from Move VM to verify the connectivity: Username : admin Testing connectivity on port 902 between Move and ESXi hostThe port connectivity for port 902 could not be tested by nc -zv <ESXi Host IP> 902.Reason: Starting from move 3.0 onwards, iptables is handling the routing for the docker containers residing in the move appliance hence, one needs to run the test through a docker container, please find the steps below: List all running docker images : Username : admin Example: Username : admin Test connectivity from within the docker container named srcagent using the following command : Username : admin Example: Username : admin Common user permissions required on the Windows OS if using the domain administrator or local administrator accounts Any user account on a Windows VM must possess three properties to be used as a login account by the Move: The user must have "Logon as Batch Job" enabled.UAC must be disabled (for local administrator, it is usually disabled).The users must have "Restore files and directories" rights. Move is stuck at 0%, while "Seeding Data"Possible Causes The customer uses third-party solutions that rely on snapshots.Third-party snapshots may cause stale snapshots to reside in the VM folder on the VMware datastore.Old snapshots that have been forgotten. Possible Solutions Leverage the Consolidate option in VMware. This can be done through the Web or vSphere client by right-clicking the VM and selecting "Consolidate".This may fail to work, and the snapshot files may still appear in the VM folder on the datastore.The Consolidate option may be greyed out and not available. For Hyper-V, delete checkpoints Create a new snapshot, and then delete it.Storage vMotion the VM from one datastore to another: The VM must move from one VMware datastore to a different VMware datastore to ensure consolidation. Verify the absence of stale snapshots in the new datastore location.If Migrate option is greyed out: Try powering off the VM to see if the option becomes available. 
There are many reasons a Storage vMotion may not be possible while the VM is running.Check for registration issues. Follow VMware KB 2044369. Remove the VM from the inventory and re-add it. Ensure you know which datastore the .vmx file lives in before removing it. In some cases, you may need to power the VM off to remove it from inventory.If none of these options works and you need to leverage this KB, it should only be done with the assistance of VMware, as it involves modifying the DB.   Move VM migration failure due to missing snapshotMove migration may fail due to missing snapshots. Consolidate snapshots from the VMware Snapshot Manager to resolve. A common signature would be found in Move mgmtserver.log: Username : admin Check the Source VM Snapshot Manager, and you will be unable to find the snapshot the logs refer to.          Consolidate all snapshots of the source VM from VMware Snapshot Manager. Once finished, recreate a new job in the Move for migration. What happens if Move upgrade is stuck?If the Move upgrade is stuck, check the updater.log on the Move VM. If you cannot figure out the exact reason, collect the logs, and share them with Nutanix Support. Proceed with verification of the status for all the services (the expected state is UP and not crashing) by running the command svcchk. To roll back the Move appliance to a previous version, run the script: Username : admin Are there any logs created on the VM by Move? Yes, logs are created on the VMs when Move triggers VirtIO installation and IP address retention scripts.On Windows, the logs can be found in: Username : admin On Linux, they can be found in: Username : admin If there is a backup schedule, will it affect Move?Yes, it will be difficult to take the number of changes into account on the disk when there is a backup job in progress. So, it would be best to perform the migration outside of backup hours or disable the backup until the migration is completed.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
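Supplementary note for the Move troubleshooting article above. The port checks described there can be run in one pass from the Move VM; this is a hedged sketch with hypothetical host names (replace them with your vCenter, ESXi host and AHV cluster VIP, and note that port 902 must be tested from inside the srcagent container, which is assumed to include nc):

root@move on ~ $ nc -zv -w 3 vcenter.example.local 443
root@move on ~ $ nc -zv -w 3 esxi01.example.local 443
root@move on ~ $ nc -zv -w 3 ahv-cluster-vip.example.local 9440
root@move on ~ $ docker exec bin_srcagent_1 nc -zv -w 3 esxi01.example.local 902    # 902 is routed through the container, as described above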
### Human: Can you tell about Move: Update of a storage container may fail if this container has a container filesystem whitelist configured. ### Assistant: Update of a storage container may fail after 5 minutes with "stream timeout" or "Http Request to endpoint 127.0.0.1:9080 failed with error.Response status:1" if this container has a container filesystem whitelist configured.. Update of a storage container that has container level filesystem (NFS) whitelist may fail with one of the following messages appearing after 5 minutes: Oops - Server error Or: Oops - Server error Errors observed In Prism Element, when updating the container, an error “stream timeout” or “Http Request to endpoint 127.0.0.1:9080 failed with error. Response status:1” appears. Or: nCLI command takes several minutes to execute, followed by the error: Oops - Server error Nutanix Move appliance may fail to configure AOS filesystem whitelist with the following error: Oops - Server error The following errors can also be found in ~/data/logs/prism_gateway.log on the Prism leader: To identify the Prism leader CVM, see KB 1841. Oops - Server error Workflow of modifying "Filesystem Whitelists" of a container Prism Gateway makes an API call to genesis to update the whitelists of the container when the container "Filesystem Whitelists" is configured to add Move IP aa.bb.cc.dd. The request will be logged to prism_gateway.log. The following example is to add Move IP aa.bb.cc.dd to "Filesystem Whitelists" of a container. Oops - Server error Genesis internally makes a call to modify Firewall. Events are logged to genesis.out Oops - Server error  . Nutanix Engineering is aware of the issue and is working on a fix in a future release. Workaround 1To work around the issue, you need to delete a per-container filesystem whitelist. Note: If you have active external clients that still need access to the container over NFS/SMB (e.g. Move appliance), make sure that you update Global Filesystem with all IP addresses from the container filesystem whitelist before proceeding with the workaround. When a container filesystem whitelist is removed, a global AOS filesystem whitelist will be used and some clients may lose access to the container if they are not whitelisted on the cluster level. Refer to the documentation for more information on Filesystem Whitelists. In Prism Element, select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage Container tabs.Select the storage container in question and then click the Update button.Find Filesystem Whitelist in the Advanced Settings of the container.Empty Filesystem Whitelist text box and click the Save button.Once the task completes, you can retry the operation that was failing before. In case the steps described in this article do not resolve the issue, consider engaging Nutanix Support for further investigation.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
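Supplementary note for the filesystem-whitelist article above. If external clients such as the Move appliance still need NFS access after the per-container whitelist is removed, the same entries can first be added to the global whitelist. This is a hedged sketch only; the exact ncli parameter names can vary by AOS version, so confirm them with the built-in ncli help before running, and the IP/netmask shown is an example:

nutanix@cvm$ ncli cluster get-nfs-whitelist                                                   # review the current global whitelist
nutanix@cvm$ ncli cluster add-to-nfs-whitelist ip-subnet-masks=aa.bb.cc.dd/255.255.255.255    # add the Move appliance IP to the global whitelist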
### Human: Can you tell about HDD or SSD disk troubleshooting. ### Assistant: This article describes how to troubleshoot a failed disk and identify which component is causing the failure. If you have received an alert for a failed disk, you can troubleshoot a specific node instead of examining an entire cluster.. When a drive is experiencing recoverable errors, warnings or a complete failure, the Stargate service marks the disk as offline. If the disk is detected to be offline 3 times within the hour, it is removed from the cluster automatically and an alert is generated (KB 4158 or KB 6287). If an alert is generated in Prism, the disk must be replaced. Troubleshooting steps do not need to be performed. If a failed disk is encountered in a Nutanix Clusters on AWS, once the disk is confirmed to have failed proceed to condemn the respective node. Condemning the affected node will replace it with a new bare metal instance of the same type.. Once the disk is replaced, an NCC health check should be performed to ensure optimal cluster health. However, if an alert is not generated or further analysis is required, the steps below can be used to troubleshoot further.Before you begin troubleshooting, verify the type of HBA controller.Caution: Using the SAS3IRCU command against an LSI 3408 or higher HBA can cause NMI events that could lead to storage unavailability. Confirm the HBA controller before using these commands. To determine what type of HBA is used, look for the controller name located in /etc/nutanix/hardware_config.json on the CVM.Example of the output when SAS3008 is used. In this case, the command SAS3IRCU is the correct command to use. Note the "led_address": "sas3ircu:0,1:0" line: "node": { Example of the output when SAS3400/3800 (or newer) is used. In this case, using SAS3IRCU would be ill-advised. Use the storcli command instead. For information on StorCLI refer to KB-10951. Note "led_address": "storcli:0" line. "node": {   Identify the problematic disks Check the Prism Web console for the failed disk. In the Diagram view, you can see red or grey for the missing disk.Check the alerts in the Prism Web console for the disk alerts, or use the following command to check for disks that generate the failure messages. "node": { Check if any nodes are missing mounted disks. The two outputs should match numerically. Check the disks that are mounted on the CVM (Controller VM). "node": { Check the disks that are physical in the CVM. "node": { Check if the status of the disks is all Online and indicated as Normal. "node": { Validate the expected number of disks in the cluster. "node": { The output of the command above should be the sum of the outputs of steps 1c.i and 1c.ii. There are instances where the number can be higher or lower than expected. So, it is an important metric that can be compared to the disks listed in step 1b. Look for extra or missing disks. "node": { Check that all disks are indicated as mounted rw (read-write) and not ro (read-only). "node": { Identify the problems with the disks nodes Orphaned disk ID This is a disk ID that the systems no longer use but was not properly removed. Symptoms include seeing an extra disk ID listed in the output of ncli disk ls. To fix the orphaned disk ID: "node": { Ensure that you validate the disk serial number and that the device is not in the system. Also, ensure that all the disks are populating using lsscsi, mount, df -h, and counting the disks for the full-disk population. 
Failed disk and/or missing disk A disk becomes failed and/or is no longer visible to the system for several reasons. There are two approaches to solving a disk problem. The top-down approach is the software representation of the disk down to the hardware/physical layer. The bottom-down approach documents the investigation of a disk from a bottom-up perspective since the 'top' layer can be relative to the problem reported by the software layer (Prism, Hades, Stargate, etc.). Check if the disk is visible to the controller as it is the device whose bus the disk resides on. lspci - displays the PCI devices seen by the CVM. NVME device - Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01).SAS3008 controller - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) - LSI.SAS2008 controller - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03).SAS2308 controller (Dell) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05).MegaRaid LSI 3108 (Dell) - RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02).LSI SAS3108 (UCS) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3108 PCI-Express Fusion-MPT SAS-3 (rev 02). lsiutil - displays the HBA (Host Bus Adapter) cards perspective of the ports and if the ports are in an UP state. If a port is not up, either the device on it does not respond, or the port or connection to the device is bad. The most likely scenario is the device (disk). "node": { lsscsi - lists the SCSI bus devices seen that include any HDD or SSD (except NVME, which does not pass through the SATA controller).sas3ircu - reports slot position and disk state. It is useful for missing disks or verifying that disks are in the correct slot. (Do NOT run the following command on Lenovo HX hardware as it may lead to HBA lockups and resets)   "node": { storcli - reports slot position and disk state. Commonly found on Lenovo Chassis but will be included on NX G8 platforms.  Slot # (should match the physical slot) State is the state of the disk. New disks will be listed as Available (AVL). After bootup and SCSI inquiry healthy disks will show as Ready (RDY). In the example below, a disk was detected but was never marked as Ready (RDY). It remains in Available (AVL) status due to a bad disk. In the next section, we see the related mpt3sas dmesg output showing the SCSI inquiries constantly polling this disk.   (Note that the Serial No has been truncated and replaced by the Firmware Revision LC72 repeated for this platform. This was consistent for the systems in this particular cluster. This does not indicate a problem, simply a unique behaviour.) "node": { Check the CVM's dmesg for LSI mpt3sas messages. We should typically see one entry for each physical slot. (The below example shows SAS address "0x5000c5007286a3f5" is repeatedly checked due to a bad/failed disk. Note how the other addresses are detected once, and the suspect is repeatedly being polled.) "node": { smartctl - if Hades indicate that a disk is checked by smartctl 3 times in an hour, it is automatically failed. "node": { See KB 8094 for troubleshooting with smartctl. Check for offline disks using NCC check disk_online_check. "node": { See KB 1536 for further troubleshooting offline disks. 
Check if NCC has any errors indicated for disks, mount paths being offline, or failed disks in the system.Confirm if disks are seen from LSI Config Utility: Enable LSI HBA OPROMGo to “Advanced-->PCIe/PCI/PnP Configuration”, change CPU1 SLOT2 PCI-E 3.0 x8 OPRM to “Enabled” as below showing,Press "F4"Enter LSI Config Utility on bootup: Press <Enter> twice to confirm if disks are seen or not: There can be a case where the disk is DOWN in lsiutil, usually after a replacement or an upgrade of the disks. When all the above checks are carried out, and the disk is still not visible, compare the old and new disk "disk caddy or tray". Ensure the type is the same. There can be cases where an incorrect disk type is dispatched, and it does not seat properly in the disk bay hence not being detected by the controller.   Identify the node type or the problematic node Run ncli host ls and find the matching node ID. Specific node slot location, node serial, and node type is important information to document in case of recurring issues. It also helps to track the field issues with the HBA's, node locations, and node types. Identify the failure occurrence Check the Stargate log. The stargate.INFO log for the corresponding period indicates if Stargate saw an issue with a disk and sent it to the Disk Manager (Hades) to be checked or had other errors accessing the disk. Use the disk ID number and serial number to grep for in the Stargate log on the corresponding node the disk is in. The Hades log contains information about the disks it sees and the health of the disks. It also checks which disk is metadata or Curator disk and selects one if one did not already exist in the system or was removed/disappeared from the system. Check the Hades log.Check df -h in /home/nutanix/data/logs/sysstats/df.INFO to see when the disk was last seen as mounted.Check /home/nutanix/data/logs/sysstats/iostat.INFO to see when the device was last seen.Check /home/log/messages for errors on the device, specifically using the device name, for example, sda or sdc.Check dmesg for errors on the controller or device. Run dmesg | less for the current messages in the ring, or look at the logged dmesg output in /var/log. Identify the reasons for disks failure Check when the CVM was last started if the disk's last usage data were not available. Again, reference the Stargate and the Hades logs.Check the Stargate log around the time of disk failure. Stargate sends a disk to Hades to check if it does not respond in a given time and ops timeout against that disk. Different errors and versions represent it differently, so always search by disk ID and disk serial. Check the count of disk failure If a drive failed more than once in this slot and the disk was replaced, it would indicate a potential chassis issue at that point. Check if lsiutil is showing errors. If lsiutil shows errors evenly on multiple slots, it indicates a bad controller. Sample command output for reference: "node": { On a Nutanix Clusters on AWS: "node": { To run the commands in a Hyper-V environment, use winsh on the CVM: "node": { "node": { NOTE: For ID: 191, G-Sense_Error_Rate for Seagate HDD's can be safely ignored unless there is performance degradation. G-Sense_Error_Rate value only indicates HDD adapting to shock or vibration detection. Seagate recommends not to trust these values as this counter dynamically changes the threshold during runtime. "node": {. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know.
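Supplementary note for the disk troubleshooting article above. A quick first pass that compares the expected and visible disks described in the article can be run on a single CVM; this is a hedged sketch only (the device name is an example, data disks normally mount under the stargate-storage path, and smartctl options differ for SAS/NVMe devices):

nutanix@cvm$ ncli disk ls | egrep -i "status|online"                                      # disk status as the cluster sees it
nutanix@cvm$ lsscsi                                                                       # disks physically visible to this CVM
nutanix@cvm$ df -h | grep stargate-storage                                                # data disks actually mounted read-write
nutanix@cvm$ sudo smartctl -a /dev/sdb | egrep -i "result|reallocated|pending"            # example SMART query for one suspect disk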
### Human: Can you tell about Pre-upgrade Check: AOS Upgrade (test_sha_mac_ssh_config). ### Assistant: This pre-upgrade check runs during an AOS upgrade. It confirms that no hosts on your cluster are configured to only accept deprecated SSH ciphers, and flags any such hosts. test_sha_mac_ssh_config is a pre-upgrade check that runs during AOS upgrades. It verifies that your hypervisor is configured to accept the required SSH ciphers. Starting with AOS 5.15.2 and 5.17.x, sha2 MACs are enforced for SSH; if a host is not configured to accept them, the upgrade will stall on that host. Correct this problem before starting the upgrade. Error message generated in the UI by this check: Host(s) : [u'x.x.x.x'] configured to only accept deprecated SSH ciphers, Please refer KB 9730 On any CVM, run the following command to confirm which SSH MACs are accepted by your hypervisor: nutanix@cvm$ hostssh 'grep "^MACs" /etc/ssh/sshd_config' The expected value for hosts with CVMs running AOS 5.15.2+ and 5.17.1+ is: MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1 Manually updating the sshd config: On each host reported by the pre-check, do the following:
1. SSH to the hypervisor.
2. Edit the /etc/ssh/sshd_config file with vi or another text editor.
3. Find the line that starts with "MACs", for example: MACs hmac-sha1,hmac-sha1-96
4. Change the line to reflect the newer SSH MACs: MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1 ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
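Supplementary note for the test_sha_mac_ssh_config article above. On AHV hosts the edit can also be scripted from a CVM rather than editing each file by hand; this is a hedged sketch only (it assumes AHV hosts, backs up sshd_config first, and on ESXi or Hyper-V hosts the file should be edited manually as described above):

nutanix@cvm$ hostssh 'cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak'                                      # back up the current config on every host
nutanix@cvm$ hostssh 'sed -i "s/^MACs.*/MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1/" /etc/ssh/sshd_config'   # replace the MACs line with the expected value
nutanix@cvm$ hostssh 'grep "^MACs" /etc/ssh/sshd_config && systemctl restart sshd'                           # confirm the change and reload sshd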
### Human: Can you tell about How to obtain the SEL output from the IPMI or CLI. ### Assistant: The SEL (System Event Log) output is a key resource for pinpointing the underlying cause of hardware issues and for thorough hardware diagnostics. This article provides guidance on retrieving this information through the IPMI web GUI and the command line interface. Frequently, when troubleshooting hardware issues, it becomes necessary to obtain the SEL (System Event Log) output. Nutanix Support can retrieve this information from Insights; however, there are instances where Pulse is not enabled or the command fails to run successfully. This article provides customers with instructions to obtain the SEL output either through the web UI (IPMI) or the command line (CLI). Navigating to the IPMI from Prism Element: Prism Element > Hardware > Diagram > select the node whose IPMI you want to access > the IPMI address should be hyperlinked. You can also obtain the IPMI address from the CLI with the following command: nutanix@cvm:~$ panacea_cli show_all_ips Note: If you need to update or change your IPMI password, see the IPMI password change instructions. Option 1: Downloading the SEL output from the web UI. On G5, G6 and G7 IPMI, you will find the SEL output under Server Health > Health Event Log. To download the output as a .csv file, click 'Save' (highlighted in red). On G8 IPMIs, the SEL output can be found by expanding System > Health Event Log. To download the output as an Excel file, click 'Export to Excel' (highlighted in red). Option 2: Gathering the SEL output from the CLI: AHV nutanix@cvm:~$ panacea_cli show_all_ips ESXi nutanix@cvm:~$ panacea_cli show_all_ips Hyper-V nutanix@cvm:~$ panacea_cli show_all_ips. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
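Supplementary note for the SEL article above. As an illustration of the CLI path (not a substitute for the per-hypervisor commands in the article), the SEL can usually be read on AHV hosts with the bundled ipmitool; the sketch below assumes ipmitool is present on the host and that hostssh is run from a CVM:

nutanix@cvm$ hostssh 'ipmitool sel elist | tail -20'                                   # last 20 SEL entries from every AHV host in the cluster
nutanix@cvm$ ssh root@<AHV-host-IP> 'ipmitool sel elist > /tmp/sel_$(hostname).log'    # save the full SEL from one host for a support case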
### Human: Can you tell about Pre-check: test_is_nos_upgrade_supported_on_esx. ### Assistant: The pre-check test_is_nos_upgrade_supported_on_esx verifies that the version of the ESXi hosts on the cluster is supported on the target AOS version. test_is_nos_upgrade_supported_on_esx is a pre-upgrade check that verifies that the ESXi version on the cluster's hosts is supported on the target AOS version. The pre-check fails if the ESXi version is lower than 6.7.0 and the target AOS version is 5.18 or above for STS or 5.15.3 or above for LTS, with the failure message below: Host x.x.x.x is running on ESXi version 6.5.0 which is lower than supported version 6.7.0. Upgrade host, Please refer KB 9646. Make sure the ESXi version is 6.7.0 or above before upgrading AOS to 5.18 or above for STS or 5.15.3 or above for LTS. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
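Supplementary note for the pre-check article above. The running ESXi version on every host can be confirmed from any CVM before planning the AOS upgrade; a minimal sketch:

nutanix@cvm$ hostssh 'vmware -vl'                    # prints the ESXi version and build on each host
nutanix@cvm$ ncli host ls | grep -i "hypervisor"     # hypervisor version as reported by the cluster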
### Human: Can you tell about NCC Health Check: ahv_remote_log_forwarding_check. ### Assistant: NCC 4.1.0 ahv_remote_log_forwarding_check detects issues with AHV host log forwarding to the configured remote server.. The NCC health check ahv_remote_log_forwarding_check detects whether log forwarding to remote server is operating normally.This check runs on only AHV, but also only if the cluster has had additional STIG DoDIN APL enablement performed on supported AOS/AHV versions. By default, and unless specific steps have been taken to enable this functionality on-demand, this check will return a PASS result.When remote log forwarding is needed to forward the AHV host logs to a remote machine, the host itself records the failure or success of sending these records to the remote machine (configured by the user by following the Portal Guide; Acropolis Advanced Administration Guide \ Configuring the Remote Syslog Server Settings). This status is maintained in an rsyslog stats file (/var/log/rsyslog-stats) on each AHV host. The records of success/failure is updated regularly and this health check parses the recent entries in that file to recognise any failures and trigger any alert workflows configured on the cluster such as email, syslog, SNMP trap, Prism Alert notifications.It can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all or individually as: nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check runs on each CVM to assess its AHV host. It does not run on Prism Central.This check is scheduled to run every 5 minutes and produces a WARN alert after 3 consecutive failures.Sample Output:Check Status: PASS nutanix@cvm$ ncc health_checks run_all Check Status: WARN nutanix@cvm$ ncc health_checks run_all Output messaging Check ID111088DescriptionCheck log forwarding to remote server.Causes of failureForwarding logs to remote server is failing.ResolutionsCheck failure causes for forwarding the logs to remote server. Refer KB10811.ImpactRemote server will not be receiving log output from cluster.Alert IDA111088Alert TitleForwarding logs to remote server is failing.Alert MessageForwarding logs to remote server is failing on host {host_ip}.  . A warning produced by this check could indicate one or more of the following possible conditions: Network connectivity issue between the AHV host and the remote rsyslog serverAHV host uplink or local network issueWAN/VPN/inter-site routing issue between Nutanix AHV cluster and remote rsyslog serverFirewall/ACL blocking AHV host communication to the remote rsyslog serverRemote rsyslog server offline/powered off, services failed, storage fullRemote rsyslog server unable to handle incoming volume of messages from one or more AHV hosts, or other rsyslog clients in addition to the AHV hosts. From the affected AHV host(s), consider using the following commands/utils to help isolate the issue; Ping the configured remote rsyslog server to expose any basic switching/routing issues.Use NCAT to test a protocol:port specific connection is able to be opened (or blocked by firewall/ACL)  [root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514> Investigate the contents of the /var/log/rsyslog-stats log file for any entries around the timestamp reported by this NCC check/alert. 
[root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514> In case the above mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/To speed up the resolution time and minimize possible impact, collect and attach the following information to the support case: A complete NCC report: [root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514> A log bundle generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide): [root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514> A copy of the /var/log/rsyslog-stats log file from any affected AHV host.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
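As a quick illustration of the isolation steps above, the recent forwarding statistics and basic reachability can be reviewed from the affected AHV host with standard commands, for example:
[root@AHV ~]# tail -n 20 /var/log/rsyslog-stats
[root@AHV ~]# ping -c 3 <remote_syslog_server_IP_FQDN>
Failures recorded in rsyslog-stats combined with failed pings usually point to a network or firewall problem rather than an rsyslog configuration issue.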
### Human: Can you tell about CVM services won't start after the Hardware replacement for VMware NSX Enabled ESXi servers. ### Assistant: The customer has a VMware infrastructure and is using the SDN (Software Defined Networking) solution NSX-V for DFW (Distributed Firewall) or Guest Introspection features. When hardware such as a DIMM or SATADOM is replaced, the VMware administrator removes the ESXi host from the NSX solution, which generally removes the CVM from the exclusion list in the NSX solution. After the hardware part is replaced, the communication between the CVM and the ESXi host over the 192.168.x.x network is broken and services cannot start, with SSH key-based error messages in the genesis.out log file.. CVM services fail to start with the below symptoms: The /home/nutanix/data/logs/genesis.out will show the following: nutanix@cvm:~$ tail -f genesis.out Running the fix_host_ssh script fails to fix this issue: nutanix@cvm:~$ tail -f genesis.out The CVM fails to communicate via the eth1 network interface to the ESXi host.  nutanix@cvm:~$ tail -f genesis.out The steps below do not resolve the issue: Re-create the vmk1 and check the communicationRe-create vSwitchNutanix but failed to communicateInject the Static route via the arp commandReboot the CVM and enabled/disabled the NIC2 for CVM - tried network services restartReboot the ESXi host. Confirm that the CVM is part of the exclusion list:Procedure: Navigate to Exclusion List settings. In NSX 6.4.1 and later, navigate to Networking & Security > Security > Firewall Settings > Exclusion List.In NSX 6.4.0, navigate to Networking & Security > Security > Firewall > Exclusion List. Click Add.Move the VMs (the CVM with the services issue) you want to exclude to Selected Objects.Click OK. Reference. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
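As an additional isolation step, the internal CVM-to-host network can be tested from both sides, assuming the default internal addressing (192.168.5.1 on the ESXi host, 192.168.5.2 on the CVM); adjust the addresses if a different internal subnet is in use:
nutanix@cvm$ ping -c 3 192.168.5.1
[root@esxi:~] vmkping -I vmk1 192.168.5.2
If these pings fail while the CVM is still missing from the NSX exclusion list, the distributed firewall is the most likely cause.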
### Human: Can you tell about Windows Server 2008 R2 VM on non-AHV hypervisor boots into recovery mode when VirtIO is uninstalled. ### Assistant: Uninstalling VirtIO 1.1.6 or higher from a Windows Server 2008 R2 VM on a non-AHV hypervisor causes Windows to boot into recovery mode. Also, migrating a Windows Server 2008 R2 VM with Nutanix Move 4.2.1 or higher from ESXi/Hyper-V causes the source VM to boot into recovery mode.. Uninstalling VirtIO 1.1.6 or higher from a Windows Server 2008 R2 VM on a non-AHV hypervisor causes Windows to boot into recovery mode. Also, migrating a Windows Server 2008 R2 VM with Nutanix Move 4.2.1 or higher from ESXi/Hyper-V causes the source VM to boot into recovery mode.. Perform the following steps to recover the VM: Press the F8 key while the VM is powering up to show the Windows advanced boot options.Select "Disable Driver Signature Enforcement". Windows boots up normally.Start Registry Editor. Click on the Start button and search for regedit.exe.Delete the "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\vioscsi" branch.Reboot Windows and confirm Windows boots up normally.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about On mixed-hypervisor cluster with AHV storage-only nodes, VM management fails with error "Error occurred while getting network list". ### Assistant: VM management in a mixed hypervisor cluster fails due to a missing entry in /etc/nutanix/hardware_config.json on AHV storage-only nodes. VM create or update operations on mixed ESXi+AHV clusters may fail with the following error: Error occurred while getting network list This issue can typically occur after cluster expansion and adding AHV storage-only nodes. The following error can be found in the /home/nutanix/data/logs/uhura.out log: Error occurred while getting network list The minimal_compute_node field is missing in the /etc/nutanix/hardware_config.json file on the storage-only CVMs: Error occurred while getting network list To check all nodes, run the following command: Error occurred while getting network list. Starting from AOS 6.1, it is possible to select the HCI or Storage-only node type while expanding a cluster. HCI to Storage-only or Storage-only to HCI node conversion is taken care of by the Expand cluster operation itself. Remove the affected nodes from the cluster and re-add them while selecting the correct node type. In case removing the node is not possible due to space constraints, consider engaging Nutanix Support. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
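One possible way to see which storage-only CVMs are missing the field is a simple grep across the cluster (shown here only as an illustration; the exact command referenced in the original article may differ):
nutanix@cvm$ allssh 'grep minimal_compute_node /etc/nutanix/hardware_config.json'
CVMs that return no match for minimal_compute_node are the ones affected by this issue.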
### Human: Can you tell about Alert - A130348 - VmInPausedState. ### Assistant: NCC 4.2: This Nutanix article provides the information required for troubleshooting the alert "VM is in Paused state" for your Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert "VM is in Paused state" for your Nutanix cluster.Alert OverviewThe "VM is in Paused state" alert is generated when the Acropolis service detects a VM in Paused state. This is an unexpected VM state which should not be seen during daily operations.Sample Alert Warning: VM found in paused state. Potential ImpactVM is not operational.Output Messaging  Check ID130348DescriptionVM found in paused state.Cause of FailureFailed migration or some hypervisor issue.ResolutionsPower cycle the VM to get it out of this state.ImpactVM in paused state most likely means downtime and it's in an irrecoverable state.Alert Smart TitleVM {vm_name} with ID {vm_uuid} has been found in paused stateAlert TitleVM is in Paused stateAlert MessageVM {vm_name} with ID {vm_uuid} has been found in paused state. TroubleshootingPower Off the affected VM and then power it On to recover. Contact Nutanix Support at https://portal.nutanix.com as soon as possible to fully troubleshoot the problem. Collect additional information and attach them to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691. nutanix@cvm$ logbay collect --aggregate=true If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command: nutanix@cvm$ logbay collect --aggregate=true Attaching Files to the CaseAttach the files at the bottom of the support case on the support portal.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294.Requesting AssistanceIf you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.Closing the CaseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
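If the VM needs to be recovered from the command line on an AHV cluster, the power cycle can also be performed with acli, for example:
nutanix@cvm$ acli vm.off <vm_name>
nutanix@cvm$ acli vm.on <vm_name>
This is equivalent to the Power Off/Power On action in Prism and should clear the Paused state in most cases.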
### Human: Can you tell about Pre-upgrade check: Hypervisor Upgrade (test_host_upgrade_versions_compatible). ### Assistant: This pre-upgrade check runs during hypervisor upgrades. It checks if the base and target ESX versions are upgrade compatible.. test_host_upgrade_versions_compatible is an ESXi pre-upgrade check that runs during hypervisor upgrade. It checks whether the base and target ESXi versions are upgrades compatible. The reason for this check is that ESXi introduced a new certificate for VIB signing into the trust store in 2018 to begin the transition from the old certificate (which expired on 31st December 2019). Error messages generated on UI by this check: "Current host version %s and the target host version %s are not compatible for upgrade". VMware recommends upgrading ESXi hosts to a minimum of the builds (listed below), which were released after March 2018 and have the newer signing certificate in the trust store. Doing so prevents future installation and upgrade failures due to the untrusted signing certificate: 6.0 ESXi600-201807001 (Build: 9239799) or later 6.5 U2 (Build: 8294253) or later6.7 GA (Build: 8169922) or later. The upgrade should be retried using a compatible ESXi release (above). Once upgraded to a compatible build, you may upgrade to the target release version. For all ESXi build numbers and versions, refer to VMware KB 2143832.VMware has addressed this issue in VMware KB 76555.For any queries consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Nutanix Files - File and folder permissions. ### Assistant: Configure Nutanix Files Admin and default permissions for a home and general purpose shares.. It may be difficult to determine user permissions without access to Active Directory (AD). If there are issues with share access or domain joining because of permissions, and you are certain that there are no issues with connectivity, follow the steps below to verify if the logged-in user has AD user privileges.. Execute the following command on Windows SMB Client to retrieve a list of groups of which the logged-in user is a member. Any user that is a member of the BUILTIN\Administrators group (most of the time, Domain Admins) should be able to access the share. PS C:\Windows\system32> whoami /groups /FO list Sample output: PS C:\Windows\system32> whoami /groups /FO list Default Permissions for Nutanix Files SharesThere are two share types: distributed and standard. In the example output below, nufs-sec-smb1 is a distributed share while nufs-sec-smb2 is a standard share. PS C:\Windows\system32> whoami /groups /FO list Look at the default permissions of these shares (from the client machine). In the example output below, the named directory UserJohn is the TLD (top-level directory) at the root of the distributed share. PS C:\Windows\system32> whoami /groups /FO list PS C:\Windows\system32> whoami /groups /FO list PS C:\Windows\system32> whoami /groups /FO list The same permissions can be checked using icacls or Explorer. In Nutanix Files 3.0 and later, a user or a group can be added as a File Server Admin/Backup Admin in Files user interface. Alternatively, use the ncli fs add-admin-user command.   Get files/directories authorization information.From Nutanix Files version 3.8 onwards there is a new CLI introduced to query files and directories authorization information. PS C:\Windows\system32> whoami /groups /FO list In the following example, a user named "Admin" is unable to access the folder "users" in the share "Eng". Use the following steps in order to validate the permissions or authorization for a user named "Admin" in the folder "users".1. SSH into any FSVM2. List current ACL set on a share and folder. In this example, it is share "Eng" and folder "users". In the output below, AD user "Admin" has set explicit DENY ACL in the "users" folder. PS C:\Windows\system32> whoami /groups /FO list. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
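For reference, a quick way to dump the effective ACL of a share or top-level directory from a Windows client is the icacls utility mentioned above, for example:
C:\> icacls \\<fileserver_FQDN>\<share_name>\<folder_name>
The output lists the same access control entries that are shown in the Explorer Security tab and can be compared against the authorization information reported by the FSVM CLI described above.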
### Human: Can you tell about Migration of resource heavy VMs might get stuck on AHV clusters. ### Assistant: On AHV clusters, migration of large VMs might fail. In the Prism web console, you can see the migration progress up to 99%, and then the migration stops indefinitely.. VM live migration can be manually initiated or be a result of putting the host into maintenance mode (to perform a firmware or hypervisor upgrade, for example). Live migration of a VM that has a high rate of change to its memory contents may take a significant amount of time or look "stuck", and the process eventually fails. This can happen on clusters configured with either 1G or 10G uplinks. In both cases, this happens when the change rate of the VM memory exceeds the network bandwidth. Prism will generate an alert when the migration task fails. Furthermore, sample errors can be found in the acropolis.out log on the Acropolis leader. Follow the steps from KB 2305 to find the Controller VM (CVM) with the Acropolis leader role. The following error messages are commonly seen in the acropolis.out log on the Acropolis leader on clusters configured with 1G uplinks: 2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error 2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error The following error messages are commonly seen in the acropolis.out log on the Acropolis leader on clusters configured with 10G uplinks: 2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error. Perform the following steps to investigate the issue: Check the maximum available NIC bandwidth and load balancing mode: Run the following commands on a CVM: nutanix@cvm:~$ manage_ovs show_uplinks In the output of the commands mentioned above, check which NICs are assigned to the br0 bridge and their speed: nutanix@cvm:~$ manage_ovs show_uplinks Check VM migration progress by running the following command multiple times with a delay of 1-2 minutes on the source AHV host: nutanix@cvm:~$ manage_ovs show_uplinks To identify the network bandwidth utilization, you can check the sar stats for the source AHV host. For example, if the VM migration failed on 1st Feb at 08:43 AM, the following command can be run against the sar file. The command filters for occurrences where the physical NIC is transferring more than 8 Gb of data per second. 
nutanix@cvm:~$ manage_ovs show_uplinks Starting from AHV 20201105.2159, in /var/log/libvirt/qemu/<vm_uuid>.log on the source host, you can also see the memory dirty rate of the migrated VM which is helpful when troubleshooting failures: nutanix@cvm:~$ manage_ovs show_uplinks For further troubleshooting, consider checking /home/data/logs/acropolis.out on the Acropolis leader. KB 7949 describes the normal VM Live Migration Process and can be used as a reference. Workarounds to consider: Power off memory-intensive VMs and use acli to start it on the target AHV host instead of live migrating the VM. Run the following command on any CVM in the cluster to start VM on a specific AHV host: nutanix@cvm:~$ manage_ovs show_uplinks Schedule the VM migration or maintenance at a time when the VM is as idle as possible.Use faster uplinks if available.Verify that the network uplinks are not degraded. Note: Use of LACP load balancing mode may not improve the situation as a single TCP connection can only use the bandwidth of a single NIC.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
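For the first workaround listed above, the VM can be powered on on a specific host from any CVM with a command along the lines of the following (the host parameter is assumed to accept the target AHV host IP or UUID):
nutanix@cvm$ acli vm.on <vm_name> host=<target_host_IP_or_UUID>
This avoids the live migration entirely at the cost of a short VM downtime.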
### Human: Can you tell about Lazan service restarts unexpectedly due to non-ASCII characters in VM disk names. ### Assistant: The Lazan service restarts frequently due to non-ASCII characters being present in the VM name/VMDK files on the cluster after upgrading to AOS 6.5.x or 6.0.x. The Lazan service restarts frequently due to non-ASCII characters in the VM name/VMDK files on the cluster after upgrading to AOS 6.5.x or 6.0.x. The following alert is generated: One or more cluster services have restarted within 15 minutes in the PCVM/CVM, the total restarts being >= 10. The following are the service(s) that have restarted : ['lazan']. The following CRITICAL error signature can be seen in the /home/nutanix/data/logs/lazan.out logs: One or more cluster services have restarted within 15 minutes in the PCVM/CVM, the total restarts being >= 10. The following are the service(s) that have restarted : ['lazan'].. This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.3 Please upgrade AOS to the version specified above or newer. Engage Nutanix Support at https://portal.nutanix.com/ to recover the cluster.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about LCM Log Collection Utility. ### Assistant: Overview and instructions on using LCM Log Collection Utility..  PurposeLCM (Life Cycle Manager) Log Collection Utility provides an independent script designed to collect all the important log files necessary to debug any LCM issue. NCC log_collector bundle is significant in size and contains multiple files that are irrelevant to LCM. In contrast, Log Collection Utility collects only the files that are relevant to LCM workflow thereby reducing the log bundle size and speeding up the process of log collection.The utility is available in LCM version 2.2 or later by default. It could also be used on earlier versions of LCM.Functionality Collects LCM log files.Detects if the node is stuck in Phoenix and collects kernel logs.Collects logs for the services with which LCM interacts (Example: Foundation, Prism, etc.).The utility state is independent of such of the LCM hence it is capable of collecting logs even if the framework is down.Parses LCM configurations from Zookeeper nodes. CaveatsThe script cannot pull logs from the node booted into Phoenix when the node's IP address is not reachable over the network. In that case, apply the respective CVM (Controller VM) IP address to the Phoenix instance using IPMI following KB 5346 before running the script. Once the network is restored, /tmp/phoenix.log should also be collected from phoenix for troubleshooting purposes.Files and information collected Node level information LCM leader Logs from CVM LCM leader Command output from Phoenix LCM leader . For LCM 2.3.1 or later (automatic log collection) LCM 2.3.1 automatically triggers the LCM log collector utility when any LCM operation fails.Log bundle will be created in /home/nutanix/data/log_collector/.For LCM 2.3.1.1 and later, LCM prints Log location along with the failure message as below. Log into the CVM IP to fetch the LCM log bundle and upload it to the Support case for review. Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). For LCM 2.2.x or later (manual log collection) It can be used to trigger a new log collection.Run the following command from a CVM to begin log collection. Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). Sometimes you might  see below error message : Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). If you get the above error, please look for the file python lcm_log_collector.pyc instead and run the command shown below: Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). Log bundle will be created in /home/nutanix directory on the node from which the script is executed, with the name in the following format: Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). For LCM 2.2.x, log bundle will be created in /home/nutanix/data/logs/lcm_logs/. For LCM 2.3 and later, the log bundle will be created in /home/nutanix/data/log_collector/.  
For uploading the logs: Once the logs are collected, refer to KB-1294 for different methods to upload the LCM logs to the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: hwclock_check. ### Assistant: The NCC health check hwclock_check verifies hardware clock functionality and any time differences.. The NCC Health Check hwclock_check checks if the hardware clock is healthy. It checks the date from the real-time clock (hardware clock) on the hypervisor against the date of the Controller VM (CVM).Time differences in the hardware clock and system clock can cause problems. The hardware clock operational check ensures that the hardware clock is functional. Running the NCC CheckYou can run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check separately: nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 10 minutes, by default. This check will generate an alert after 5 consecutive failures across scheduled intervals. Sample output For Status: FAIL nutanix@cvm$ ncc health_checks run_all For Status: PASS nutanix@cvm$ ncc health_checks run_all Output messaging DescriptionCheck if the Hardware clock has failedCauses of failureThe RTC clock on the host has failed, or the RTC battery has died.ResolutionsReplace the node. Refer to the Nutanix documentation for instructions.Alert TitleHardware Clock FailureAlert MessageHardware clock in host host_ip has failed. Note: This hardware-related check executes on the below hardware Nutanix NXDell XCHPELenovo HXNutanix SXFujitsu XFLenovo SRPoweredgeIntelIBMHyveVoyager TDC. This NCC check or alert for the hardware clock can be raised for the following reasons: Network failure between the NTP server and the host.The system clock and hardware clock on the host are incorrect.Incorrect time on the NTP server.The RTC is powered by the same battery as the CMOS. This battery may have expired, or the RTC clock on the host has failed. If this check fails, consider engaging Nutanix Support. Additionally, gather the output of ncc health_checks run_all and attach it to the support case. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
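As an example of how the comparison works on an AHV node, the hardware clock and the system clock can be read side by side on the host and compared with the CVM time (hwclock is a standard Linux utility; the exact commands differ on ESXi and Hyper-V):
[root@AHV ~]# hwclock --show; date
nutanix@cvm$ date
A persistent large offset between the host values, even after NTP synchronization, typically indicates a failing RTC/CMOS battery.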
### Human: Can you tell about PC Cluster Runway/Scenarios not working on NCC 3.7.1/3.7.1.x. ### Assistant: Due to ENG-225071, Cluster Runway or new scenarios may stop working when you upgrade to NCC 3.7.1 or 3.7.1.x prior to upgrading PC. You can work around the issue by reinstalling NCC.. Scenario#1 Due to a known issue, Prism Central Cluster Runway or new scenarios may stop working. This happens when a customer upgrades to NCC 3.7.1 or NCC 3.7.1.1 before upgrading Prism Central to PC 5.10.3.x. You may notice the following symptoms: Cluster Runway is showing N/A instead of data.Cluster Runway shows, "We are currently collecting usage data. Cluster usage data will be displayed here in 24 hours." for an extended period (longer than 24 hours)."RPC error kTransportError raised" is seen when creating a new scenario. Scenario#2 The same issue is reported on pc.2022.1, where the Neuron server is continuously crashing, because of which the Capacity planning feature does not work. As per recent instances, the issue may get resolved by upgrading PC to the latest version, i.e., 2022.1.0.2, and a fix for this is also tracked in the above-mentioned tickets.. Scenario#1: Long-term fixes for this issue are included in AOS 5.10.4, AOS 5.11, and NCC 3.7.2. Upgrade to AOS 5.10.4, 5.11 or later, and to NCC 3.7.2 or later. A workaround for this issue is to reinstall NCC manually on Prism Central and then wait for 24 hours for data to populate. Instructions for this are available on the portal; for example, this link for 5.10 https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_18:ncc-ncc-install-t.html. Scenario#2: The issue is resolved in pc.2022.9, pc.2022.6.0.1, and pc.2023.1.0.1. Please upgrade to those releases for the fix. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Support during a critical upcoming upgrade/maintenance. ### Assistant: Support during a critical upcoming upgrade/maintenance. Can Nutanix Support join a customer in a call to guide them and provide coverage during critical upcoming activities? Nutanix SREs are often expected to engage in customer remote sessions for installation, upgrades or configuration activities during their maintenance window. While Nutanix Support would always like to be available for our customers, we are not equipped to handle requests related to installation, upgrades or configuration. If we remain on a remote session for the duration of such an activity, then we become unavailable to address critical break/fix issues. Nutanix Support's primary responsibility is addressing and troubleshooting customer break/fix issues, not handling other maintenance activities.. Nutanix prides itself on providing robust tools and documentation to facilitate maintenance activities such as installation, upgrades, etc. We highly recommend leveraging these resources. The Acropolis Upgrade Guide outlines the steps and resources needed for such activities. Use the Life Cycle Manager (LCM) to upgrade all or some of the software and firmware components. In the event LCM cannot be used, use the Legacy 1-click method from the Web Console. Ensure that all the prerequisites of maintenance activity are met. It is possible that during the maintenance, you exhaust all the documentation (both KB articles and guides) and still require assistance from a Nutanix SRE. These instances may be, but are not limited to, the following situations: An installation/upgrade is stalledAn installation/upgrade completes but one or more nodes do not come upIntermittent service crashesInter/intracluster connectivity issuesNetwork/storage unavailability or instability In the event assistance is still needed, the best way to get support is to raise a case via the Support Portal.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A111066 - Failed to send alert Emails. ### Assistant: This Nutanix article provides the information required for troubleshooting the alert: "Alert E-mail Failure" for your Nutanix cluster.. Alert Overview The Alert E-mail Failure alert can be generated due to: Errors in the email-alerts scriptError while executing the email-alerts scriptErrors while creating an email file for the alert Check ID111066DescriptionFailed to send alert emails.Causes of failureAlert is raised if a Controller VM is unable to send an alert email.ResolutionsEngage Nutanix Support.ImpactAlerts notification emails will not be sent.Alert TitleAlert E-mail FailureAlert MessagesUnable to send alert E-mail from Controller VM {service_vm_external_ip} due to “E-mail client error”. Capture the alert manager service logs, email-alerts.log, and send-email.log from all the CVMs in the cluster. Open a case with Nutanix Support and attach the log files to the case. Log files location: ~/data/logs/alert_manager.out~/data/logs/email-alerts.log~/data/logs/send-email.log  To collect these logs using the Logbay tool and upload them to the support case: logbay collect -t alerts,email --dst=ftp://nutanix -c <case number> For more information about the log collection tool Logbay, please refer to KB-6691. Note: If the alert persists in the Prism UI, try disabling and re-enabling the check on the Health page in Prism. In Prism, select Health from the menu. To find the alert, select the Checks tab on the right-hand side of the page. Then search for "Alert E-mail Failure". Click 'Turn Check Off,' then 'Turn Check On' to clear its status.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Frozen I/O State in Microsoft SQL Server During application-consistent snapshots. ### Assistant: MSSQL databases can remain frozen indefinitely during application-consistent snapshots.. During the creation of application-consistent snapshots, there is a possibility of encountering a problem that can cause Microsoft SQL Server (MSSQL) databases to become frozen. This issue specifically occurs during VSS-based backups, where the MSSQL databases can remain frozen indefinitely. The MSSQL error logs will show entries like the one below (the logs can be obtained by following this Microsoft article): 2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup. Obtain the Microsoft SQL Server version the UVM is running to confirm whether this KB applies. Method 1: Connect to the instance of SQL Server, and then run the following query: 2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup. The result of running the query above would be the following: 2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup. Method 2: Another way of checking the version is by connecting to the instance of SQL Server, and then running the following query in SQL Server Management Studio (SSMS): 2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup. The result of running the query above would be the following: 2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.. This is a known issue (reported in MS KB-4523102) detected by Microsoft on SQL databases that contain a memory-optimized filegroup and use VSS-based backups. Microsoft has released the following cumulative updates for SQL Server: Cumulative Update 2 for SQL Server 2019 (Build version: 15.0.4013.40)Cumulative Update 19 for SQL Server 2017 (Build version: 14.0.3281.6)Cumulative Update 10 for SQL Server 2016 SP2 (Build version: 13.0.5492.2). ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
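As an additional data point while waiting for the cumulative update, the state of the SQL Server VSS writer inside the guest can be checked with the standard Windows command below:
C:\> vssadmin list writers
SqlServerWriter showing a failed or waiting state during application-consistent snapshots is consistent with this issue.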
### Human: Can you tell about NDB might get an alert about Nutanix cluster connectivity during a cluster upgrade. ### Assistant: When a software or firmware upgrade is performed on the Nutanix cluster, NDB can generate an alert about the cluster connectivity.. During a software or firmware upgrade on the Nutanix cluster, NDB can generate an alert about the cluster connectivity status: Health check failed for cluster (name:<Cluster_Name>), Please make sure the cluster is in valid state The era_server.log on the NDB server will have the following entry: Health check failed for cluster (name:<Cluster_Name>), Please make sure the cluster is in valid state  . If the alert is generated during an upgrade activity, it is expected and can be safely ignored as long as the cluster connectivity status goes back to normal when the upgrade is completed. To check the cluster connectivity status: NDB -> Administration -> Nutanix clusters. This happens because, during most upgrades, the CVMs reboot in a rolling manner and the virtual IP address gets re-hosted from the Prism leader CVM to a different CVM. That can take a few seconds, so the virtual IP can be briefly unavailable, which causes the API checks coming from NDB to fail, and that is enough to generate the alert in NDB.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: stale_recovery_points_check. ### Assistant: The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.. The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted. It can be run as part of the complete NCC check by running: nutanix@cvm:~$ ncc health_checks run_all or individually as: nutanix@cvm:~$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 12 hours, by default. This check will generate an alert after 2 failures. Sample output For status: PASS nutanix@cvm:~$ ncc health_checks run_all For Status: ERROR nutanix@cvm:~$ ncc health_checks run_all   Output messaging Check ID110405DescriptionChecks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.Causes of failureThe snapshot was deleted without removing the associated stale entries in the Insights database.ResolutionsCheck the running status of the Polaris service.ImpactUpcoming replications might be impacted by the stale remnant entries in the Insights database.  . If this NCC check fails, engage Nutanix Support. Additionally, gather the following command output and attach it to the support case:   nutanix@cvm:~$ ncc health_checks run_all  . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Lose local attach vDisks after adding a NIC. ### Assistant: After adding a second NIC to Windows Server, non-bootable locally attached vDisks disappear from within the guest OS. After adding a second NIC to Windows Server, non-bootable locally attached vDisks disappear from within the guest OS. Only drive C is present. Migrating or rebooting the VM does not solve the issue. . Open Disk Management in Windows and check the disks; they should be marked as Offline. To bring the disks online, right-click each vDisk and click "Online". Now you should be able to see and use the vDisks within the guest OS.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
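The same operation can be scripted inside the guest, for example with PowerShell (run as Administrator), which brings every offline disk back online in one pass:
PS C:\> Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false
This is equivalent to right-clicking each disk in Disk Management and selecting Online.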
### Human: Can you tell about OVA UI may take longer to load due to unreachable Prism Element cluster registered to Prism Central. ### Assistant: When connected clusters are unreachable, the OVA UI may take longer to load. The OVA UI may take longer to load if one or more registered Prism Element clusters are unreachable. The /home/nutanix/data/logs/aplos.out log file on Prism Central will contain the following error: 2023-01-06 02:18:40,257Z ERROR resource.py:231 Traceback (most recent call last): You may also notice the following RPC timeouts in the logs:  2023-01-06 02:18:40,257Z ERROR resource.py:231 Traceback (most recent call last):  . Prism Central tries connecting to every connected Prism Element cluster to check compatibility. To find the unreachable Prism Element UUID, refer to the logs below: 2023-01-06 01:45:22,964Z WARNING fanout_protobuf_rpc.py:230 Error # 5 sending RPC [500] to http://127.0.0.1:9444/v3/fanout_proxy?remote_cluster_uuid=0005eea4-b61d-36d5-408d-7cc255078170&url_path=remote_rpc_requ If the Prism Element cluster is deleted without properly unregistering, refer to KB 4944 to clean up the Prism Element cluster details.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
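To quickly identify which registered Prism Element clusters are unreachable, the registration state can also be reviewed from the Prism Central VM, for example:
nutanix@pcvm$ ncli multicluster get-cluster-state
Clusters that are powered off or unreachable should either be brought back online or unregistered and cleaned up as described in KB 4944.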
### Human: Can you tell about Enabling Jumbo MTU on AHV for UVMs. ### Assistant: This article describes how to enable jumbo MTU on AHV for UVMs.. Nutanix Controller VMs (CVMs) use the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a CVM's network interfaces to higher values.You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you use jumbo frames on hypervisor hosts, enable them end to end in the desired network and consider both the physical and virtual network infrastructure impacted by the change.. PreparationMake sure that the following changes are implemented before making any changes: The physical L2 switches must be configured to support a 9000-byte Ethernet payload and any additional network overhead such as VLAN tags and VXLAN encapsulation. This means the switching infrastructure is often configured to support a maximum MTU of 9216 bytes, larger than the endpoint MTU.Any routed L3 interfaces between networks must also be configured to support the larger MTU. Consult the documentation of your network vendor for this configuration. Only after the entire physical network is ready to handle jumbo frames between all endpoints can we configure our Nutanix AHV hosts. Check the current network configurationTo check the current host network configuration, connect to the CVM you will be making changes on and run the following command: nutanix@cvm$ manage_ovs show_uplinks Sample output: nutanix@cvm$ manage_ovs show_uplinks One of the main items we want to check at this step is the bond_mode. If balance-slb or balance-tcp (LACP) modes are used, restore this configuration after making the MTU changes. Next, connect to the AHV host and verify the current MTU value of the physical interfaces. These typically show up as ethX, but may vary depending on the hardware in use. nutanix@cvm$ manage_ovs show_uplinks   Change MTU (AOS 5.19 and newer) Refer to the Prism Web Console Guide for your appropriate AOS version for instructions on how to update your Cluster Virtual Switch's MTU. Login Prism > Settings > Network Configuration > Virtual Switch > Edit Virtual Switch Set the MTU and Save the Update (MTU on Nutanix Host must be set to a value in 1500 ~ 9000 inclusive range) Change MTU (AOS 5.18 and older)Perform the following steps on every host in the cluster that needs these changes. Connect to the host using the IPMI console and log in to the AHV host as root.Log on to the Controller VM via SSH to the management address. nutanix@cvm$ manage_ovs show_uplinks Put the node and CVM in maintenance mode. Refer to Putting a node into maintenance mode chapter from AHV Administration Guide for more details.Run the following command to enable jumbo frames on the interfaces. nutanix@cvm$ manage_ovs show_uplinks where <bond mode> equals one of the following modes: active-backup, balance-slb or balance-tcp. The current bond_mode can be found in the output of the manage_ovs show_uplinks command. If --bond_mode parameter is not specified, then the bond mode will be reset to active-passive, which may affect network connectivity. 
Note: In the above command the syntax "--mtu 9000" or "--mtu=9000" are both valid.If LACP is used then a few more additional parameters should be added: nutanix@cvm$ manage_ovs show_uplinks Example: nutanix@cvm$ manage_ovs show_uplinks If a bridge other than br0 is used for UVMs, the configuration should be similar to the following example: nutanix@cvm$ manage_ovs show_uplinks The above example applies to configuration with and without backplane network segmentation, as it only changes MTU of physical NICs without touching interface br0, or interface br1-backplane if configured. Note: --require_link=false is only required if any physical NIC is down while configuring, which you can check with manage_ovs show_interfaces. WARNINGS: On clusters running any AOS release between 5.10 and 5.10.3.2 DO NOT run "update_uplinks" with a single interface specified as this may impact cluster availability. If you have a node with a single interface in bridge, upgrade AOS to 5.10.4 or newer, or engage Nutanix Support if you want to change the bond configuration.Updating uplinks using manage_ovs will delete and recreate the bond with the default configuration.The use of the allssh manage_ovs update_uplinks command may lead to a cluster outage. Only use it if a cluster is not in production and has no user VMs running. If balance-slb or balance-tcp (LACP) modes were used, then refer to the following KB articles to restore the configuration. Otherwise, if LACP is not enabled, go directly to step 7: balance-slb: KB 5135balance-tcp (LACP): KB 3263 Gracefully shut down the Controller VM on this host: nutanix@cvm$ manage_ovs show_uplinks Reboot the AHV host. nutanix@cvm$ manage_ovs show_uplinks Once the configuration is completed, make sure both host and CVM are accessible via network.If all tests are successfully completed, remove CVM and node from maintenance mode. Refer to Exiting a node from the maintenance mode chapter from AHV Administration Guide for more details.Create a VM network. nutanix@cvm$ manage_ovs show_uplinks Enable jumbo MTU inside UVM guest OS.Test the reachability of the jumbo frame between UVMs. Linux: nutanix@cvm$ manage_ovs show_uplinks Windows: nutanix@cvm$ manage_ovs show_uplinks Note: "-M" and "-f" are to set df-bit (don't fragment), and 8972 is ICMP data size, which equals 9000B (MTU size) - 20B (IP header) - 8B (ICMP header). There are another 14 bytes/18 bytes for Ethernet header (6B source MAC + 6B destination MAC + 2B ethertype + 4B VLAN tag (VLAN tag is optional)), which are not part of MTU size. Therefore, the maximum frame size allowed after the above configuration is 9014B/9018B (with VLAN tag). . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Open Source Support. ### Assistant: This article describes how Nutanix supports open source repositories.. This article describes how Nutanix supports open source repositories.. OverviewNutanix is supportive of open-source technologies and leverages a community-supported model. Nutanix welcomes pull requests and responds to issues on a best-effort basis. Nutanix maintains public GitHub repositories for initiatives that help customers integrate the Nutanix platform with other third-party products. Support for these initiatives is handled directly via the GitHub repository. Issues and enhancement requests can be submitted in the Issues tab of each repository. Search for and review existing open issues before submitting a new issue. RepositoriesThe Github repositories locations: https://github.com/nutanixhttps://github.com/nutanix-cloud-native Repositories include: Ansible. Nutanix ansible collection nutanix.ncp is the official Nutanix ansible collection to automate Nutanix Cloud Platform (NCP). Location: https://github.com/nutanix/nutanix.ansibleCluster-API provider for Nutanix AHV. The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management. Cluster API Provider for Nutanix Cloud Infrastructure is a concrete implementation of Cluster API for Nutanix Cloud Infrastructure. Location: https://github.com/nutanix-cloud-native/cluster-api-provider-nutanixKubeflow. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Location: https://www.kubeflow.org/docs/distributions/nutanix/install-kubeflowTerraform. An infrastructure-as-code tool for building, changing and versioning infrastructure safely and efficiently. Location: https://github.com/nutanix/terraform-provider-nutanixPacker Plugin for Nutanix AHV. The Nutanix Packer builder is able to create Nutanix AHV virtual machine images and export them. Location: https://github.com/nutanix-cloud-native/packer-plugin-nutanix The complete list of repositories: https://github.com/orgs/nutanix/repositorieshttps://github.com/orgs/nutanix-cloud-native/repositories Reporting an IssueTo report an issue: Navigate to the specific GitHub repository and click on Issues. For example: Click New issue. Select Bug report or Feature request, then click the corresponding Get started button to the right. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: check_cvm_ssh_security. ### Assistant: The NCC health check check_cvm_ssh_security checks whether the cluster uses Nutanix SSH security settings, such as lockdown and SSH security level.. The NCC health check check_cvm_ssh_security checks whether the cluster uses Nutanix SSH security settings, such as lockdown and SSH security level. This check was introduced in NCC 4.6.4. Running the NCC check You can run the check as part of the full NCC health check, as shown below. nutanix@cvm$ ncc health_checks run_all You can also run this check independently, as shown below. nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check runs on Controller VMs only. It does not run on Prism Central. This check is scheduled to run every 24 hours, by default. This check will raise an NCC alert when a cluster does not use the Nutanix SSH security settings, such as lockdown and SSH security level. Sample Pass output: For Status: PASS nutanix@cvm$ ncc health_checks run_all If the check result is PASS, password-based remote login is not enabled. Everything is good. For Status: INFO nutanix@cvm$ ncc health_checks run_all If the check result is INFO, password-based login is enabled. It is recommended not to use password-based SSH login access to the cluster. Output messaging  Check ID6219DescriptionCheck the strength of SSH security of the CVM.Causes of failureThe SSH security of the CVM is weak.ResolutionsChange the SSH security setting of the CVM.ImpactThe CVM's security might get compromised.ScheduleThis check is scheduled to run every 24 hours by default.Alert IDA6219Alert TitleThe cluster is using password based ssh access.Alert Smart TitleThe cluster is using password based ssh access for the cvm {cvm_ip} Alert MessageThe cluster is using password based ssh access for the cvm {cvm_ip} . Nutanix recommends using key-based SSH access to the CVM instead of password-based SSH access for better security. Refer to the Controlling Cluster Access section in the AOS Security Guide to set up key-based SSH access. Please perform the following actions to clear the INFO message in NCC: Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page.Disable remote login access by unchecking the Enable Remote Login with Password box. Refer to Controlling Cluster Access.Click the New Public Key button and add a Public Key for SSH access. Refer to Controlling Cluster Access. Follow KB 1895 on how to set up a password-less SSH key pair. Nutanix recommends avoiding SSH access to the CVM appliances as much as possible. Key-based SSH access is preferred instead of password-based access in the rare cases where one has to log in to the CVM. Adding an RSA/ECDSA public key will allow the admin and nutanix users to access CVMs and hosts using the SSH key pair. Note: Password-based SSH access will be deprecated in 2024.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
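A minimal example of setting up key-based access, assuming a Linux/macOS workstation (the key name and path are placeholders), could look like this:
workstation$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/nutanix_cvm_key
After adding the contents of ~/.ssh/nutanix_cvm_key.pub as a Public Key in the Cluster Lockdown dialog, the CVM can then be reached with:
workstation$ ssh -i ~/.ssh/nutanix_cvm_key nutanix@<CVM_IP>
Refer to KB 1895 and the AOS Security Guide for the complete procedure.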
### Human: Can you tell about Under what conditions do Curator scans run?. ### Assistant: Lists the scenarios in which Curator Scheduled and Triggered scans are run.. There are two main types of Curator scans. Scheduled Scans Curator Full ScanCurator Partial Scan Triggered Scans. Scheduled Scans Scheduled scans run periodically on the cluster for the general maintenance of the system. There are two types of scheduled scans. Partial Scans Partial scans are responsible for performing the following tasks: Information Life Cycle Management (ILM)Snapshot chain severingDelete data that was marked to_remove but not dedupedCorrect block awarenessDelete or replicate over-replicated or under-replicated data By default, partial scans run 1 hour (3600 seconds) after the last partial scan completes. Full Scans In addition to all the Partial scan functions, Full scans also perform on-disk deduplication. As only full scans perform on-disk deduplication, you might have to perform multiple scans to ensure that deduped data is deleted completely. Full scans run on their own timer and run 6 hours (21600 seconds) after the last Full scan completes. When this timer expires, if another scan is already running, Curator waits for that scan to complete before running the next Full scan. Triggered Scans Triggered scans respond to a situation in the cluster where Curator is urgently required. The most common triggered scans are as follows: ILM - If the hot tier gets full, an ILM scan is triggered to drain some data to the cold tier.Disk Failure - If a disk fails, a scan is triggered to replicate the data that has fewer replicas than required.Node Failure - This scan is similar to a disk failure scan, but runs to replicate data from all disks that had replicas on the failed node.User - Manually triggered by the SRE team in certain situations. This can be a Full or Partial scan. These scans are all partial scans (unless the user manually triggers a full scan). They reset the clock on the next periodic partial scan (or full scan if it is a user-triggered scan).. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
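For reference, recent Curator scan activity can usually be reviewed from any CVM with a command such as the following (availability and output format vary by AOS version):
nutanix@cvm$ curator_cli get_last_successful_scans
The output distinguishes Full and Partial scans and shows whether they were periodic (scheduled) or triggered by an event such as a disk failure.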
### Human: Can you tell about Alert - A1120 - CuratorJobRunningTooLong. ### Assistant: Investigating CuratorJobRunningTooLong issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert CuratorJobRunningTooLong for your Nutanix cluster. Alert Overview The Curator Job Running Too Long alert A1120 is generated if the active Curator jobs have been running for too long. An alert will be raised if a Curator scan runs for more than 21600 seconds (6 hours) and then after every 12600 seconds (3.5 hours).  Sample Alert For status: INFO Block Serial Number: 19SMXXXXXXXX  Output messaging Check ID130015DescriptionThe Curator job has been running for too long.Causes of failureVariousResolutionsContact Nutanix support if this issue persists.ImpactBackground cluster maintenance tasks might get affected in the future. The root cause should be addressed as soon as possible.Alert IDA1120Alert TitleCurator Job Running Too LongAlert MessageCurator job name with id execution_id has been running for a long time i.e. elapsed_time_secs seconds.. Troubleshooting Curator scan times depend on the cluster size (number of nodes), global metadata size, and metadata scan speed in the Map phase of the Map/Reduce job, among other things.For more information on local vs global metadata, see Scalable metadata chapter in the Nutanix bible. In order to check metadata size, run the following command:  nutanix@CVM:~$ nodetool -h 0 ring Example output: nutanix@CVM:~$ nodetool -h 0 ring One possible reason for high metadata usage can be using the Deduplication feature for workloads not suitable for it. For more information on Deduplication Best Practices, see the corresponding chapter in Prism Web Console Guide.Another reason for long-running Curator Jobs, or failing Curator scans, might be the RPO settings on clusters having dense nodes.Check both the frequency and retention policy for the snapshots, if data protection features are in use: 1 hour vs 6 hours vs 24 hours RPO,for 1 hour RPO, if supported, the 12/1/1/1 retention should not be exceeded.  In case if cluster contains dense nodes, run the NCC dense_node_configuration_checks and refer to KB 7196 for more details. Resolving the issue If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.Collect additional information and attach them to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691.  nutanix@CVM:~$ nodetool -h 0 ring Attaching Files to the Case When viewing the support case on the support portal, use the Reply option and upload the files from there.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Upload of images larger than 16 GB on AOS 5.20.2 or newer failing with "Image upload stream was interrupted" error. ### Assistant: Upload of images larger than 16 GB on AOS 5.20.2 or newer fails with an "Image upload stream was interrupted" error. Image uploads using Prism Element may fail with an "Image upload stream was interrupted" error if all of the following conditions apply: The cluster is running AOS 5.20.2 or newer.The image file size is larger than 16 GB. The following stack trace may be found in /home/nutanix/data/logs/prism_gateway.log: ERROR 2021-12-09 15:53:53,207Z http-nio-127.0.0.1-9081-exec-7 [] commands.acropolis.UploadImage.doExecute:405 java.net.SocketTimeoutException  . This issue is resolved in: AOS 5.20.X family (LTS): AOS 5.20.4AOS 6.5.X family (LTS): AOS 6.5.1 Please upgrade AOS to the versions specified above or newer. Workaround: Use the From URL method. Prepare a web server and share the image file via the HTTP protocol. Select "From URL" as the "Image Source" in "Image Configuration" and specify the URL to the disk file.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
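As an illustration of the workaround, any simple HTTP server can be used to host the image file; for example, with Python 3 installed on a helper machine reachable from the cluster:
user@webserver$ cd /path/to/images && python3 -m http.server 8080
The image can then be imported by specifying a URL such as http://<webserver_IP>:8080/<image_file_name> in the From URL field.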
### Human: Can you tell about Configuration Steps for Okta Integration with Prism Central. ### Assistant: Okta is a supported Identity Provider that can be integrated with Prism Central for user authentication. This KB documents the steps required to configure Okta integration from both the PC and Okta Administration side.. Nutanix supports several different Identity Providers for integration with Prism Central (PC) including: Active Directory Federated Services (ADFS)Azure ADFSPingOneShibbolethKeycloakOkta This document details specifically what needs to be configured on both the Prism Central and Okta Administration side in order to successfully integrate both for user authentication.. From the Okta Administration side: Create a new SAML 2.0 application Configure the following SAML settings: Single Sign On URL: https://<PC_IP>:9440/api/iam/authn/callback Audience URI (SP Entity ID): https://<PC_IP>:9440/api/iam/authn NOTE: Whether to use the FQDN or PC IP address for above configuration, can be decided by checking the metadata config file from following Prism Page Check the following entityID and Location fields in metadata xml from PC, highlighted in following screenshot. Audience URI should match with entityID URI and Single Sign On URL should match with Location URL in okta settings. Attribute Statements: Note: If you intend to assign Groups to the PC Application, then configure the Group Attribute Statement as follows: Assign Users and/or Groups to the PC Application in Okta View the Identity Provider metadata from the PC SAML application and save it to your local machine as an .xml file. From the Prism Central side: The Microservices Infrastructure (CMSP) needs to be enabled from the Prism Central Settings -> Manage Prism Central -> Prism Central on Microservices Infrastructure More detailed information on Microservices enablement can be found on the Nutanix Support Portal at the following link: https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-cmsp-enable-pc-t.html. The Okta Identity Provider can be added from Prism Central Settings > Authentication > Identity Providers Note: If you intend to perform Group Role Mapping in PC, then configure the Group Attribute Name as follows: Assign individual user role mappings from Prism Central Settings > Role Mapping or group role mappings from Administration > Roles Note: The format of the username must match Okta's expected "Application username format" The user can now authenticate to Prism Central with Okta credentials. Using Okta Native SAML group:We can also use okta SAML group and assign the required users to this group to enable okta authentication for them.For this we need to do following additional configuration both on Okta and Nutanix side.Configuration from Okta side:1. In the Okta Admin dashboard, go to Directory → GroupsClick on "Add Group" to create a new group and name it.2. Assign people to that groupAfter assigning required people click on "Done"3. Then go the "PC application" on okta side and assign that application to this group.4. Under the Sign On Section of the app, following values need to be given for group attributes.And save the changes for the application settings.Configuration from PC side:1. Modify IDP configuration in PC to specify the 'Group' attribute2. Navigate on PC to Administration → RolesYou can create a custom role or just use one of the built-in ones.3. 
Then go to "manage assignment" option of that role and create select Okta user_group(SAML) from drop down option.NOTE: Make sure you give exact name for the group as given on okta side, the SAML user/group names does not get auto-populated as PC does not have Okta credentials to query and search user/group from Okta.4. Then you should see the okta group listed under "user_group" in this role information.After completing all the above configuration any user assigned to the group on Okta side should be able to login to PC using Okta.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Register for my.nutanix.com and Support Portal access. ### Assistant: Registering for a my.nutanix.com Portal account and how to gain access to the Support Portal. The following article describes how to register for the my.nutanix.com Portal and access the Support Portal. Perform the following procedure to register for the my.nutanix.com Portal. Once successful, you can then perform the additional steps to access the Support Portal. In the address bar of a web browser, type https://my.nutanix.com and press Enter. Click Sign up now. Type the values of First Name, Last Name, Email, and Password in the fields and click Submit. Note: Follow the specified password complexity requirements when you are creating the password. A confirmation page will be displayed and you will receive an email from mynutanix@nutanix.com after you have successfully completed the registration process. The following is an example of the email: Hi First Name, Click the link provided in the email to confirm the registration process. A message briefly appears in the browser confirming your email address and the my.nutanix.com Portal opens in the browser. Type the Email and Password that you used to register and click the arrow in the Password field. The my.nutanix.com Portal Welcome Back page appears. For access to the Support Portal, continue with the following steps: Select Support Portal and click the arrow in the Support Portal dialog box. Note: You must first activate the account before you can browse the Support Portal and create support cases. In the Activation Required dialog box that appears, enter the Business Email and a valid Nutanix Serial Number or Service Tag and click Activate. The Activation Complete! screen appears after you have successfully completed the activation. Click the box next to I have read and agreed to the above terms and conditions to accept the terms. The Support Portal page will appear. If you are still experiencing access issues after following the steps above, contact portal-accounts@nutanix.com. Note: For access to the Partner Portal, there are two ways. If your account does not have any assets or SW registration ID: go to https://www.nutanix.com/partner-signup, fill out the Request Portal Access form, and submit it. If your account has assets or a SW registration ID: click Activate on the Partner Portal tile, fill out the Request Portal Access form, and submit it. Refer to the Partner team for any assistance with Partner Portal access. For help, send an email to partnerhelp@nutanix.com. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: ilo_bootdrive_raid_check. ### Assistant: NCC 3.10.0. The NCC health check ilo_bootdrive_raid_check checks the iLO boot drive RAID status. The NCC health check ilo_bootdrive_raid_check checks the iLO boot drive RAID status. This check does the following: Check the logical drive status. If the status is not okay, check the available number of physical drives. If only one physical drive is available, generate a WARN alert with the working drive's information. The non-working drive can then be easily found and replaced. If both physical drives are available, check their health. If the health is not okay, generate a WARN alert with the failed drive's information. The failed drive can then be replaced. If the drives are healthy, this check returns PASS. This check runs only on the following: iLO-based nodes, AOS 5.11.1.5, 5.11.2, 5.16 or higher, and nodes with a RAID boot drive. Running the NCC check: It can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every hour by default. This check will generate an alert after 1 failure. Sample output For status: PASS nutanix@cvm$ ncc health_checks run_all For status: WARN nutanix@cvm$ ncc health_checks run_all nutanix@cvm$ ncc health_checks run_all For status: ERR nutanix@cvm$ ncc health_checks run_all nutanix@cvm$ ncc health_checks run_all Output messaging Check ID106066DescriptionChecks the status of HPE Integrated Lights-Out(iLO) boot drive.Causes of failureHypervisor RAID Boot Device is not reachable or is faultyResolutionsCheck the status of the RAID device. Refer KB-9064 to resolve this issue. If the problem persists, please engage Nutanix Support.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert Smart TitleHypervisor RAID Boot Device failed on ip_address serial number SerialNumberAlert TitleHypervisor boot drive RAID status check.Alert MessageWorking Boot Drive Info: LogicalDriveNumber: ldrive, SerialNumber: serialnumber in Bay: drive_bay, Model: model on Host IP x.x.x.x Failed Boot Drive Info: LogicalDriveNumber: ldrive, SerialNumber: serialnumber in Bay: drive_bay, Model: model on Host IP x.x.x.x Note: This hardware-related check executes on the below hardware: HPE. Refer to the Hypervisor Boot Drive Replacement procedure (given below) and follow the instructions mentioned there. DX4200 Gen10 24LFF - Hypervisor Boot Drive Replacement (Failed Boot Device) for DX4200 Gen10 24LFF with Mirrored RAID Hypervisor Boot Drives; DL380 Gen10 - Hypervisor Boot Drive Replacement (Failed Boot Device) for DL380 Gen10 with Mirrored RAID Hypervisor Boot Drives; DX NS204i - Hypervisor Boot Device Replacement (Failed Boot Device) for HPE DX NS204i Boot Device. Replacement drives can be obtained from HPE. The HPE NS204i-p NVMe OS Boot Device may intermittently report a Warning because the fan speed is low.
In this case, increasing the fan speed to cool down the NVMe device resolves the WARN. See also: HPE ProLiant DL560 Gen10, HPE ProLiant DL580 Gen10, HPE ProLiant DL360 Gen10, HPE ProLiant DL380 Gen10, and HPE ProLiant DX380 Gen10 Plus - Intermittent Server Shutdown Due to Thermal Issues with HPE NS204i-p NVMe OS Boot Device. The status ERR for hpe_bootdrive_raid_check is a false positive and is resolved after upgrading NCC to 4.2.0 or later. When the status ERR for ilo_bootdrive_raid_check with the message "ILO boot drive RAID info Not avaliable" occurs, reset the iLO on the node and run the check again (an example reset command follows this article). ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
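If the check keeps returning ERR with "ILO boot drive RAID info Not avaliable", the iLO reset mentioned above can be attempted from the hypervisor. A sketch for an AHV host, assuming ipmitool is available (on ESXi, reset the iLO from its web interface or with HPE tooling instead): [root@ahv]# ipmitool mc reset cold    # cold-resets the management controller (iLO); wait a few minutes for it to come back online Afterwards, re-run the NCC check to confirm the status.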
### Human: Can you tell about "Cluster Memory Usage" widget may report inconsistent values on AHV cluster. ### Assistant: On the AHV cluster running AOS 5.15.1 or newer, "Cluster Memory Usage" widget may report inconsistent values.. On the AHV cluster running AOS version 5.15.1 or newer, "Cluster Memory Usage" widget may report inconsistent values. Without an actual change in memory usage on a cluster, you may see the "Cluster Memory Usage" widget showing inconsistent values: . This issue is resolved in: AOS 5.15.X family (LTS): AOS 5.15.4AOS 5.19.X family (STS): AOS 5.19.1 Upgrade AOS to the version specified above or newer. If upgrading to the previously mentioned versions doesn't resolve your issue, consider opening a case with Nutanix  support . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A160180 - PC is not AZ paired. ### Assistant: Troubleshooting and resolving the alert "Remote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pair". This Nutanix article provides the information required for troubleshooting the alert PC is not AZ paired for your Nutanix Files cluster. Alert overview The PC-is-not-AZ-paired alert is generated when a newly registered file server participates in an existing data protection policy whose remote file server is registered to a Prism Central that is not in an Availability Zone pair with the local Prism Central. Sample alert Block Serial Number: 23SMXXXXXXXX Output messaging Check ID160180DescriptionNewly Registered File Server is a Participant in an Existing Data Protection Policy with a Remote File Server Registered to a Prism Central, which isn't in an Availability Zone Pair with the Local Prism CentralCause of failureThe newly registered file server is part of an existing data protection policy with a remote file server registered to Prism Central, which isn't in an Availability Zone Pair with the Local Prism CentralResolutionsAdd an Availability Zone pair between local Prism Central and Prism Central of the remote file server. Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server. If you have any issues, please refer to KB article 14865.ImpactThe remote file server is not visible on the local Prism Central. This means that the data protection policy cannot be managed appropriately. Also, DR workflow cannot be accomplished.Alert IDA160180 Alert TitleRemote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pairAlert MessageRemote file server {remote_fs_name} for data protection policy {policy_uuid} with file server {fs_name} is registered to Prism Central {remote_pc_name} ({remote_pc_uuid}), which is not in an Availability Zone pair with the local Prism Central. Resolving the issue: This alert ensures that Availability Zones are maintained through File Server migrations. If a File Server, or a File Server Remote DR site, moves to a new Prism Central instance, and the new Prism Central is not included in the Availability Zones, this alert will trigger. Add an Availability Zone pair between the local Prism Central and the Prism Central of the remote file server. Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server. For information about Availability Zones and their management, refer to the Nutanix Disaster Recovery Guide. If you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support. Collect additional information and attach it to the support case. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871. Run a complete NCC health_check on the cluster. See KB 2871. Collect Files related logs. For more information on Logbay, see KB-3094.
CVM logs stored in ~/data/logs/minerva_cvm*. NVM logs stored within the NVM at ~/data/logs/minerva*. To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader, as issues have been seen otherwise. To get the Minerva leader IP on the AOS cluster, run: nutanix@cvm:~$ afs info.get_leader Once you are on the Minerva leader CVM, run: nutanix@cvm:~$ afs info.get_leader For example: nutanix@cvm:~$ afs info.get_leader Attaching files to the case: To attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A160171 - FileServerTieredFileRestoreFailed. ### Assistant: Investigating FileServerTieredFileRestoreFailed issues on a Nutanix cluster. This Nutanix article provides the information required for troubleshooting the alert FileServerTieredFileRestoreFailed for your Nutanix cluster. After deleting a file server or a share, there could be tiered files referring to objects in the object store. These objects could become stale if there are no references to them. Nutanix recommends expiring these objects after the retention period configured for the object store profile. Alert Overview The FileServerTieredFileRestoreFailed alert is generated on share delete and file server delete operations. Sample Alert Block Serial Number: 16SMXXXXXXXX Output messaging Check ID160171DescriptionTiered File Restore Failed.Causes of failureTiered file restore failed unexpectedly.ResolutionsRefer to KB article 14507. Contact Nutanix support if issue still persists or assistance needed.ImpactTiered file is in inconsistent state.Alert IDA160171Alert TitleTiered File Restore FailedAlert Message{message}. Troubleshooting The message from the alert will provide one of two reasons for failing to restore the file. Scenario 1: "No object store configuration was found for profile." This means the profile was somehow deleted or is not accessible. Scenario 2: The object was not found because it was deleted manually from the object store or was removed from the object store as part of garbage cleaning. Resolving the issue Scenario 1: Contact Nutanix Support for assistance. See "Collecting Additional Information" below. Scenario 2: Attempt to undo the share-restore and try again with a newer snapshot. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871. Run a complete NCC health_check on the cluster. See KB 2871. Collect Files related logs. For more information on Logbay, see KB-3094. CVM logs stored in ~/data/logs/minerva_cvm*. NVM logs stored within the NVM at ~/data/logs/minerva*. To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run: nutanix@CVM$ afs info.get_leader Once you are on the Minerva leader CVM, run: nutanix@CVM$ afs info.get_leader For example: nutanix@CVM$ afs info.get_leader Attaching files to the case To attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A160170 - FileServerCAShareConfigCheck. ### Assistant: Investigating FileServerCAShareConfigCheck issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert FileServerCAShareConfigCheck for your Nutanix Files cluster. Alert overview The FileServerCAShareConfigCheck alert is generated if Continuous Availability (CA) is enabled for SMB standard share or nested share. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging Check ID160170DescriptionCheck if CA is configured on SMB standard or nested share on a File Server.Cause of failureContinous Availability is enabled for SMB standard share or nested shareResolutionsDisable Continous Availability feature on SMB standard or nested sharesImpactEnabling Continous Availability on an SMB standard / nested share could result in performance issues.Alert IDA160170Alert TitleFile Server CA Share Config CheckAlert MessageMisconfig of Continous Availability detected on a File Server SMB standard share / nested share.. Use of Continuous Availability (CA) should be limited to distributed shares. If it is enabled on Standard/General shares, performance might be impacted. TroubleshootingCheck to see if any shares have CA enabled and if they are Standard (General) shares. nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name' Resolving the issue If you have any Standard shares with CA enabled, disable CA using: nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name' If there are any concerns, or additional assistance is needed, contact Nutanix Support. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094. CVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run: nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name' Once you are on the Minerva leader CVM, run: nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name' For example: nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name' Attaching files to the case To attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting assistanceIf you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact. Closing the caseIf this KB resolves your issue and you want to close the case, click the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: check_cvm_panics. ### Assistant: The NCC health check check_cvm_panics checks the CVM serial console log for panics. The NCC health check check_cvm_panics checks the CVM (Controller VM) serial console log for panics and warns the user if the CVM has rebooted due to a panic recently. If a CVM panics, the panic output from the kernel is captured in the serial console log on the host in the NutanixCvmConsole event log. Running the NCC check This check will run as part of the full NCC health check: nutanix@cvm$ ncc health_checks run_all It can also be run as a standalone check: nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is not scheduled to run on an interval. This check does not generate an alert. Sample output nutanix@cvm$ ncc health_checks run_all Output messaging Check ID106469DescriptionCheck if any CVMs have panicked.Causes of failureAt least one CVM panicked recently.ResolutionsFollow the instructions in KB 4379.ImpactStorage may have been unavailable while the CVM was restarting. Determine the current cluster state: nutanix@CVM:~$ cs | grep -v UP SSH to the cluster and confirm the uptime: nutanix@CVM:~$ cs | grep -v UP SSH to the CVM which generated the alert and run the command last reboot to find the precise time of the panic. This alert references a brief event rather than an ongoing state. After this alert is triggered, the cluster usually returns to a healthy state. If the check fails and the cluster is NOT in a healthy state (e.g., a CVM is down or services are down), contact Nutanix Support for additional diagnosis. An example of checking uptime across all CVMs follows this article. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
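To quickly confirm uptime and recent reboot times across every CVM at once, the standard allssh helper can be used from any CVM. A simple sketch: nutanix@cvm$ allssh uptime                      # shows how long each CVM has been up nutanix@cvm$ allssh "last reboot | head -3"     # shows the most recent reboot timestamps per CVM A CVM whose uptime is much lower than its peers, or whose last reboot lines up with the alert time, is the one to investigate.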
### Human: Can you tell about Alert - A160176 - ConfigChangeNotifyFailed. ### Assistant: Investigating ConfigChangeNotifyFailed issues on a Nutanix cluster. This Nutanix article provides the information required for troubleshooting the alert ConfigChangeNotifyFailed for your Nutanix Files cluster. Alert Overview The ConfigChangeNotifyFailed alert is generated when there is a discrepancy between the File Server and Prism Central configuration. Sample Alert Block Serial Number: 23SMXXXXXXXX Output messaging Check ID160176DescriptionFailed To Notify Config Change To File ServerCause of failureFile Server is down or not reachable while config change happens on PC/PEResolutionsCheck if File server is up and all the services are running. If File server is up, check network connectivity between File Server and PC/PE controller VM. If the failure persists, then refer to KB article 14745 to push the config change using afs cli.ImpactNAAlert IDA160176Alert TitleFailed To Notify Config Change To File ServerAlert MessageFailed to generate PCConfigChangeNotifyFailure alert due to 'reason'. Troubleshooting Check if the File Server is up. nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" Check if all services are up and running on the FSVMs. nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" Check the connectivity between Prism Central and Prism Element, including the various required ports (a simple connectivity check example follows this article). This information can be found on the Nutanix Portal. Run NCC on Prism Element to confirm that no "Prism Central is unreachable" alerts are raised. Resolving the Issue If Prism Central is reachable, you can push the Files configuration to Prism Central to resolve this issue. For syncing config changes from Prism Element to Prism Central: nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" For syncing changes from Prism Element to Files: nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871. Run a complete NCC health_check on the cluster. See KB 2871. Collect Files related logs. For more information on Logbay, see KB-3094.
CVM logs stored in ~/data/logs/minerva_cvm*. NVM logs stored within the NVM at ~/data/logs/minerva*. To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader, as issues have been seen otherwise. To get the Minerva leader IP on the AOS cluster, run: nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" Once you are on the Minerva leader CVM, run: nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" For example: nutanix@NTNX-A-CVM:~$ ncli fs ls | grep "File server status" To attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting Assistance: If you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact. Closing the Case: If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case, and Nutanix Support will close the case. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
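To verify the basic connectivity between the cluster and Prism Central mentioned in the troubleshooting steps above, a simple reachability sketch from a CVM; x.x.x.x is a placeholder for the Prism Central IP: nutanix@cvm$ ping -c 3 x.x.x.x nutanix@cvm$ curl -k -m 10 -s -o /dev/null -w "%{http_code}\n" https://x.x.x.x:9440/    # any HTTP status code back means port 9440 is reachable This only checks one port; refer to the port reference on the Nutanix Portal for the complete list of required ports.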
### Human: Can you tell about cpupower frequency-info loads acpi-cpufreq driver instead of intel_pstate. ### Assistant: This article describes an issue where the command cpupower frequency-info loads the acpi-cpufreq driver instead of intel_pstate. This issue affects the following component combination: Hardware: Intel Sapphire Rapids-based (SPR) nodes, including NX-G9; AHV: 5.10.170-2.el7.nutanix.20220304.420.x86_64; AOS: 6.5.3.1. Symptoms: In BIOS, the recommended setting for Hardware P-States is Disable by default on G9 SPR nodes. In this case, cpupower frequency-info uses the acpi-cpufreq driver instead of the expected intel_pstate. [root@host]# cpupower frequency-info The turbostat command will continue to work when explicitly run. This issue is caused by the intel_pstate driver being unable to retrieve CPU frequency values on SPR nodes; it requires support for the SPR CPU set in AHV. cpupower frequency-info using the acpi-cpufreq driver is a known behavior on SPR nodes. Nutanix Engineering is aware of this behavior and is working on a fix to use the intel_pstate driver in a future release. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
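To confirm which CPU frequency scaling driver the AHV host is actually using, the kernel's sysfs interface can be queried directly. A sketch, assuming a root shell on the AHV host: [root@ahv]# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver    # prints acpi-cpufreq on affected SPR nodes [root@ahv]# cpupower frequency-info | grep -i driver [root@ahv]# turbostat sleep 1    # still reports per-core frequencies even with acpi-cpufreq loaded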
### Human: Can you tell about Objects - Manual steps to configure emails for Objects Alerts. ### Assistant: Below are the steps to configure email notification for Objects Alerts. Currently, Prism Central does not show Objects Alerts and one can use the following workaround until Objects Alerts are integrated with Prism Central.. Customers using Nutanix Objects no longer receive notifications for Objects Alerts on Prism Central. They have to launch the Objects page from Prism Central and navigate to the Alerts page to view and take appropriate action based on alert severity. Below are steps to enable email notifications for Objects Alerts until they are integrated with Prism Central in future releases.Note: review the steps thoroughly before applying the workaround. If you have questions or concerns, contact Nutanix Support for assistance.. To set up email notifications, update the Alertmanager configuration presented to the Alertmanager Pod(s) as a Kubernetes secret on MSP cluster. Refer to KB-8170 for steps to access Objects cluster.Overall steps to configure email notifications: Update the Alertmanager configuration with an appropriate template.Base64 encode the updated Alertmanager configuration.Update the YAML template containing the Alertmanager secret using the Base64 encoded value of the new configuration.Apply the secret on the MSP cluster. Update the Alertmanager configuration with an appropriate template Refer to the below YAML file and change the highlighted parameters as desired. The highlighted parameters will vary based on your environment. Save it as alertmanager.yaml. global: Base64 encode the alertmanager.yaml global: Generate secret from the YAML. Reference the below YAML file and change the highlighted parameter with the base64 encoded string. Save it as alertmanager-secret.yaml. global: Update the secret. Copy over the alertmanager-secret.yaml to the MSP master and apply the configuration using the kubectl command: global: Verify if template is applied successfully to alertmanager container: global: global: After applying the secret successfully, you should start receiving emails for Objects Alerts based on template configuration.          Fig 1: Email notification for Objects Alert  Check if the emails can be sent:  Review alertmanager logs to see if there are any issues sending emails: global: If there are issues similar to the above, test sending email to the SMTP server manually using telnet or netcat as described in KB 1638. You can compare sending emails from PC or PE cluster if they are registered with the same SMTP server. Template examplesBelow are a few examples of templates that can be used to configure email notifications based on your requirements. Refer Prometheus documentation for more details. Receive an email for custom Alerts:  global: Receive customized email with formatting: global: Receive all email for all except few Alerts: global: Receive email for all Alerts: global: . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
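A generic sketch of the secret update workflow described above, not the exact template from this article. The file names, namespace, and pod name are placeholders that depend on the MSP cluster: master$ base64 -w 0 alertmanager.yaml                      # produces the single-line encoded string to paste into the secret master$ kubectl apply -f alertmanager-secret.yaml          # applies the updated secret master$ kubectl get pods --all-namespaces | grep -i alertmanager master$ kubectl logs <alertmanager-pod> -n <namespace> -c alertmanager | tail -20    # check for SMTP/notification errors If the Alertmanager logs show SMTP failures, test the SMTP server manually as described in KB 1638.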
### Human: Can you tell about Files upgrades and FSM requirements. ### Assistant: Files requires upgrades of FSM to minimum versions.. Nutanix Files requires a component in AOS known as Files Server Module (FSM). The version of FSM required varies depending on the desired version of Files. Normally, this requirement is enforced during a Nutanix Files upgrade via Life Cycle Manager (LCM) or 1-click. This requirement can be circumvented if Nutanix Files 4.0.0.2 or higher is downloaded from the Nutanix Portal, and then uploaded manually to the Controller VMs (CVMs) for upgrade.. Nutanix Files 4.0.0.2 requires a minimum FSM version 2.1.1. Upgrade FSM to version 2.1.1 and above prior to upgrading. Nutanix encourages the use of LCM for Files upgrades, which will enforce the proper FSM requirement. To determine the version of FSM currently running, check LCM in Prism Element. Should FSM show a version of 2.1.0 or earlier, then LCM is required to update FSM.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about Alert - A160185 - File Server Found Stale Reverse Data Protection Policy During Failover. ### Assistant: Alert for handling Stale Replication policies. This Nutanix article provides the information required for troubleshooting the alert Stale File Server DB entries found for your Nutanix Files cluster.Alert Overview If entries in the File Server database are found to be stale, it raises an alert to have the entries verified and cleaned.  Sample Alert Block Serial Number: 23SMXXXXXXXX Output messaging Check ID160185DescriptionChecks if stale DB File server entry is present or notCauses of failureFile Server DB entries are found to be stale.ResolutionsFor additional detail, please refer to KB-15033.ImpactFile Server share access could be potentially impactedAlert IDA160185Alert TitleStale File Server DB entries foundAlert Smart TitleFile Server {file_server_name} has stale DB entriesAlert Message{alert_msg}. Troubleshooting    Should this alert trigger, a stale File Server Database entry has been detected.  There is no additional troubleshooting to be done.  Please proceed with the resolution.   Resolving the Issue Perform failback:  From the target PC (Prism Central to which the target File Server of the protection policy is registered), click on Files ->Data Protection -> Protected File Servers -> FailbackCheck whether or not the stale reverse replication policy (whose UUID is given in the alert) is still listed in the Prism Central UI (Files -> Data Protection -> Policies)If it is still present, delete itPerform failover again Should the repeated failover have any issues, or if you have any additional concerns, please contact Nutanix Support Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094. CVM logs stored in ~/data/logs/minerva_cvm* NVM logs stored within the NVM at ~/data/logs/minerva* To collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run nutanix@cvm:~$ afs info.get_leader Once you are on the Minerva leader CVM, run: nutanix@cvm:~$ afs info.get_leader For example: nutanix@cvm:~$ afs info.get_leader Attaching Files to the Case To attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
### Human: Can you tell about NCC Health Check: notifications_dropped_check. ### Assistant: The NCC Health Check notifications_dropped_check identifies if any alert notifications from any system components have been dropped by Cluster Health and/or Alert Manager due to various possible downstream service or configuration issues. The check aims to proactively expose when alerts can not be delivered successfully and thus a genuine issue on the cluster that requires attention may otherwise go unnoticed.. The NCC Health Check notifications_dropped_check identifies if any alert notifications from any system components have been dropped by Cluster Health and/or Alert Manager due to various possible downstream service or configuration issues within the last 24 hours. The check aims to proactively expose when alerts can not be delivered successfully and thus, a genuine problem on the cluster that requires attention may otherwise go unnoticed.When the queue of alert notifications waiting to be delivered becomes >= 1000, it may suggest that services such as cluster_health and alert_manager cannot process and deliver the alerts, preventing emails, SNMP traps, and Rsyslog messages from being sent. Notifications that arrive when the queue depth is already at 1000 are dropped, and an email notification is sent to the cluster's configured alert email address(es) to inform that alerts and notifications were dropped. Two reasons why notifications might be dropping :  Alert Manager service is down.The RPC queue is full.   Running the NCC CheckRun this check as part of the complete NCC Health Checks. nutanix@CVM:~$ ncc health_checks run_all Or run this check separately: nutanix@CVM:~$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check does not generate an alert. Sample outputFor Status: PASS nutanix@CVM:~$ ncc health_checks run_all For Status: FAIL nutanix@CVM:~$ ncc health_checks run_all For Status: INFO nutanix@CVM:~$ ncc health_checks run_all Or: nutanix@CVM:~$ ncc health_checks run_all Example email: nutanix@CVM:~$ ncc health_checks run_all Output messaging Check ID111052DescriptionSome alert notifications were droppedCauses of failureOne or more cluster services are not healthy.ResolutionsContact Nutanix Support.ScheduleThis check is scheduled to run every day by default. Note: This NCC check has multiple enhancements in NCC v4.1.0 - Please upgrade your cluster to NCC v4.1.0 or above and re-run the check.. If this alert is seen and/or emails are received, ensure the NCC is up to date and running the latest version (at least NCC-3.9.2.1 or higher). Due to an upstream change in how alert retries are handled and logged, this health check is over-sensitive in NCC 3.9.0/3.9.1 and may be triggered unnecessarily. After the upgrade, re-run the check and confirm the PASS result. If you continue to see a non-PASS result, you may have a genuine issue, or the now-stale trigger in the JSON state file needs to be cleaned up (per step #5 below.) Use the following steps to confirm and resolve.Check the Alert page in the Prism web console to verify what alert events are accounted for. If you see there are no events/notifications, but the check still fails, check : nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json If we look at the .json file, we can see that timestamps for dropped notifications increment even though there are no alerts in Prism. 
This is because we keep retrying to send stuck notifications: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json Investigate why notifications are being dropped. One of the reasons could be that you are using the "admin" account to access some APIs. You should create a separate user with admin privileges instead. Refer to 5.11 Security Guide - Creating A User Account. Confirm all CVMs and Cluster Services are up and stable: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json Validate that the configured SMTP server is reachable and can relay the alert notification emails from the cluster's configured source email address to the destination email address(es). Validate the configured SMTP server security settings. Check older health_server.logs also to make sure the error has not rolled off: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json The notifications_dropped_check health check might still complain about the same issue based on data on a CVM which was previously but not currently the Cluster Health service leader, as it is looking for any indication of issues by way of a non-zero value in the JSON state file, despite current Cluster Health leadership. Drops might increment only on the leader, so we have to clear the stats on the old leader where the problem may have previously occurred. SCP the file from the CVM where the values are 0 and replace it. To replace this for all CVMs, you can use this script (a sketch of one possible approach also follows this article): nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json Example: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json Run the NCC health check again and ensure the check results in a PASS status. If the above solution does not help you, or none of the CVMs' notifications_dropped_state.json files has a value of 0 to scp, collect the following information and consider engaging Nutanix Support. An output of the following NCC health check: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json List of alerts: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json An output of the following command from the PE cluster or the PC VM depending on where the alert was received: nutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
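One possible way to perform the copy described in step 5, once a CVM with an all-zero notifications_dropped_state.json has been identified. This is only a sketch (not the exact script referenced above) and should be run from that CVM: nutanix@cvm$ for ip in $(svmips); do scp /home/nutanix/data/serviceability/notifications_dropped_state.json $ip:/home/nutanix/data/serviceability/; done    # svmips expands to the IPs of all CVMs, so the zeroed file is copied to every CVM, including the former Cluster Health leader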
### Human: Can you tell about NCC Health Check: host_nic_error_check. ### Assistant: The NCC health check host_nic_error_check checks whether the NICs on the host are experiencing errors. The NCC health check host_nic_error_check checks whether the NICs on the host are experiencing errors. It checks if each NIC has fewer than 1% errors during a span of execution of 1 hour (3600 seconds). Running the NCC Check You can run this check as part of the complete NCC health checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check individually: nutanix@cvm$ ncc health_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. Sample Output For status: PASS nutanix@cvm$ ncc health_checks run_all For status: FAIL nutanix@cvm$ ncc health_checks run_all nutanix@cvm$ ncc health_checks run_all For status: WARN (changed in NCC-4.6.2) nutanix@cvm$ ncc health_checks run_all Output messaging Check ID6011DescriptionChecks whether the NICs on the hypervisor are experiencing errors.Causes of failureNIC may be failing.ResolutionsCheck NIC health; if required, replace the NIC as soon as possible. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert TitleNIC packet receive an error rate highAlert Message nic_errors packet receive errors on NIC nic_name out of nic_packets in the last {period} seconds ScheduleThis check is scheduled to run every hour by default.Number of failures to alertThis check will generate an alert after 1 failure. Note: Prior to NCC-4.6.2, check ID 6011 does not generate an alert when the check fails. Check ID6013DescriptionChecks whether the NICs on the hypervisor are experiencing rx_crc errors.Causes of failureNIC or its peripherals may be failing.ResolutionsReplace the NIC, cable, SFP+ adapter, or switch port. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert TitleNIC RX_CRC Error Rate HighAlert MessageNIC nic_name in host host_ip has encountered many error_type.ScheduleThis check is scheduled to run every hour by default.Number of failures to alertThis check will generate an alert after 1 failure. Check ID6014DescriptionChecks whether the NICs on the hypervisor are experiencing rx_missed errors.Causes of failureRX buffer size or transient condition.ResolutionsRX buffer size might be too small to absorb current inbound traffic. This can be a transient condition. If the alert reappears frequently, contact Nutanix support.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert TitleNIC RX_Missed Error Rate HighAlert MessageNIC nic_name in host host_ip has encountered many error_type.ScheduleThis check is scheduled to run every hour by default.Number of failures to alertThis check will generate an alert after 1 failure. This hardware-related check executes on the below hardware: Nutanix NX, Dell XC, HPE, Lenovo HX, Nutanix SX, Fujitsu XF, Lenovo SR, PowerEdge, Intel, IBM, Hyve, Voyager TDC. Ensure that you are running the latest version of NCC when investigating NIC errors, whether you have found them through NCC or Health alerts in Prism.
For information on upgrading NCC, refer to KB-2871. Note: ixgben version 1.7.1 is the minimum version required. If required, the drivers can be upgraded to ixgben version 1.7.17, the latest qualified driver. What is a CRC error? The sending host computes a cyclic redundancy check (CRC) of the entire Ethernet frame and puts this value in the Ethernet frame's FCS (frame check sequence) section after the user payload. The intermediate switch and the destination host check this computed value against the value they compute to determine if the frame has been corrupted in transit. Possible causes of NIC errors: Link flap (interface continually goes up and down), cable disconnect/connect, faulty external switch port, misconfiguration of the external switch port, faulty NIC port, faulty cable, faulty SFP+ module. Faulty cables and/or SFP+ modules are the most common cause of these errors. If the problem occurs regularly on a particular interface, you should perform testing in a controlled fashion during a change window, swapping cables, modules, and switch ports as needed to isolate the faulty component. How do I determine if the Nutanix node is experiencing NIC errors? Log in to the hypervisor and use the ethtool command to retrieve the RX counts of the NIC interface reported in the alert. Continue checking the counters to determine if they are increasing. ESXi [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Or: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Depending on the driver version, the output of "ethtool -S" may differ. Some old versions do not have details like the above. In this case, you can try the command below (ESXi only): [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" We want to avoid any CRC errors, which could greatly impact usability. Collect all errors on all node interfaces and from the upstream switch. For each node, run the following command: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" As for the upstream switch, the exact command depends on the vendor. For most, the command "show interface <interface>" should work from a privileged console. [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Look for the following output: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Note: For Mellanox, use: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" AHV [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Depending on the driver version, the output of "ethtool -S" may differ. Some old versions do not have details like the above. In this case, you can try this command: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" For enhanced usage of ethtool, you can incorporate additional counters to effectively detect network issues: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" By running this command, you can retrieve information about various counters related to network errors, faults, checksums, and frame integrity. This can assist you in identifying potential network issues that require further investigation or troubleshooting. Hyper-V [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" The two big error counters are rx_crc_errors and rx_over_errors (and, in conjunction, rx_missed_errors/rx_fifo_errors).
rx_crc_errors are caused either by faults in layer 1 (in the past, we have seen failed twinax cables and incorrect types of fibre being used), or issues with jumbo frames on the network. In an environment with 10 Gig switches that use cut-through forwarding (Cisco Nexus, Arista, and Cisco devices using IOS default to Store and Forward switching), any packets that come into the switch will get forwarded out the destination interface once the switch has read the destination MAC address. If that packet has an MTU over what is configured on the interface, it will cut off the packet at the designated MTU, causing the server to receive a malformed packet, which will throw a CRC error. rx_over_errors are caused when the hardware receive buffer on the physical NIC is full, and some of the received packets have to be dropped at the physical NIC layer. In most cases, the values reported by this counter will equal rx_missed_errors and rx_fifo_errors. The packet drops can happen during high bursts of traffic.Note: rx_crc_errors_phy is the counter used on AHV for some Mellanox cards to count CRC errors. From NCC-4.3.0 onwards, the counter is added to the check.  For NCC versions prior to 4.3.0, this counter should be checked manually using the steps mentioned in the solution section of this KB article. Should I be concerned that my NIC port shows any NIC errors?As indicated above, NIC errors can be triggered by several scenarios and can usually be ignored. However, continuously increasing NIC errors typically point to a failing physical layer component. The Nutanix alert is triggered when the error rate is considered excessive. This alert should be investigated whenever it is raised, and the source should be corrected. How does the packet error handling between cut-through and traditional store-and-forward switches differ?Traditional store-and-forward switches store each Ethernet frame and perform various checks such as valid frame length and checksum check. If the frame is valid, it is switched to the destination port. In the succeeding example, if the non-Nutanix node generates a corrupted Ethernet frame, switch port 3 drops it. This behavior differs with cut-through (also known as low latency) switches, which do not store the entire frame before forwarding it. The switch usually only needs to look at the first 12 bytes of a packet to decide. The downside of cut-through processing is that a corrupted frame is forwarded to the destination. Cut-through forwarding relies on the destination (host NIC) to perform the error check and discarding, if necessary. Example setup for scenarios 1 and 2 below: Nutanix node A ------- NIC port 0 -------- Switchport 1Nutanix node B ------- NIC port 0 -------- Switchport 2Non-Nutanix node --- NIC port 0 -------- Switchport 3 The following two scenarios are the most common for NIC errors. Scenario 1. NIC errors are only increasing on node A. Nutanix Node B is not experiencing increasing NIC errors. Perform basic physical layer checks between Nutanix node A and switch port 1. Make sure the cables are properly connected.Check cables for physical damage, pinched fibre, and bend radius issues.If the cable is Twinax, make sure that it is supported. Nutanix ships lengths of 1, 3, and 5 meters Twinax cables. 7-meter cables might work, but the signal can be marginal and varies between switch vendors. Some switch vendors may be able to drive a signal 7 meters while others may be marginal. Use a 5-meter cable if in doubt.If the cable is optical, ensure the proper optics are used. 
Currently, we only support multimode short-reach optics (850 nm), and 1310 nm optics are not supported. Determine if the switch port causes the NIC errors. If the NIC error is incrementing on Nutanix node A, check switch port 1 for tx output or similar transmission errors. Transmit output errors are usually a good indication that the switch port is the source of the NIC errors.Further isolation of the NIC errors. If the external switch port does not show any NIC TX errors, then the physical layer media (switch port transceiver, cable) could be causing the NIC RX errors. Proceed if the NIC RX errors continue to increase. If using Twinax 10GE cabling, replace the cable. Proceed to step 3d if the NIC errors continue to increase.If using optical links, start by replacing the multimode cable. Proceed to the next step if the RX CRC errors continue to increase.If using optical links, replace the optical transceiver associated with the switch port. Proceed to the next step if the RX CRC errors continue to increase.Reboot the host.Replace the host NIC. Scenario 2. NIC errors are increasing on all Nutanix nodes at the same time. This scenario usually indicates the use of a cut-through Ethernet switch being used. These switches are unable to perform frame discarding at the switch input port. The first step in troubleshooting this scenario is to confirm that the cut-through switch is being used. Check the external cut-through switch for incrementing error counters. Typically, a single switch port logs RX input errors, and the remaining ports report TX output errors. In this case, the port reporting RX input errors is attached to the node generating the illegal (CRC, length) frames.show tech from the switch or show interface for all the enabled interfaces on the switch. Look for any interfaces that are flagging RX errors.Run ethtool every minute for 5 minutes. Take rx_crc rate of increase of errors on the Nutanix interfaces.Note the Nutanix nodes that are not flagging these errors. This is to find out where the good nodes are connected to. A network diagram would be helpful. General Troubleshooting Remediation Steps for CRC Errors: There are a few different ways of looking at things.If you have a layer 1 issue, you will see rx_crc_errors, not on every but some nodes. [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" This points to this node sending out malformed packets, and we should see a related CRC error on its upstream switch port. This is because the packets coming out of this host will never be sent back to the port from which they were sent. If you see many rx_crc_errors on all nodes, verify that there is no "noisy neighbor" on the network.This could be that a host is broadcasting or multicasting out traffic at a larger MTU size. This process will require more network troubleshooting that may fall outside of Nutanix Support. The general rule is that the rx_over_errors should be 1 in every 5,000 rx_packets at the most. Use the following command on an ESXi host to determine the total number of packets and the number of rx_over_packets: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Remediation Steps for Missed Errors: Check the current NCC version if you do not see any performance impact. Make sure that it is upgraded to the latest compatible version. 
Previous versions of NCC were aggressive in reporting the NIC error rates.If there is a need for flow control, enabling the Ethernet flow control feature will allow the physical NIC to signal the switch to slow down the packet transmission rate.More information about this feature can be found in Wiki - Ethernet flow control. Flow control should only be enabled if it falls within the SOP for your network environment and if a considerable amount of congestion is encountered on the upstream switch. The impact on the network is as follows: Flow control operates at a lower layer than TCP or IP and thus is independent of them. In other words, flow control can be used regardless of the higher-level protocols. An important side-effect is that neither TCP nor IP knows what Ethernet's flow control is doing. They operate assuming that there is no flow control other than what they may or may not provide themselves.Flow control functions between two directly connected network devices and flow control frames are never forwarded between links. Thus, two computers connected via a switch will never send pause frames to each other but could send pause frames to the switch itself (and vice versa: the switch can send pause frames to the two computers).Pause frames have a limited duration; they will automatically "expire" after a certain time. The expiration time is set by the device that transmits the pause frame.A paused link is not a discriminator of protocols; it will prevent data from passing across the link other than more pause frames.Another solution is to consider enabling load balancing at the network layer so both interfaces are used and also have additional buffer space. Check KB-5135 for details on how to configure AHV. Receive Length Error What is rx_length_error?Each error captured by a driver is explained in Intel 82599 datasheet. Number of packets with receive length errors. A length error occurs if an incoming packet length field in the MAC header doesn't match the packet length. To enable the receive length error count, the HLREG.RXLNGTHERREN bit needs to be set to 1b. This register counts all packets received, regardless of L2 filtering and receiving enablement. For rx_length_error, in most cases, the switch port sends BPDU packets without changing the length value properly.Use the two options below to overcome the error: Portfast IOS set STP portfast: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" NXOS set port type edge: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Enable BPDU guard and BPDU filter (in Cisco/Arista switch). Highly recommended global BPDU Filter/Guard IOS: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" NXOS: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Enable BPDU Filter/Guard on a port: [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" HP Switches (Verify the HP Switch Model before running the below commands): [root@esxi]# ethtool -S <vmnic> | egrep "rx_errors|rx_crc_errors|rx_missed_errors" Also, consider disabling Jumbo Frames on the Switch ports the Nutanix Nodes are connected to (HP switches can come with Jumbo Frames enabled by default). According to the AHV Networking Best Practice Guide, enabling jumbo frames on AHV is not recommended. Note: If you see this alert for a vUSB device, upgrade NCC to the latest version to fix the false issue. 
If the steps mentioned above do not resolve the issue, consider engaging Nutanix Support for further investigation. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know.
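To watch whether the NIC error counters discussed above are actively incrementing (for example, sampling once per minute as suggested in Scenario 2), a simple sketch for an AHV host; eth2 is only an example interface name: [root@ahv]# watch -d -n 60 "ethtool -S eth2 | egrep 'rx_errors|rx_crc_errors|rx_missed_errors|rx_over_errors'"    # -d highlights values that changed between samples, making a steadily incrementing counter easy to spot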