Wednesday, July 23, 2014

Nutanix NOS 4 Awesomeness

This week I was able to deploy a Nutanix 1350 at a client's location for a small VDI deployment. The deployment consisted of unboxing the Nutanix block, plugging in the network ports, and laying down the software using Nutanix's Foundation tool, and I noticed how much simpler everything has gotten. It only took about six hours from start to finish to deploy the Nutanix and copy a Windows Server VM over for vCenter. Try doing that with three servers and a SAN.

The Nutanix ships with KVM out of the box but also supports Hyper-V and ESXi. One of the first things I noticed with NOS 4 was the increased performance when moving files around on the datastore: cloning a 40GB VM took under one minute from hitting the go button to the new clone flashing the VMware start screen in the console. Keep in mind that the 1000 series is at the bottom of the product line in regards to specs. Performance improvements in NOS 4.0 increase overall system performance by about 20% compared to NOS 3.5.

Some of the NOS 4 changes are:

Hybrid On-Disk De-Duplication
De-duplication allows the sharing of guest VM data on premium storage tiers (RAM and Flash).

Shadow Clones (Official Support)
Shadow Clones are finally out of tech preview. Shadow Clones intelligently analyze the I/O access pattern at the storage layer to identify files shared in read-only mode (e.g., a linked-clone replica).
 
Tunable Fault Tolerance (RF-3)

Smart Pathing (CVM/AutoPathing 2.0)
The new and improved CVM AutoPathing 2.0 prevents performance loss during rolling upgrades, minimizing I/O timeouts by preemptively redirecting NFS traffic to other CVMs. Failover traffic is automatically load-balanced across the rest of the cluster based on node load.

Availability Domains (Failure Domain Awareness)
Also known as ‘Block Fault Tolerance’ or ‘Rackable Unit Fault Tolerance’, the availability domain feature adds the concept of block awareness to Nutanix cluster deployments.

Snapshot Browser
The new snapshot browser allows administrators to see and restore point-in-time array-based snapshots of a VM or a group of VMs in a local or remote protection domain.

Snapshot Scheduling via PRISM

One-Click NOS Upgrade
Nutanix one-click upgrade automatically indicates when a new NOS version is available and auto-downloads the binaries if the auto-download option is enabled. With a single click to upgrade all nodes in a cluster, Nutanix uses a highly parallel process and reboots one CVM at a time using a rolling upgrade mechanism. The administrator can monitor the entire cluster upgrade.

Cluster Health
Nutanix Cluster Health is a great asset in maintaining availability for Tier 1 workloads. Cluster Health gives the ability to monitor and visually see the overall health of cluster nodes, VMs and disks from a variety of different views. With the ability to set availability requirements at the workload level, Cluster Health will visually dissect what’s important and give you guidance on how to take corrective action.

Prism Central (Multi-Cluster UI)
Nutanix now provides a single UI to monitor multiple clusters in the same or different datacenters. Prism Central saves administrators from having to sign in individually to every cluster and provides aggregated cluster health, alerts, and historical data.

PowerShell Support and Automation Kit
One of the big new things for workflow automation in Nutanix NOS 4.0 is the addition of PowerShell cmdlets to interact with the Nutanix APIs.
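As a rough sketch of what that looks like, the snippet below loads the Nutanix cmdlets snapin and pulls a VM list. The cluster address and username here are placeholders, and this assumes the NutanixCmdlets package is installed on the workstation:

```powershell
# Load the Nutanix cmdlets snapin (ships with the NutanixCmdlets installer)
Add-PSSnapin NutanixCmdletsPSSnapin

# Connect to a cluster -- address and credentials below are placeholders
Connect-NutanixCluster -Server "cluster.example.com" -UserName "admin" `
    -Password (Read-Host "Password" -AsSecureString) -AcceptInvalidSSLCerts

# List the VMs the cluster knows about
Get-NTNXVM | Select-Object vmName, powerState
```

From there the same cmdlets can feed normal PowerShell pipelines, which is what makes this useful for automation.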

Sources:
http://myvirtualcloud.net/?p=6218
http://nutanix.com

Monday, April 28, 2014

NFS Disconnects Bug in vSphere 5.5 U1

Intermittent NFS APDs on ESXi 5.5 U1 

It looks like there is a bug in ESXi 5.5 U1 with disconnecting NFS datastores. Here is a summary of the issue:



When running ESXi 5.5 Update 1, the ESXi host frequently loses connectivity to NFS storage and APDs to NFS volumes are observed. You experience these symptoms:

  • Intermittent APDs for NFS datastores are reported, with consequent potential blue screen errors for Windows virtual machine guests and read-only filesystems in Linux virtual machines
  • Note: NFS volumes include VSA datastores.
  • For the duration of the APD condition and afterward, the array still responds to ping, netcat tests succeed, and there is no evidence of a physical network or NFS storage array issue.
  • The NFS storage array logs and traces also do not indicate any evident issue; other hosts not running ESXi 5.5 U1 continue to work and can read and write to the NFS share without issue.
It looks like it affects all NFS appliance and SAN makes and models.
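A quick way to see which hosts are exposed is to pull version and build numbers with VMware PowerCLI (assumed installed; the vCenter name below is a placeholder) and look for 5.5 U1 hosts:

```powershell
# Connect to vCenter with VMware PowerCLI -- server name is a placeholder
Connect-VIServer -Server "vcenter.example.com"

# List each host's version and build so 5.5 U1 hosts stand out
Get-VMHost | Select-Object Name, Version, Build
```

Until a patch lands, hosts showing a 5.5 Update 1 build are the ones to watch for APD events on NFS datastores.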

VeeamON 2017 Announcements

VeeamON 2017 has turned out to be pretty good. There were lots of updates from Veeam and partners that should keep me busy for quite a while...