Sunday, January 4, 2015

Virtual Disk Provisioning

 In VMware vSphere you can choose one of three formats when creating a virtual hard drive:
  • Thin Provisioned
  • Thick Provisioned Lazy Zeroed
  • Thick Provisioned Eager Zeroed

 

Thin Provisioned

                A thin-provisioned disk consumes only the space actually used by the data on it: vSphere allocates just "the exact required amount" of datastore space at the time it is needed, rather than reserving the full disk size up front.

        
       Advantages:

  • Fastest to provision
  • Allows disk space to be overcommitted to VMs

       Disadvantages:

  • Slowest performance due to metadata allocation overhead and additional overhead during initial write operations
  • Overcommitment of storage can lead to application disruption or downtime if the provisioned space is actually consumed
  • Does not support clustering features
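The allocate-on-first-write behavior described above can be sketched in a few lines. This is an illustrative model only, not the vSphere API; the class name and the 1 GB block size are assumptions for the sake of the example:

```python
# Illustrative sketch (not a vSphere API): a thin-provisioned disk only
# consumes backing space for blocks that have actually been written.

class ThinDisk:
    def __init__(self, size_gb):
        self.size_gb = size_gb      # provisioned (logical) size in GB
        self.written = set()        # block numbers that actually hold data

    def write(self, block):
        # Backing space is allocated lazily, at first write to a block.
        self.written.add(block)

    def used_gb(self):
        # Assume 1 GB blocks to keep the arithmetic simple.
        return len(self.written)

disk = ThinDisk(80)
for block in range(10):             # guest writes 10 GB of data
    disk.write(block)

print(disk.size_gb, disk.used_gb())  # 80 GB provisioned, only 10 GB consumed
```

The VM sees an 80 GB disk throughout, but the datastore only gives up space as blocks are written, which is exactly what makes overcommitment possible.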

  

Thick Provisioned

               The description "thick provision" simply means that all the space that is required for the virtual disk files is reserved when the VM is created. The phrase "zeroed out" means that blocks on the physical storage device are formatted with zeros to overwrite any older data.


Thick Provisioned Lazy Zeroed

                Space is allocated at the time the VMDK is created, but the underlying physical blocks are not zeroed (not formatted completely). At the first access to each block, vSphere zeroes out the block and then writes the data.

                If you have an 80 GB VMDK with only 10 GB worth of data, only 10 GB worth of blocks are written and the rest is left as-is until needed. If you are using thin provisioning at the storage array level, lazy zeroed is the mode you want.

Advantages:

  • Faster to provision than Thick Provisioned Eager Zeroed
  • Better performance than Thin Provisioned
Disadvantages:

  • Slightly slower to provision than Thin Provisioned
  • Slower performance than Thick Provisioned Eager Zeroed
  • Does not support clustering features

 

 Thick Provisioned Eager Zeroed

 Space is allocated and all underlying blocks are zeroed (formatted) at the time the VMDK is created.

 If you create an 80 GB thick provisioned eager zeroed VMDK, vSphere allocates 80 GB and writes 80 GB of zeroes.

Advantages:

  • Best performance
  • Overwriting allocated disk space with zeroes reduces possible security risks
  • Supports clustering features such as Microsoft Cluster Server (MSCS) and VMware Fault Tolerance
Disadvantages:

  • Longest time to provision
Eager zeroed is the right mode to use when you are not thin provisioning LUNs and you don't mind waiting a bit longer for the VMDK to be created.
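One way to see the trade-off between the two thick formats is a toy cost model. The cost units below are invented purely for illustration, not measured vSphere numbers: eager zeroing pays the full zeroing cost once at provisioning time, while lazy zeroing defers it to the first write of each block.

```python
# Hypothetical cost units, chosen only to make the trade-off visible.
ZERO_COST = 2   # cost to zero one block
WRITE_COST = 1  # cost to write one block

def provision_and_write(mode, total_blocks, blocks_written):
    # Eager: every block is zeroed up front at provisioning time.
    provision = ZERO_COST * total_blocks if mode == "eager" else 0
    if mode == "lazy":
        # Lazy: each first write zeroes the block first, then writes it.
        first_writes = (ZERO_COST + WRITE_COST) * blocks_written
    else:
        first_writes = WRITE_COST * blocks_written
    return provision, first_writes

# 80-block disk, 10 blocks of real data (mirroring the 80 GB / 10 GB example)
print(provision_and_write("lazy", 80, 10))   # (0, 30)  cheap to create, dearer first writes
print(provision_and_write("eager", 80, 10))  # (160, 10) slow to create, fast first writes
```

The numbers are arbitrary, but the shape is the point: lazy moves the zeroing cost from creation time into the guest's first-write path, which is why eager zeroed gives the best steady-state performance.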


Saturday, January 3, 2015

vMotion



                 vMotion is a feature that allows a running virtual machine to be moved from one physical host to another without powering off the VM.

vMotion process

VM (vm1) on blade5 (source)
VM (vm1) moving to blade7 (destination)

          1. The source host (blade5) begins transferring the VM's active memory pages to the destination host (blade7) across a VMkernel interface. This is called the pre-copy.
         During this phase the VM continues to service clients on the source (blade5).
         Any pages that change while the copy is in progress are recorded in a memory bitmap on the source (blade5).

         2. After the entire contents of RAM have been transferred to the target (blade7), the VM on the source host (blade5) is quiesced: it is still in memory but no longer servicing requests.
The memory bitmap is then transferred to the target.

        3. The target host (blade7) reads the addresses in the memory bitmap and requests the contents of those addresses from the source (blade5).

       4. After the memory referred to in the bitmap has been transferred to the target host, the VM starts on the target host (blade7).
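The pre-copy and memory-bitmap steps above can be sketched as a toy simulation. The page count, dirty rate, and function name are hypothetical choices for illustration, not VMware internals:

```python
# Illustrative sketch of the vMotion pre-copy idea: copy memory while the
# VM keeps running, track dirtied pages in a bitmap, then quiesce and ship
# only the dirtied pages.
import random

random.seed(0)  # make the toy run repeatable

def vmotion(pages, dirty_rate=0.1):
    # Step 1: pre-copy all active memory pages while the VM still runs.
    copied = set(range(pages))
    # Pages written during the pre-copy are recorded in a memory bitmap.
    bitmap = {p for p in range(pages) if random.random() < dirty_rate}
    # Step 2: VM is quiesced; steps 3-4: the target pulls only the pages
    # the bitmap marks as dirty, then the VM starts on the target.
    copied |= bitmap
    return len(copied), len(bitmap)

total, resent = vmotion(1000)
print(f"pages at target: {total}, dirty pages re-sent: {resent}")
```

Because the second transfer only covers the pages dirtied during the pre-copy, the final quiesced window is short, which is what keeps the migration invisible to clients.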
 


vMotion Requirements

a)      Shared storage for the VM files (a VMFS or NFS datastore) that is accessible by both the source and target ESXi hosts.
b)      A Gigabit Ethernet or faster network interface card (NIC) with a VMkernel port defined and enabled for vMotion on each ESXi host.
c)      Both the source & destination host must be configured with identical virtual switches.
d)     Port group names on the source and destination must match exactly; names are case sensitive.
e)      Processors of both hosts must be compatible.
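A few of these prerequisites can be expressed as simple checks. The data model below is hypothetical, invented for illustration rather than taken from the vSphere API:

```python
# Illustrative sketch: a handful of the vMotion prerequisites as checks
# over a made-up host description (not real vSphere objects).

def can_vmotion(src, dst):
    checks = [
        src["datastores"] & dst["datastores"],      # (a) shared storage
        src["vmotion_nic"] and dst["vmotion_nic"],  # (b) vMotion-enabled VMkernel NIC
        src["portgroups"] == dst["portgroups"],     # (d) exact, case-sensitive names
        src["cpu_family"] == dst["cpu_family"],     # (e) compatible processors
    ]
    return all(checks)

blade5 = {"datastores": {"vmfs01"}, "vmotion_nic": True,
          "portgroups": {"Prod-VLAN10"}, "cpu_family": "intel-sandybridge"}
blade7 = {"datastores": {"vmfs01"}, "vmotion_nic": True,
          "portgroups": {"prod-vlan10"}, "cpu_family": "intel-sandybridge"}

print(can_vmotion(blade5, blade7))  # False: port group names differ only in case
```

Note that the migration fails here purely because "Prod-VLAN10" and "prod-vlan10" are not the same name, which is exactly the case-sensitivity pitfall in requirement (d).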