Category Archives: VMware

Enabling Flash for IE11 on Windows 2012 R2

The VMware vSphere Web Client requires Flash 11.5. I've noticed issues with certain versions of Chrome and IE11 on Windows Server 2012 R2 not being able to open the web client due to either a missing or incorrect Flash version. Luckily, enabling the built-in Flash player for IE11 is an easy process.
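The full walkthrough is in the original post, but as a rough sketch (not necessarily the method described there): one commonly documented approach is to register the Flash package that already ships with Windows Server 2012 R2 using DISM. The exact package file name varies by build, so list the servicing packages directory first; the package name placeholder below is an assumption for illustration.

dir C:\Windows\servicing\Packages\Adobe-Flash-For-Windows-Package*.mum
dism /online /add-package /packagepath:C:\Windows\servicing\Packages\<package name from the previous command>.mum

After the package is added, restart IE11 and the built-in Flash player should be available to the web client.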


VMware EUC Access Point Single Line PEM

VMware has greatly increased the usefulness of the EUC Access Point with the 2.7.2 version. This release adds the ability to handle Horizon View security server functionality, AirWatch functionality including per-app VPN, and Identity Manager reverse proxying.

During the Horizon View setup, you are required to install your SSL certificate and private key as single-line strings. With many guides on the internet showing different ways of accomplishing this, the one method that has worked successfully for me each time is the one in the Deploying and Configuring Access Point guide on the VMware website.

To convert from PKCS#12 to PEM:

openssl pkcs12 -in mycaservercert.pfx -nokeys -out mycaservercert.pem
openssl pkcs12 -in mycaservercert.pfx -nodes -nocerts -out mycaservercertkey.pem
openssl rsa -in mycaservercertkey.pem -check -out mycaservercertkeyrsa.pem

Open the PEM files, remove any unnecessary lines, and then run the following command against both the RSA key file and the certificate chain file.

awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' cert-name.pem
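The result is the entire file on a single line, with literal \n sequences in place of the original line breaks. Purely as an illustration (the key material below is made up and truncated), the output looks something like this:

-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEA...\n-----END RSA PRIVATE KEY-----\n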

Each of these outputs will then be uploaded in JSON format to the EUC Access Point.

Using Postman, do a PUT to the following URL on your Access Point appliance. Replace "string" with the single-line output from the awk command above for the RSA key and the certificate chain, respectively.

https://<FQDN or IP of EUC Access Point>:9443/rest/v1/config/certs/ssl

{
  "privateKeyPem": "string",
  "certChainPem": "string"
}

If everything works successfully, you will see the new settings displayed in the response section at the bottom, along with a 200 OK status.
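If you would rather script the upload than use Postman, a rough curl equivalent is shown below. This is only a sketch: the admin account and the certs.json file (containing the JSON body above) are assumptions for illustration, not part of the original instructions.

curl -k -u admin -X PUT \
  -H "Content-Type: application/json" \
  -d @certs.json \
  "https://<FQDN or IP of EUC Access Point>:9443/rest/v1/config/certs/ssl"

The -k flag skips certificate validation, which is typically needed while the appliance is still using its self-signed certificate.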

Debug Logging in Profile Unity 6

In Liquidware Labs Profile Unity 5.7 and later, detailed logging of the login and logoff processes has been added to greatly enhance the ability to troubleshoot profile loading and saving issues. These logs are useful for everything from troubleshooting to quantifying how much time Profile Unity adds to the logon process.

There are two locations where logging is enabled: the first is for login process logging and the second is for logoff logging. Login logging will show you information such as the loading of portability settings, how long each portability file takes, how long Profile Unity takes to load the profile, and other login details including any errors. Logoff logging will show the overall time to save the profile, each individual portability setting, and of course any errors.

To enable login logging, we need to edit LwL.ProfileUnity.Client.exe.config inside the client.net.zip file in the NETLOGON share: \\domain\NETLOGON\ProfileUnity\client.net.zip

The settings are in the User Settings section.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String"> 
            <value>Fatal</value>
        </setting>
        <setting name="LogPath" serializeAs="String"> 
            <value/>
        </setting>

Edit LogLevel to Debug and change LogPath to a location that the desktops can write to. By default, logs are written to %temp%. If these are non-persistent desktops and you want the logs after the desktop has been refreshed or recomposed, remember to redirect them to a location that persists across sessions.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String"> 
            <value>Debug</value>
        </setting>
        <setting name="LogPath" serializeAs="String"> 
            <value>\\server\share</value>
        </setting>

Once your desired changes are complete, save the file back into client.net.zip.

The next step is to increment the version in \\domain\NETLOGON\ProfileUnity\client.net.vbs. To do this, edit the file, search for ClientDotNet_Version=, and increment the minor version; for example, change 6.0.421-<Date>-<Time> to 6.0.422-<Date>-<Time>. This tells Profile Unity that there is a new version of the client and to reload the client on the next desktop refresh.

The last step for login logging is to refresh the desktops where you desire this logging.  In VMware View, you can refresh individual desktops or entire pools.  During the next login after the refresh, log files should appear in the location defined above.  The log files will be in a sub-folder titled ProfileUnity.

 

For logoff logging, we need to make similar changes, but to the config file on the desktop(s) where you want to log the logoff process. Edit C:\Program Files\ProfileUnity\Client.NET\LwL.ProfileUnity.Client.exe.config. The settings to change are the same lines as above, but for non-persistent desktops you will need to redirect the log path to a location outside of the desktop, as the logs will disappear once the machine is refreshed after logoff. This is the only change needed to log the logoff process.

Horizon View Linked Clone Disk Provisioning

There are a few older articles floating around discussing disk provisioning types for linked clone View desktops. Since these are a few years old, let's revisit the topic and look at disk configurations for linked clone, non-persistent VDI desktops with Horizon View 6. We start with two parent images, both Windows 8.1: one thick provisioned and one thin provisioned.

Browsing the datastores, we see each VM and a delta file for each snapshot the parent has. If we check the VMDK descriptor files, we will see one marked thin and one not.
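From the ESXi shell, this is just a matter of changing into the VM's folder on the datastore and reading the descriptor files. The datastore and folder names below are placeholders inferred from the VMDK names in the output that follows:

# cd /vmfs/volumes/<datastore>/Test-Thin
# ls *.vmdk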

Thin:

# cat Test-Thin-000002.vmdk
# Disk DescriptorFile
....
createType="vmfs"

# Extent description
RW 83886080 VMFS "Test-Thin-000002-flat.vmdk"
....

# The Disk Data Base
#DDB
....
ddb.thinProvisioned = "1"
....

Thick:

# cat Test3_Thick-000001.vmdk
# Disk DescriptorFile
....
createType="vmfsSparse"
....

# Extent description
RW 83886080 VMFSSPARSE "Test3_Thick-000001-delta.vmdk"

 

We can further determine that the thick provisioned disk is eager zeroed, since no blocks are marked to be zeroed (tbz 0) in the disk's metadata:

# vmkfstools -D Test3_Thick-000001.vmdk
....
Addr <4, 296, 186>, gen 936, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 338, nb 0 tbz 0, cow 0, newSinceEpoch 0, zla 4305, bs 8192

 

Moving on to the replica: in each case, the replica is cloned from the parent image. Since there is a snapshot involved, VAAI is not used, as the snapshot gets committed during the cloning process.

The replica itself also contains a snapshot, which is used to make the linked clones.

As you can see below, the replica is thin provisioned. This looks the same for both parent images:

# cat replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=ba2d9019
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 83886080 VMFS "replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2-flat.vmdk"

# The Disk Data Base
....
ddb.thinProvisioned = "1"
....

The linked clones themselves contain three drives. The disposable and system drives are both thin provisioned:

Disposable

# cat test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=f1fd4130
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 8388608 VMFS "test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860-flat.vmdk"

# The Disk Data Base
#DDB

....
ddb.thinProvisioned = "1"
.... 

For the system drive, you can see that this is a linked clone by the parentFileNameHint line below.
System:

# cat test1_2.vmdk
# Disk DescriptorFile
....
parentCID=ba2d9019
isNativeSnapshot="no"
createType="seSparse"
parentFileNameHint="/vmfs/volumes/53dfd772-167f05bb-608a-0025b58a000f/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk"
# Extent description
RW 83886080 SESPARSE "test1_2-sesparse.vmdk"
.... 

The internal disk is thick provisioned, lazy zeroed.

For the parent images on an all-flash array, thick or thin provisioning shouldn't matter performance- or space-wise. The only difference I've noticed is that the thick provisioned parent takes around 1-2 minutes longer to clone to the replica, but once the replica is up and running, it's the same configuration and performance from that point on. In terms of space savings for thick vs. thin, dedupe, if working properly, will render this a moot point.

Some people make the argument that thin provisioning adds additional metadata, non-contiguous blocks, etc., which affects performance, but as seen above, this would only affect the parent image, not the replica or the linked clones.

Thin provisioned parent images win for me. I like saving the extra minute or two, per pool, during recompose operations with the same performance.

 

XtremIO VAAI Reclaiming Deleted Storage

Reclaiming deleted storage on LUNs is a straightforward task. XtremIO fully supports UNMAP.
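Before running UNMAP, you can confirm that a device actually advertises Delete (UNMAP) support. Here is a quick check from the ESXi shell; the naa identifier is a placeholder for your XtremIO device:

esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Look for "Delete Status: supported" in the output.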

In vSphere 5.1:

SSH into ESXi

cd /vmfs/volumes/<volume_name>

vmkfstools -y <percentage_to_reclaim>

Example:    vmkfstools -y 99

In vSphere 5.5:

A couple of notes about running this in vSphere 5.5: if -n is not specified, the default number of blocks to UNMAP per iteration is 200. Unlike in vSphere 5.1, the UNMAP can be run from any directory; you do not need to be in the volume directory to perform this task.

SSH into ESXi

esxcli storage vmfs unmap -l <volume_name> -n <blocks>

or

esxcli storage vmfs unmap -u <volume_UUID> -n <blocks>

Example:    esxcli storage vmfs unmap -l <volume_name> -n 20000

 

Additional Reference Material:

VMware KB2014849 (vSphere 5.1)

VMware KB2057513 (vSphere 5.5)