Category Archives: Horizon View

VMware EUC Access Point Single Line PEM

VMware has greatly increased the usefulness of the EUC Access Point with version 2.7.2. Included in this release are the Horizon View security server functionality, AirWatch functionality (including per-app VPN), and the Identity Manager reverse proxy.

During the Horizon View setup, the SSL certificate and private key must be supplied as single-line strings. Many guides on the internet show different ways of accomplishing this, but the one method that has worked successfully for me every time is the one in the Deploying and Configuring Access Point guide on the VMware website.

To convert from PKCS#12 to PEM:

# Extract the certificate chain (no private key)
openssl pkcs12 -in mycaservercert.pfx -nokeys -out mycaservercert.pem
# Extract the unencrypted private key (no certificates)
openssl pkcs12 -in mycaservercert.pfx -nodes -nocerts -out mycaservercertkey.pem
# Check the key and write it out in RSA format
openssl rsa -in mycaservercertkey.pem -check -out mycaservercertkeyrsa.pem
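
As a sanity check (my own addition, not part of the VMware guide), you can confirm that the key actually pairs with the certificate by comparing modulus hashes; the two MD5 sums should be identical:

# These two hashes should match if the key belongs to the certificate
openssl x509 -noout -modulus -in mycaservercert.pem | openssl md5
openssl rsa -noout -modulus -in mycaservercertkeyrsa.pem | openssl md5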

Open the PEM files and remove any unnecessary lines, then run the following command against both the RSA key and the certificate chain. It strips carriage returns and joins every line into one, separated by literal \n sequences:

awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' cert-name.pem
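
If it worked, each file becomes one long line with literal \n sequences where the line breaks used to be, something like this (truncated for illustration):

-----BEGIN CERTIFICATE-----\nMIIDdzCCAl+gAwIBAgIE...\n-----END CERTIFICATE-----\n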

Each of these outputs is then uploaded to the EUC Access Point in JSON format.

Using Postman, do a PUT to the following URL on your Access Point appliance. Replace “string” with the single-line output from the awk command above for the RSA key and the certificate chain.

https://<FQDN or IP of EUC Access Point>:9443/rest/v1/config/certs/ssl

{
  "privateKeyPem": "string",
  "certChainPem": "string"
}

If all works successfully, you will see the new settings displayed in the response section at the bottom, along with a status of 200 OK.
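
If you prefer the command line over Postman, the same call can be made with curl. This is a minimal sketch; the admin username, the example FQDN, and the -k flag (to accept the appliance's self-signed certificate) are assumptions for a default deployment:

# Prompts for the admin password set during appliance deployment
curl -k -u admin -X PUT "https://accesspoint.example.com:9443/rest/v1/config/certs/ssl" \
  -H "Content-Type: application/json" \
  -d '{"privateKeyPem":"<single line key>","certChainPem":"<single line chain>"}'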

The pod is not ready to enable enhanced message security

As part of the Horizon View 6.1 upgrade, there is a new feature to enable enhanced security on the JMS traffic between the security server and the connection server. For upgrades to 6.1, this feature needs to be enabled manually; for new installs, it's enabled by default.

To enable this after upgrading from a prior version, the setting can be changed in the View Admin page under Global Settings > Security > Edit Settings. There is a drop-down to change from the current mode to Enhanced.

When attempting to change this setting, the following message popped up:

The pod is not ready to enable enhanced message security. Click "OK" to force enabling the enhanced mode, or "Cancel" to cancel the operation


Horizon View Client DPI Scaling

I recently ran across an issue with the Horizon View client on a Surface Pro 3 where the icons and text were so small they were unreadable. I tried the usual fix of checking the "disable display scaling on high DPI settings" compatibility checkbox, but to no avail. This checkbox has helped in the past, but it doesn't fix the issue with the VMware Horizon View client.

VMware has a registry key that can be added to fix this issue once and for all. The key only works with version 3.4 of the Horizon View client and enables an experimental DPI scaling feature.

Key:   HKCU\Software\VMware, Inc.\VMware VDM\Client
Value: EnableSessionDPIScaling (DWORD) = 1

1 = on
0 = off
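
If you want to script this, a one-liner from a command prompt will add the value (the path is quoted because of the spaces and comma):

reg add "HKCU\Software\VMware, Inc.\VMware VDM\Client" /v EnableSessionDPIScaling /t REG_DWORD /d 1 /f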

With this key added, when you connect to a desktop, the screen scales properly.

Setting Windows 7 Best Performance Settings in Parent Image

Windows 7 (along with other versions of Windows desktop operating systems) offers the ability to adjust visual effects to help the performance of the operating system.  These settings can be changed in Advanced System Settings –> Advanced tab –> Performance Settings, and also via the registry.

When set via the registry, the changes require a logoff and re-login (or API calls) to take effect, which causes some issues in the non-persistent VDI world.  If setting these via group policy, there are a few keys to adjust in HKCU, but they need to be loaded at logon, or the theme and/or Windows Explorer needs to be reloaded.  Luckily, there's a much more elegant way to set these in a parent image without changing the default user.
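
For reference, the main HKCU value behind that dialog is, to the best of my knowledge, VisualFXSetting, where 2 corresponds to "Adjust for best performance" (1 is best appearance, 3 is custom):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects" /v VisualFXSetting /t REG_DWORD /d 2 /f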


Debug Logging in Profile Unity 6

In Liquidware Labs Profile Unity 5.7 and later, detailed logging of the login and logoff processes has been added to greatly enhance the ability to troubleshoot profile loading and saving issues.  These logs are useful for everything from troubleshooting to quantifying how much time Profile Unity adds to the logon process.

There are two places to enable logging: the first is for login logging and the second is for logoff logging.  Login logging shows information such as the loading of portability settings, how long each portability file takes, how long Profile Unity takes to load the profile, and other login details, including any errors.  Logoff logging shows the overall time to save the profile, each individual portability setting, and, of course, any errors.

To enable login logging, we need to edit LwL.ProfileUnity.Client.exe.config inside the client.net.zip file in the NETLOGON share:  \\domain\NETLOGON\ProfileUnity\client.net.zip

The settings are in the UserSettings section.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String">
            <value>Fatal</value>
        </setting>
        <setting name="LogPath" serializeAs="String">
            <value/>
        </setting>
    </LWL.ProfileUnity.Client.Properties.Settings>
</UserSettings>

Edit the LogLevel to be Debug and change LogPath to a location that the desktops can write to.  By default, it writes to %temp%.  If these are non-persistent desktops and you want the logs after the desktop has been refreshed or recomposed, remember to redirect them to a location that persists across sessions.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String">
            <value>Debug</value>
        </setting>
        <setting name="LogPath" serializeAs="String">
            <value>\\server\share</value>
        </setting>
    </LWL.ProfileUnity.Client.Properties.Settings>
</UserSettings>

Once your desired changes are complete, save the file back into client.net.zip.

The next step is to increment the version in \\domain\NETLOGON\ProfileUnity\client.net.vbs.  To do this, edit the file, search for ClientDotNet_Version=, and increment the minor version, e.g. change 6.0.421-<Date>-<Time> to 6.0.422-<Date>-<Time>.  This tells Profile Unity that there is a new version of the client and to reload it on the next desktop refresh.
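
For illustration, the line in client.net.vbs looks something like this before and after the change (the actual date/time suffix will vary):

' before
ClientDotNet_Version="6.0.421-<Date>-<Time>"
' after
ClientDotNet_Version="6.0.422-<Date>-<Time>"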

The last step for login logging is to refresh the desktops where you want this logging.  In VMware View, you can refresh individual desktops or entire pools.  During the first login after the refresh, log files should appear in the location defined above, in a sub-folder titled ProfileUnity.


For logoff logging, we need to make similar changes, but this time directly on the desktop(s) where you want to log the logoff process.  Edit C:\Program Files\ProfileUnity\Client.NET\LwL.ProfileUnity.Client.exe.config.  The settings to change are the same lines as above, but for non-persistent desktops you will need to redirect the logs to a directory outside of the desktop, as they will disappear once the machine is refreshed after logoff.  This is the only change needed to log the logoff process.

Horizon View Linked Clone Disk Provisioning

There are a few older articles floating around discussing disk provisioning types for linked-clone View desktops.  Since these are a few years old, let's revisit the topic and look at disk configurations for linked-clone, non-persistent VDI desktops with Horizon View 6.  We start with two parent images, both Windows 8.1: one thick provisioned and one thin provisioned.

Browsing the datastores, we see each VM and a delta file for each snapshot the parent has.  If we check the VMDK descriptor files, we see one marked thin and one not.
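
A quick way to confirm this from the ESXi shell is to grep the descriptor files for the thin-provisioned flag (the file names here match my test VMs; the thick delta simply returns nothing):

# Run from the VM's datastore directory
grep ddb.thinProvisioned Test-Thin-000002.vmdk Test3_Thick-000001.vmdk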

Thin:

# cat Test-Thin-000002.vmdk

# Disk DescriptorFile

....

createType="vmfs"

# Extent description
RW 83886080 VMFS "Test-Thin-000002-flat.vmdk"

....

# The Disk Data Base
#DDB

....
ddb.thinProvisioned = "1"

....

Thick:

# cat Test3_Thick-000001.vmdk

# Disk DescriptorFile

....

createType="vmfsSparse"

.....

# Extent description
RW 83886080 VMFSSPARSE "Test3_Thick-000001-delta.vmdk"


We can further determine that the thick-provisioned disk is eager zeroed by seeing no blocks marked "to be zeroed" (tbz 0) as the disk grows; a lazy zeroed disk would show a non-zero tbz count:

# vmkfstools -D Test3_Thick-000001.vmdk
....
Addr <4, 296, 186>, gen 936, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 338, nb 0 tbz 0, cow 0, newSinceEpoch 0, zla 4305, bs 8192


Moving on to the replica: in each case, the replica is cloned from the parent image.  Since there is a snapshot involved, VAAI is not used, as the snapshot gets committed during the cloning process.

The replica itself also contains a snapshot, which is used to create the linked clones.

As you can see below, the replica is thin provisioned.  This looks the same for both parent images:

# cat replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=ba2d9019
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 83886080 VMFS "replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2-flat.vmdk"

# The Disk Data Base
....
ddb.thinProvisioned = "1"
....

The linked clones themselves contain three drives.  The disposable and system drives are both thin provisioned:

Disposable

# cat test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=f1fd4130
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 8388608 VMFS "test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860-flat.vmdk"

# The Disk Data Base
#DDB

....
ddb.thinProvisioned = "1"
.... 

For the system drive, you can see that it is a linked clone by the parentFileNameHint line below.

System:

# cat test1_2.vmdk
# Disk DescriptorFile
....
parentCID=ba2d9019
isNativeSnapshot="no"
createType="seSparse"
parentFileNameHint="/vmfs/volumes/53dfd772-167f05bb-608a-0025b58a000f/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk"
# Extent description
RW 83886080 SESPARSE "test1_2-sesparse.vmdk"
.... 

The internal disk is thick lazy zeroed.


For the parent images on an all-flash array, thick or thin provisioning shouldn't matter performance- or space-wise.  The only difference I've noticed is that the thick-provisioned parent takes around 1-2 minutes longer to clone to the replica, but once the replica is up and running, it's the same configuration and performance from that point on.  In terms of space savings for thick vs. thin, deduplication, if working properly, renders this a moot point.

Some people argue that thin provisioning adds additional metadata, non-contiguous blocks, etc., which affect performance, but as seen above, this only applies to the parent image, not the replica or the linked clones.

Thin-provisioned parent images win for me.  I like saving the extra minute or two per pool during recompose operations, with the same performance.