Profile Unity Startup and Logoff Scripts in Windows 7

In Profile Unity, the loading and saving of a user's profile are triggered by scripts. These scripts are normally configured in Group Policy: the startup script under Computer Configuration and the logoff script under User Configuration.

I started noticing issues in Profile Unity 6.0.5 with profiles failing to load and save properly. Digging into the issue, I discovered that Liquidware Labs has added a new method of triggering the startup and logoff process for Windows 7 and later.

Both the old and the new method are configured in the same place in Group Policy; it's just the file being called that has changed.

For Startup, the previous method was

Script Name:
%systemroot%\system32\wscript.exe

Script Parameter:
\\[domain name]\netlogon\ProfileUnity\startup.vbs //b

The new method is

Script Name:
\\[domain name]\netlogon\ProfileUnity\LwL.ProfileUnity.Client.Startup.exe

No parameters needed

For Logoff, the previous method was

Script Name:
%systemroot%\system32\wscript.exe

Script Parameter:
\\[domain name]\netlogon\ProfileUnity\logoff.vbs //b

The new method is

Script Name:
\\[domain name]\netlogon\ProfileUnity\LwL.ProfileUnity.Client.Logoff.exe

No parameters needed

Using this new executable instead of the VBScript resolved both the random profile loading issues at login and the profile saving issues at logoff that were previously being experienced.

For further reference, this change is documented in the install guide, but as of 6.0.5 Patch 2 it does not appear to be mentioned in the release notes.

New Powershell REST API Module for XtremIO

An EMC SE recently published a PowerShell module to GitHub for making REST API calls to XtremIO arrays. The module greatly simplifies leveraging these APIs.

Doing this manually involves setting the username and password, converting them to the proper format, setting the headers, and then finally making the REST API calls.

An example looks like the code below. A few pieces (such as gathering the credentials and IP address) are left out, but this covers the basics.

# $username, $pass (a SecureString), and $ip are assumed to be set already
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($pass)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
$EncodedAuthorization = [System.Text.Encoding]::UTF8.GetBytes($username + ':' + $password)
$EncodedPassword = [System.Convert]::ToBase64String($EncodedAuthorization)
$headers = @{"Authorization" = "Basic $EncodedPassword"}

$baseUri = "https://" + $ip.IPAddressToString + ":443/api/json/types"
# The API wraps its data in a "content" property, so parse the JSON response first
$xCluster = (Invoke-WebRequest -Uri "$baseUri/clusters/1" -Headers $headers).Content | ConvertFrom-Json
$xCluster.content
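For comparison, the same Basic auth header can be built in a couple of lines of shell. This is a sketch, not part of the module, and the credentials below are placeholders:

```shell
#!/bin/sh
# Hypothetical XMS credentials -- substitute your own.
XMS_USER="admin"
XMS_PASS="pass"

# Base64-encode "user:password", the same value the PowerShell above produces.
TOKEN=$(printf '%s:%s' "$XMS_USER" "$XMS_PASS" | base64)
echo "Authorization: Basic $TOKEN"

# The call itself would then look like this (XMS address is a placeholder):
# curl -k -H "Authorization: Basic $TOKEN" "https://10.0.0.1:443/api/json/types/clusters/1"
```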

With this new PowerShell module, this is simplified to straightforward one-line commands:

Get-XtremClusterStatus
Get-XtremVolumes
etc...

Make sure to take careful note of the step to import the certificates from the XMS for each array you want to query. If this step isn't completed correctly, PowerShell will return errors that may leave you scratching your head.

It also requires a minimum of PowerShell v4.

Check out the GitHub page at the link below for more examples and to download the module:

xtremlib

Whitelisting and Blacklisting Sites in Chrome Via GPO

Google provides Group Policy templates for managing computer and user settings for the Chrome browser via Group Policy.

These settings include enabling/disabling default browser prompts, controlling the password manager, Chrome apps settings, and numerous other items.

The ones we’ll look at today are whitelisting and blacklisting websites via GPO.

To start, make sure you have the Chrome admx and adml files downloaded. They can be downloaded from Google:

http://dl.google.com/dl/edgedl/chrome/policy/policy_templates.zip

This zip contains HTML listings of the policy settings, Linux templates, and Windows templates. The Windows templates come in two flavors: .adm and .admx. For the .admx template:

Copy chrome.admx to SYSVOL\domain\Policies\PolicyDefinitions\ 
Also copy the appropriate .adml language file to the subfolder for your language (e.g. en-US)

Chrome processes policies in the order of Machine –> User –> Chrome

When you launch the Group Policy Management Console and edit a policy, expand Administrative Templates under either User or Computer Configuration and you'll now see a folder titled Google. Expanding this folder reveals two options: Google Chrome and Google Chrome Default Settings. The default settings option lets you set defaults while still allowing end users to override them.

The other option enforces the settings defined in the policy with no ability to override.

In the Google Chrome policy, there are two options related to white listing and black listing of sites. They are “Block access to a list of URLs” and “Allow access to a list of URLs”. Both of these settings are available at the user and computer/machine level.

These settings take a list of URLs and also accept the wildcard *.

To block all sites and whitelist only the ones you want, set "Block access to a list of URLs" to Enabled and add * to the list.

Next, go to "Allow access to a list of URLs", click Enabled, and add the sites you want to the list.

Example: 
 https://www.google.com
https://translate.google.com
etc...

In the background, these are setting registry values at the following locations:

SOFTWARE\Policies\Google\Chrome\URLBlacklist 
SOFTWARE\Policies\Google\Chrome\URLWhitelist

These are added as string values in numerical order:

1 REG_SZ https://www.google.com 
2 REG_SZ https://translate.google.com
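If you need to verify the result on a client, or apply the same settings outside of GPO, the equivalent machine-level registry import would look like the following .reg fragment (a sketch; the whitelist entries are just the examples from above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\URLBlacklist]
"1"="*"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\URLWhitelist]
"1"="https://www.google.com"
"2"="https://translate.google.com"
```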

Once you have the settings how you like them, close the editor, complete any other GPO-related tasks such as security filtering, and link the GPO to the appropriate OU.

Now you have Chrome filtered to only allow whitelisted sites or whatever combination of whitelisted and blacklisted sites you desire.

Adjusting MTU Size on Windows 7 / Windows 2008

I recently ran into an issue where ping and traceroute succeeded but RPC failed to certain servers. I noticed some interesting items in packet captures, which prompted me to try pinging with different MTU sizes.

To start, let's look at the currently configured MTU size.

 #netsh interface ipv4 show interface
Idx     Met         MTU          State                Name
---  ----------  ----------  ------------  ---------------------------
 10           5        1500  connected  Local Area Connection

We can see that the connected interface has an MTU of 1500 bytes.

Since a normal ping from Windows only sends 32 bytes of data, we send pings with a configured size. (Adding the -f switch to set the Don't Fragment flag makes this test more reliable, since without it oversized packets may simply be fragmented rather than dropped.)

 
#ping -l 1500 192.168.1.1

Pinging 192.168.1.1 with 1500 bytes of data:
Request timed out
Request timed out

To figure out the packet size that will work, we keep lowering the payload size until we find the largest one that gets a reply.

#ping -l 1450 192.168.1.1

Pinging 192.168.1.1 with 1450 bytes of data:
Reply from 192.168.1.1: bytes=1450 time<1ms TTL=64
Reply from 192.168.1.1: bytes=1450 time<1ms TTL=64

Now all that is left is reconfiguring the interface's MTU. We use the payload size from the previous step plus 28 bytes, which accounts for the 20-byte IP header and the 8-byte ICMP header. For the example above, that is 1450 + 28 = 1478 bytes.

#netsh interface ipv4 set subinterface 10 mtu=1478 store=persistent
Ok.
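The header arithmetic from the step above can be sketched as:

```shell
#!/bin/sh
# Largest ICMP payload that still got a reply (from the ping test above).
PAYLOAD=1450
IP_HEADER=20    # bytes
ICMP_HEADER=8   # bytes

MTU=$((PAYLOAD + IP_HEADER + ICMP_HEADER))
echo "MTU to configure: $MTU"
```

This prints "MTU to configure: 1478", the value passed to netsh above.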

After this, RPC communications were restored.

Checking DFS-R Backlog

When using DFS-R, the replication groups occasionally get out of sync. Microsoft includes built-in tools to help check the backlog of files and the state of DFS-R.

To check the backlog, open a command prompt as administrator and use the built-in dfsrdiag tool.
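A typical backlog check looks like the following; the replication group, folder, and member server names are placeholders for your own:

```
dfsrdiag backlog /rgname:"Replication Group Name" /rfname:"Replicated Folder Name" /smem:SERVER1 /rmem:SERVER2
```

This reports the count of files the sending member (/smem) has queued for the receiving member (/rmem).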

Debug Logging in Profile Unity 6

In Liquidware Labs Profile Unity 5.7 and later, detailed logging of the login and logoff processes has been added to greatly enhance the ability to troubleshoot profile loading and saving issues. These logs are useful for everything from troubleshooting to quantifying how much time Profile Unity adds to the logon process.

There are two places to enable logging: the first is for login logging and the second for logoff logging. The login log shows information such as the loading of portability settings, how long each portability file takes, how long Profile Unity takes to load the profile, and other login details, including any errors. The logoff log shows the overall time to save the profile, each individual portability setting, and of course any errors.

To enable login logging, we need to edit LwL.ProfileUnity.Client.exe.config inside the client.net.zip file in the netlogon share: \\domain\NETLOGON\ProfileUnity\client.net.zip

The settings are in the UserSettings section.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String"> 
            <value>Fatal</value>
        </setting>
        <setting name="LogPath" serializeAs="String"> 
            <value/>
        </setting>

Edit the LogLevel to Debug and change LogPath to a location the desktops can write to. By default, logs are written to %temp%. If these are non-persistent desktops and you want the logs after the desktop has been refreshed or recomposed, remember to redirect them to a location that persists across sessions.

<UserSettings>
    <LWL.ProfileUnity.Client.Properties.Settings>
        <setting name="LogLevel" serializeAs="String"> 
            <value>Debug</value>
        </setting>
        <setting name="LogPath" serializeAs="String"> 
            <value>\\server\share</value>
        </setting>

Once your desired changes are complete, save the file back into client.net.zip.

The next step is to increment the version in \\domain\NETLOGON\ProfileUnity\client.net.vbs. To do this, edit the file, search for ClientDotNet_Version=, and increment the minor version, e.g. change 6.0.421-<Date>-<Time> to 6.0.422-<Date>-<Time>. This tells Profile Unity that there is a new version of the client to reload on the next desktop refresh.
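The increment itself is simple string surgery; a minimal sketch of the version arithmetic (the version string is an example, and this does not edit the .vbs file for you):

```shell
#!/bin/sh
# Example value as found after ClientDotNet_Version= (date/time suffix omitted).
VER="6.0.421"
PREFIX=${VER%.*}     # everything before the last dot: "6.0"
BUILD=${VER##*.}     # the build number after the last dot: "421"
NEW_VER="$PREFIX.$((BUILD + 1))"
echo "$NEW_VER"
```

This prints 6.0.422, the value to write back before the -<Date>-<Time> suffix.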

The last step for login logging is to refresh the desktops where you want this logging. In VMware View, you can refresh individual desktops or entire pools. During the first login after the refresh, log files should appear in the location defined above, in a sub-folder titled ProfileUnity.

 

For logoff logging, we make similar changes, but directly on the desktop(s) where you want to log the logoff process. Edit C:\Program Files\ProfileUnity\Client.NET\LwL.ProfileUnity.Client.exe.config. The settings to change are the same lines as above, but for non-persistent desktops you will need to redirect to a directory outside the desktop, as the logs will disappear once the machine is refreshed after logoff. This is the only change needed to log the logoff process.

Horizon View Linked Clone Disk Provisioning

There are a few older articles floating around discussing disk provisioning types for linked clone View desktops. Since these are a few years old, let's revisit the topic and look at disk configurations for linked clone non-persistent VDI desktops with Horizon View 6. We start with two parent images, both Windows 8.1: one thick provisioned and one thin provisioned.

Browsing the datastores, we see each VM and a delta file for each snapshot the parent has. If we check the VMDK descriptor files, we see one marked thin and one not.

Thin:

# cat Test-Thin-000002.vmdk
# Disk DescriptorFile
....
createType="vmfs"

# Extent description
RW 83886080 VMFS "Test-Thin-000002-flat.vmdk"
....
# The Disk Data Base
#DDB
....
ddb.thinProvisioned = "1"
....

Thick:

# cat Test3_Thick-000001.vmdk
# Disk DescriptorFile
....
createType="vmfsSparse"
....
# Extent description
RW 83886080 VMFSSPARSE "Test3_Thick-000001-delta.vmdk"
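Whether a given descriptor is flagged thin can be checked with a quick grep. The sketch below writes a stand-in descriptor to a temp file so it can run anywhere; against a real datastore you would grep the descriptor .vmdk directly (not the -flat or -delta data file):

```shell
#!/bin/sh
# Stand-in for a real descriptor .vmdk file.
DESC=$(mktemp)
cat > "$DESC" <<'EOF'
createType="vmfs"
ddb.thinProvisioned = "1"
EOF

# The ddb.thinProvisioned flag is what marks the disk as thin.
if grep -q 'ddb.thinProvisioned = "1"' "$DESC"; then
  echo "thin"
else
  echo "thick or sparse"
fi
rm -f "$DESC"
```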

 

We can further determine that the thick provisioned disk is eager zeroed, since no blocks are marked "to be zeroed" (tbz 0) as the disk grows:

# vmkfstools -D Test3_Thick-000001.vmdk
....
Addr <4, 296, 186>, gen 936, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 338, nb 0 tbz 0, cow 0, newSinceEpoch 0, zla 4305, bs 8192

 

Moving on to the replica: in each case, the replica is cloned from the parent image. Since there is a snapshot involved, VAAI is not used, as the snapshot gets committed during the cloning process.

The replica contains a snapshot to make the linked clones as well.

As you can see, the replica is thin provisioned. This looks the same for both parent images:

# cat replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=ba2d9019
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 83886080 VMFS "replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2-flat.vmdk"

# The Disk Data Base
....
ddb.thinProvisioned = "1"
....

For the linked clones themselves, they contain three drives.  The disposable and system drives are both thin provisioned:

Disposable

# cat test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=f1fd4130
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 8388608 VMFS "test1-vdm-disposable-e749d41a-6575-4fdd-828c-003047a21860-flat.vmdk"

# The Disk Data Base
#DDB

....
ddb.thinProvisioned = "1"
.... 

For the system drive, you can see that this is a linked clone from the parentFileNameHint line below.
System:

# cat test1_2.vmdk
# Disk DescriptorFile
....
parentCID=ba2d9019
isNativeSnapshot="no"
createType="seSparse"
parentFileNameHint="/vmfs/volumes/53dfd772-167f05bb-608a-0025b58a000f/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238/replica-dbdde63e-fe3e-4f05-841a-f8c285a02238_2.vmdk"
# Extent description
RW 83886080 SESPARSE "test1_2-sesparse.vmdk"
.... 

The internal disk is thick, lazy zeroed.


For the parent images on an all-flash array, thick or thin provisioning shouldn't matter performance- or space-wise. The only difference I've noticed is that the thick provisioned parent takes around 1-2 minutes longer to clone to the replica, but once the replica is up and running, it's the same configuration and performance from that point on. As for the space savings of thin over thick, dedupe, if working properly, renders this a moot point.

Some people argue that thin provisioning adds additional metadata, non-contiguous blocks, etc., which affects performance, but as seen above, this only affects the parent image, not the replica or the linked clones.

Thin provisioned parent images win for me: I like saving the extra minute or two per pool during recompose operations, with the same performance.

 

XtremIO VAAI Reclaiming Deleted Storage

Reclaiming deleted storage on LUNs is a straightforward task, and XtremIO fully supports UNMAP.

In vSphere 5.1:

SSH into ESXi

cd /vmfs/volumes/<volume_name>

vmkfstools -y <percentage_to_reclaim>

Example:    vmkfstools -y 99
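Note that vmkfstools -y works by temporarily creating a balloon file that fills the given percentage of the datastore's free space, then issuing UNMAP for those blocks. A sketch of the sizing (the numbers are hypothetical):

```shell
#!/bin/sh
# Hypothetical free space on the datastore and the percentage passed to -y.
FREE_GB=500
PCT=99

# The balloon file temporarily consumes this much of the free space.
echo "balloon file: $((FREE_GB * PCT / 100)) GB"
```

This prints "balloon file: 495 GB", which is why running -y 99 on a datastore with active VMs can be risky if anything else needs to allocate space mid-run.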

In vSphere 5.5:

A couple of notes about running this in vSphere 5.5: if -n is not specified, the default number of blocks to UNMAP per iteration is 200. Also, unlike vSphere 5.1, the UNMAP can be run from any directory; you do not need to be in the volume directory to perform this task.

SSH into ESXi

esxcli storage vmfs unmap -l <volume_name> -n <blocks>

or

esxcli storage vmfs unmap -u <volume_UUID> -n <blocks>

Example:    esxcli storage vmfs unmap -l <volume_name> -n 20000

 

Additional Reference Material:

VMware KB2014849 (vSphere 5.1)

VMware KB2057513 (vSphere 5.5)

Horizon View Security Server – Prepare for Upgrade or Reinstallation

You may run into an issue during the upgrade of a security server where the option to prepare for upgrade or reinstallation is greyed out on the View admin page, or receive an error during the security server upgrade similar to:

“Unable to connect to the server <ServerName> on TCP port 8009.  Please check that the specified Connection Server is running and that this TCP port is not being blocked by a firewall”

If so, on both the security server and the paired connection server, open the Windows Firewall with Advanced Security MMC console, click Connection Security Rules, and delete the VMware View Security Server QM Pairing rule.

Once this rule has been deleted from both servers, you should be able to set your Security Server pairing password and pair the servers up again.

 

New Logo

New Logo…  New Theme…  Wordpress spam still sucks…