Replicating your VDisk stores with DFS-R

In this blog post I want to share some experiences and practical tips for using DFS-R to sync local VDisk stores between multiple Provisioning Servers. Let's begin with a quick intro :

If you are streaming your VDisks from the Provisioning Server's local storage, you often want to replicate the stores to other Provisioning Servers to provide HA for the VDisks and target connections. I'm a big fan of running VDisks from local storage because :

– No CIFS layer between the Provisioning Servers and the VDisks, increasing performance and eliminating bottlenecks
– No CIFS single point of failure
– No expensive clustered file system needed to provide HA for the VDisks

Caching on the device's local hard drive or in the device RAM is the best option when using local storage for your VDisk store; this way you can easily load balance and fail over between Provisioning Servers. Of course there are some downsides to placing the VDisks on local storage :

– Need to sync VDisk stores between Provisioning Servers, resulting in higher network utilization during sync
– Double storage space needed
– Not an ideal solution when using a lot of private mode VDisks (the VDisk is continuously in use and cannot sync). Luckily we now have the Personal VDisk option in XenDesktop, so IMHO private VDisks aren't really necessary anymore in an SBC or VDI deployment.

Because one size doesn’t fit all, you can always mix storage types for storing VDisks depending on your needs, but for Standard mode images combined with caching on the device hard drive or device RAM using the local storage of the Provisioning Server is a good option.

Since Provisioning Server version 6, Citrix has added versioning functionality, so you can now easily see whether the Provisioning Servers are in sync with each other, but you have to configure the replication mechanism yourself. I have worked with a lot of different replication solutions to replicate the VDisks between Provisioning Servers, from manual copies to scripts using robocopy and rsync, running both scheduled and manually. Lately I use DFS-R more and more to get the job done. Because DFS-R provides a two-way (full mesh) replication mechanism, it's a great way to keep your VDisk folders in sync, but there are some caveats to deal with. Below I will give you some practical tips and a scenario you can run into when using DFS-R :

Last Writer wins
This is one of the most important things to deal with: DFS-R uses the last writer wins mechanism to decide which file overrules the others. It's a fairly simple mechanism based on time stamps: whoever changes the file last wins the battle, and that copy is synchronized to the other members, overwriting the existing (outdated) files!
If you put this mechanism next to the way Provisioning Server works, you will quickly run into the following trap :

Imagine your environment looks like the image below.


Step 1:
Because you want to update the image, you connect to the Provisioning console on PVS01 and create a new VDisk version; this creates a maintenance (.avhd) file on PVS01. Because this file is initially very small, it will quickly replicate to the other Provisioning Servers.

Step 2:
You spin up the maintenance VM. At this point you don't know which Provisioning Server the maintenance VM will boot from (this is decided by the load balancing rules), so let's say it boots from PVS03.
You make changes to the maintenance image and shut it down.
Now the fun part starts! Depending on the changes you made and the size of the .avhd file, it can take some time to replicate the updated file to the other Provisioning Servers.

Step 3:
In the meantime, still connected to PVS01, you promote the VDisk to test or production.
When you promote the VDisk, the SOAP service mounts the VDisk and makes changes to it for KMS activation etc.

Step 4:
You boot a test or production VM from the new version and you don't see your changes; furthermore, they are lost!
What happened? Well, you ran into the last writer wins trap of DFS-R :
The promote takes place on the Provisioning Server you are connected to, in this example PVS01. PVS01 doesn't have the updated .avhd from PVS03 yet, so you promoted the empty .avhd file that was created when you clicked on new version.
Because the promote action updates the time stamp of the .avhd file, this file replicates to the other Provisioning Servers (again quickly, because it's empty), overwriting the copy with your updates.
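To make the trap concrete, here is a toy model in Python. It is purely illustrative (DFS-R's real conflict resolution is more involved internally), but last writer wins is the effective outcome, and the timestamps and sizes below are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class FileVersion:
    host: str       # server where the last write happened
    mtime: float    # last-write timestamp
    size_mb: int    # payload size (illustrative only)

def last_writer_wins(a: FileVersion, b: FileVersion) -> FileVersion:
    # Simplified DFS-R conflict resolution: the most recently
    # written copy overwrites the others, regardless of content.
    return a if a.mtime >= b.mtime else b

# PVS03 holds the .avhd with your image updates (written at t=100)...
updated = FileVersion("PVS03", mtime=100.0, size_mb=2048)
# ...but the promote on PVS01 touches the still-empty .avhd later (t=105).
promoted_empty = FileVersion("PVS01", mtime=105.0, size_mb=1)

winner = last_writer_wins(updated, promoted_empty)
print(winner.host)  # PVS01: the empty copy wins and replicates everywhere
```

Content plays no role in the decision, which is exactly why the empty promoted file beats the one holding your changes.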

Here are 2 options how you can work around this behaviour :

Option 1 :
After you make changes, wait until the replication is finished (watch the Replication tab in the Provisioning console) and promote the version when every Provisioning Server is in sync.

Option 2 :
If you can't wait, connect the console to the Provisioning Server where the update took place and promote the new version there, so you are sure you promote the right .avhd file.

Below I will give some other practical tips when using DFS-R to replicate your VDisk stores.

1. Ensure you always have enough free space left for staging files and make your staging quotas big enough to replicate the whole VDisk (1.5 times the VDisk size, for example)
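As a quick sanity check, the 1.5x rule of thumb from tip 1 works out like this (the helper function and the sizes are just for illustration, not a DFS-R requirement):

```python
def staging_quota_mb(vdisk_size_mb: int, factor: float = 1.5) -> int:
    # Tip 1 as arithmetic: size the staging quota at roughly
    # 1.5x the largest file (the VDisk) in the replicated folder.
    return int(vdisk_size_mb * factor)

# A 40 GB VDisk would need roughly a 60 GB staging quota.
print(staging_quota_mb(40 * 1024))  # 61440 MB = 60 GB
```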

2. Create multiple VDisk stores for your VDisks; this allows you to create multiple DFS-R replicated folders, and replication works better with multiple smaller folders than with one very large folder

3. Watch the event logs for DFS-R related messages; DFS-R logs very informative events, so keep an eye on high watermark events and other events related to replication issues

4. Check the DFS-R backlog to see what's happening in the background and to check that there are no files stuck in the queue; you can use the dfsrdiag tool to watch the backlog, for example :

dfsrdiag backlog /receivingmember:PVS03 /rfname:PVS_Store_01 /rgname:PVS_Store_01 /sendingmember:PVS01
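If you want to monitor the backlog from a script, you can wrap the dfsrdiag call above and parse its text output. A sketch in Python; the exact output wording ("Backlog File Count", "No Backlog") is an assumption based on what 2008 R2 prints, so adjust the pattern for your OS version and language:

```python
import re
import subprocess

def backlog_count(output: str) -> int:
    # Pull the file count out of dfsrdiag's text output.
    match = re.search(r"Backlog File Count:\s*(\d+)", output)
    if match:
        return int(match.group(1))
    if "No Backlog" in output:
        return 0
    raise ValueError("unexpected dfsrdiag output")

def check_backlog(sending: str, receiving: str, group: str, folder: str) -> int:
    # Runs the same command as the example above (Windows only).
    result = subprocess.run(
        ["dfsrdiag", "backlog",
         "/sendingmember:" + sending, "/receivingmember:" + receiving,
         "/rgname:" + group, "/rfname:" + folder],
        capture_output=True, text=True)
    return backlog_count(result.stdout)

# Parsing a captured output snippet:
print(backlog_count("Member <PVS03> Backlog File Count: 3"))  # 3
```

Scheduled as a small task, this lets you alert when the backlog between two members keeps growing instead of draining.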

5. Exclude .lok files from being replicated; they should not be the same on every Provisioning Server
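DFS-R file filters are simple wildcard patterns set per replicated folder (the default filter already excludes ~*, *.bak and *.tmp). Here is a small Python sketch of how such a filter decides what replicates, with *.lok added for the PVS lock files; the pattern list mirrors the filter you would configure, the code itself is just an illustration:

```python
from fnmatch import fnmatch

# Wildcard patterns for the replicated folder's file filter;
# "*.lok" keeps the per-server PVS lock files out of replication.
FILE_FILTER = ["~*", "*.bak", "*.tmp", "*.lok"]

def is_replicated(filename: str) -> bool:
    # A file replicates only if it matches none of the filter patterns.
    return not any(fnmatch(filename.lower(), pattern) for pattern in FILE_FILTER)

print(is_replicated("win7_gold.vhd"))  # True  (replicates)
print(is_replicated("win7_gold.lok"))  # False (stays local)
```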

6. Plan big DFS-R replication jobs during off-peak hours; while DFS-R is replicating, booting up your targets will be slower. You can also limit the bandwidth used for DFS-R replication traffic

7. Before you start, check your Active Directory schema and domain functional level; if you want to use DFS-R, your Active Directory schema must be up to date and support the DFS-R replication objects. Also note that only DFS-R replication is necessary; no DFS namespaces are needed.

I can be very short here: my conclusion is that DFS-R can be a very nice and convenient way to keep your VDisk stores in sync, but you must understand how DFS-R replication works and how it behaves when combined with Provisioning Server. Hopefully this blog post gave you a better understanding of using DFS-R in combination with Provisioning Server; keep the above points in mind when you consider DFS-R as the replication mechanism for your VDisk stores.

Please note that the information in this blog is provided as is without warranty of any kind.

Using ThreadLocker to control runaway applications

Lately I was troubleshooting some performance issues in a XenApp 6.5 environment. The customer was observing 2 issues: 1) degraded performance in user sessions and 2) scalability problems. The root cause of these issues was fairly easy to detect when looking at the performance history of the XenApp servers: a (cr)application was constantly hitting the CPU, causing CPU spikes and overall performance degradation. The environment wasn't using any form of CPU management other than the default CPU Dynamic Fair Share Scheduling (DFSS) feature in 2008 R2. While DFSS does a great job of equally distributing CPU resources among user sessions, the problem with this application is that it runs away in every user session, resulting in overall performance degradation of other applications and lower scalability of the XenApp farm.

While the software vendor is looking at the issue (I will not call the vendor by name because I don’t want to “name and shame”) a community tool developed by Andrew Morgan named ThreadLocker came to mind. ThreadLocker is a tool that can change the CPU affinity and priority of processes on the fly.

In this blog post I will show you how ThreadLocker helps to improve the overall performance and scalability by changing the CPU affinity of the process that causes the CPU spikes on the fly. Below is a snapshot of the CPU usage before ThreadLocker was running; these spikes are caused by the application running in different user sessions.


The task list reveals the application running away with the CPU resources :


So I installed ThreadLocker and configured the affinity of the process to CPU cores 0 and 1. I didn't change the priority because this caused the application to hang completely :


ThreadLocker is very lightweight and runs as a background service, so it does its job in every user session; there is nothing to configure on a per-user basis. Once ThreadLocker is running, it changes the affinity of the process to CPU cores 0 and 1 at a custom interval :


After ThreadLocker is active, notice CPU cores 2 and 3 on the right when the users start using the application :


ThreadLocker will also write its activity to the event logs when the verbose logging option is switched on; this way you can check how often ThreadLocker switches the affinity.

Conclusion :

In this blog post I showed you a good use case for ThreadLocker: with this tool you can tie processes to certain CPU cores to free up others. This way the runaway application cannot consume all CPU resources on the XenApp servers, so other applications always have free CPU resources to use. Please note that in this use case it's not a permanent solution for the issue: by binding the process to fewer CPU cores, the application itself has fewer resources to consume and will perform slower, but that cost is much lower than the performance and scalability impact on the whole environment. We will use ThreadLocker here until the software vendor has fixed the issue, and ThreadLocker will be in my toolbox when I run into this kind of situation again.

You can download and read more about ThreadLocker here.


Spinning up your Provisioning Services Environment


Just a quick blog before Christmas about options to spin up your Citrix Provisioning environment. As you might know, you have different options when it comes to spinning up your Provisioning target devices. Which one to choose depends on your (network) setup and how much control you as a Citrix consultant\engineer\architect have in the customer's environment. One of the most common boot scenarios is using A: PXE or B: DHCP options, both in combination with TFTP.

With PXE you don't have to configure the DHCP options to provide the TFTP server to the targets, and you can create a kind of redundant configuration by setting up multiple PXE and TFTP servers. But please note that this is not a real HA configuration, because there is no logic involved that controls which TFTP server is provided to the targets. For example, a broken or unavailable TFTP server can be handed to your targets; you can compare this a little with the way DNS round robin works.

With the DHCP options you can only provide one TFTP server to your targets. To enable HA here you can configure a load balancer (NetScaler for example) in front of the TFTP servers. This is more HA than option A, because you can configure the load balancer to check the health of the TFTP servers and bypass TFTP servers that are currently down. But you will need multiple nodes in your load balancing configuration so you don't have a single point of failure.

With both option A and B, TFTP is used to deliver the bootstrap to your targets. But what if we can't use PXE or TFTP because of network constraints, or just because we want to eliminate the whole PXE and TFTP dependency? Yes, we also have the option to create a bootable disk or ISO with the bootstrap embedded; it contains a list of the Provisioning Servers to provide HA for your VDisk.
To create the ISO we use the Provisioning Services Boot Device Manager which is part of the Provisioning Server installation.


After configuring the Provisioning Servers, burn the ISO using the Citrix ISO Image Recorder :


Ok now we have the ISO what should we do with it?
We need to keep in mind that this ISO is now the crucial part of spinning up the targets; without it they simply won't boot.

The following scenarios are possible to provide the ISO to your targets :

1: When using physical targets
You can burn the ISO to CD\DVD, put it permanently in the drive and boot from there. Another alternative is to create a bootable USB drive and put it in the back of the server.

2: When using virtual targets
Here we can leverage the hypervisor to provide the ISO. Because we need to ensure the availability of the ISO, we don't want to put it on a remote (CIFS\NFS) ISO share, because that would be a single point of failure again. If possible, use the local storage of the hypervisor to store the ISO.

With VMware ESX and Hyper-V it's easy: just put the ISO on the local disk or datastore and attach it to the VM. You can also create a template afterwards, so if you use XenDesktop\PVS to automatically create VMs for you, the ISO is already connected when the VM is turned on.

But what about XenServer? We only have the option to create a CIFS or NFS based ISO repository from XenCenter, and that's something we don't want in this case. Below is a procedure you can use to create a local ISO repository in XenServer, so you can attach the ISO from there :

– Connect with WinSCP to the XenServer host
– Create the following folder :  /var/opt/xen/local_iso
– Connect to the XenServer console to access the command line interface
– Create the Local ISO Repository with the following command :

xe sr-create name-label="Local ISO" type=iso \
  device-config:location=/var/opt/xen/local_iso/ \
  device-config:legacy_mode=true content-type=iso

– Copy the PVS boot ISO to the /var/opt/xen/local_iso folder
– Check the SR in XenCenter :


And the content of the SR :


* Please note: do not use the local ISO repository to store other big ISOs, because it has limited space available.

– Finally attach the ISO to your VMs and\or templates and check the result :


Conclusion :

Using a bootable ISO is a great way to overcome the network related issues that can occur when using TFTP and\or PXE. DHCP will always play an important role in your Provisioning Services environment, so use split scopes or clustering there to guarantee the uptime of your Provisioning environment. Provisioning Services comes with a lot of "moving parts" compared to MCS, but with this boot option you can at least eliminate a few of them. It's nice to see PVS is still alive in Excalibur so there is room for MCS to grow; the power of choice!

From here I wish everybody a merry Christmas and a happy new year!


A Deeper look into Workspace Control and its challenges

Table of contents :

1: Introduction
2: The Workspace Control process in UML
3: Workspace Control and Auto launch usability
4: Special note : Using Workspace Control with Receiver for Web
5: Special note : Using Workspace Control with Linux Receiver
6: Special note : Using Workspace Control with Native Receiver and Mobile Receivers
7: Conclusion

1: Introduction :

In this blog post I will dive a little deeper into the Citrix Workspace Control functionality; I will show you how the Workspace Control process works and what kind of challenges it can bring to the table. For those not familiar with Workspace Control: it's the mechanism in Webinterface and Storefront that lets users (automatically) reconnect to existing sessions when they are roaming. To start with some basics: Workspace Control can reconnect two types of sessions, active sessions and disconnected sessions.

The Workspace Control process is triggered in the following 2 scenarios :

– The Automatic reconnect process when a user logs on to Webinterface or Storefront
– The Manual reconnect process when a user clicks on the reconnect button in Webinterface or Storefront

Workspace Control isn’t triggered in the following scenarios :

– When a user clicks on an application or desktop manually
– When Auto launch is enabled
– When you connect straight to a XenApp server, without using Webinterface or Storefront
– When Workspace Control is disabled, either by admin or user
– When one of the conditions shown in the UML flow diagram in the next chapter applies

2: The Workspace Control process in UML

Below you will find a UML diagram that shows the Workspace Control process for both the automatic and the manual reconnect process. (click to enlarge)

3: Workspace Control and Auto launch usability

Auto launch is a feature you can configure in Webinterface or Receiver for Web to launch a specific application or desktop after a user logs on; this functionality is especially useful when you have only one published desktop. This way the user doesn't have to click the desktop icon; instead it launches directly after the user authenticates. The downside of Auto launch, however, is that it disables the Workspace Control functionality at logon. This isn't necessarily a problem when using XenDesktop, but for XenApp it can result in multiple sessions being launched by the user, which is something you often want to prevent. It gets worse when you have session limits in place, because this can even prevent the user from logging in when there is already an active session. To overcome this I often use one of the two methods below; please leave a comment if you use other methods to solve this challenge for XenApp.

1: Restrict users to a single session, Enable Workspace Control and Disable Auto launch
In this scenario the user logs on to Webinterface or Storefront; when there is no active\disconnected session, the user clicks on the desktop or application and launches his session. When the user logs on from another device, Workspace Control kicks in and reconnects the session immediately after the user authenticates. The user can also choose to manually disconnect and reconnect the session through Workspace Control. In this scenario Workspace Control in combination with session limits is used to control the number of sessions on the XenApp farm.

2: Allow multiple sessions, Enable Auto launch and use RES session guard
If you have RES Workspace Manager in place, you can leverage the RES Session Guard feature to notify users of already active sessions and let them disconnect those on the fly. If you educate the user on how to use this, Session Guard can be a really handy feature to simulate Workspace Control-like functionality. Plus you can also define a group of users (through a dummy administrative role) that can log in multiple times; this is especially handy when you have shared accounts or a couple of users that need to log on multiple times concurrently. The only downside of this method is that the user needs to log in twice when an active session is detected: once to disconnect the session and once to reconnect to the disconnected session. The upside is that you can enable Auto launch to immediately launch the desktop after the user authenticates, because RES Session Guard controls the number of sessions and you don't have to use Workspace Control in combination with session limits anymore.

Below is a screenshot of RES Session Guard in action when multiple sessions are detected :

Session Guard detects multiple sessions by creating a lock file (guard.lock) inside the home folder; the above message is displayed when the file is detected, and after the user logs off the lock file is removed.
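The lock-file mechanism described above is easy to picture in a few lines of Python. This is my own sketch of the idea, not RES code; only the guard.lock file name comes from the product, the function names are made up:

```python
import os

LOCK_NAME = "guard.lock"  # the file name Session Guard uses

def session_active(home_folder: str) -> bool:
    # An existing lock file means another session is (still) active.
    return os.path.exists(os.path.join(home_folder, LOCK_NAME))

def acquire_session(home_folder: str) -> bool:
    # At logon: refuse if the lock exists, otherwise create it.
    if session_active(home_folder):
        return False
    open(os.path.join(home_folder, LOCK_NAME), "w").close()
    return True

def release_session(home_folder: str) -> None:
    # At logoff: remove the lock so the next logon succeeds.
    try:
        os.remove(os.path.join(home_folder, LOCK_NAME))
    except FileNotFoundError:
        pass
```

This also explains the double-login behaviour mentioned above: the first logon only clears the stale session (and its lock), and only the second logon gets past the check.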

4: Special note : Using Workspace Control with Receiver for Web

From Storefront version 1.2 onwards, the Auto launch feature is enabled by default in Receiver for Web when there is only one published desktop available for the user; this also means that Workspace Control is disabled by default. But the most important thing to note is that since the separation of Apps and Desktops in Receiver for Web, the Workspace Control functionality is completely disabled in the Desktops section (see the screenshot below for clarification). This makes sense when using XenDesktop, but you may still want to use Workspace Control for your published XenApp desktops. To enable Workspace Control for published XenApp desktops, you can put a keyword in the published XenApp desktop's description field (KEYWORDS:TreatAsApp) so Receiver for Web treats it as an application instead of a desktop, thus enabling Workspace Control again.

Workspace Control (for the Apps section) can be configured by editing the web.config file in the inetpub directory of the Storefront server; this is a little step backwards, because with Webinterface you could do this through the GUI.

5: Special note : Using Workspace Control with Linux Receiver

Linux Receivers found in Linux-based thin clients often use the PNAgent site to request and launch published applications and desktops; the key component used here is the PNABrowse utility. PNABrowse is often scripted by the thin client manufacturer and embedded in the customized thin client OS, like the HP ThinPro and HP Smart Client series. PNABrowse uses the -WR parameter to leverage Workspace Control and connect to active and disconnected sessions. You can control the Workspace Control behavior for these clients by editing this parameter in the following way :

-WR = Connect to Active and Disconnected sessions
-Wr = Connect to Disconnected sessions only
Omit the -WR parameter = don't use Workspace Control

There will be a new Linux Receiver released by the end of the year that can connect natively to Storefront without using Legacy PNAgent.

6: Special note : Using Workspace Control with Native Receiver and Mobile Receivers

For the Windows native Receiver and mobile Receivers connecting to Storefront, there is currently no good way to configure Workspace Control; it's always enabled by default. Native Receivers are a good launch platform for published applications and data, but they lack the options to fine-tune for different access scenarios like Workspace Control or Auto launch; you have to use Receiver for Web or Webinterface if you want to control this. Also, the native Receivers don't yet differentiate between applications and desktops (only apps and data), so Workspace Control is enabled for both by default, which can be a problem. Hopefully Citrix will add more options to control this behavior in future releases of Storefront.

7: Conclusion

The main goal of Workspace Control is to reconnect published applications when users are roaming, but it's also often used in combination with session limits to control the number of user sessions and optimize the user experience. It's clear that Citrix is moving away from Workspace Control for published desktops in favour of Auto launch, but for XenApp this can be a challenge. You can still leverage Workspace Control in newer versions of StoreFront by treating the published XenApp desktop the same as a published application. Maybe the new FlexCast Management Architecture (FMA), which is part of the upcoming Excalibur release, will change the way we deliver and reconnect to XenApp published desktops, so that Workspace Control isn't necessary anymore to control the user sessions. In the meantime you can use either Workspace Control in combination with session limits or RES Session Guard to optimize the user experience and control the number of user sessions in your XenApp farm.


Cloud Gateway a Wrap-Up so far Part 2


Table of Contents :

1 : Introduction
2 : Cloud Gateway architecture and components
3 : Cloud Gateway Mobile Experience (MDX) Technology
4 : Access Gateway : ICA Proxy, Clientless VPN and Secure Browse
5 : Native Receiver VS Receiver for Web
6 : ShareFile
7 : Conclusion

1 : Introduction

A few months ago I wrote a wrap-up about Citrix Cloud Gateway and the upcoming 2.0 release; it's one of the posts on my blog that gets the most hits, so I thought I should write a follow-up. If you are new to Cloud Gateway, you should read my previous wrap-up first to get a better understanding of the architecture. In this post I'm going through the new features of Cloud Gateway 2.0 and things I came across when setting up a demo environment.

2: Cloud Gateway architecture and components

Cloud Gateway consists of the following key components :

– Appcontroller for user provisioning and SSO to Web (internal & external), SaaS and native mobile apps
– Citrix Receivers for connecting to Cloud Gateway

Optional components are :

– Storefront Services for connecting to the XenApp & XenDesktop back-end and providing access through Receiver for Web \ HTML5
– Access Gateway Enterprise (Netscaler VPX/MPX) for external access to Cloud Gateway
– Merchandising Services for controlled plugin distribution
– Integration with ShareFile infrastructure (Follow-Me-Data)

All optional components are included in the current Cloud Gateway Enterprise edition, except for the ShareFile subscription fee and the Access Gateway Platform license (universal licenses are included).

Citrix made a clever move by making Storefront Services an optional component of Cloud Gateway; this way they can sell Cloud Gateway as a separate stand-alone product, but on the other hand offer tight integration through Storefront for existing Citrix customers. I think the majority will use Cloud Gateway to extend their current Citrix back-end, so Storefront will play a key role in most environments. In the use case where Storefront is not used, Citrix Receiver connects straight to Appcontroller, or indirectly through Access Gateway to Appcontroller.
Because I think a picture speaks a thousand words, I made a basic diagram of the Cloud Gateway components, including ShareFile :

3 : Cloud Gateway Mobile Experience (MDX) Technology

Cloud Gateway MDX is the new marketing term for the features of Cloud Gateway, you can compare it with the marketing term HDX which stands for all the user experience optimizations around the ICA protocol. Let’s translate the MDX features into some more technical descriptions :

Feature | Description
MDX App Vault | Sandboxed container controlled by Citrix Receiver which can be remotely wiped
MDX Web Connect | Embedded (mobile) web browser for secure browse connections
MDX Micro VPN | Client side rewrite technology through Netscaler (Secure Browse)
MDX Policy Orchestration | Management of (native) mobile apps through Appcontroller, providing smart access like features

These MDX features give IT control and security over their apps and data, but at the same time give the users the freedom to control their own mobile device. I think this is the best of both worlds, and it prevents users from working around the system.

MDX Ready Program
Citrix will initiate an MDX Ready partner program to validate apps for use with MDX. Citrix itself will release a native MDX e-mail app which runs inside the App Vault container and doesn't expose itself to other apps on the mobile device, improving security.

4: Access Gateway : ICA Proxy, Clientless VPN and Secure Browse

Ok, now things are getting very interesting; let's start with Access Gateway…

Access Gateway support
Cloud Gateway MDX features are only available in conjunction with Access Gateway Enterprise (Netscaler); there is no support for Access Gateway VPX (STD\ADV). In fact, Citrix will shortly announce that Access Gateway VPX will go end-of-life. I wrote a blog about the future of Access Gateway a while ago, in which I shared my thoughts on why Citrix would support 2 products with almost identical features and why I think Access Gateway Enterprise (Netscaler) is the better product. At first I thought they should keep Access Gateway VPX as a free replacement for Secure Gateway to provide basic connections, but then they would still need to support 2 products. If we look at the VPX appliances, we have the Netscaler VPX and the Access Gateway VPX; if we look at the MPX appliances, we have the Netscaler MPX and the Access Gateway Enterprise MPX. Did you spot the outsider? Yep, the big difference is that Access Gateway VPX is a very different product, while the rest is Netscaler, only licensed differently. If we look closer, there is one edition missing :

Access Gateway Enterprise VPX
To replace Access Gateway VPX, there will be an Access Gateway Enterprise VPX edition (finally!); this is a stripped-down Netscaler which only gives you access to the Access Gateway features. Citrix will also provide trade-ups and will adjust the pricing accordingly. So in the end we will have the following Access Gateway appliances left :

CAGEE Physical Appliance (MPX) | CAGEE Virtual Appliance (VPX)
MPX 5500 | Access Gateway Enterprise VPX
MPX 7500/9500 |
MPX 9700/10500/12500/15500 |

If you have a Netscaler MPX\VPX appliance, you can enable the Access Gateway component and use it alongside all the other Netscaler functionality.

Access Gateway Policies and Profiles
The power of Access Gateway Enterprise, IMHO, is its flexible architecture; because of its modular design, Access Gateway Enterprise fits like a glove in a lot of different environments and access scenarios. Since the Cloud Gateway release, a lot of Access Gateway policies and profiles are needed for the different access scenarios and Receiver types, and it takes some time to create and configure them all manually. To address this, Citrix added a wizard which will create a baseline of policies and profiles for you. After the baseline is set you can still configure and adjust everything according to your needs, so the wizard helps you with the baseline without compromising flexibility afterwards.

ICA Proxy
ICA Proxy enables you to do SSO pass-through to Webinterface; it doesn't use Clientless VPN, so you can use it on a VIP in basic mode. ICA Proxy provides the same functionality as Secure Gateway, which means basic ICA connections only. It is not documented anywhere, but I can confirm that ICA Proxy still works with Receiver for Web, so if you want to provide basic access to XenApp or XenDesktop you can use it just like with Webinterface. If you want more Cloud Gateway functionality, you need Smart Access functionality like CVPN :

Clientless VPN
Clientless VPN (CVPN) has been around in Access Gateway for a while now; this server-side rewriting technology is great for providing access to OWA and other webapps behind Access Gateway. The downside of CVPN is that not every webapp supports it; I can remember spending a lot of time troubleshooting rewrite policies for intranet applications. But when it works, it works pretty well, and you don't have to leverage a full VPN tunnel to allow secure access to webapps. In a Cloud Gateway setup, CVPN is only used for traffic to Receiver for Web and traffic to Appcontroller; besides that, CVPN is turned off, so external Web\SaaS apps are not rewritten by Access Gateway but instead opened directly by Receiver (after SSO is done by Appcontroller). For internal webapps there is a new feature in Cloud Gateway :

Secure Browse
Secure Browse is one of the features I'm most excited about: instead of leveraging CVPN or a full VPN tunnel, the Receiver for iPad uses a secure channel between Receiver and Access Gateway called Secure Browse (or MDX Micro VPN). Secure Browse provides secure, session-based access to internal webapps behind Access Gateway. Another cool thing about Secure Browse is that it uses an embedded web browser (MDX Web Connect) to render both internal and external webapps, controlled by Appcontroller. Web Connect is totally controlled by Citrix Receiver and doesn't expose critical data on the mobile device. Secure Browse will support any webapp, because nothing is rewritten by Access Gateway, so no more troubleshooting of rewrite policies and broken links.
Secure Browse is currently only available in the Receiver for iPad; other Receivers still leverage a full VPN tunnel to provide access to internal webapps, but it's expected that Citrix will extend Secure Browse to other Receivers as well.
I tested Secure Browse with OWA and SharePoint and it works really well; I can't wait to see this functionality on other Receivers too.

5: Receiver for Web VS Native Receiver

In my previous wrap-up I talked about the difference between Receiver for Web and Webinterface. It’s clear that there are still features missing in Receiver for Web in comparison to Webinterface, but Citrix is closing this gap in upcoming releases of Storefront; in version 1.2, for example, they already made some enhancements by separating Desktops from Apps and allowing user initiated desktop restarts. In this wrap-up I made a list of the functional differences I noticed when using the Receiver for Web and the Native Receiver. Please note that this list can quickly become outdated when updates of the Receivers are released.

Feature                                 Receiver for Web       Native Receiver
User initiated Desktop restarts         Yes                    No
Desktop viewer used                     Only for XenDesktop    Also for XenApp published desktops
ShareFile integration                   Through web SSO        Embedded in Receiver
Location aware with Web Beacons         No*                    Yes
SSO through Appcontroller               Yes                    Yes
Initiate full VPN by clicking app       No                     Yes
Separated views for Apps and Desktops   Yes                    No
Auto subscribed applications            Yes                    Yes
Auto launch applications                Yes**                  No
Sticky (mandatory) applications         Yes***                 No

* When connecting with Receiver for Web through Access Gateway, the connection is always established through Access Gateway. In Webinterface it was possible to configure the connection method based on the IP range; Web Beacons make the decision based on internal and external URLs, but this only works with the Native Receiver.

** Auto launch is enabled by default in Receiver for Web when there is only 1 published desktop. If you have more desktops published you can manually edit the Default.htm.script.min.js file to define which one should be auto launched; it is expected that auto launch will be enabled through a keyword, just like auto subscribed applications.

*** Sticky applications removes the delete cross when you hover over the app icon. To enable this you need to manually edit the Default.htm.script.min.js file; it is expected that sticky apps will also be enabled through a keyword in the app description.
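To make the Web Beacons decision from the first footnote a bit more concrete, here is a small sketch of the logic (purely illustrative: the URLs and function names are invented, this is not Citrix’s actual Receiver code):

```python
def choose_route(probe, internal_beacons, gateway_url):
    # If any internal beacon answers, we are on the corporate
    # network and can connect directly; otherwise the connection
    # is routed through Access Gateway. The `probe` callable is
    # injected so the decision logic stays self-contained.
    for beacon in internal_beacons:
        if probe(beacon):
            return ("direct", None)
    return ("gateway", gateway_url)

# A fake probe that only "sees" the intranet beacon:
reachable = {"http://beacon.corp.local"}
internal_route = choose_route(lambda url: url in reachable,
                              ["http://beacon.corp.local"],
                              "https://ag.example.com")
external_route = choose_route(lambda url: False,
                              ["http://beacon.corp.local"],
                              "https://ag.example.com")
```

The key point is that the client makes the decision by probing URLs, which is exactly why Receiver for Web (a server side page) can’t do it and always goes through Access Gateway.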

So it’s clear that there are some functional differences between these Receivers; which one you should use depends on the functionality you want and the access scenario you need to provide.
For example: if you want ShareFile integration besides published apps, go for the Native Receiver; if you want to provide a (kiosk) access model for published desktops, go for Receiver for Web.

6: ShareFile

ShareFile is another element of Cloud Gateway I’m very excited about. If you integrate ShareFile into your Cloud Gateway setup, you can give your users a true follow-me-data experience on every device without compromising on security: IT can remotely wipe data from lost or stolen devices, and data is stored in an encrypted format to further improve security. Data is also available offline with ShareFile sync. Besides follow-me-data, ShareFile enables users to share (large) files with colleagues but also with external contacts; you can trace who downloads files or control the number of downloads. Besides that, file versioning, drive mappings and Outlook integration are all features that are available with ShareFile. Some elements are comparable with other follow-me-data solutions like RES HyperDrive, which is also a nice on-premise follow-me-data concept from RES Software, but because ShareFile is now part of Citrix there is tight integration with Cloud Gateway and the Receivers.

ShareFile and Appcontroller
You can establish a SAML trust between Appcontroller and ShareFile to provide account provisioning; this way your AD users are automatically created inside ShareFile. I noticed in my demo environment that this process sometimes takes a while, so before you start troubleshooting, wait a bit. Also be sure to create a role for ShareFile users inside Appcontroller, because if you select All Users, every AD account will be created in ShareFile, consuming all your licenses (I learned this the hard way). What’s very cool is that you can configure ShareFile to only allow authentication through Appcontroller and SSO; this way users cannot connect to ShareFile directly, further improving security. If SSO to ShareFile isn’t working, check the time settings on Appcontroller: if the clock is off by a few minutes, SSO doesn’t work correctly. It took me some time to figure that out.
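The time sensitivity comes from the SAML assertions exchanged between Appcontroller and ShareFile: an assertion carries a short validity window that is checked against the validating server’s clock. A minimal sketch of such a check (the two-minute skew tolerance is my own assumption for illustration, not a documented Appcontroller setting):

```python
from datetime import datetime, timedelta

def assertion_is_valid(not_before, not_on_or_after, now,
                       allowed_skew=timedelta(minutes=2)):
    # SAML-style validity check: `now` must fall inside the
    # [NotBefore, NotOnOrAfter) window, widened by a small allowed
    # clock skew. If the validating server's clock drifts more than
    # the skew, every assertion is rejected and SSO silently fails.
    return (not_before - allowed_skew) <= now < (not_on_or_after + allowed_skew)

issued = datetime(2012, 9, 1, 12, 0, 0)
window = (issued, issued + timedelta(minutes=5))
in_sync = assertion_is_valid(*window, now=issued + timedelta(seconds=30))
drifted = assertion_is_valid(*window, now=issued - timedelta(minutes=10))
```

With a clock that is ten minutes off, every assertion lands outside the widened window, which is why an NTP-synced Appcontroller fixes this class of SSO failures.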

ShareFile Storage Zones
With ShareFile Storage Zones you can control where data is stored, and a Storage Zone can also be on-premise. More awesomeness will come in future releases of Storage Zones, when we can connect ShareFile to existing CIFS shares and SharePoint environments. Imagine the possibilities!

7: Conclusion

Congratulations, you made it to the conclusion section 😉 Sorry it was a bit of a lengthy post, but that’s because Cloud Gateway and the surrounding technologies are a big deal and there is so much to cover. Cloud Gateway lets you aggregate and secure cloud services and on-premise services into one logical logon point with the same look and feel on every device. Integration plays the key role in Cloud Gateway. I’m very excited about Cloud Gateway 2.0 and the upcoming features: on the one hand it gives IT the flexibility and control over data, apps and security they need, and on the other hand it gives users the freedom to choose whichever device they like to use. I really think this concept is the future and that it will change the traditional desktop as we know it today.

If you want to be notified when Part 3 or another blogpost comes out, subscribe to my blog site or follow me on Twitter : @bramwolfs

Please note that the information in this blog is provided as is without warranty of any kind, it is a mix of own research and information provided by Citrix. Some information is based on speculations and predictions.

Citrix HDX SoC finally the Thinclient becomes Thin again!

Table of contents :

1: HDX System on a Chip architecture
2: SoC and Citrix Receiver
3: SoC and RemoteFX
4: SoC and Windows Embedded
5: Conclusion

1 : HDX System on a Chip (SoC) architecture

In this blog we are going to take a closer look at the SoC architecture, and I will tell you why I think this is going to change the Thinclient industry.

What is a SoC?

A system on a chip (SoC) is an integrated circuit that integrates all components of a computer or other electronic system into a single chip.

This single chip contains both hardware and software components, the latter for controlling the hardware components inside the SoC. Please note that the Citrix Receiver software isn’t part of this; the on-chip software consists of codecs and other controlling software bound to the SoC itself.

The following components are part of the HDX SoC :

– ARM based CPU
– DSP (Digital Signal Processor)
– DMA (Direct Memory Access)
– NIC, Audio, Video and USB Controller
– Multimedia encoders/decoders

As you can see there is a lot of stuff in the chip; you can almost say that basically everything you need is on this chip, only memory and storage are external. This single chip approach is also taken by many other devices such as phones and tablets.

To make the chip more visual I made a drawing of the SoC architecture, please note that this is not a diagram of how the components interact with each other, but just a basic overview of the components inside the SoC :

The DSP is an interesting one: this component is used for image decoding, which would otherwise be done by the CPU. With the DSP taking care of the heavy lifting, CPU cycles are freed up for other processing tasks, increasing performance. Because the SoC is built from industry standard building blocks, the costs are kept to a minimum; so while performance goes up, costs go down, and that’s what I call a win-win situation 😉

2: SoC and Citrix Receiver

Citrix Receiver itself isn’t part of the SoC architecture. This has the huge advantage that the Receiver software can be modified by Citrix without the need to update the SoC, so Citrix can add features to Citrix Receiver without being slowed down by the SoC vendors. To leverage the components on the SoC, Citrix provides a modified version of the Citrix Receiver for Linux. The process that takes place is fairly simple :

1: The SoC architecture exposes APIs to the underlying OS; this is done by the SoC vendor
2: Citrix Receiver checks if these APIs exist when booting up
3: If the APIs are in place, Citrix Receiver will use the components inside the SoC and start offloading image processing, to the DSP for example
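The three steps above can be sketched like this (purely illustrative: the API shape and names are invented, not the real Receiver for Linux internals):

```python
def init_decoder(soc_api=None):
    # Boot-time check sketched from the steps above: if the SoC
    # vendor exposed an offload API to the OS, route image decoding
    # to the DSP; otherwise fall back to decoding on the CPU.
    if soc_api is not None and "dsp_decode" in soc_api:
        return soc_api["dsp_decode"]          # hardware offload path
    return lambda frame: ("cpu", frame)       # software fallback

# On a SoC device the vendor API is present, elsewhere it is not:
with_soc = init_decoder({"dsp_decode": lambda frame: ("dsp", frame)})
without_soc = init_decoder(None)
```

The point is that the same Receiver binary runs everywhere; the offload path is only taken when the vendor APIs are detected at startup.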

To illustrate this, I added the Citrix Receiver into the picture :

3: SoC and RemoteFX

While Citrix started the SoC initiative with a few chip manufacturers, the SoC isn’t reserved for use with Citrix HDX only; Thinclient vendors also support other remoting protocols on Thinclients with the SoC on board. RemoteFX, for example, can leverage the DSP as well. Now this combination is interesting, because you might think that RemoteFX features only work on Windows clients with a supported version of RDP; since this SoC consists of an ARM CPU and is Linux based, you would not expect RemoteFX features there. Well, this is done through the open source RDP client named FreeRDP, which can leverage the DSP to offload image processing as well. Please note that FreeRDP currently only supports RDP 7 in combination with Win7\2008R2; there is no support for the new RemoteFX features in RDP 8 (Win8\2012) yet, and there is also no official statement that Microsoft is going to support FreeRDP with the new features in RemoteFX.

4: SoC and Windows Embedded Thin Clients

The first HDX SoC is coming with an ARM based CPU, which means no support for Windows Embedded; Citrix Receiver for Linux is initially the only client from Citrix that supports the SoC. The x86 based HDX SoC will follow later. As you might know, Microsoft will release an ARM based version of Windows 8 (Windows RT). The interesting part is that there will also be a Windows Embedded 8 ARM version; this OS will be more suitable for Thinclient hardware because it’s cheaper and consumes less power. WES nowadays isn’t really made for running on cheap Thinclient hardware; it’s like a big beast in a tiny cage, slow and clunky to say the least. If you are looking for a way to tame the beast you should take a look at Thinkiosk from Andrew Morgan, which provides a uniform interface across all your Windows Fat and Thinclients. There is also an interesting comparison between Fat and Thinclient hardware by Kees Baggerman and Barry Schiffer which shows some interesting results and discussions.

While the Linux based SoC Thinclients have many advantages regarding performance, small footprint and cost, there are some special use cases where you need Windows on your Thinclient end-point, for example when you need local printer or scanner redirection based on Windows drivers. I don’t think you need to make the decision based solely on HDX features anymore, because the most important features are covered in both versions (Windows and Linux), but there are certainly use cases that need Windows Thinclients. My point is: don’t buy them only because there is Windows on it, which sounds safe; examine the use cases and mix and match!

A little bit off topic, but I summarized a few highlights of Windows Embedded 8 that look very interesting :

– Hibernate-Once-Resume-Many restarts devices the same way every time
– Drive efficiencies by creating a custom image with only necessary functionality included
– Keyboard Filter blocks special key combinations on both physical and virtual keyboards
– Suppress Windows system dialogs with the Dialog Filter
– Manage and configure lockdown technologies with the Unified Configuration Tool
– Embedded Device Manager, together with SCCM, for operating system and application deployment

5: Conclusion

A lot of Thinclients (especially the more powerful WES variants) are almost in the same price range as a normal Fat client PC. When you are designing a VDI\SBC environment this can really be a show stopper, because the goal of those projects is often to save money.
Now with the arrival of SoC the Thinclient is finally getting “Thin” again: Thin in form factor and price, but not in performance!

5 Reasons why I think SoC Thinclients are going to take over the current Thinclient market :

1: Much cheaper to manufacture; because the SoC consists of industry standard building blocks it’s cheaper to manufacture than individual chips and components
2: The SoC uses far less energy to operate; there are even Thinclients coming that run on PoE
3: It fits in more and smaller form factors; because it’s one chip it’s easier to integrate into other devices, such as monitors (or even TVs)
4: The SoC is future proof; it contains standardized codecs, and Citrix Receiver can be upgraded apart from the SoC
5: Performance wins; this is one of the biggest enhancements IMHO, because of the offloading and intelligent use of the SoC components performance is dramatically improved

I think with the coming of SoC we are also a step closer to nirvana phones, which I really think are the future together with BYOD. Just dock your phone and log in; maybe Citrix should work together with some phone manufacturers to get some HDX ready phones on the market 😉

Please note that the information in this blog is provided as is without warranty of any kind, it is a mix of own research and information from the following sources :

– Citrix unveils SoC initiative
– HDX SoC on Synergy 2012
– SoC WIKI reference
– SoC spurs innovation
– New Citrix Blog post from Vipin Borkar : Evolution of HDX SoC 

Customizing the HP Smart Client

There is a great blog post from Ingmar Verheij explaining the new HP Smart Client software, which is part of the new HP flexible Thin Client series (T410, T510, T610). Please read his blog to get a better understanding of the HP Smart Client software.

In this blog I will show you how you can customize the appearance of the Smart Client, I will give an example how to change the login page when connecting to a Citrix back-end, but you can also use this information when customizing the Smart Client for other protocol connections.

Ok lets start from scratch :
When you configure the Smart Client for use with XenApp\XenDesktop you will get the following logon screen :

(Screenshot is taken with a camera)

As you can see, HP made a default page that looks very similar to the Citrix Web Interface 5.4 layout. In the Smart Client admin guide there is a chapter about customizing the layout, but I found it very unclear. So my goal was to get the existing files from the Smart Client as an example of how this login page is constructed. Because every door on the Smart Client is locked in terms of remote file management, there is no easy way to get to the files on the Smart Client.

To fetch the files I used the drive mapping feature of the Citrix Receiver for Linux. The default location of the drive redirection is /media, the mount point in Linux for USB sticks and other storage devices. By default this folder is mapped as drive letter Z: in the session. In the profile editor, enable drive mapping and change the drivePathMappedOnZ value to /etc, see the screenshot below :

After rebooting the Smart Client, log on to a XenApp\XenDesktop session and open drive letter Z:. You will now see the content of the /etc folder from the Smart Client. Browse to the folder hptc-zero-login/styles; there you will find all the default styles for the different connection protocols, see screenshot :

In this case we will open the xen folder, because we want to edit the Citrix login page styles. Copy the folder from the Smart Client to a different location so we can start editing. In the folder you will find these two files :


This file contains the layout settings of the login screen. It’s very easy to edit this file and make customizations to it, so I will not cover all the options but instead give some examples :

Change the background color to white :

global {
color: FFFFFF; # White
padding: 20; # 20 pixels
}

Change the default footer text :

text {
name: ad line;
text: Welcome, type in your user name and password to continue;
position: 50%,85%;
alignment: hcenter vcenter;
color: 000000;
font-size: 16pt;
max-width: 98%;
context: login;
}

Change the logo :

image {
name: computers image;
source: /usr/share/icons/hptc-zero-login/mypicture.png;
position: 50%,50%;
alignment: hcenter vcenter;
context: login;
}

As you can see, the image source directory is under /usr. If you want to retrieve those files, simply change the /etc drive redirect folder from the previous step to /usr and browse to the /usr/share/icons/hptc-zero-login folder.


In this file you can edit the dialog text, in this example I will change it to Dutch :

LoginArea QLabel#loginHeader {
qproperty-text: Welkom;
color: white;
font-size: 20pt;
text-align: left;
}
LoginArea QLabel#userLabel {
qproperty-text: Gebruikersnaam;
color: white;
font-size: 12pt;
}
LoginArea QLabel#passwdLabel {
qproperty-text: Wachtwoord;
color: white;
font-size: 12pt;
}
LoginArea QLabel#domainLabel {
qproperty-text: Domein;
color: white;
font-size: 12pt;
}

Ok, now we want to deploy these custom files to all the Smart Clients out there. To do this, open the Profile editor, go to the Additional Configuration Files section and add the files like this :

Now reboot the Smart Client and voilà :

Conclusion :
In this blog post I explained how you can customize the appearance of the HP Smart Client. You can also use this drive redirect option to view other files on the Smart Client, such as the ICA client files. This way you can deploy custom settings to the Smart Client by editing the files and deploying them through the Smart Client Profile editor. This is useful when you cannot find the setting among the registry options in the Profile editor.

* Note: you can also redirect the style directory so you can place the files in a different folder. In this case I used the default locations, but the style directory can be set to a custom location with this option :

Follow me on twitter (@bramwolfs) if you want to be notified when a new blog post is available!

Please note that the information in this blog is provided as is without warranty of any kind.

How to configure WebTrace functionality in RES Workspace Manager


If you have read my previous blog posts, you will notice that they are all about Citrix products. The reason for this is simple: I just have a strong passion for the Citrix product portfolio and I have been working with it for a long time now. Even in my current role as “Independent” Technical Consultant, I sometimes doubt how independent I really am. This doesn’t mean I dislike competitors, but everybody has their preferences. I share the same passion and commitment for products from RES, and especially for the combination of the two.

In this first RES blog post I want to talk about the WebTrace functionality in RES Workspace Manager. WebTrace allows you to track the internet usage of your users; of course this must be allowed by company policies, and the users should be aware that their internet behavior is monitored.
WebTrace is one of the few components in RES Workspace Manager that doesn’t completely work out of the box: when you enable it in a WIN2008R2\WIN7 environment in combination with Internet Explorer, you will soon see that no internet traffic is being logged.
This is mainly because the WebTrace Browser Helper Object (BHO) is blocked by Internet Explorer Protected Mode, which is switched on by default for the Internet security zone.

To enable the WebTrace functionality you can follow this procedure :

Step 1: Switch on WebTrace in the Workspace Manager Console

Switch on the following option under Setup -> Usage Tracking :

Step 2: Configure an Internet Explorer policy to enable the WebTrace BHO for the users

Before we do this, we first have to look up the Class ID of the WebTrace BHO by opening the properties of the BHO; the BHO can be found in Internet Explorer under Manage Add-ons :

Ok, now that we know the BHO Class ID we can create the Internet Explorer policy. To enable the WebTrace BHO and block users from disabling it, follow these steps :

– Create a new (or edit an existing) policy based on inetres.admx
– Search the policy for the option “Add-on List” and open it
– Fill in the Class ID {65363486-5B64-4199-9087-2CB3543A3BDC} with a value of 1, see screenshot :

– Enable the option “Do not allow users to enable or disable add-ons” (to block users from disabling the BHO)
– Enable the option “Disable add-on performance notifications” (to avoid performance popups for startup times of the add-ons)

The policy now looks like this :
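For reference: behind the scenes the “Add-on List” policy writes the Class ID as a registry value under the Policies\Ext\CLSID key, where a value of 1 means the add-on is enabled. Exported in registry notation it would look roughly like this (path based on Microsoft’s documentation on managing IE add-ons through Group Policy, so verify it in your own environment):

```
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Ext\CLSID]
"{65363486-5B64-4199-9087-2CB3543A3BDC}"="1"
```

This is also handy when you want to check on a client whether the policy actually arrived.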

Step 3: Configure an Internet Explorer registry key to disable Internet Explorer protected mode

**Warning:  Before you proceed, you should be aware that disabling Internet Explorer protected mode can have security implications**

Because the WebTrace BHO doesn’t work when Protected Mode is switched on (according to RES this is because the BHO needs more access to system resources, which is blocked when running in Protected Mode), we need to turn it off for the Internet security zone (it’s already switched off for the other zones by default). This can be done through the following registry key :

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3]
"2500"=dword:00000003
To make it more visual (click to enlarge) :

If Protected Mode is switched off, users will get a message when browsing the internet stating that Protected Mode is not turned on. To disable this warning, import the following registry key :

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
"NoProtectedModeBanner"=dword:00000001
And this looks like this :

That’s it: WebTrace will now start logging internet traffic, which you can view in the Usage Tracking viewer in the RES Workspace Manager Console.

Conclusion :

In this blog post I showed you how to enable the WebTrace functionality in a WIN2008R2\WIN7 environment when using Internet Explorer. It’s a nice feature, but you should only enable it when necessary, and take into consideration that Internet Explorer Protected Mode needs to be switched off for the WebTrace BHO to function. Maybe RES will update the WebTrace BHO in the future so this is no longer necessary.

Please note that the information in this blog is provided as is without warranty of any kind.

Citrix UPS is awesome, but maybe it arrived too late?


First I want to say that I think the Universal Print Server (UPS) is a really great feature. For those who don’t know the UPS feature yet: it’s built on the proven and evolved Universal Printer Driver from Citrix, which has been used for client printer redirection for a long time; with UPS this universal printer driver is extended to network printers as well. UPS consists of a client component and a server component, which you install on the print server. I will not go into the technical details of UPS; that is pretty much covered here.

While the Citrix UPS solves a lot of printing horror (think of unstable printer drivers, driver replication issues, print spooler crashes), I do think we needed the Citrix UPS more a while ago than we do now, mainly because of the following evolutions in the printing space :

1: Follow-me printing concept
I see more and more customers moving towards a follow-me printing concept. In this concept the user is presented with one print object (maybe more if you want to preset printing defaults); when a user prints to this object the job is queued on a central print server, the user walks to the nearest printer and types in a code or provides a token\card, and after that the print job is sent to the printer. This concept has the following advantages :

– Users can walk to the nearest printer (no more connecting printers based on location or group)
– The user can take away confidential documents immediately
– Monitoring print behavior and charge-back functionality
– And of course what this blog post is about: there is only one driver needed to connect to this printer object

To illustrate the follow me printing concept, I added a picture from Konica Minolta :

2: Universal Printer Drivers from the printer manufacturer
Yes there where (and maybe still are depending on the manufacturer) a lot of issues with universal  printer drivers, but the fact is that they are getting better and have broader support for different platforms and printing devices. The reason to use Universal Printer drivers from you manufacturer is simple :

– One driver to maintain
– Supports a wide range of print devices from the same manufacturer

3: Printer Driver isolation
Last but not least, in Windows 2008 R2 and Windows 7 there is a mechanism called printer driver isolation. Printer driver isolation means that the printer driver is isolated (duh) from the print spooler, and optionally also from other printer drivers. This way a single bad printer driver cannot crash the entire print spooler; one side note is that your printer driver needs to support this isolation. If you have never looked at this feature, you should definitely do so, because it can be a real life saver when you have a lot of printer drivers to manage.
While this is a nice built-in feature, it works around the issue a little bit, so it’s not a reason to skip better solutions like the Citrix UPS or options 1 and 2. The following isolation modes can be selected:

Driver-isolation mode   Meaning
Shared                  Run the driver in a process that is shared with other printer drivers but is separate from the spooler process
Isolated                Run the driver in a process that is separate from the spooler process and is not shared with other printer drivers
None                    Run the driver in the spooler process
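To see why the isolated modes help, here is a small conceptual sketch (in Python for illustration only; the real spooler and drivers are native Windows code) of the difference between running a driver in-process and in a separate process:

```python
import subprocess
import sys

# A "driver" that crashes hard, expressed as a snippet of code:
BAD_DRIVER = "raise RuntimeError('driver fault')"

def spool_isolated(driver_code):
    # Isolated/Shared idea: run the driver in a separate process.
    # A crash there only fails that print job; this (spooler)
    # process keeps running.
    result = subprocess.run([sys.executable, "-c", driver_code],
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0  # False means the job failed

def spool_inline(driver_code):
    # "None" mode: the driver runs inside the spooler process, so
    # a driver fault takes the whole spooler down with it.
    exec(driver_code)
    return True

job_ok = spool_isolated(BAD_DRIVER)  # job fails, but we survive
```

With `spool_inline` the same fault would propagate and kill the spooler itself, which is exactly the failure mode driver isolation prevents.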

I’m sure there are a lot of good use cases for the Citrix Universal Print Server feature, and it’s getting even better in version 2, when there will also be support for advanced printer properties.
For example, it uses some nice compression technology between the client and the UPS; so if you are connecting to print servers over the WAN, you’d better start looking at the Universal Print Server!

But on the other hand I do think the Universal Print Server arrived too late; we needed it much more a while ago than we do now, because of the evolutions in the printing space I summarized in this blog.

The mystery of the Citrix Interceptor BHO


When there is a new fix or update released for a product I work with frequently, I always like to read the release notes to see what has been fixed and what kind of new features have been added. This helps in estimating the impact the fix\update will have when installing it in an environment. It frustrates me when new functionality is added that is not listed in the release notes, and it’s even worse when a production environment suffers because of it.
This is exactly the case with the Internet Explorer add-on from Citrix called the Citrix Interceptor BHO (BHO = Browser Helper Object).

This Citrix Interceptor BHO is automatically added in one of the following ways :

CtxVDAIEInterceptorBHO Class installed through Hotfix 025 for XenApp 6.5

CtxIEInterceptorBHO Class installed through Citrix Receiver 3.2

When one of the above is installed, you will likely see popups when opening Internet Explorer asking to allow this BHO and its components to load outside of Internet Explorer Protected Mode.
If you were lucky you spotted this popup while going through a test procedure, but if Internet Explorer was not on your test list, you will get a lot of calls to the support desk from frustrated users.
Imagine that your users (or worse, some management staff) ask you what this Internet Explorer add-on is about and you cannot really give a good answer; will they take you seriously about the things you are installing in a production environment? I think this is really bad…

Neither the release notes of Citrix Receiver 3.2 nor those of XenApp 6.5 Hotfix 025 give any details about this BHO; it looks like it’s kept top secret. The only information from Citrix I could find is the following :

“In recent releases of Citrix Receiver for Windows, Citrix has implemented a new Browser Helper Object (BHO) – CtxIEInterceptorBHO (IEInterceptor.dll). This BHO does not currently provide any additional functionality for the majority of customers running XenApp or XenDesktop and is actively used only by certain XenApp Cloud Service Provider customers.”

Ok, so it’s not used by the majority of customers running XenApp or XenDesktop, only by certain XenApp Cloud Service Providers… Why is it (already) added to a public release of Receiver and a public release of a XenApp hotfix with so little information provided?

I think only Citrix can answer this question. My guess is that the BHO adds functionality for some reverse seamless\content redirection functionality from project Dorado, and that Citrix will keep this secret till the functionality is officially announced (maybe with the release of XenApp 6.5 Rollup Pack 1).

Since there is so little information about this BHO, I have chosen to disable these add-ons entirely till there is more information about them from Citrix. The BHO add-ons can easily be disabled through a group policy; see this CTX KB for more details on how to do this.
This way it’s easy to enable the add-on again when Citrix comes with more information about the BHO and you decide you want to use it.

In a previous blog I wrote a workaround to get rid of the annoying popup in IE and enable the BHO and its components for everybody, but for now I would advise disabling the Citrix BHO entirely till Citrix comes with more information about it.
If someone has additional information about the Interceptor BHO, please let me know.

Please note that the information in this blog is provided as is without warranty of any kind.