App-V Scheduler now more powerful than ever

This blogpost will highlight the new features in App-V Scheduler 2.1 and the new App-V Scheduler Central View management console. App-V Scheduler 2.1 is an update of the previously released App-V Scheduler 2.0 version; if you are reading about App-V Scheduler for the first time, I recommend starting with the previous blogpost.

What’s new in App-V Scheduler 2.1
App-V Scheduler 2.1 contains improvements and new enterprise features; let's talk about the new features first:

  • Application Pre-Launch
  • Mount selected packages
  • App-V Scheduler Central View management console

Application Pre-launch
Application Pre-Launch allows you to start selected applications once after the machine is (re)booted; this improves application launch time for the first users logging in to the machine. Especially when used in combination with Shared Content Store mode and bigger packages, this feature can optimize the user experience. Application Pre-Launch can be used for both virtual applications and natively installed applications.

This is how it works:
You select the application you want to pre-launch in the package details window in App-V Scheduler:

App-V Scheduler 2.1 Package Details Application Pre-Launch

App-V Scheduler stores this application in an XML file on the package source location, so there is no need to configure this on every machine or inside your image. You can also edit the XML file directly if you like. When the machine boots, the App-V Scheduler service deploys all packages to the machine and then reads the XML file for applications to pre-launch. If applications are found, App-V Scheduler launches them all together and keeps them open for 60 seconds, after which they are closed. When the first users log in to the machine and launch the application, it opens much more quickly because the application assets are already present in memory.
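The post doesn't document the schema of that XML file, so the element names below are hypothetical, but conceptually the pre-launch list is just a small file next to the packages, for example:

```xml
<!-- Hypothetical sketch of the pre-launch list on the package source
     location; the real element names used by App-V Scheduler may differ. -->
<PreLaunchApplications>
  <Application Package="AdobeReaderXI"
               Path="[{AppVPackageRoot}]\Reader\AcroRd32.exe" />
</PreLaunchApplications>
```

Because the file lives on the package source location, every machine reading that share picks up the same pre-launch list automatically.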

Mount selected packages
Besides the option to mount all packages, App-V Scheduler can also mount only selected packages. This means you can use Shared Content Store mode for all your packages but select specific packages that should be fully mounted into the cache, either so they are always available or to reduce network load on the content share.

This mechanism works much the same way as the Application Pre-Launch feature: when you open the package details window, you only have to select the “Mount this package” check box:

App-V Scheduler 2.1 Package Details Mount Selected package

After you select this option, the package is also saved in the XML file on the package source location, so there is no need to change this on multiple machines or inside the image either. App-V Scheduler will mount the selected packages automatically the next time the machine starts.
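Under the hood, mounting a package is the same operation you could run by hand with the App-V 5 client cmdlets; App-V Scheduler simply automates it for the packages you selected. A manual equivalent looks like this (the package name is an example):

```powershell
# Fully load a selected package into the local cache, even when the
# client runs in Shared Content Store mode (package name is an example).
Get-AppvClientPackage -Name "AdobeReaderXI" | Mount-AppvClientPackage

# PercentLoaded shows the mount progress / completion.
Get-AppvClientPackage -Name "AdobeReaderXI" | Select-Object Name, PercentLoaded
```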

Other improvements in App-V Scheduler 2.1

  • It’s now possible to set the deployment timer to manual, for example if you want to use Central View to initiate the deployment whenever you like on multiple machines at the same time
  • Improved clean cache mechanism
  • Scenario options removed, settings are now directly available in the configuration window to make the configuration more transparent
  • Option to use RES Workspace Manager variables. If you enable this option, App-V Scheduler will use the version variable of RES Workspace Manager when generating, for example, the Command Line Hook switch. The version variable is used by Workspace Manager to detect the latest version of the package, so you don’t have to change the version GUID after deploying a new package
  • The App-V Scheduler event log now shows how long it took to load all packages, giving you insight into the machine boot time. This is especially handy in non-persistent environments where all packages are loaded to the machine at start-up

App-V Scheduler Central View real-time management console
Part of the Enterprise license is a lightweight, portable central management console called App-V Scheduler Central View. This console allows you to centrally manage packages on multiple machines. You can see which packages are currently deployed, compare machines, and update packages by invoking a remote deploy process. You can remove packages on the fly or clean the whole cache remotely. Central View leverages Windows Remote Management (WinRM), so there is no need to open any exotic ports. Below you will find a screenshot of the console:

App-V Scheduler Central View Not Maximized

You can change the layout to easily sort on package name, or sort on all in-use packages, for example. It’s possible to invoke a deploy-new-packages procedure on all machines in the group, or on individual machines by selecting the icon in the group view. You can also refresh individual machines from here to immediately reflect the changes. It’s also possible to view and control all running virtual processes remotely; this is handy if you want to understand virtual application usage or quickly see which process keeps a package in use.
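Because Central View leverages WinRM, the only plumbing the target machines need is a listening WinRM service; you can verify that from the console machine with the standard Windows tooling (the hostname is an example):

```powershell
# Check that WinRM answers on a target machine before adding it
# to the Central View group (hostname is an example).
Test-WSMan -ComputerName "XENAPP01"

# If WinRM isn't configured on the target yet, enable it there with:
#   Enable-PSRemoting -Force    (or: winrm quickconfig)
```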

For troubleshooting purposes Central View has the option to open the App-V Scheduler or App-V 5 Client event log directly on the remote machine.

Central View reads machines from an Active Directory group; for remote management it uses integrated Windows authentication, or you can specify a custom account if you want to delegate the console to accounts that don’t have the permissions to make remote management connections.
The inventory account’s password is stored encrypted and cannot be retrieved from the console. Below you will find a screenshot of the Central View settings dialog:

App-V Scheduler Central View Configuration Window

The Central View console can easily be sequenced and deployed in your environment as part of your management tool set; you don’t need a dedicated server for it. You can also use Central View without App-V Scheduler on the machine, for example when you use another deployment method, but it adds the most value when the two are used in combination.

What’s next for App-V Scheduler?
We are working on the following features for the next release of App-V Scheduler :

  • Fail-over package source location: configure a backup source location that App-V Scheduler can use when the primary location isn’t available
  • Further improving App-V Scheduler Central View with new capabilities
  • Support for Deployment Config XML file and connection with the App-V Configuration Editor (ACE) from Virtual Engine
  • And more…

Big thanks to everybody providing feedback and supporting the App-V Scheduler project, especially Kees Baggerman, Andrew Morgan and Nathan Sperry for their valuable feedback over the last months.

Suggestions and feature requests are always welcome!

App-V Scheduler 2.1 is available on the App-V Scheduler website, where you will also find the price list and the feature comparison.


App-V Scheduler 2.0 Release

App-V Scheduler

You may have heard about the App-V 5 Scheduler project; if not, you can read more about how everything started in this previous blogpost. In short, the vision of App-V Scheduler is to reduce complexity and make the deployment and management of App-V 5 packages in RDS & Citrix environments easy. There is no need for complex PowerShell scripts with limited functionality and visibility, and no need for full infrastructure components like App-V Management and Publishing servers or System Center Configuration Manager.

App-V Scheduler’s goal is to deploy App-V 5 packages at machine level and manage them the same way we do natively installed applications, and this is where user environment tools like RES Workspace Manager come into play. The power of such tools is that we can control application access and configuration from one single console, without having to configure application access and settings at multiple levels and in multiple consoles. I will explain the powerful combination of App-V Scheduler and RES Workspace Manager later on in this blog post. First, let’s have a look at the new App-V Scheduler 2.0 version.

App-V Scheduler Editions
To start with, App-V Scheduler is now available in 2 editions: Community and Enterprise.

Community Edition
The Community edition provides the same functionality as the previous 1.3 version: automatic deployment of packages and connection groups, multiple cache options, and awareness of single image management technologies like MCS\PVS. With this new release, the Community edition also comes with the redesigned GUI:


As you can see, you have a very clear view of which packages and versions are deployed on the machine, the package size is displayed in a readable format, and on the bottom left you can see the total size of all packages currently loaded. When you select a package you can remove it manually, or launch CMD\Regedit inside the virtual environment for troubleshooting purposes. You can also launch the application directly from here to test its functionality. An administrator guide is attached to the download which goes further into the technical details; be sure to read it before installing.

Enterprise Edition
The Enterprise edition adds the following features on top of the Community edition:

Pending Tasks support
When an updated package is deployed while the previous version is in use, the App-V client creates a pending task. For global publishing this means the pending task is processed when the machine reboots, which is not desirable when you want to deploy a new version during the day. App-V Scheduler detects pending tasks and processes them automatically when the package is no longer in use; you only have to ask the user to close the application. All of the processing is done automatically by the App-V Scheduler service, so there is no need to leave the GUI open. You will get an overview of pending tasks by clicking the Pending Tasks button:
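For reference, the pending-task state App-V Scheduler acts on is recorded by the App-V 5 client itself; on a default client you can peek at it in the registry (key path as used by the App-V 5 client for globally published packages — verify on your client version):

```powershell
# Pending publish/unpublish operations recorded by the App-V 5 client
# (path may vary per client version).
Get-ChildItem "HKLM:\SOFTWARE\Microsoft\AppV\Client\PendingTasks" -Recurse |
    Select-Object Name
```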


Virtual Process overview
Quickly get an overview of all virtual processes on a machine; native processes started inside a virtual environment are shown here as well. You can see which user(s) are associated with a virtual process and the path the executable was started from, and you can also end virtual processes from here. The virtual process overview is very handy in combination with pending tasks, because it allows you to easily see which users\processes keep a package in use.


Package Details
Package details gives you a clear view of which extension points are registered for a given package. The output is filtered, so you only see the information that is applicable to the selected package. At a glance you can see which shortcuts, file type associations, services, ActiveX and other COM objects are registered, in a readable and understandable format. Extension points like browser plugins and shell extensions are shown here as well. App-V 5 comes with a lot more extension points than its predecessor, and it’s important to know how a package is integrated into the OS to understand the behaviour of the application. The auto-generated command line hook switch can be used as a parameter for native processes to launch them inside the virtual environment of the package (think of Excel add-ins, etc.). In the RES Workspace Manager part I will give an example of how to configure this.
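The switch App-V Scheduler generates here is App-V 5’s /appvve parameter: appended to a native process, it starts that process inside the package’s virtual environment. Its value is the package GUID and version GUID pair (the GUIDs below are placeholders):

```powershell
# Start native Excel inside the virtual environment of a package;
# the value is <PackageId>_<VersionId> (placeholders shown).
excel.exe /appvve:00000000-1111-2222-3333-444455556666_aaaaaaaa-bbbb-cccc-dddd-eeeeffff0000
```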


There is also a details button to zoom further into the extension point details like this :


Central management console
Part of the Enterprise edition is also a lightweight (portable) central management console called Central View, where you can centrally manage packages on multiple machines. For example, you can see which packages are currently deployed and in use, update packages by invoking a remote deploy process, and view pending tasks remotely. The central management console forms a great combination with App-V Scheduler but can also be used without it, for example if you decide to deploy packages in another way.


Support and software assurance
Part of the Enterprise licensing is free support and upgrades to the latest versions for the first year; the subscription can be renewed on an annual basis for a fraction of the price. App-V Scheduler is actively developed and new features are added frequently. Compatibility with future App-V 5 releases and service packs is also assured.


App-V Scheduler in combination with RES Workspace Manager
After App-V Scheduler has deployed a new package, we can configure the application in the RES Workspace Manager console by leveraging the App-V 5 integration. All we have to do is select the .appv file and the integration takes care of:

  1. Dynamically locating the package installation root (the App-V cache location, which can be configured directly in App-V Scheduler)
  2. Dynamically selecting the latest deployed version of a package, so you never have to worry about changing paths after a version upgrade
  3. Loading application settings inside the virtual environment (think of registry values, files and folders, etc.)
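Steps 1 and 2 correspond to information the App-V 5 client exposes; if you ever need to resolve the same values by hand, the client cmdlets will show them (the package name is an example):

```powershell
# Step 1: the package installation root configured on the client.
Get-AppvClientConfiguration -Name PackageInstallationRoot

# Step 2: the package and version GUIDs that form the package's
# folder path below that root (package name is an example).
Get-AppvClientPackage -Name "AdobeReaderXI" |
    Select-Object Name, PackageId, VersionId
```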

Below you will find a screenshot of the App-V 5 integration in RES Workspace Manager :


As you can see it’s really easy to integrate App-V 5 applications in RES Workspace Manager, but how can we integrate a natively installed application with a virtual application? We could use great new App-V 5 techniques like Run Virtual or dynamic virtualization, but what if we want to do this more selectively, say an Excel add-in for a group of users? This is where the command line hook switch comes in handy, which App-V Scheduler automatically generates when you open the package details. All you have to do is hit the copy-to-clipboard button and paste it into the parameters field of the native application:


Finally, configure access for a select group of people and that’s it. You can open the virtual process overview to check whether Excel runs virtualized.

Conclusion and availability
App-V 5 Scheduler, in combination with a User Environment Management tool like RES Workspace Manager, is a powerful and simple way to deliver packages to your machines without the need for a full App-V 5 infrastructure model or complex scripting. Just place the package on a share and App-V 5 Scheduler will do the rest for you.
The Community edition already gives you a good starting point to simplify App-V 5 deployment in your environment; the Enterprise edition features make the management of App-V 5 packages a breeze, and there are more features to come.

How to get Enterprise Edition ?

Thank you! It would be great if you would consider upgrading to the Enterprise edition; besides the additional features, this also supports the further development of App-V Scheduler.

App-V Scheduler 2.1 released!

Please visit the App-V Scheduler 2.1 release blog post for the availability and more information about the new version.

Finetuning a Citrix StoreFront deployment


In this short blogpost I gathered some fine-tuning tips I came across while migrating a Web Interface deployment to StoreFront with NetScaler Gateway. The deployment had the following main goals:

  1. Access from Receiver for Web and all the Native Receiver versions (Windows, iOS, Android, etc.)
  2. Security on the client side is important, access takes place from unmanaged and public devices
  3. Performance needs to be comparable with the Web Interface deployment
  4. Customized branding for each Netscaler VIP

In this blogpost I will cover the following :

  • Prevent users from saving passwords
  • Shorten the login token lifetime
  • Increase performance (page load times, etc)
  • Modified homepage for different VIPs on the Netscaler
  • Workspace Control in combination with XenApp published desktops

Prevent users from saving passwords
Saved passwords may give users easy access to the environment, but they decrease security, especially on unmanaged and public devices where it’s unknown how the devices are used and by whom. To turn password saving off:

For Receiver for Web :
This is done automatically by the NetScaler login page, which tells the browser not to use the autocomplete feature. Most browsers respect this setting, but IE 11 ignores it; you can read more about this here. It is recommended to always use two-factor authentication for external access when possible. Too expensive? Take a look at SMS2; it’s free, and the RADIUS extension works pretty neatly in combination with NetScaler Gateway.

For Native Receivers :
Open the Authenticate.aspx file (default location: C:\inetpub\wwwroot\Citrix\Authentication\Views\ExplicitForms) and comment out the SaveCredentialsRequirement statement like this:

This will prevent the save password option from showing in the Native Receivers.
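The exact statement varies per StoreFront version, so the fragment below only illustrates the ASPX server-side comment syntax used for this change:

```xml
<%-- The line(s) in Authenticate.aspx that reference
     SaveCredentialsRequirement go inside an ASPX server-side
     comment like this one, so they are no longer rendered. --%>
```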

Shorten the login token lifetime
When a user logs on through a Native Receiver, the credential wallet service of StoreFront keeps the token alive for 20 hours by default. When a user closes his application or desktop but doesn’t log off the Receiver, someone else can click the icon and log back on within this time period. This is not really secure when users share devices or leave them unattended. Receiver for Web is somewhat stricter: by default the page times out after 20 minutes of idle time.
If you only publish a desktop and your users don’t need to click published application icons throughout the day, you can make the lifetime as short as possible without affecting the user experience. In the following example I will change the token lifetime to 5 minutes for both the Native Receiver and Receiver for Web.

For Receiver for Web :
Open the web.config file in the Receiver for Web site folder (default location : C:\inetpub\wwwroot\Citrix\yourwebstore) and search for the session state timeout.
The value is in minutes :
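The timeout lives in the standard ASP.NET sessionState element of that web.config; set to 5 minutes it looks like this:

```xml
<!-- Receiver for Web web.config: idle timeout in minutes -->
<sessionState timeout="5" />
```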


For Native Receivers :
Open the web.config file in the authentication folder (default location: C:\inetpub\wwwroot\Citrix\Authentication) and search the maxLifetime values until you find the correct one; see this example:
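The maxLifetime values in this web.config are .NET TimeSpan strings (hh:mm:ss), so a 5-minute lifetime is written as follows (only the attribute is shown; locate the element holding the relevant value as described above):

```xml
<!-- Authentication web.config: lifetime as a TimeSpan (hh:mm:ss) -->
maxLifetime="00:05:00"
```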


After making these changes, users have to authenticate again after 5 minutes of idle time.

Increase performance (page load times, etc) 
After some tweaking the StoreFront performance is good and acceptable, but it will not be as quick as Web Interface; I don’t think that is possible, because of the design differences. I changed 2 things to speed up StoreFront: enabling socket pooling and disabling signature verification (the latter lowers security a bit).
On Marius Sandbu’s and Richard Egenas’ blogs you can read more StoreFront performance tips.

Modified homepage for different VIPs on the Netscaler
This deployment needed customized branding for each NetScaler VIP. I will not go into detail on how to configure this, because there is already a very detailed article from Citrix here. It comes down to redirecting users based on the entered FQDN with Responder policies. While this works great, the article doesn’t mention that it breaks access from the Native Receivers to the VIP where the Responder policy is active. To prevent this, change the Responder policy expression to include:


When you create this exclusion, the Responder policy will not kick in when the User-Agent contains CitrixReceiver, allowing the Native Receivers to successfully connect.
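Assuming the redirect policy matches on the hostname, the exclusion boils down to adding a User-Agent check to the expression; a sketch in NetScaler default syntax ("portal.example.com" stands for whatever your original rule matched):

```text
HTTP.REQ.HOSTNAME.EQ("portal.example.com") &&
HTTP.REQ.HEADER("User-Agent").CONTAINS("CitrixReceiver").NOT
```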

Workspace Control for XenApp published desktops
When you publish a desktop through XenApp you will notice that Workspace Control doesn’t work as expected in StoreFront. This is because the desktop is shown on the Desktop tab, and Workspace Control isn’t enabled there. Instead, autolaunch is enabled, which also kills single session control. If you want to use Workspace Control you need to treat the desktop as an application by using the TreatAsApp keyword. You can read about Workspace Control in combination with StoreFront in great detail in a previous blog post: Deeper look into Workspace Control and its challenges.
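The keyword goes into the published desktop’s description in the XenApp console; StoreFront then treats the desktop as an application, with Workspace Control applied:

```text
KEYWORDS:TreatAsApp
```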

I must say I like StoreFront: it’s stable (talking about the latest versions, of course), looks good and gives users a unified login experience. But I have some wishes left, so in case someone from Citrix reads this, here are my feature requests for StoreFront:

  • More options in the GUI; manually editing the web.config files feels a little silly and can be error-prone. If the goal is to keep the console simple, I would suggest an option to switch to an advanced view
  • More informative messages for the users; for example, Web Interface shows when a desktop is (re)starting and shows cleaner messages when applications are disabled, etc.
  • Redirect users to a specific store (when using a Native Receiver); now the user gets a popup to select a store, but it would be nice to control this in a session policy or directly in StoreFront

Please note that the information in this blog is provided as is without warranty of any kind.

IE11 ignores the Autocomplete=Off setting used by Netscaler Gateway and puts users at risk

Since Microsoft is pushing Internet Explorer 11 through Windows Update as an important update for Windows 7, a lot of users are starting to use it as their default browser. Users on Windows 8.1 are also already using Internet Explorer 11 by default.

Internet Explorer 11 brought some issues for customers using NetScaler Gateway; for example, the login fields in combination with the Green Bubble theme weren’t displayed correctly, which prevented users from logging in. Citrix released new maintenance releases for NetScaler Gateway which fix these layout issues.

But there is more: Microsoft decided to ignore the Autocomplete=Off setting used by NetScaler Gateway (and a lot of other login pages). This setting tells the browser not to offer the store password option and protects users from accidentally saving their username and password on their machine (or worse, a public machine!). Below is a screenshot from the login.js file where you can find the autocomplete=off setting on the NetScaler:


Up to Internet Explorer 10, and in every other major browser like Chrome and Firefox, this setting is respected and the user is not bothered with an offer to store the credentials. Microsoft, however, decided to ignore this setting (by default!) in IE11 because they want to give this control back to the user; you can read more about it here and here. Below is a screenshot of the message users get when logging in through IE11 on NetScaler Gateway:


I think it’s a wrong choice by Microsoft to ignore the autocomplete=off setting, and even more wrong to ignore it by default, because they forget that a lot of people don’t know how to use a password manager wisely and just click OK on every message they see without thinking about the risks. When users click Yes, everyone with access to their computer can simply hit the first letter of their username and the rest is auto-filled, so it is very easy to abuse this:


Of course users can always bypass the autocomplete=off setting by installing\enabling a password manager themselves (also in other browsers), but that way they are conscious of what they are doing. This default setting will put a lot of users (the ones we all know, who hit Yes on everything in their way) at risk without them even knowing it.

Possible workarounds when using Internet Explorer 11 :

  • When machines are managed (through GPO or tools like ThinKiosk), disable autocomplete in the browser completely or only for certain websites
  • Change the password field on the NetScaler Gateway from type Password to type Text; this prevents autocomplete from kicking in but lowers security while people are typing in their password
  • Don’t allow IE11: block the login page from showing (through an EPA scan or some code in the index page) and notify the users to use another browser
  • Don’t use Receiver for Web \ Netscaler Gateway Portal and only use the native Citrix Receivers

Of course two-factor authentication is a life saver here, but security is already lowered when usernames and passwords are stored on the machine; people with bad intentions only need the phone or token as an extra step to get access from a machine with a prefilled username and password.
Please let me know if you have other ways to work around this default behaviour of Internet Explorer 11.

App-V 5 Scheduler, an easy way to deploy App-V 5 applications to your machines


Please note : App-V Scheduler 2.0 has been released, read more about it here


I think it didn’t escape you, but Citrix is leaving the application virtualization space: they announced that they will not further enhance or support their streaming technology on newer platforms (that is, XenDesktop 7 and Windows 2012\8). A lot of customers are starting to look at App-V 5 as a replacement, and both new customers and customers currently on App-V 4.x are looking to benefit from the latest improvements in the App-V product. App-V 5 can be deployed and managed in the following 3 ways:

  • App-V 5 full infrastructure (Management and Publishing servers, and optionally a Reporting server)
  • System Center Configuration Manager (SCCM) integration
  • Standalone

The first 2 have their place and advantages, no doubt about that, but they also come with prerequisites (like full SQL, etc.) and need to fit into the customer’s environment. Most customers I work with are using UEM (User Environment Management) tools, most of them from RES Software (Workspace Manager). They want a simple method to deploy and upgrade their virtual applications and manage them with RES Workspace Manager. They don’t want an extra management layer that overlaps in functionality and adds complexity. That’s why I often used the streaming-only method in App-V 4.x, and with Citrix application streaming (while it had its limitations) it was even simpler: just place the profiled application on a share and import it into RES Workspace Manager, that’s it.

With this in mind I created App-V 5 Scheduler, which extends the standalone deployment method by allowing you to automatically deploy packages and connection groups at machine level with a configurable time interval. But it can do more. I will dig deeper into App-V 5 Scheduler in a second, but first I want to point you to another great tool that’s built around the App-V 5 standalone method by the well-known App-V guru Tim Mangan. This tool is called App-V Self Service and provides users with a self-service portal where they can select and deploy packages (without needing admin rights); App-V Self Service can also deploy packages automatically at user login based on AD group membership.
If you are looking for a solution to deploy packages to users and don’t want to implement the App-V 5 full infrastructure, I would highly recommend looking at this tool.

App-V 5 Scheduler takes a slightly different approach: where App-V Self Service uses deployment based on users and is triggered when a user logs in, App-V 5 Scheduler uses machine-level deployment and therefore fits best in environments that use UEM products like RES Workspace Manager to control and manage the applications and the user’s workspace.

App-V 5 Scheduler can also remove packages at machine start-up to keep the package installation root clean, and in case you use Citrix Provisioning Services (PVS) or Machine Creation Services (MCS) it can detect when the image is in private mode, so it will not accidentally fill up your image with packages.
To keep it simple I made App-V 5 Scheduler scenario-driven; you only need to select the scenario that fits your environment best. I will explain the scenarios later on, but first a quick look at the base of App-V 5 Scheduler, which consists of 2 components:

The App-V 5 Scheduler GUI

The GUI allows you to see which packages and connection groups are currently deployed to the machine. It also allows you, as an admin, to perform some troubleshooting steps like opening CMD or Regedit inside the virtual application context (or bubble, or sandbox, whatever you like to call it ;)). The GUI also displays basic information about the App-V client configuration and allows you to configure the App-V 5 Scheduler service. Below is a screenshot of what the App-V 5 Scheduler GUI looks like:

App-V 5 Scheduler GUI

The App-V 5 Scheduler Service

The service deploys new packages and connection groups which have been added since the last system start-up, based on a configurable time interval. Depending on the selected scenario, the service can detect when the image is in read\write mode to prevent the deployment of packages. Below is a screenshot of the App-V 5 Scheduler service configuration dialog:

App-V 5 Scheduler Service dialog

Scenario 1 : Non-Persistent image with shared content store mode enabled

Select this scenario if you use Citrix Provisioning Services (PVS) or Citrix Machine Creation Services (MCS) for single image management and you want to leverage the Shared Content Store (SCS) functionality of App-V 5 to lower the storage needed for package content. This scenario configures the service to do the following:

  • Remove packages and connection groups at machine start-up to keep the package root clean
  • Deploy packages and connection groups after machine start-up
  • Deploy only new packages and connection groups based on the configured time interval
  • Publish packages and connection groups globally on the machine
  • Don’t remove or deploy packages when the image is in private mode

In this scenario you can redirect the App-V package root location outside the image (to the write cache disk, for example), but since you are leveraging App-V Shared Content Store mode, a minimal amount of storage is used in the package root, so you could also decide to keep it in the primary location.
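For reference, the two client settings this scenario leans on can also be set manually with the App-V 5 client cmdlets (the path is just an example):

```powershell
# Enable Shared Content Store mode: packages stream from the content
# share instead of being fully cached (1 = enabled).
Set-AppvClientConfiguration -SharedContentStoreMode 1

# Optionally redirect the package installation root, e.g. to a
# write cache disk (path is an example).
Set-AppvClientConfiguration -PackageInstallationRoot "D:\AppVCache"
```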

Scenario 2 : Non-Persistent image with shared content store mode disabled

Select this scenario if you use Citrix Provisioning Services (PVS) or Citrix Machine Creation Services (MCS) for single image management and you want to deploy the packages at their full size to the App-V package root location. This scenario configures the service to do the following:

  • Remove packages and connection groups at machine start-up to keep the package root clean
  • Deploy packages and connection groups after machine start-up
  • Deploy only new packages and connection groups based on the configured time interval
  • Publish the packages and connection groups globally on the machine
  • Pre-Cache (mount) the package inside the package root location
  • Don’t remove or deploy packages when the image is in private mode

In this scenario you can also redirect the App-V package root location to another location (outside the image, for example); this is advisable if your write cache location is limited and you want to keep it small.

Scenario 3 : Persistent image mode

This scenario is very similar to scenario 2, except that it will not check whether the image is in private mode. Select this scenario if you have persistent machines to which you want to deploy App-V 5 packages.

This scenario will configure the service to do the following :

  • Remove packages and connection groups at machine start-up to keep the package root clean
  • Deploy packages and connection groups after machine start-up
  • Deploy only new packages and connection groups based on the configured time interval
  • Publish the packages and connection groups globally on the machine
  • Pre-Cache (mount) the package inside the package root location

Which scenario is best depends on your environment; key factors are the size of your packages and the amount of storage you have available. Scenarios 2 and 3 give you the best overall performance, because the applications are fully mounted on the machines, lowering bandwidth consumption and eliminating network bottlenecks. That’s why I would recommend these scenarios when you use a shared platform (RDS\XenApp). Besides better performance, they also make the virtual applications more highly available, since they don’t rely on the content share after being deployed. And since scenarios 2 and 3 mount the package, there is no load time at all when users start the application.

Below you will find a high level UML diagram of the App-V 5 Scheduler Service :


Deploying connection groups with App-V 5 Scheduler

Connection groups are basically XML files describing which packages can connect to each other. Tim Mangan created a very useful tool to create connection groups called App-V DefconGroups. With this tool you can easily select the packages that you want to connect to each other. You can save the output file (with the AppG extension) somewhere on the package source location and App-V 5 Scheduler will deploy it to the machine (globally) for you.
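As a sketch of what such a descriptor contains, the snippet below builds a minimal connection group XML file with Python's standard library. The namespace, element and attribute names follow the App-V 5 AppConnectionGroup schema as I understand it; verify them against a file produced by DefconGroups before using anything like this:

```python
# Sketch: generate a minimal App-V 5 connection group descriptor.
# Schema details (namespace, element/attribute names) are assumptions here;
# compare against an AppG file created by App-V DefconGroups.
import uuid
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/appv/2010/virtualapplicationconnectiongroup"

def build_connection_group(display_name, packages):
    """packages: list of (package_id, version_id) GUID strings."""
    ET.register_namespace("appv", NS)
    root = ET.Element(f"{{{NS}}}AppConnectionGroup", {
        "AppConnectionGroupId": str(uuid.uuid4()),
        "VersionId": str(uuid.uuid4()),
        "Priority": "0",
        "DisplayName": display_name,
    })
    container = ET.SubElement(root, f"{{{NS}}}Packages")
    for package_id, version_id in packages:
        ET.SubElement(container, f"{{{NS}}}Package",
                      {"PackageId": package_id, "VersionId": version_id})
    return ET.tostring(root, encoding="unicode")

xml_text = build_connection_group("Office plus plug-ins", [
    ("11111111-aaaa-4bbb-8ccc-222222222222",
     "33333333-dddd-4eee-8fff-444444444444"),
])
```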


App-V 5 Scheduler Service logs its actions in the event viewer and gives you information about the packages that are removed or deployed and the status of the service. For troubleshooting purposes it’s the place to look, because all other operational events are logged there as well. Below is an example message in the event viewer when private mode is detected :



App-V 5 Scheduler, in combination with a User Environment Management tool like RES Workspace Manager, is a powerful and simple way to deliver packages to your machines without the need for a full App-V 5 infrastructure model. Just place the package on a share and App-V 5 Scheduler will do the rest for you. If you use the App-V 5 integration in RES Workspace Manager and you have imported the application once, it will look up the newest version of the package automatically. You only have to place the updated package on the share, and after App-V 5 Scheduler has deployed the new version to the machine it’s immediately available to your users; they only need to close and reopen the application to get the latest version. The App-V 5 Scheduler GUI gives administrators an overview of the currently deployed packages on the machine and allows them to open CMD or Regedit to perform basic troubleshooting steps inside a virtual application.

What’s coming

To complement App-V 5 Scheduler, I will also release a tool called App-V 5 Central View. This tool allows you to select an Active Directory group (with machine accounts) and it will give you an overview of the currently deployed applications on those machines. In combination with some basic management tasks, this tool will give you a central point of view of which applications are deployed throughout your environment. App-V 5 Central View is independent of App-V 5 Scheduler and can also be used without it, but the two form a great combination together.


App-V Scheduler 2.0 has been released and has a lot more features and enhancements, click here to read more.

UDadmin GUI a free tool to manage XenDesktop User\Device Licenses

UDadmin GUI has been integrated into CtxUniverse
(see more information at the bottom of this post)

Since the introduction of the new licensing model for Citrix XenDesktop, which is based on named users and devices, I have customers asking questions like :

  • We are running out of licenses but how do we know which users or devices claimed them?
  • We have a tight budget and don’t want to buy any more licenses than strictly necessary, how can we get better control of and insight into the current usage?
  • What is the balance between User licenses and Device licenses?
  • We created some temporary accounts for testing purposes, how can we release them?

UDadmin commandline tool
Citrix provides a command line tool named UDadmin to control license usage; it’s part of the Citrix Licensing server software and installed by default. You can find it in the LS directory of your license server installation directory. With this tool you can view and reclaim licenses, but it’s not really user friendly, mainly because it runs in a cmd box and requires the right parameters. To provide something easier to these customers I created a GUI around UDadmin :
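To give an idea of what a wrapper like UDadmin GUI has to do, the sketch below parses UDadmin-style list output in Python. The sample text only approximates the output of `udadmin -list` (the exact format differs per license server version), so treat both the format and the feature name as assumptions:

```python
# Sketch: parse "udadmin -list"-style output the way a GUI wrapper might.
# In practice you would capture the output with subprocess on the license
# server itself; the sample below is a hypothetical approximation.

SAMPLE_OUTPUT = """\
Usage data is 15 minutes old. Next update in 12 minutes.

Users:
DOMAIN\\alice XDT_ENT_UD 2013.1201
DOMAIN\\bob XDT_ENT_UD 2013.1201

Devices:
PC-0042 XDT_ENT_UD 2013.1201
"""

def parse_udadmin(text):
    """Return (users, devices) as lists of (name, feature) tuples."""
    users, devices, current = [], [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line == "Users:":
            current = users
        elif line == "Devices:":
            current = devices
        elif line and current is not None:
            name, feature, *_ = line.split()
            current.append((name, feature))
    return users, devices

users, devices = parse_udadmin(SAMPLE_OUTPUT)
```

Releasing a license would then map a selected tuple back to a UDadmin delete call (the exact parameter names should be verified against the Citrix documentation).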

UDadmin GUI
UDadmin GUI is a lightweight .NET application that visualizes the output of UDadmin and provides an easy way to release single or multiple licenses at once: just select them and hit release. I also added some additional features, like exporting the current usage to PDF for reporting purposes. When you launch UDadmin GUI it determines the licensed feature for you; no configuration is required at all. Below is a screenshot of UDadmin GUI in action :
The XenDesktop named license usage is updated every 15 minutes; in UDadmin GUI you can see when the next update occurs (in blue). So after releasing licenses, the changes are reflected when the next update runs. If you don’t want to wait that long, there is also an option in UDadmin GUI to restart the Citrix Licensing service so that the updated usage is reflected directly afterwards :
Prerequisites
There are only 2 prerequisites :

  • Microsoft .NET Framework 3.5
  • Citrix License Server installed (UDadmin GUI is tested with version 11.10 and 11.11, but every edition which supports the new licensing model should work)

In other words, you need to run UDadmin GUI on the same machine where the Citrix License Server is running; this is because Citrix doesn’t support running UDadmin remotely. UDadmin GUI works with XenDesktop Enterprise and Platinum edition; support for App edition will be included in a future version (very soon!).

Just like the UDadmin command line tool, UDadmin GUI needs administrative privileges to run properly. But don’t worry: UDadmin GUI is UAC-aware and will prompt you when necessary.

UDadmin GUI has been acquired by Infralogic Inc. and is now integrated into a CMDB platform named CtxUniverse, under the License Manager module. You can download the latest version on the Infralogic website.
Version history

= 1.0 =
– Initial version

= 1.1 =
– Multiple fixes and enhancements

= 1.2 =
– Added an option to restart the Citrix licensing service and refresh usage afterwards

= 1.3 =
– Added export to PDF functionality

= 1.4 =
– Changed the window layout so UDadmin GUI also looks good on lower screen resolutions
– Added support for multiple license files and SA dates

– Added support for mixed XenDesktop Editions (both Enterprise and Platinum on same license server), if UDadmin GUI detects multiple editions a Combobox is visible to switch between editions :
Edition selection
= 1.5 =
– Added support for XenDesktop VDI Edition
– Fixed an issue with releasing user\device licenses which contains whitespaces

= 1.6 =
– Added support for XenApp Advanced Edition (named licensing model)
– This version can only release one license at a time to be compliant with Citrix licensing terms
– This version is portable, just place and run the executable on your Citrix license server

A graphical deep dive into XenDesktop 7

Make sure to also check this blogpost about a very handy tool named Remote Display Analyzer



A while ago I wrote an article about Adaptive Display and how it can be fine-tuned. Well, Adaptive Display as we knew it didn’t get to see the light for very long, because a lot has changed in XenDesktop 7. In this blog post I will describe these changes and dig deeper into the configurable options related to graphics in XenDesktop 7.

Adaptive Display First generation = Legacy Graphics mode

In a short time frame a lot has changed in the delivery of graphics: users are demanding more media content, higher frame rates and a fluid user experience. While Progressive Display did a very good job for a long time, it required a lot of manual tuning to accommodate different use cases; because of this it was often misconfigured, resulting in a degraded user experience. To overcome this problem Citrix developed the first generation of Adaptive Display. It was still based on the settings around Progressive Display, but it was now auto-tuning according to the available bandwidth and the capabilities of the client device. The concept of this first generation was simple : use a different compression algorithm for moving images and still images and tune it on the fly.

Adaptive Display Second generation, the new standard in XenDesktop 7

Well, this concept is pretty much the same in the second generation of Adaptive Display, but it’s now based on different codecs: the SuperCodec, as Citrix calls it, can dynamically decide which compression is used for different parts of the screen. The most important codec used in the second generation of Adaptive Display is the H.264 deep compression codec, which we also know from HDX 3D Pro; together with features like Desktop Composition Redirection it forms the base of Adaptive Display second generation. Before I go on, let’s summarize the most important new graphics settings in XenDesktop 7 :


  • Legacy Graphics Mode (default : Off) – Reverts to Adaptive Display first generation
  • Target Frame Rate (default : 30) – Sets the maximum number of frames per second sent to the client (now up to 60 FPS can be configured!)
  • Visual Quality (default : Medium) – Sets the level of compression for the new codecs; besides Low, Medium and High, Build to Lossless or Always Lossless can also be selected
  • Desktop Composition Redirection (default : Enabled) – Renders Desktop Window Manager (DWM) generated graphics on the client
  • Desktop Composition Redirection Quality (default : Medium) – Sets the default level of compression for Desktop Composition Redirection

I think the settings above are the most important changes regarding graphics in XenDesktop 7; besides that, a lot of Windows Media related policies have been added. Nothing really indicates what’s current and what’s legacy, so it’s easy to get lost if you don’t know where Citrix came from with Adaptive Display first generation and beyond. You can only see what kind of OS a policy applies to, but that’s about it.

I tried every combination of the above policies and made a UML diagram of my findings; the diagram shows the decisions made from the moment the session is launched (click to enlarge) :

Graphics UML XD7

H.264 Deep Compression codec

Citrix was the first vendor to use H.264 compression for delivering graphics through their remote display protocol (ICA). It was targeted at a specific use case : 3D modelling and graphics designers. In the beginning the first version of this codec was only available in combination with Nvidia GPU cards that had enough CUDA cores, as the codec was designed to leverage the CUDA cores for compression and encoding.

Later the codec evolved and Citrix made it possible to encode completely on the CPU. The new version, called the Deep Compression V2 codec, uses even less bandwidth than its predecessor, and since it’s CPU based it can be used for a lot more use cases. There is one downside though: since it’s CPU based (the default setting), the load on the host side increases and can affect the scalability of the total solution. When CPU resources are limited or scalability is a concern in your environment, you have the following options in XenDesktop 7 (please note that it’s about finding the right mix between server scalability, bandwidth usage and user experience; what’s best for your environment depends on the use case and of course the amount of $$$ you can spend) :

Option 1 : Leverage the new Desktop Composition Redirection feature to offload the host CPU

Citrix extended the Aero redirection feature (known from XD 5) and made it possible to remote Desktop Window Manager (DWM) DirectX commands to the client to be rendered there. This also means that graphics generated by applications like Internet Explorer, Office 2010 and other modern applications are offloaded to the client. You will also notice when installing the XenDesktop 7 VDA on Windows 7 that the Aero theme is enabled by default; even when you disable Desktop Composition Redirection, the Aero theme can still be used, completely rendered by the new WDDM driver on the host side. The quality of Desktop Composition Redirection can be configured in the following policy :


While this feature is awesome, since it leverages the client’s capabilities and thus lowers the resources needed on the host side, there are some points of attention when using Desktop Composition Redirection:

– On the host side you need a Desktop OS (Windows 7 or Windows 8)
– On the client side you need a compatible Windows client with a moderate GPU
– It doesn’t work in legacy graphics mode (Adaptive Display first generation)
– Increased bandwidth when using server rendered video

Desktop Composition Redirection together with the new compression codecs forms the base of Adaptive Display second generation. It’s a combination that is enabled by default when connecting from a Windows client. If you use this combination you have to keep both in mind when fine-tuning the user experience. For example, if you configure the visual quality to Always Lossless to prevent lossy data being sent to the client (for medical purposes, for example), you will still see lossy artefacts like the below example in WordPad.

Desktop Composition Redirection wordpad example

To get a really lossless experience you have to either disable Desktop Composition Redirection or set the graphics quality of Desktop Composition Redirection to lossless as well; increased bandwidth consumption needs to be taken into account, of course.

Option 2 : Turn on the legacy graphics mode policy

When you use the default graphics settings in XenDesktop 7 and you open HDX Monitor, you will see under Graphics – Thinwire that Adaptive Display is disabled (see below screenshot) and that Thinwire redirection is disabled by a policy. This means that you are leveraging the new codecs and not using Adaptive Display first generation.

When you look at Graphics – Thinwire Advanced you will notice that the Legacy Graphics mode is turned off with the Legacy Graphics mode policy (see below screenshot).

Ok, so this is the default. Now, when user density and scalability are more important than the mobile user experience, it’s possible to revert back to Adaptive Display first generation by enabling the Legacy Graphics Mode policy. Please note that you also have to switch off Desktop Composition Redirection to make this work. When you have done that, you can verify through HDX Monitor that Adaptive Display first generation is back in business again :

You will also notice that, when using Windows 7, Aero functionality is switched off when using legacy mode:


For further fine-tuning of the legacy graphics mode in XenDesktop 7, read my previous blogpost on Adaptive Display.

Option 3 : Use Graphics Processing Units (GPU)

Instead of rendering and encoding everything on the CPU, you can also leverage a GPU to do the heavy lifting; after all, this is what a GPU is built for. Moving the graphical load away from the CPU means you have more resources available for your applications, making the solution more scalable. In XenDesktop 7 you have the following options :

– Leverage the GPU directly (either physically or through the hypervisor (GPU passthrough))
– Leverage the GPU indirectly through GPU virtualization (available by using Nvidia GRID\VGX)

Leveraging the GPU directly has been available for a while now, but since it’s a one-to-one solution (except when using hosted shared desktops, where you can share a GPU with multiple sessions on the same server OS) it’s not very scalable and mostly limited to heavy graphical designers and similar use cases. VGX on the other hand will extend the use cases by delivering a one-to-many solution. Personally I think this is the future, because it provides the best combination of scalability, bandwidth and user experience. Let’s dig a bit deeper into VGX.

When we open HDX monitor in a XenDesktop 7 session you will see the below message when VGX is not available :


Screen scraping vs Direct frame buffer access

When you have a GPU available, graphics are first rendered on the GPU before they are (deeply) compressed on the CPU and then sent down to the client. So first the output of the GPU (which would normally be sent to the monitor attached to it) needs to be captured, and after that compressed and tuned; this process is called screen scraping.

Screen scraping, while better than doing everything on the CPU, consumes additional resources, and this is where Nvidia VGX (formerly known as Monterey and now called NVIDIA GRID technology) comes into play: it provides an API which allows remote display protocols to access the frame buffer (the dedicated H.264 encoding engine of the GRID card) directly. The below picture shows how the GRID technology is integrated at the hypervisor level :

VGX (Source Nvidia)

NVIDIA User Selectable Machines (USM) (Now called VGPU Profiles)

USM is the way Nvidia licenses the GRID technology; you can also translate the USMs to use cases. There are 3 configurable USMs :

Standard USM (bundled with the NVIDIA GRID card enabling a true PC experience for up to 100 task workers)
NVS USM (mission-critical professionals who use a variety of productivity and dedicated business applications)
Quadro USM (designers, artists, and scientists that require interactive 3D graphics and full compatibility)

So when you want to leverage the GRID virtualization technology, you can do this for about 100 “normal” task users on a single GRID card (this can be less or more depending on the type of card and type of users); in this way you can already build a highly scalable solution for general use cases. For heavier graphical demands you need the NVS USM or Quadro USM, and this will cost additional licenses; the good thing is you can mix and match the USMs on a single GRID card. Of course the limits of the GRID card need to be taken into account. For more information about Nvidia GRID and the USM use cases click here.
XenServer will be the first hypervisor supporting GRID GPU virtualization; at the time of writing it’s not yet available, but you can already order servers with GRID cards and leverage the onboard Kepler GPUs by using traditional GPU passthrough technology.

** Update **
1 October 2013, Citrix and NVIDIA released the GRID VGPU (Tech Preview) and made some changes in names and numbers :

– USM is now called VGPU Profiles, with the K100 profile being the lowest, supporting a maximum of 32 VGPUs per board (K1 card), and K260Q being the highest, supporting 4 power users (designers) on a K2 card.
– In the following table you will find the maximum number of VGPUs per card.


– I cannot find anything on licensing; it looks like the above hardware limits are the only limitation in the tech preview.


In this blogpost I described the new graphics options and choices you have in XenDesktop 7; remember that you don’t have to configure anything to get an awesome experience out of the box. The CPU based deep compression codec is really a giant leap forward compared to the first generation of Adaptive Display, especially when it comes down to mobile user experience and WAN use cases. As more Citrix Receivers support the new codecs, it’s really a fluid experience across a broad range of devices. But keep in mind that you need additional resources on your host side compared to the “legacy display mode”.

Personally I had the best experience with the visual quality set to Build to Lossless and the Desktop Composition Redirection feature disabled; of course this depends on the use case. What I primarily tested was server rendered video (YouTube) and the look and feel of the desktop over a high speed WAN connection. What also amazed me was the look and feel of Windows 8 on the iPad with Receiver 5.8; they really made a big step forward in making it a native mobile experience, and playing video content performed very well using the new codecs.

Can we safely say that Adaptive Display first generation and Progressive Display have become obsolete?
Yes, I think so, but for now there is still a valid use case : for example, when you want to get high density numbers and you have primarily LAN users, Adaptive Display first generation still gives you the most scalable solution. But as CPUs become faster and GPUs with GRID technology become more mainstream (also for general use cases), I think we can say goodbye to the good old Progressive Display after all those years.

Please note that the information in this blog is provided as is without warranty of any kind, it is a mix of own research and information from the following sources :

Edocs (Optimize graphics and multimedia delivery in XD7)
Edocs (GPU acceleration for Windows Desktop OS)
– Blogs from Derek Thorslund (Reinventing HDX, HDX leaps ahead, GPU sharing and Optimizations for W8\W2012)
– Nvidia (GRID Technology, GPU Virtualization)

XenDesktop 7, are XenApp Advanced customers left in the dark?

I’m sure it didn’t escape you, but this week Citrix announced XenDesktop 7 (formerly known as Excalibur). I think it’s awesome that XenApp and XenDesktop finally melt together into one; XenApp is not disappearing, it becomes just a simple VDA agent you install on top of RDS 2008R2 or 2012. What is disappearing is the whole IMA architecture; this is taken over by the FMA architecture known today from current releases of XenDesktop.

The 2 main editions of XenDesktop 7 are:

– XenDesktop 7 Platinum (lots of added value here)
– XenDesktop 7 Enterprise

Besides that there are 2 other editions :

– XenDesktop 7 VDI Edition (basic VDI functionality)
– NEW : XenDesktop 7 App edition (XenApp functionality only)

Citrix released a blog post regarding XenDesktop 7 and what it means for current XenApp customers, at the bottom you can find :

XenApp Enterprise and Platinum customers with active Subscription Advantage can update to this (XenDesktop app) edition at no additional charge and migrate their environments at their own pace.

So what can we conclude here and what does this mean for XenApp Advanced customers with Active Subscription?

1: These customers cannot upgrade to Windows 2012, as XenApp 6.5 is the last edition they can use and it’s only supported on 2008R2
2: These customers are stuck with IMA and cannot integrate it with the new FMA architecture of XenDesktop (think hybrid environments, with different license use cases)
3: These customers can upgrade their XenApp 6.5 environment with rollup pack 2, which is released at the same time as XenDesktop 7; while this brings a lot of the same functionality of XenDesktop 7 App edition to XenApp 6.5, will this be the case on an ongoing basis?

This is what the main benefit of Citrix Subscription Advantage is according to Citrix :

Get free version upgrades:  download new version releases at no additional cost. These updates include any major changes to the underlying product architecture as well as updates to the feature set of a given product platform. (source)

As you can read, this includes any major changes to the underlying product architecture. Can we translate the underlying architecture to Windows, or just the Citrix product? Either way, XenApp Advanced customers with an active subscription have disappeared from the radar since the XenDesktop 7 announcement.

Comments are already piling up on their blogpost, so hopefully Citrix will provide more information about this later (UPDATE: see this article for more information). I think a decent trade-up (and not the existing trade-up to the named license model) for these customers should be in place, so they can use it the same way as today (CCU). After all, it’s not their choice that there will only be one edition of XenApp left (XenDesktop App edition), so it’s not really fair to leave them behind in the dark while they pay a fair amount of subscription fee to get the latest versions (Citrix) on the latest platform (Windows).

Please note that the information in this blog is provided as is without warranty of any kind.

3 tips to harden your XenApp\RES environment

** This post was updated on 13-5-2013 and contains, besides additional information, also statements from RES Software. They responded very quickly to the outcome of the security audit and I would like to thank them for the nice collaboration **

Lately I was involved in a security and penetration scan at a customer operating in healthcare; because they store privacy sensitive information they need to comply with certain security regulations, which are audited on a regular basis. Based on the results of this scan I will provide some findings and tips that you can use to further enhance the security of your XenApp environment. These tips are primarily focused on XenApp in combination with RES Workspace Manager, but elements can also apply to other environments containing other UEM products.

Let’s begin with a short description of the security scan :
The scan was performed by a company specialized in IT related security audits; you could also say the scan was performed by a group of legal hackers 😉

The security scan consisted of 3 parts :

1: Scan from an unauthorized internal perspective  (plug in the UTP cable and see how far you can get without any account)
2: Scan from an unauthorized external perspective (try to gain access from outside the corporate network through external components)
3: Scan from an authorized user perspective (logged in with standard user credentials and see what kind of damage can be done)

Part 1 consisted of multiple scans for weaknesses like missing patches, and gathering NTLM hashes to decrypt passwords. The key point here is to consider disabling cached credentials for internal PCs that don’t leave the building, or to limit the number of cached credentials (the default is 10 cached credentials on Windows). And of course use strong passwords, so that when attackers get access to the hashes, decryption is very time consuming. They use different tools and methods to decrypt hashes, one of them being John the Ripper with pre-calculated rainbow tables. Once they got access to the password hashes, it’s amazing (and scary) how fast a hash can be decrypted into a plain text password.

Part 2 consisted of a DNS lookup to gather information about externally accessible elements of the infrastructure; after these are identified, they try to log on using default user names and passwords and scan for other weaknesses. This customer uses Netscaler in the DMZ in combination with SMS Passcode authentication, so they didn’t get far on that part. What should be taken into account is webmail\active-sync traffic (which often doesn’t have 2-factor authentication) in combination with password lockout policies: a hacker can perform a denial of service by trying lots of different usernames with wrong passwords to intentionally lock out users in AD (especially admin accounts).

Part 3 consisted of tests on the XenApp environment while logged in with a normal user account; the following tips are derived from findings in this part of the scan.

Tip 1: Timing

The first tip is about timing. If you are like me, you want to put as many of the user environment settings (policies, registry and application settings) as possible in RES Workspace Manager. If you configure things outside Workspace Manager, you lose the single point of management, and that’s one of the key reasons the customer invested in RES Workspace Manager.
Well, there is absolutely nothing wrong with that, but I will give you an example to think about when security is high on your design checklist.

Imagine that the company’s policy is to block Task Manager from running in a user session. To do this you can import the CtrlAltDel.admx policy in RES Workspace Manager and disable Task Manager there. With this policy, backed by the fact that you have AppGuard running in the background to block all unauthorized processes (so also Task Manager), you would assume it’s absolutely not possible to start Task Manager inside the user session; at least I thought so, until I saw the security report that proved otherwise…
What they did was fairly simple. Consider that RES Workspace Manager by design removes all policies at logoff, and that when a user logs on it takes some time (seconds) before the policies are applied and AppGuard is fully functional. While the Workspace Composer does its magic, it appears possible to start Task Manager through the CTRL-F3 hotkey right after the session is launched (press the hotkey repeatedly after the session is launched). The following screenshot shows how this looks.


After Task Manager is launched, RES Workspace Manager completes the workspace with all policies and rules, so starting any unauthorized processes through Task Manager isn’t possible; but users can see information you would rather hide (like performance, who is logged on, etc). Besides that, when you click on browse in Task Manager before the Workspace Composer applies the drive restriction settings, it is possible to browse the C drive, even when you strictly prohibited this in RES Workspace Manager :


Workaround :

If you (or your customer) don’t care about users accessing task manager and local drives, there is absolutely nothing you have to do here, but if the company’s policy is to strictly hide task manager and to hide and prevent access to local drives, you can do the following things to overcome this timing issue :

–  Apply the task manager policy through AD, this policy is applied in an earlier stage when a user logs on so passing the hotkey at session creation has no effect
–  Consider using desktop viewer, if you use desktop viewer it’s not possible to pass hotkeys
–  Modify the default ica file on the Webinterface\Storefront server and disable hotkeys there (Thanks Kees Baggerman for pointing this one out!)
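For the third workaround, hotkey mappings are defined in the [WFClient] section of the default.ica file. The fragment below is only an illustration of the idea; the exact HotkeyN number that maps to CTRL-F3 and the "(none)" disable value should be verified against your own default.ica and the Citrix documentation before changing anything:

```
[WFClient]
; Illustrative only - verify the correct HotkeyN entries on your
; Web Interface\Storefront server before changing them.
Hotkey2Char=(none)
Hotkey2Shift=(none)
```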

RES Software statement :

The timing issue with Task Manager will be fixed in an upcoming service release of RES Workspace Manager; expect a custom executable to test with shortly.

Tip 2: Macro security

Another way to bypass the application guard is to run processes inside the memory of other (allowed) processes. This is often done through Office, because most of the time these processes are marked as trusted and malicious code can easily be fired off using macros. The Office KAT Excel sheet (sources are at the bottom of this post) is an example of a sheet containing such macros. When opening Office KAT in a user session with default Office settings, the user is prompted to allow the macros (the message is in Dutch but it says : warning, macros are turned off, enable content to turn them on) :

Office KAT

When the content is enabled, we click on “open command line” and are prompted with the following message :


After clicking OK, the command prompt is started. In the following screenshot you can see that the actual process is Excel.exe; you can also see that we can easily switch to the C drive and browse through the contents of the XenApp server (note that the drive restriction policies are ignored) :


(Please note that this user session doesn’t have access to the command prompt and that access to local drives is strictly prohibited.)

And when we click on “open regedit” from Office KAT, which launches a registry editor under the Excel.exe process, you can see how simple it is to browse the whole registry of the XenApp server :


After chatting with Kees Baggerman about this topic, he pointed me to a similar Excel macro from Remko Weijnen. Remko was the first to point out that there was a way to bypass the application guard by running untrusted processes in the memory of authorized processes using an Excel macro; you can read about it here. This was one of the reasons that RES tightened the security rules of the application guard driver: they did this by blocking the svchost process from indirectly running other (unauthorized) processes. This security enhancement has been available since Workspace Manager 2011 SR4; the following rule is created (and disabled) by default in new RES Workspace Manager deployments :


Note : Check this rule if you upgraded your environment to this point, because during an upgrade the rule is created and enabled by default; this is done to avoid security warnings being thrown at your users after the upgrade. You can run the rule in logging mode for a while to identify what the impact will be in your environment. So if you want to harden your environment, you should disable this rule.

Another way to tighten the security of your RES environment is by restricting a managed application so it can only be launched by Workspace Manager itself (see the screenshot below). This prevents the managed application from being launched through other applications or unmanaged file types. This example video shows Remko’s macro in combination with this setting: you can see that the authorized process (excel.exe in the video example) isn’t started by RES Workspace Manager itself and is therefore blocked from launching once this setting is active.
Back to the Office KAT sheet that was used in the security scan: I tested all of these security settings in combination with Office KAT, and it turns out that I could still run the CMD and Regedit instances. I think this is because the macros in Office KAT start a different, embedded CMD instance in memory (from ReactOS) without reading or executing anything from disk. I think this is why the application guard cannot detect it, and this would probably be the moment where the virus scanner should kick in.

Does this mean that malicious macros like Office KAT can put your environment in direct danger? Not really, because we still only have user rights and any processes launched from the command prompt will be blocked by appguard. What might be a risk is that the contents of the local drives and the registry are exposed, which may contain sensitive information about your infrastructure. Besides that, there is another risk in the way the default file system in Windows is designed; Tip 3 will go further into this subject, but first : how can we prevent this kind of macro from running?

Workaround :

– You can define trusted locations (Office 2010 and above) and run unsigned macros only from there. When using trusted locations the Trust Center security feature is bypassed, but please note that Office KAT isn’t detected by this feature, so I wouldn’t rely on it alone
– You can disable all macros through a policy, or only allow signed macros
– Some virus scanners (also Windows Defender in Windows 8) detect Office KAT as a threat and place the Excel sheet in quarantine, but there are more variants of these macros and not all virus scanners detect them. If you want to rely on the virus scanner, pick one that’s designed to detect this kind of malicious macro. Also be careful with stripping down your virus scanner to optimize performance: you may win on performance, but you might lose on security. You can download Office KAT at the bottom of this post to check whether your virus scanner detects it.
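As a sketch of the second workaround: the “only allow signed macros” behaviour can be pushed through the Office Group Policy templates, which boil down to the VBAWarnings policy value in the registry. The example below assumes Excel 2010 (version key 14.0); adjust the version key for other Office releases, and prefer the official ADMX templates over direct registry edits in production:

```shell
# Sketch: restrict Excel 2010 to digitally signed macros only (assumes version key 14.0).
# VBAWarnings: 1 = enable all macros, 2 = disable with notification (default),
#              3 = disable all except digitally signed, 4 = disable all macros
reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Excel\Security" /v VBAWarnings /t REG_DWORD /d 3 /f
```

Because this lives under the Policies key it behaves like a GPO-enforced setting, so the corresponding Trust Center option is greyed out for the user.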

RES Software statement :

This kind of macro’s, which runs malicious code totally in memory (ReactOS in this case), is out of the scope of RES appguard, this type of macro’s should be detected by the virusscanner. (They added to this that there will be an even more tighter security mechanism in the current under development .net version of RES Workspace Manager).

Tip 3: File system security

So in Tip 1 and 2 we found possible ways to get access to the local drives of the XenApp server, even when access to them is strictly prohibited through policies. Besides exposing sensitive information, there is another culprit when users have access to local drives : the default ACLs in Windows allow users to create folders on the local drives and store data in them. Below is an example of CMD running under the Excel process, creating a folder and storing data in it :


You can imagine that users (or hackers with user rights) with bad intentions can make a mess of your XenApp server, and even worse, they can fill up the disk until it reaches full capacity, which can result in a denial of service. The benefit of using provisioned XenApp servers is that they revert to a clean state after each reboot, but the risk stays the same and may be even worse, because the write-cache will grow and you can end up with failing XenApp servers when the write-cache location isn’t sized for that. Also, when using Provisioning in combination with a local disk for the write-cache, the same default file system security applies to that disk. We can for example browse to the D drive and see the cache file :


By default users can create folders on this disk that persist after a reboot; when this disk is filled with data, the cache file cannot grow any further. This can affect the XenApp server at run time, but also after a reboot!
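You can verify these default permissions yourself with a quick check from a (test) user session; a sketch, with the relevant icacls rights explained in the comments — the exact entries may differ per Windows version and per disk:

```shell
# Sketch: inspect the ACL on the write-cache drive from a user session
icacls D:\
# Look for entries granting BUILTIN\Users or Authenticated Users rights such as:
#   (AD) = append data / add subdirectory -> users can create folders
#   (M)  = modify                         -> users can write and delete data
```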

Workaround :

I think the most realistic way to address this risk is to accept that users can in some way access the local drives of the XenApp server. You can hide the drives and narrow the risk that they are accessed through Tip 1 and 2, for example, but it’s hard to prevent it completely; some applications, for instance, don’t respect the drive restriction policies and can expose access to the local drives. To prevent write access to the local drives you can use the RES Workspace Manager security feature called Read-only blanketing (RoB). This feature transforms the local drives into read-only drives without messing around with the default ACLs. It’s included in the Silver (security) and Gold editions, so if you are not using this feature yet, you might consider it!

Conclusion :

In this blog post I showed you some simple tips to further enhance the security of your XenApp environment in combination with RES Workspace Manager. Hopefully this information is helpful when security is an important consideration in your environment or design. How tightly you need to secure your XenApp environment depends on the kind of company and the type of data you are securing. Most of the time, the more secure you make it, the more complex it will become. It’s all about eliminating risks, but I think even more important : accepting risks.

With RES Workspace Manager it becomes a lot easier to harden your environment from a user perspective: use the security features and, maybe more importantly, audit these features on an ongoing basis.

You can download Office KAT here; it’s part of the Interactive Kiosk Attack Tools (iKAT), which contains more useful tools to test the security of your environment. Please be careful browsing these pages, because they can contain content and material not suitable for work.

Please note that the information in this blog is provided as is without warranty of any kind.

Replicating your VDisk stores with DFS-R

In this blogpost I want to share some experiences and practical tips on using DFS-R to sync local VDisk stores between multiple Provisioning Servers. Let’s begin with a quick intro :

If you are streaming your VDisks from the Provisioning Servers local storage, you often want to replicate the stores to other Provisioning servers to provide HA for the VDisks and target connections. I’m a big fan of running VDisks from local storage because :

– No CIFS layer between the Provisioning Servers and the VDisks, increasing performance and eliminating bottlenecks
– No CIFS single point of failure
– No expensive clustered file system needed to provide HA for the VDisks

Caching on the device’s local hard drive or in the device RAM is the best option when using local storage for your VDisk store; this way you can easily load balance and fail over between Provisioning Servers. Of course there are some downsides to placing the VDisks on local storage :

– Need to sync VDisk stores between Provisioning Servers, resulting in higher network utilization during sync
– Double storage space needed
– Not an ideal solution when using a lot of private mode VDisks (the VDisk is continuously in use and cannot sync). Luckily we now have the Personal VDisk option in XenDesktop, so IMHO private VDisks aren’t really necessary anymore in an SBC or VDI deployment.

Because one size doesn’t fit all, you can always mix storage types for storing VDisks depending on your needs, but for Standard mode images combined with caching on the device hard drive or device RAM, using the local storage of the Provisioning Server is a good option.

Since Provisioning Server version 6, Citrix added functionality regarding versioning, and you can now also easily see whether the Provisioning Servers are in sync with each other, but you have to configure the replication mechanism yourself. I have worked with a lot of different replication solutions to replicate the VDisks between Provisioning Servers, from manual copies to scripts using robocopy and rsync, running both scheduled and manually. Lately I use DFS-R more and more to get the job done. Because DFS-R provides a 2-way (full mesh) replication mechanism, it’s a great way to keep your VDisk folders in sync, but there are some caveats to deal with. Below I will give you some practical tips and a scenario you can run into when using DFS-R :

Last Writer wins
This is one of the most important things to deal with: DFS-R uses the last writer wins mechanism to decide which file overrules the others. It’s a fairly simple mechanism based on time stamps: whoever changes the file last wins the battle and is synchronized to the other members, overwriting existing (outdated) files!
If you hold this mechanism against the way Provisioning Server works, you will quickly run into the following trap :

Imagine you’re environment looks like the image below.


Step 1:
Because you want to update the image, you connect to the Provisioning console on PVS01 and create a new VDisk version. This creates a maintenance (.avhd) file on PVS01; because this file is initially very small, it will quickly replicate to the other Provisioning Servers.

Step 2:
You spin up the maintenance VM. At this point you don’t know which Provisioning Server the maintenance VM will boot from (decided by the load balancing rules), so let’s say it boots from PVS03.
You make changes to the maintenance image and shut it down.
Now the fun part is going to start! Based on the changes you made and the size of the .avhd file, it can take some time to replicate the updated file to the other Provisioning Servers.

Step 3:
In the meantime, still connected to PVS01, you promote the VDisk to test or production.
When you promote the VDisk, the SOAP service will mount the VDisk and make changes to it for KMS activation etc.

Step 4:
You boot a test or production VM from the new version, and you don’t see your changes; furthermore, they are lost!
What happened? Well, you ran into the last writer wins trap of DFS-R :
The promote takes place on the Provisioning Server the console is connected to, so in this example that is PVS01. PVS01 doesn’t have the updated .avhd from PVS03 yet, so you promoted the empty .avhd file that was created when you clicked on new version.
Because the promote action updates the time stamp of the .avhd file, this file replicates to the other Provisioning Servers (again quickly, because it’s empty), overwriting the one with your updates.

Here are two options to work around this behaviour :

Option 1 :
After you make changes, wait until the replication is finished (watch the replication tab in the Provisioning console) and promote the version when every Provisioning Server is in sync.

Option 2 :
If you can’t wait connect the console to the Provisioning Server where the update took place, promote the new version there, so you are sure you promote the right .avhd file

Below I will give some other practical tips when using DFS-R to replicate your VDisk stores.

1. Ensure you always have enough free space left for staging files and make your staging quota big enough to replicate the whole VDisk (1.5 times the VDisk size, for example)

2. Create multiple VDisk stores for your VDisks; this allows you to create multiple DFS-R replicated folders, and replication works better with multiple smaller folders than with one very large folder

3. Watch the event viewer for DFS-R related messages; DFS-R logs very informative events, so keep an eye on high watermark events and other events related to replication issues

4. Check the DFS-R backlog to see what’s happening in the background and to verify that there are no files stuck in the queue; you can use the dfsrdiag tool to watch the backlog, for example :

dfsrdiag backlog /receivingmember:PVS03 /rfname:PVS_Store_01 /rgname:PVS_Store_01 /sendingmember:PVS01
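To keep an eye on the backlog in all directions, you can wrap this in a small loop; a rough sketch assuming the three servers from the scenario above (run from an interactive command prompt; double the % signs when putting this in a batch file):

```shell
# Sketch: check the backlog from PVS01 towards the other two members
for %s in (PVS02 PVS03) do dfsrdiag backlog /receivingmember:%s /sendingmember:PVS01 /rgname:PVS_Store_01 /rfname:PVS_Store_01
```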

5. Exclude .lok files from replication; they should not be the same on every Provisioning Server

6. Plan big DFS-R replication jobs during off-peak hours, because booting your targets will be slower while DFS-R is replicating; you can also limit the bandwidth used for DFS-R replication traffic

7. Before you start, check your Active Directory schema and domain functional level: if you want to use DFS-R, your Active Directory schema must be up-to-date and support the DFS-R replication objects. Also note that only DFS-R replication is necessary; no domain namespaces are needed.
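Some of these tips can also be applied from the command line with the dfsradmin tool. A rough sketch, assuming a replication group and replicated folder both named PVS_Store_01 and a 40 GB VDisk; verify the exact parameters against dfsradmin /? on your Windows version:

```shell
# Sketch: tip 1, raise the staging quota to ~1.5x the VDisk size
# (the staging size is specified in MB, so 60 GB here)
dfsradmin membership set /rgname:PVS_Store_01 /rfname:PVS_Store_01 /memname:PVS01 /stagingsize:61440
# Sketch: tip 5, extend the default file filter (~*, *.bak, *.tmp) with *.lok
dfsradmin rf set /rgname:PVS_Store_01 /rfname:PVS_Store_01 /rffilefilter:~*,*.bak,*.tmp,*.lok
```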

I can be very short here: my conclusion is that DFS-R can be a very nice and convenient way to keep your VDisk stores in sync, but you must understand how DFS-R replication works and how it behaves when combined with Provisioning Server. Hopefully this blog post gave you a better understanding of using DFS-R in combination with Provisioning Server; keep the above points in mind when you consider DFS-R as the replication mechanism for your VDisk stores.

Please note that the information in this blog is provided as is without warranty of any kind.