U2U Blog

for developers and other creative minds

Using Azure VMs as remote R workspaces in R Tools for Visual Studio

Running R on your local laptop is easy: you just download one of the R distributions (CRAN or Microsoft) and kick off RGui.exe. But if you’re a Microsoft-oriented developer or data scientist, you are probably familiar with Visual Studio. So you download and install R Tools for Visual Studio (RTVS) and you can happily run R code from within Visual Studio, link it to source control, and so on.

Remote workspaces in R Tools for Visual Studio

But this decentralized approach can lead to issues:

  • What if the volume of data increases? Since R holds its data frames in memory, we need at least 20 GB of memory when working with e.g. a 16 GB data set. The same goes for CPU power: if the R code runs on a more powerful machine, it returns results faster.
  • What if the number of RTVS users increases? Do we copy the data to each user’s laptop? This makes it difficult to manage the different versions of these data sets and increases the risk of data breaches.
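To make the first bullet concrete, here is a rough back-of-the-envelope check (a Python sketch; the 8 bytes per cell assumes numeric columns, and the factor of 2 for R’s temporary copies is a rule of thumb, not a measurement):

```python
def estimated_ram_gb(n_rows, n_cols, bytes_per_cell=8, overhead_factor=2.0):
    """Rough estimate of the RAM needed to work with a data frame in R.

    bytes_per_cell=8 assumes numeric (double) columns; the overhead
    factor covers the temporary copies R makes while transforming data.
    """
    data_gb = n_rows * n_cols * bytes_per_cell / 1024**3
    return data_gb * overhead_factor

# A hypothetical 100-million-row, 20-column numeric data set:
print(round(estimated_ram_gb(100_000_000, 20), 1))  # 29.8
```

If the estimate exceeds your laptop’s RAM, a remote workspace on a larger VM becomes attractive.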

This is why RTVS also allows us to write R code in a local instance of Visual Studio while executing it on a remote R server, which also holds a central copy of the data sets. This remote server can be an on-premises server, but if the data scientists do not need permanent access to it, it could be cheaper to just spin up an Azure virtual machine.


When we click the Setup Remote R… menu in our Visual Studio R Tools, it takes us to this webpage, which explains in detail how to set up a remote machine.
Unfortunately this detailed description was not detailed enough for me and I bumped into a few issues. So if you got stuck as well, read on!

Create an Azure VM

Log in to the Azure portal, click the + to create a new object and create a Windows Server 2016 Datacenter virtual machine. Stick to the Resource Manager deployment model and click Create.
When configuring the VM you can choose between SSD and HDD disks. The latter are cheaper, the former are faster if you often need to load large data sets. Also pay attention when you select your region: by placing the VM in the same region as your data you can save data transfer costs. But be aware that VM prices differ between regions: at the time of writing, the VM I used is 11% cheaper in West Europe than in North Europe.

In the next tab we must select the machine we want to create. I went for an A8m v2, which delivers 8 cores and 64 GB of RAM at a cost of about 520 euro/month if it runs 24/7.

Azure VM settings

Before I start my machine I change two settings: one essential, one optional.

The essential setting is to give the machine a DNS name, so that we can keep addressing it by a fixed name even if it gets a different IP address after a reboot:
In the essentials window of the Azure virtual machine blade, click the public IP address. This opens a dialog where we can set an optional (but in our case required) DNS name. I used the name of the VM I created; I don’t know if that’s essential, but it did the job in my case:

The optional setting is the auto-shutdown option on the VM, which can save you some money by shutting the VM down while all data scientists are asleep.

Configure the VM

Now we can start the virtual machine, connect via remote desktop and start following the instructions in the manual:

  1. Create a self-signed certificate. Be sure to use the DNS name we configured in the previous step. In my example, the statement to run from PowerShell would be:
    New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "njremoter.westeurope.cloudapp.azure.com"
  2. Grant Full Control and Read permissions on this certificate to the NETWORK SERVICE account using certlm.msc:
  3. Install the R service using this installer
  4. Edit the C:\Program Files\R Remote Service for Visual Studio\1.0\Microsoft.R.Host.Broker.Config.json file and point it to the DNS name we used before. Again, on my machine this would be:
      "server.urls": "",
      "security": {
        "X509CertificateName": "CN=njremoter.westeurope.cloudapp.azure.com"
      }
  5. Restart the virtual machine. Verify that the two R related services start up correctly:
    If the services fail to start verify the log file at C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp
  6. Last but not least we must open up port 5444. The R service installer already takes care of this in the Windows firewall, but we still need to open up the port in the Azure network security group. In the Azure portal, go to the virtual machine, select Network Interfaces and click the default network interface. In the blade of this interface, click the default network security group:
    Create a new inbound security rule allowing access on TCP port 5444. In my example the firewall allows access from all IP addresses; for security reasons you should limit yours, using a CIDR block, to just the IP addresses of your data scientists.
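Before saving such a rule, you can sanity-check that a candidate CIDR block really covers your team’s addresses (a Python sketch using the standard ipaddress module; the IP addresses below are made up):

```python
import ipaddress

def cidr_covers(cidr, addresses):
    """Return True when every address falls inside the CIDR block."""
    network = ipaddress.ip_network(cidr)
    return all(ipaddress.ip_address(a) in network for a in addresses)

office_ips = ["203.0.113.10", "203.0.113.25"]  # hypothetical client IPs
print(cidr_covers("203.0.113.0/27", office_ips))  # True: both fall in .0-.31
print(cidr_covers("203.0.113.0/28", office_ips))  # False: .25 is outside .0-.15
```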

Configuring RTVS

Our server is now configured; the last (and easiest) step is to configure our R Tools for Visual Studio. So, on the client machines, open up the R Tools Workspaces window:
Then click Add and configure the server. On my machine this would be:

Click Save, then click the blue connect arrow next to our remote connection:
Unless you added the VM to your domain, we now need to provide the credentials of one of the users created on the server. You also get a prompt warning you that a self-signed certificate was used, which is less safe:

And from now on we are running our R scripts on this remote machine. You can see this, among other things, by the changed prompt: *> instead of >.

If you have problems connecting to your server, Visual Studio may claim it couldn’t ‘ping’ your server. Don’t try a regular ping, since Azure VMs don’t respond to it. Use e.g. PsPing or a similar tool to ping a specific port, in our case 5444:
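If you have no PsPing at hand, the same check is a plain TCP connect (a Python sketch; the host name below is just the one from my example):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Attempt a TCP handshake against host:port; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("njremoter.westeurope.cloudapp.azure.com", 5444)
```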

I hope this gets you up to speed with remote R workspaces. If you still experience issues you can always check the GitHub repository for further help.

Creating histograms in Power BI was never easier

Since the October 2016 release, Power BI Desktop contains a binning option (under the Group option). With this option it becomes super easy to create a histogram. Let’s demonstrate:

First, assume you made a data model containing the DimProduct and FactInternetSales tables of the AdventureWorksDW sample database. If we request the sum of the order quantity grouped by ListPrice, we get numbers like these:

But if we plot these numbers in e.g. a bar chart, it becomes hard to read due to the large number of distinct list prices:


This is where a histogram would come in handy. With the new release of Power BI Desktop all we have to do is right click the ListPrice column in our field well and select Group:


In the Groups dialog we select the bin size. Unfortunately we cannot set the number of bins, so we still need to do a bit of math for that. In our case we want to group ListPrices in $100 groups, so we set the Group Type to Bin and the bin size to 100. Notice this only works on columns which are numerical or of type date/datetime; other columns require the List group type.
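The “bit of math” is simple: divide the value range by the number of bins you want (a Python sketch; the ListPrice range used below is approximate):

```python
import math

def bin_size_for(min_value, max_value, desired_bins):
    """Derive the bin size to enter in the Groups dialog from a target bin count."""
    return math.ceil((max_value - min_value) / desired_bins)

# ListPrice in AdventureWorksDW runs roughly from 2 to 3578;
# aiming for ~36 bins suggests a bin size of about 100:
print(bin_size_for(2, 3578, 36))  # 100
```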

As soon as we click OK, we now get a new field in our field well: ListPrice (bins):


By using this new field in our charts and reports, we get the data per bin. For a histogram chart, just plug it into a column chart:

See, that was easy!

Applying JSLink on a Dashboard page

In this article I will discuss the issues I came across when trying to apply JSLink on a SharePoint dashboard page. For me, a dashboard page is a page on which you put multiple web parts in order to get a global overview of a certain entity.

To demonstrate this, consider that you keep track of a normal task list and an issue tracking list, which you relate to the task list by means of a lookup field. I wanted to play around with the percentage complete column, so I made sure it was added to the default view of the tasks list.
Once this was all set up, I created a new page with a two-column layout, on which I added the tasks list and issues list web parts, and connected them based on the lookup field I created.



With the dashboard page created, I could now add some JSLink to the tasks web part. I decided to simply style the Due Date and the % Complete columns. Here is the code I used for this:

var U2U = U2U || {};

U2U.RegisterTemplateOverrides = function () {
    // Fields template registration
    var overrideCtx = {
        Templates: {
            Fields: {
                'DueDate': { 'View': U2U.RenderDueDate },
                'PercentComplete': { 'View': U2U.RenderPercentCompleteView }
            }
        }
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);
};

U2U.RenderDueDate = function (ctx) {
    var value = ctx.CurrentItem[ctx.CurrentFieldSchema.Name];

    var date = new Date(value);
    var today = new Date();

    return (date < today) ? '<b>' + value + '</b>' : value;
};

U2U.RenderPercentCompleteView = function (ctx) {
    var value = ctx.CurrentItem[ctx.CurrentFieldSchema.Name];

    // .ms-TopBarBackground-bgColor
    // .ms-EmphasisBackground-bgColor

    var output = [];
    output.push('<div style="display:block; height: 20px; width: 100px;" class="ms-TopBarBackground-bgColor">');
    output.push('<span style="color: #fff; position:absolute; text-align: center; width: 100px;">');
    output.push(value);
    output.push('</span>');
    output.push('<div style="height: 100%; width: ');
    output.push(value.replace(" %", ""));
    output.push('%;" class="ms-EmphasisBackground-bgColor"></div></div>');
    return output.join('');
};


Notice in the code that I’m (ab)using the classes ms-TopBarBackground-bgColor and ms-EmphasisBackground-bgColor. By using these, even when the composed look of the site is changed, the task progress bar will still have the same look and feel as the page it is hosted on.

After associating this JavaScript file with the tasks web part on my dashboard page, I noticed that the task list check boxes and the strike-through were not working any more. Moreover, the due date in the issues list was also being styled, while I didn’t associate my JavaScript file with that web part.
Now, what’s happening here?


Fixing the original rendering of the tasklist

Now for what is happening with the task list: if you dig around in the sources of the web page (make sure the debug.js files are loaded ;-)) and compare your page without JSLink to your page with JSLink, you’ll find that the file hierarchytaskslist.js is the one responsible for styling the task list. In the page that is using JSLink, this file is apparently loaded “long” after your file. You can find this out by putting some breakpoints in your code and in hierarchytaskslist.js.


To solve this, you have to make sure hierarchytaskslist.js is registered before your code starts doing the rendering. You can achieve this by putting the next two lines before the call to RegisterTemplateOverrides(ctx). I’m using the debug version, because I’m not done with my issues yet …

RegisterSod('hierarchytaskslist.js', '/_layouts/15/hierarchytaskslist.debug.js');
LoadSodByKey('hierarchytaskslist.js', null);

Elio Struyf has described a solution for this in his blog post.

Now, after applying this change and having a look at the final result, the title looks good, but I lost my rendering on the DueDate.


Overriding the hierarchytasklist templates

To understand what is happening now, you need to understand what hierarchytaskslist.js is doing. That file is itself doing the JSLink registrations for a couple of fields in a task list, and thus overwriting your template for DueDate. For example, this is the template registration for DueDate:

function _registerDuedateFieldTemplate() {
    var DuedateFieldContext = {
        Templates: {
            Fields: {
                'DueDate': {
                    'View': window.DuedateFieldTemplate.RenderDuedateField
                }
            }
        },
        ListTemplateType: 171
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(DuedateFieldContext);
}

ExecuteOrDelayUntilScriptLoaded(_registerDuedateFieldTemplate, 'clienttemplates.js');

You’ll notice the use of ExecuteOrDelayUntilScriptLoaded to make sure the registration does not happen until clienttemplates.js is loaded. The clienttemplates file is the so-called “engine” behind JSLink, doing all the necessary things on your page. Now what we want is to register our template after hierarchytaskslist.js has done this, and before the engine starts rendering. So let’s just use the same trick as the hierarchytaskslist file. I pull the two lines for registration of hierarchytaskslist.js out of my RegisterTemplateOverrides and at the end of the file I add the following line:

ExecuteOrDelayUntilScriptLoaded(U2U.RegisterTemplateOverrides, 'hierarchytaskslist.js');

After applying this: problem solved! Using some breakpoints, you’ll notice that the original template is registered first, but then overridden by yours.


Back to the dashboard

Now, I didn’t look at my dashboard page for a while, assuming everything would be just fine. However, when I went to the dashboard page I noticed my rendering wasn’t working at all, nor were the original task hierarchy templates.

The main reason for this was that my dashboard page was running on a site with the Minimal Download Strategy (MDS) enabled. So I fixed that as described here and replaced my last line, which waited on hierarchytaskslist.js to be loaded, with:

//Register for MDS enabled site otherwise the display template doesn't work on refresh
U2U.sitePath = window.location.pathname.substring(0, window.location.pathname.indexOf("/_layouts/15/start.aspx"));
//CSR-override for MDS enabled site
RegisterModuleInit(U2U.sitePath + "/SiteAssets/JSLINK/tasks.js", U2U.RegisterTemplateOverrides);
//CSR-override for MDS disabled site
U2U.RegisterTemplateOverrides();

But of course, since I’m no longer waiting on the task hierarchy templates to be loaded, I lose the original styling again (i.e. check boxes, strike-through title, …). I need to make sure that I wait for the task hierarchy templates to be loaded before clienttemplates.js starts rendering.

The problem however is that you can wait for hierarchytaskslist.js to be loaded, but clienttemplates.js doesn’t wait for it. If you haven’t registered your templates before it starts rendering them, they are not applied at all. So the trick is to make sure that before it renders, everything is loaded. If you have a look at the source of your page and the file clienttemplates.debug.js, you’ll find a function RenderListView that is called from within your page and actually starts the rendering.


So why not make sure that, from the moment clienttemplates.js is loaded, we alter the RenderListView function so that it waits for the necessary things to be loaded, in our case the task list hierarchy templates. So I removed the code for the MDS registration and replaced it as follows, fixing the dashboard!

//Register for MDS enabled site otherwise the display template doesn't work on refresh
U2U.sitePath = window.location.pathname.substring(0, window.location.pathname.indexOf("/_layouts/15/start.aspx"));
//CSR-override for MDS enabled site
RegisterModuleInit(U2U.sitePath + "/SiteAssets/JSLINK/tasks.js", U2U.RegisterTemplateOverrides);
//CSR-override for MDS disabled site
U2U.RegisterTemplateOverrides();

// Now override the RenderListView once the ClientTemplates.JS has been called
ExecuteOrDelayUntilScriptLoaded(
    function () {
        //Take a copy of the existing definition of RenderListView
        var originalRenderListView = RenderListView;

        //Now redefine RenderListView with our override
        RenderListView = function (ctx, webPartID) {
            //Wait until the task hierarchy templates are registered
            ExecuteOrDelayUntilScriptLoaded(function () {
                //Call the original RenderListView
                originalRenderListView(ctx, webPartID);
            }, 'hierarchytaskslist.js');
        };
    }, 'clienttemplates.js');



What about that DueDate column in the issues list?

We still see the DueDate column in the issues list being styled too, which is weird. That is because the clienttemplates.js engine applies your templates to the whole page, and if your template does not limit itself to a certain list, view or other scope, it will match any column with the same name. When registering your template, you can set the property ListTemplateType to 171 to make sure it only applies to task lists.
The ListTemplateType property will not help if you have multiple task list view web parts on your page that need different rendering for the same columns; have a look here on how you could possibly overcome this.

If you want to review the complete file, you can find it on GitHub.

Seamless integration of Microsoft Word and Microsoft SharePoint Online with the new SharePoint Add-In Export to Word

BRUSSELS - June 17th 2016 - U2U today announced the release of the Microsoft SharePoint Online Add-In Export to Word, which provides better integration of Microsoft Word and Microsoft SharePoint Online. Export to Word increases users’ productivity by automating the creation of Word documents based on data stored in SharePoint: it takes your Word document and injects data coming from a SharePoint list.

The creation of Word documents containing data stored in SharePoint lists is not a trivial task in SharePoint Online. Export to Word allows you to associate your Word templates with existing SharePoint lists and to map SharePoint columns to different locations within your Word document.

“Microsoft SharePoint Online and Microsoft Word are amongst the most used products within our organization. U2U has provided us with an easy to use product that brings them closer together,” said Cliff De Clerck, Managing Partner at MediPortal. “The addition of Export to Word had a positive impact on the productivity of our users and allowed us to automate the generation of our Word documents.”

Export to Word is now available for free in the Office Store. Go to your SharePoint Online environment and install the app in your SharePoint sites.

“Customers have told us that Export to Word was the app they were looking for to automate their document creation,” said Jan Tielens, Technical Evangelist at Microsoft Belgium & Luxembourg. “The addition of applications like Export to Word to the Office Store has proven again the power of the new SharePoint Add-in model.”

U2U organizes training for developers and IT professionals on Visual Studio, SharePoint, Microsoft SQL Server, Microsoft Dynamics CRM, Microsoft Exchange Server, Windows Server, business intelligence, mobile and web technologies.

More info at: http://wordit.u2u.be

What’s new in SharePoint 2016 CSOM (DocumentSets)

With SharePoint 2016 RC available, it’s interesting to see what new additions we have in CSOM and/or the REST APIs. Doing some research into the differences between the SharePoint 2013 RTM CSOM and the SharePoint 2016 RC CSOM, I noticed several additions. This post describes the ones related to DocumentSets.

In SharePoint 2013, the only thing you could do with document sets from within CSOM was create one, given a parent folder, a name and the content type ID of the document set. You weren’t able to do anything else.

So what’s new? You’re now also able to do the following things:

  • Create/alter the contenttype definition of a documentset contenttype.
  • Get a document set based on a listitem/folder.
  • Export a document set to a zip file (I mean OpenXML file).
  • Import a document set from a zip file (again … OpenXML file).



There is a new class available that allows you to change the definition of a documentset contenttype. It gives you access to the SharedFields, Templates, AllowedContentTypes, …
You can use it as follows:

var documentSetTemplate = DocumentSetTemplate.GetDocumentSetTemplate(ctx, ct);
documentSetTemplate.DefaultDocuments.Add("template.odt", ct_document.Id, bytes);
documentSetTemplate.Update(true);
ctx.ExecuteQuery();

Where ct and ct_document are respectively a documentset contenttype and a document contenttype.

Unfortunately, you’re still not able to directly set the default document set view, nor alter the url of the docsethomepage.aspx.

Export Document Set

You can now export a documentset to an OpenXML (zip) file as follows:

ListItem item = GetDocumentSet(...); // DocumentSet item
ctx.Load(item, it => it.Folder);
ctx.ExecuteQuery();

var documentSet = DocumentSet.GetDocumentSet(ctx, item.Folder);

var stream = documentSet.ExportDocumentSet();
ctx.ExecuteQuery();

using (FileStream fs = new FileStream("docset.zip", FileMode.Create, FileAccess.Write, FileShare.None))
{
    stream.Value.CopyTo(fs);
}



Import Document Set

You can now import an exported document set as follows:

using (FileStream fs = new FileStream("docset.zip", FileMode.Open, FileAccess.Read, FileShare.None))
{
    DocumentSet docSet = DocumentSet.ImportDocumentSet(ctx, fs, "Imported", list.RootFolder, ct.Id);
    ctx.ExecuteQuery();
}



Announcing the SharePoint Add-in “Export to Word”

Today, we’re glad to announce the FREE SharePoint Add-in “Export to Word”, developed by the team at U2U. This add-in fixes one of those issues that you’ve probably come across in SharePoint, namely generating Word documents based on SharePoint list items. A bit like a mail merge for list items!

This functionality has never been available out of the box for regular SharePoint list items. You could achieve something like it using Quick Parts in Word, but that only works for document libraries.

Now, what does the add-in allow you to do? It allows you to configure, per SharePoint list, one or more Word templates. You can even upload your own templates! The add-in then allows you to link fields from your selected list to the content of the template. Once your template is completely designed, you can just go to the SharePoint list and export any item to Word using your template. You can even export multiple items in bulk!


Do you want to know more about this add-in? Just go to the official site.
Do you want to install it from the Office Store? You can find the add-in here.

For a quick start guide on how to work with the add-in, have a look at the following video:


Do note that this is the first version of the add-in, and we want your feedback to further extend and improve it. You can contact us via the regular channels.

Use PowerShell and CSOM to batch create SharePoint Online Site Collections with a custom template



In this post I will show you how to create multiple Site Collections in SharePoint Online using PowerShell and the CSOM (Client Side Object Model) libraries provided by Microsoft. It will also be possible to apply a custom template to the Site Collections that we create.

As, at the time of writing, the possibilities of the SharePoint Online Management Shell are too limited to do this – especially regarding sandboxed solutions and thus applying templates – everything will be done by using the CSOM SharePoint assemblies in PowerShell. We will initialize .NET objects using PowerShell.

These are the steps we want to implement in the PowerShell Script:

  1. First we will check if the Site Collection already exists in SharePoint Online. If it does, we will delete it.
  2. Before we can create a new Site Collection at the URL of a deleted Site Collection, we need to remove it from the recycle bin.
  3. When we are sure no Site Collections exist, let's now create them!
  4. In case we want to apply a custom template, there are a few steps to perform:
    1. Upload the template to the Solutions gallery
    2. Activate the uploaded solution
    3. Apply the uploaded template to the Root Web of the Site Collection
    4. Delete the uploaded template file
  5. Set the security for the site collection

The assemblies we need to load are the following:

  • Microsoft.Online.SharePoint.Client.Tenant.dll
  • Microsoft.SharePoint.Client.dll
  • Microsoft.SharePoint.Client.Runtime.dll
  • Microsoft.SharePoint.Client.Publishing.dll

To get these DLLs, download the SharePoint Server 2013 Client Components SDK here.

The tenant DLL is used for deleting and creating site collections via your SharePoint Online admin URL.
The other DLLs are used to access the created site collections and perform operations on them. The publishing DLL specifically is required to activate sandboxed solutions.


Getting started


First we'll create a PowerShell cmdlet, which we can then call with a set of parameters to get the Site Collection creation process going. All these parameters have a default value, but they can be overridden when calling the function. Notice the SiteTemplate by default uses the Team Site template ("STS#0"). If you wish, you can also get a list of available templates from your tenant: link.

The rest of the parameter names should speak for themselves.

function CreateSiteCollections{
    param(
        #admin details
        [string]$Domain = "*yourid*.onmicrosoft.com",
        [string]$UserName = "*adminusername*@$Domain",
        [string]$UserPassword = "*yourpassword*",
        [string]$AdminUrl = "https://*yourid*-admin.sharepoint.com",
        #site details
        [string]$SiteUrl = "https://*yourid*.sharepoint.com",
        [string]$SiteTitle = "My Team Site",
        [string]$SiteOwner = "*siteowner*@$Domain",
        [string]$Prefix = "*SiteUrlPrefix*",
        [string]$Suffix = "",
        [string]$SiteTemplate = "STS#0",
        [switch]$IsCustomTemplate,
        #site numbers
        [int]$StartIndex = 1,
        [int]$NumberOfSites = 5,
        [int]$BatchSize = 5
    )
    #We’ll add the rest here
}

Inside this function, the first thing we do is load the CSOM assemblies from their installation path. When installing the SDK, the assemblies can be found at C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI (the client DLLs) and at C:\Program Files\SharePoint Client Components\16.0\Assemblies (the tenant DLL).
As a first step in the script we will verify that both paths exist.

As we only need the publishing assembly when applying a custom template, we will only load it when the $IsCustomTemplate switch parameter is passed in. When executing this script, we assume there is a folder Templates at the execution path from which we will load the custom template when required. When $IsCustomTemplate is provided to the function call, we expect the $SiteTemplate parameter to contain the filename of the site template, e.g. "MyCustomTemplate.wsp".

If so desired, you could also create a local folder containing your assemblies and point the Add-Type calls to them.

Write-Host "Loading Assemblies`n" -ForegroundColor Magenta
try{
    $ClientAssembyPath = Resolve-Path("C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI") -ErrorAction Stop
    $TenantAssembyPath = Resolve-Path("C:\Program Files\SharePoint Client Components\16.0\Assemblies") -ErrorAction Stop
    Add-Type -Path ($ClientAssembyPath.Path + "\Microsoft.SharePoint.Client.dll")
    Add-Type -Path ($ClientAssembyPath.Path + "\Microsoft.SharePoint.Client.Runtime.dll")
    Add-Type -Path ($TenantAssembyPath.Path + "\Microsoft.Online.SharePoint.Client.Tenant.dll")
    if($IsCustomTemplate){
        Add-Type -Path ($ClientAssembyPath.Path + "\Microsoft.SharePoint.Client.Publishing.dll")
        $CurrentPath = Convert-Path(Get-Location)
        $PackagePath = Resolve-Path($CurrentPath + "\Templates\" + $SiteTemplate) -ErrorAction Stop
    }
}
catch{
    Write-Host "Can't load assemblies..." -ForegroundColor Red
    Write-Host $Error[0].Exception.Message -ForegroundColor Red
    break
}

Batch create sites

For this script, I've chosen to create the Site Collections in batches. Calling ExecuteQuery after each operation takes too long, but creating 20 sites with one ExecuteQuery call is just too much for the web service to handle. We'll pick 5 as a default batch size, but feel free to choose what works best for you.
The idea is that we launch, for instance, 5 site collection creation operations in one go, and then wait for all the operations to complete. This improves the execution time of our script.

First, we will generate the site collection names and put them in the $SiteNames variable. Each site name is a combination of the $Prefix parameter, the current index and the $Suffix parameter.
We then grab a batch-sized chunk of the site names array and copy it into the $BatchSiteNames variable, and perform all required operations for the sites in that batch.
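The chunking itself is language-agnostic; sketched in Python (with made-up site names) it boils down to:

```python
def batches(items, batch_size):
    """Yield consecutive batch_size-sized chunks; the last one may be smaller."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Hypothetical example: 12 site names processed in batches of 5.
site_names = [f"https://tenant.sharepoint.com/sites/Site{i}" for i in range(1, 13)]
print([len(batch) for batch in batches(site_names, 5)])  # [5, 5, 2]
```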

$indexes = $StartIndex..($StartIndex + $NumberOfSites - 1)
[string[]]$SiteNames = @()
$indexes | % { $SiteNames += ($SiteUrl + "/sites/" + $Prefix + $_ + $Suffix) }
Write-Host "The following sites will be created:" -ForegroundColor Magenta
$SiteNames | % { Write-Host $_ }

$CurrentIndex = 0

try{
    While($CurrentIndex -lt $SiteNames.Length){
        if($CurrentIndex + $BatchSize -le $SiteNames.Length){
            $BatchSiteNames = $SiteNames[$CurrentIndex..($CurrentIndex + ($BatchSize - 1))]
        }
        else{
            $BatchSiteNames = $SiteNames[$CurrentIndex..($SiteNames.Length - 1)]
        }
        Write-Host "`nProcessing batch of sites $CurrentIndex -> $($CurrentIndex + $BatchSiteNames.Length)" -ForegroundColor Yellow

        #Site Creation logic goes here

        $CurrentIndex += $BatchSiteNames.Length
    }
}
catch{
    Write-Host "Something went wrong..." -ForegroundColor Red
    throw $Error[0]
}

Create Sites

Let's start with the first operation: deleting the site collection if it already exists! First, we'll get all the existing site collections from the SharePoint Online tenant; then we'll match their URLs against the ones we are trying to create. If we have a match, we delete the site collection.

#create the sites   
Write-Host "`nCreating the site collections`n" -ForegroundColor Magenta    
CreateSites -Sites $BatchSiteNames 

We will split up the logic into functions; each function executes one step of the site creation process. First we'll create the CreateSites function. This function accepts a set of site names, namely the ones we want to create in the current batch.

The function should be inside the CmdLet body, in order to have the parameters available.

We need to initialize a SiteCreationProperties object with our site name, url and other configuration properties.

You'll notice that we get a ClientContext object from the GetContext function, so we'll implement that one in the next step. When working with a ClientContext, we first need to perform a ClientContext.Load() on all the objects we want to use and read properties from, followed by an ExecuteQuery() to actually get the objects. With the Load we are queueing things to be loaded, while the ExecuteQuery will then execute the load.

When we perform operations on the TenantAdministration.Tenant namespace in CSOM, what we get back is an SpoOperation object. On this object we can check if the operation has completed by reading the IsComplete property. As mentioned before, we will create a batch of operations first, execute the query, and then wait for all the operations to complete. As this can be implemented in a generic way for all the operations we are going to perform we'll write a function WaitForOperations that will do exactly that.
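The waiting pattern behind WaitForOperations can be sketched as a generic poll loop (Python, with a stand-in refresh callback playing the role of the ClientContext round-trip that re-reads each operation's IsComplete property):

```python
import time

def wait_for_operations(operations, refresh, poll_seconds=5):
    """Poll until every operation reports is_complete.

    `refresh` stands in for the ClientContext Load/ExecuteQuery round-trip
    that updates each SpoOperation's IsComplete property server-side.
    """
    pending = list(operations)
    while pending:
        refresh(pending)
        pending = [op for op in pending if not op.is_complete]
        if pending:
            time.sleep(poll_seconds)
```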

Notice that we will also create a generic ExecuteLoad function, as well as a generic ProcessError function.

function CreateSites($Sites){
    #Set Admin Context
    $Context = GetContext -Url $AdminUrl
    #Get the Tenant object
    $Tenant = New-Object Microsoft.Online.SharePoint.TenantAdministration.Tenant($Context)
    $Operations = @()
    $Sites | % {
        $SiteCreateUrl = $_
        try{
            #Set the Site Creation Properties values
            $properties = New-Object Microsoft.Online.SharePoint.TenantAdministration.SiteCreationProperties
            $properties.Url = $SiteCreateUrl
            $properties.Title = $SiteTitle
            $properties.Template = $SiteTemplate
            $properties.Owner = $UserName
            $properties.StorageMaximumLevel = 250
            $properties.UserCodeMaximumLevel = 50
            $properties.TimeZoneId = 3
            $properties.Lcid = 1033
            #Create the site using the properties
            Write-Host "Creating site collection at url $SiteCreateUrl"
            $Operation = $Tenant.CreateSite($properties)
            $Operations += $Operation
        }
        catch{
            ProcessError("Error creating site $SiteCreateUrl")
        }
    }

    ExecuteLoad -ExecContext $Context

    WaitForOperations -JobContext $Context -Operations $Operations -Description "Site creation"
}

Let's implement the GetContext function next. First we'll get a ClientContext object. Then we will add some Credentials to it, using the parameters that were passed in through the CmdLet.

On the ClientContext, we will change a few settings, as the defaults kept giving me errors.

Some errors that I had before these changes:

  • The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
  • Service endpoint not found

By switching the HTTP protocol to version 1.0 and disabling KeepAlive on the web request, these errors seem to go away. I would have to spend some more time to figure out why these errors occur when using the SPO web service.

function GetContext($Url){
    $Context = New-Object Microsoft.SharePoint.Client.ClientContext($Url)
    $credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($UserName,
        $(ConvertTo-SecureString $UserPassword -AsPlainText -Force))
    $Context.Credentials = $credentials
    $Context.RequestTimeout = -1
    $Context.PendingRequest.RequestExecutor.RequestKeepAlive = $true
    $Context.PendingRequest.RequestExecutor.WebRequest.KeepAlive = $false
    $Context.PendingRequest.RequestExecutor.WebRequest.ProtocolVersion = [System.Net.HttpVersion]::Version10

    return $Context
}

Next, let's implement the ExecuteLoad function. In this function we will call the ExecuteQuery method on the ClientContext. The reason I put this in a separate function is that it allows us to build our own retry or reload logic when desired.

function ExecuteLoad([Microsoft.SharePoint.Client.ClientContext]$ExecContext, $ReloadObject, $Retry = 0){
    try{
        $ExecContext.ExecuteQuery()
    }
    catch{
        Write-Host "Something went wrong..." -ForegroundColor Yellow
        Write-Host $Error[0].Exception.Message -ForegroundColor Yellow
        if($Retry -lt 5){
            $Retry += 1
            Write-Host "Retrying ($Retry)..." -ForegroundColor Yellow
            ExecuteLoad -ExecContext $ExecContext -ReloadObject $ReloadObject -Retry $Retry
        }
        else{
            ProcessError -Message $Error[0].Exception.Message
        }
    }
}

Once ExecuteQuery has been called, we have to wait for all the operations to complete. For this we'll write the WaitForOperations function. It accepts a collection of SpoOperation objects, which have a RefreshLoad method available to check whether they are complete or not. It is important not to perform a Load on the SpoOperation again, of course, as that would try to execute the operation again, causing errors like "Site already exists". Once the RefreshLoad method has been called, we have to call ExecuteQuery again to perform the actual refresh. While not complete, we will wait for 10 seconds and loop.

function WaitForOperations($JobContext, $Operations, $Description){
    Write-Host "$Description executing..."
    $TotalOperations = $Operations.length
    if($TotalOperations -eq 0){
        Write-Host "Nothing to execute!"
        return
    }
    while ($true){
        $CompletedJobs = (($Operations | ? IsComplete -EQ $true) | Measure-Object).Count
        Write-Host "$Description status: $CompletedJobs of $TotalOperations completed!"
        if($CompletedJobs -eq $TotalOperations){
            Write-Host "Operation completed!" -ForegroundColor Green
            return
        }
        Sleep -Seconds 10
        #Queue a refresh of each operation's status, then execute the batch
        $Operations | % { $_.RefreshLoad() }
        ExecuteLoad -ExecContext $JobContext -ReloadObject $Operations
    }
}

Also, let's implement the ProcessError method. This method will store the error in a collection variable we will print at the end of the script execution.
First, add this to the start of the CmdLet:

$ErrorLog = @() 

Implement the ProcessError method like this:

function ProcessError($Message){
    Write-Host "$Message`n" -ForegroundColor Red
    Write-Host $Error[0].Exception.Message -ForegroundColor Red
    #Use script scope so we append to the script-level log, not a local copy
    $script:ErrorLog += $Message
}

Add the following code to the end of the cmdlet body:

if($ErrorLog.Count -ne 0){
    Write-Host "`nThe script was executed successfully, but some errors occurred:" -ForegroundColor Red
    $ErrorLog | % {
        Write-Host "`n$_" -ForegroundColor Red
    }
    Write-Host 'Check the $Error variable for more information' -ForegroundColor Red
}
Write-Host "`nALL DONE! Virtual pat on the shoulder and everything!`n" -ForegroundColor Magenta

Delete Sites

Before we (re)create sites, we should check if they exist already. If so, we will delete them. For this we need two steps:

  1. Delete the site
  2. Remove the site from the recycle bin – don't forget this step as creating sites at a URL is not possible when a site that was at that URL is still in the recycle bin

The CmdLet content should now look like this:

#Delete sites first if they exist!
Write-Host "`nDeleting existing sites`n" -ForegroundColor Magenta
DeleteSites -Sites $BatchSiteNames

#Recycle sites first if they exist!
Write-Host "`nRecycling deleted sites`n" -ForegroundColor Magenta
RecycleSites -Sites $BatchSiteNames

#now create the sites   
Write-Host "`nCreating the site collections`n" -ForegroundColor Magenta    
CreateSites -Sites $BatchSiteNames 

Let's implement the DeleteSites function.
In this function we will call the GetSiteProperties method to get all the existing site collections. In my opinion this is more efficient than trying to get a site collection by identity for each site collection you want to create. This depends on the number of site collections your tenant has of course.
We'll loop over the site collections and check if they are in the batch of site names we want to create. If so, we'll delete the site.

function DeleteSites ($Sites){
    Write-Host "Getting Sites"
    #Set Admin Context
    $Context = GetContext -Url $AdminUrl
    #Get the Tenant object
    $Tenant = New-Object Microsoft.Online.SharePoint.TenantAdministration.Tenant($Context)
    #Get all site collections and load them into the context
    $TenantSiteCollections = $Tenant.GetSiteProperties(0, $true)
    $Context.Load($TenantSiteCollections)
    ExecuteLoad -ExecContext $Context
    $Operations = @()
    $TenantSiteCollections | % {
        if($_.Url -in $Sites){
            $SiteDeleteUrl = $_.Url
            try{
                Write-Host "Site with url $SiteDeleteUrl will be deleted" -ForegroundColor Yellow
                $Operation = $Tenant.RemoveSite($SiteDeleteUrl)
                $Operations += $Operation
            }
            catch{
                ProcessError("Error deleting $SiteDeleteUrl")
            }
        }
    }
    ExecuteLoad -ExecContext $Context -ReloadObject $Operations

    WaitForOperations -JobContext $Context -Operations $Operations -Description "Site deletion"
}


And in a pretty similar way, the RecycleSites function.

function RecycleSites($Sites){
    Write-Host "Getting Deleted Sites"
    #Set Admin Context
    $Context = GetContext -Url $AdminUrl
    #Get the Tenant object
    $Tenant = New-Object Microsoft.Online.SharePoint.TenantAdministration.Tenant($Context)
    #Get all site collections in the recycle bin and load them into the context
    $TenantDeletedSiteCollections = $Tenant.GetDeletedSiteProperties(0)
    $Context.Load($TenantDeletedSiteCollections)
    ExecuteLoad -ExecContext $Context
    $Operations = @()
    $TenantDeletedSiteCollections | % {
        if($_.Url -in $Sites){
            $SiteRecycleUrl = $_.Url
            try{
                Write-Host "Site with url $SiteRecycleUrl will be removed from the recycle bin" -ForegroundColor Yellow
                $Operation = $Tenant.RemoveDeletedSite($SiteRecycleUrl)
                $Operations += $Operation
            }
            catch{
                ProcessError("Error removing $SiteRecycleUrl from recycle bin")
            }
        }
    }
    ExecuteLoad -ExecContext $Context -ReloadObject $Operations
    WaitForOperations -JobContext $Context -Operations $Operations -Description "Recycled sites deletion"
}

Let's now check if we need to apply a custom template.
A template would typically be saved from a site you created on SharePoint Online that you want to reuse. You can try solution packages from on-premises farms or ones you found on the internet, but those are not guaranteed to work, as some features might be missing.
Add this to your CmdLet body:

#now apply the template if needed
Write-Host "`nApplying Custom Template`n" -ForegroundColor Magenta    
ApplyTemplate -Sites $BatchSiteNames

Let's implement the ApplyTemplate function. First we need to upload the solution file. Then we'll activate it. Next, we'll apply it to the rootweb of the site collection. After this we'll remove the uploaded file.

function ApplyTemplate($Sites){
    $Sites | % {
        $SiteApplyUrl = $_
        try{
            Write-Host "Uploading and applying template $SiteTemplate to $SiteApplyUrl" -ForegroundColor White

            $Context = GetContext -Url $SiteApplyUrl

            $PackageFileStream = New-Object System.IO.FileStream($PackagePath, [System.IO.FileMode]::Open)
            Write-Host "Uploading template" -ForegroundColor Gray
            $SolutionGallery = $Context.Web.Lists.GetByTitle("Solution Gallery")
            $SolutionGalleryRootFolder = $SolutionGallery.RootFolder

            $SPFileInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
            $SPFileInfo.Overwrite = $true
            $SPFileInfo.ContentStream = $PackageFileStream
            $SPFileInfo.URL = $SiteTemplate

            $UploadedFile = $SolutionGalleryRootFolder.Files.Add($SPFileInfo)
            #Load the uploaded file so we can read its ServerRelativeUrl
            $Context.Load($UploadedFile)
            ExecuteLoad -ExecContext $Context

            Write-Host "Activating template" -ForegroundColor Gray

            $PackageInfo = New-Object Microsoft.SharePoint.Client.Publishing.DesignPackageInfo
            $PackageInfo.PackageName = $SiteTemplate
            $PackageInfo.PackageGUID = [GUID]::Empty
            $PackageInfo.MajorVersion = "1"
            $PackageInfo.MinorVersion = "0"
            [Microsoft.SharePoint.Client.Publishing.DesignPackage]::Install($Context, $Context.Site, $PackageInfo, $UploadedFile.ServerRelativeUrl)
            ExecuteLoad -ExecContext $Context

            Write-Host "Applying template" -ForegroundColor Gray

            $AvailableTemplates = $Context.Site.GetWebTemplates(1033, 0)
            $Context.Load($AvailableTemplates)
            ExecuteLoad -ExecContext $Context

            $Context.RequestTimeout = 480000
            $Context.Site.RootWeb.ApplyWebTemplate($AvailableTemplates[$AvailableTemplates.Count - 1].Name)
            ExecuteLoad -ExecContext $Context

            Write-Host "Deleting temporary file" -ForegroundColor Gray
            $UploadedFile.DeleteObject()
            ExecuteLoad -ExecContext $Context

            Write-Host "`n"
        }
        catch{
            ProcessError("Error applying template to site $SiteApplyUrl")
        }
        finally{
            $PackageFileStream.Dispose()
        }
    }
}

As a final step we'll add the user from the $SiteOwner parameter as a site collection administrator.
Add this to the CmdLet body:

Write-Host "`nSetting Security`n" -ForegroundColor Magenta    
SetSecurity -Sites $BatchSiteNames 

Let's implement the SetSecurity function:

function SetSecurity ($Sites){
    $Sites | % {
        try{
            Write-Host "Adding site collection admin $SiteOwner to $_" -ForegroundColor White

            $Context = GetContext -Url $_

            $User = $Context.Web.EnsureUser($SiteOwner)
            $Context.Load($User)
            ExecuteLoad -ExecContext $Context

            $User.IsSiteAdmin = $true
            #Update() queues the change so ExecuteQuery can persist it
            $User.Update()
            ExecuteLoad -ExecContext $Context
        }
        catch{
            ProcessError("Error applying security to site $_")
        }
    }
}

That's it! We should be good to go and create endless batches of SharePoint Online site collections!

Have fun ;)

You can download the full script; it is attached to this post:

U2U.CreateOnlineSiteCollections.ps1 (15,4KB)

Recap of Microsoft Ignite 2015 event

At last month's conferences, Build 2015 in San Francisco and Ignite in Chicago, Microsoft showed the world its latest innovations and technologies. U2U trainers attended both conferences to gain in-depth knowledge of the upcoming technologies. This inspired us to organize two free evening events, on 8 and 15 June 2015, demonstrating the highlights of the Microsoft Build and Microsoft Ignite conferences.

We were pleased to welcome so many of you on both evenings.

Those who are interested in the presentation of the second evening, "Recap of Ignite", can download it here:

UIGNITE_20150619_U2U.pdf (6,6MB)

Reporting with Dynamics CRM 2015 and the new Power BI Designer

This article will show how you can use the new Power BI Designer to create Power View reports on top of data that is stored inside Microsoft Dynamics CRM 2015. First we will build Power Query queries to extract data from the Accounts, Opportunity and Product entities. Next we will use the reporting capabilities to create some Power View reports on top of the extracted data.

The new Power BI Designer

Using the new Power BI Designer you first create Power Query queries that extract data from your data sources. The extracted data is loaded into a Power Pivot data model that is not visible in the Power BI Designer. Once you have loaded all the data, you can start defining Power View Reports on top of the data model that you created. In the screenshot below you can see the UI of the Power BI Designer. In the lower left corner you see the two tabs that give you access to the Power Query queries and Power View Reports.

Everything that you create using the Power BI Designer is saved in a local Power BI file (.pbix file). This Power BI file will not only contain the definition of your Queries and Reports, but will also store your data.

Extracting Accounts data 

Let's start by extracting some account data from Dynamics CRM. To load this data we can click the Get Data button in the ribbon and use the Microsoft CRM Online predefined data source. When you enter the URL of your Dynamics CRM organization data service and press the OK button, you will be asked for the credentials you would like to use to access Dynamics CRM.

Once you are signed in you will see the Navigator window, from which you can select the entity set you would like to import. In this example we'll take the Account entity.

You have now created a first Query that will load all fields and all rows from the Account entity into Power BI. In the Name textbox of the Query Settings window we can change the name to Accounts. Next, the Remove Columns transformation can be used to keep only the Name, AccountId, Address1_Country, Address1_City, Address1_StateOrProvince and StateCode fields.


Notice that the StateCode field (which is an OptionSet inside Dynamics CRM) comes in as a column of type Record. If you click on the Expand button in the column caption, you can select the Value field to be shown.

For all the Active Accounts the StateCode.Value field will be 0; for the Inactive ones it will display a value of 1. Since this is not very user friendly, we will replace these two values with the labels Active and Inactive. For this we can use the Replace Values transformation of Power Query. But first you will need to use the Change Data Type transformation to make the StateCode.Value column of type Text. After this the Rename Column transformation can be used to give all the columns a decent name. The final query looks as follows:
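Put together, the generated steps correspond roughly to the following M (the service URL, step names and final column names are illustrative; the exact code depends on the choices you make in the UI):

```m
let
    // Connect to the CRM organization data service (illustrative URL)
    Source = OData.Feed("https://u2u365.crm4.dynamics.com/XrmServices/2011/OrganizationData.svc"),
    AccountSet = Source{[Name="AccountSet"]}[Data],
    // Keep only the columns we need
    RemovedColumns = Table.SelectColumns(AccountSet,
        {"Name", "AccountId", "Address1_Country", "Address1_City", "Address1_StateOrProvince", "StateCode"}),
    // Expand the StateCode record to its Value field
    ExpandedStateCode = Table.ExpandRecordColumn(RemovedColumns, "StateCode", {"Value"}, {"StateCode.Value"}),
    // Change the type to Text so Replace Values works on it
    ChangedType = Table.TransformColumnTypes(ExpandedStateCode, {{"StateCode.Value", type text}}),
    ReplacedActive = Table.ReplaceValue(ChangedType, "0", "Active", Replacer.ReplaceValue, {"StateCode.Value"}),
    ReplacedInactive = Table.ReplaceValue(ReplacedActive, "1", "Inactive", Replacer.ReplaceValue, {"StateCode.Value"}),
    // Give the columns decent names
    RenamedColumns = Table.RenameColumns(ReplacedInactive,
        {{"StateCode.Value", "Status"}, {"Address1_Country", "Country"},
         {"Address1_City", "City"}, {"Address1_StateOrProvince", "State"}})
in
    RenamedColumns
```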

Extracting Opportunity data 

In the second query we will be extracting the Opportunity data from Dynamics CRM. If you know upfront which entity set you want to retrieve and don’t want to use the Navigator window, you can directly enter a valid OData URL in the URL textbox. Since we know that we want to retrieve all opportunities, enter https://u2u365.crm4.dynamics.com/XrmServices/2011/OrganizationData.svc/OpportunitySet.

In the Query Settings window we can name this Query Opportunities. Again the Remove Other Columns transformation can be used to keep only the OpportunityId, CustomerId, PurchaseTimeFrame, EstimatedValue, Name, StateCode and EstimatedCloseDate columns.

Notice that the fields CustomerId (EntityReference in Dynamics CRM), EstimatedValue (Money) and PurchaseTimeframe (OptionSet) are again of data type Record. If we use the Expand transformation to transform them, we get the following result.

For the PurchaseTimeframe field we could use the same procedure to replace the OptionSet values with their labels. But this is not very maintainable, as these labels can change over time. Another way of working is to write another Query to retrieve the labels of the PurchaseTimeframe OptionSet and then combine these two Queries in a third Query. This is what we will do in a later step in this blog post.

Also notice how in the column EstimatedCloseDate the dates are stored in UTC date format.

We want to convert these dates into our local time format. The problem is that there is no support in the UI for doing this; we will actually need to write a little bit of M code to achieve this goal.

We first use the Rename Column transformation to change the name of the EstimatedCloseDate field to EstimatedCloseDateUTC. The data type of this column also needs to be changed to Date/Time/TimeZone. Next we can use the Add Column transformation to insert a new column in the Query. To do this, we go to the Add Column tab in the ribbon and click on the Date->Date Only menu item. This will add a second column with the date, as shown below.

When you add this transformation, the following M code is generated for this step.

In this M code we need to manually replace the DateTime.Date function with the DateTimeZone.ToLocal function. Next, the data type of this new column can be set to Date. When this is done we get the following result.
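The edited step might look like this (a sketch; the previous step name #"Renamed Columns" is illustrative and depends on your query):

```m
// Convert the UTC value to local time instead of just taking the date part
= Table.AddColumn(#"Renamed Columns", "Date", each DateTimeZone.ToLocal([EstimatedCloseDateUTC]))
```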

To finish we can remove the EstimatedCloseDateUTC column and rename Date to EstimatedCloseDate.

If we use the Rename Column transformation to give all columns a decent name, we get the following result for the Opportunities query.

To replace the PurchaseTimeframe OptionSet values with their corresponding labels we need to write another Query to retrieve the labels first. Inside Dynamics CRM all OptionSets are stored in an entity set called PickListMappingSet. We can create a third Query named PurchaseTimeFrameOptionSet using the following OData URL: https://u2u365.crm4.dynamics.com/XrmServices/2011/OrganizationData.svc/PickListMappingSet. Again we can use the Remove Other Columns transformation to keep only the SourceValue, TargetValue and ColumnMappingId columns. We get the following result.

Expand the Name field of the ColumnMappingId column and you can now see the names of the different OptionSets. Since we are only interested in the PurchaseTimeframe OptionSet, we will filter out all other values.

Notice that there are many duplicate values. We can remove them using the Remove Duplicates transformation. The final version of the PurchaseTimeFrameOptionSet query will return the following result.

We now have two Queries whose results need to be merged. In the Queries window we can select the Opportunities query and press the Merge Queries button in the ribbon. A Merge window will be displayed that asks us for the second Query to merge the first one with, and on which fields this has to be done.

From the first Query we take the PurchaseTimeframe column and from the second Query the TargetValue column. When the OK button is pressed, a new column labelled NewColumn of type Record is added to the Opportunities Query. If we expand this column and only select the SourceValue column, we now have the labels of the OptionSet. To clean things up we can remove the original PurchaseTimeFrame column and rename NewColumn.SourceValue to PurchaseTimeFrame.
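In M, the merge and clean-up steps described above look roughly like this (step names are illustrative):

```m
// Join the two queries on the OptionSet value
Merged = Table.NestedJoin(Opportunities, {"PurchaseTimeFrame"},
    PurchaseTimeFrameOptionSet, {"TargetValue"}, "NewColumn"),
// Expand only the label column
Expanded = Table.ExpandTableColumn(Merged, "NewColumn", {"SourceValue"}, {"NewColumn.SourceValue"}),
// Drop the numeric value and give the label the original column name
RemovedOriginal = Table.RemoveColumns(Expanded, {"PurchaseTimeFrame"}),
Renamed = Table.RenameColumns(RemovedOriginal, {{"NewColumn.SourceValue", "PurchaseTimeFrame"}})
```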

Loading Product Data

In Dynamics CRM an opportunity typically has many products in it. Let's write one more Query that extracts all product-related data for the opportunities. This data is stored in an entity set called OpportunityProductSet. The URL for this entity set is https://u2u365.crm4.dynamics.com/XRMServices/2011/OrganizationData.svc/OpportunityProductSet. If you know upfront that you will only need certain columns of an entity set, you can use the $select query option to limit which columns you are loading in your query. So in the URL textbox of the Dynamics CRM Online data source we could actually type in the following URL: https://u2u365.crm4.dynamics.com/XRMServices/2011/OrganizationData.svc/OpportunityProductSet?$select=OpportunityId, ProductId, PricePerUnit, Quantity, ExtendedAmount, VolumeDiscountAmount, ManualDiscountAmount

Let's name this Query Products. We immediately get the limited list of columns, so there is no need for a Remove Other Columns transformation anymore.

When we expand all columns and give them a decent name we get the following result.

We now have 4 Queries in Power BI to load the Account, Opportunity and Product data from Dynamics CRM. Since most reporting is also done over dates, we want to add a Date Query to our model. To generate the date data we can use a custom M function. On the internet you will find quite a few Power Query M functions that generate this kind of data. At https://gist.github.com/philbritton/9677152 you can copy the source code of a sample M function. Let's copy this M code to the clipboard. Back in the Power BI Designer we can add a Blank Query and name it Dates. In the Advanced Editor window of the Query we can paste the M code in.

If we press the Invoke button we are asked for the parameter values of the function.

When the OK button is pressed, the M function is executed and the result is displayed. Let's rename the MonthName column to Name and QuarterInCalendar to Quarter.

Enriching the model

When we now click the Report tab in the Power BI Designer, all of our Queries are executed, and once finished the following window is displayed.

There are two more things we need to do before we can create a Report. First we want to hide the PurchaseTimeFrameOptionSet table. This can be done by right-clicking on it and selecting Hide. Next we need to define the necessary relationships between the different tables. For this we press the Manage button in the ribbon, which displays the Manage Relationships window.

Notice that the relationship between Products and Opportunities was automatically detected. First we will define the relationship between Opportunities (EstimatedCloseDate field) and Dates (Date field). Next we can add the one between Opportunities (CustomerId field) and Accounts (AccountId field).

Once we have these relationships in place we can start creating Reports. But to clean up our model even more, there are some columns we would like to hide in the Report window. From the Accounts table, hide the AccountId field. Next we hide the columns CustomerId, EstimatedCloseDate and OpportunityId from the Opportunities table. From the Products table we hide the ProductId and OpportunityId columns. And from the Dates table we only keep Day, Month, Quarter and Year as visible columns.

Creating Reports

Let's first create a tabular report that shows the order quantity by product name. From the Products table we select the Product Name and Quantity fields, and the following visualization is displayed.

Using a column chart we would also like to visualize the sales by year. First select the Year field from the Dates table. When we do this, a column chart is added to the current page with Count Of Year on the Values axis. We would like to see the sum of Quantity instead, so Count Of Year should be removed and we can drag and drop the Quantity field onto the Values axis. This shows us the Sum of Quantity by Year.

We would like to be able to double-click on a year and see the Quantity by Quarter. To achieve this we can drag and drop the Quarter field underneath the Year field in the Axis window. If we now double-click on the year 2014 we see the Quantity per Quarter for 2014.

Let's now select the Country field. Again a table visualization is added to our Report. If we convert the table to a map visualization and drag and drop the Quantity field into the Values window, we get the following map.