U2U Blog

for developers and other creative minds

Creating A Self-Signed Code Signing Certificate from PowerShell

In PowerShell, whether you can execute scripts depends on the execution policy of your machine. You might be able to change the execution policy yourself and set it to Unrestricted, meaning you can execute scripts without signing them. But if you are not an administrator, or your group policy defines the execution policy, you will need to sign your scripts. To see your current execution policy, execute the following command:

Get-ExecutionPolicy -List
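
If you are allowed to change the policy yourself, a setting such as AllSigned enforces a valid signature on every script. A minimal sketch (run from an elevated prompt):

# Require a valid signature on every script; the LocalMachine scope needs an elevated prompt
Set-ExecutionPolicy -ExecutionPolicy AllSigned -Scope LocalMachine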


To create a self-signed code signing certificate for PowerShell, makecert used to be the best solution. PowerShell, however, has a cmdlet called New-SelfSignedCertificate that we can use for this purpose, and since PowerShell 5 this cmdlet has been updated to make it easier. To create a code signing certificate, execute the following command:

$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -Type CodeSigningCert -Subject "U2U Code Signing"

To verify that the certificate has been generated, run this command:

Get-ChildItem -Path Cert:\CurrentUser\My | ? Subject -EQ "CN=U2U Code Signing"

The result should look like this.


Great! Now use the certificate to sign your script:

Set-AuthenticodeSignature -FilePath .\signedscript.ps1 -Certificate $cert

Oops! That didn't work!


Our certificate is not trusted, since it only sits in the personal store and not in a trusted root store. Let's move it to a better location:

Move-Item -Path $cert.PSPath -Destination "Cert:\CurrentUser\Root"

Make sure you confirm the installation of the certificate.

Now try again!

Set-AuthenticodeSignature -FilePath .\signedscript.ps1 -Certificate $cert

Better! You should now be able to execute the signed script!

The full script looks like this:

$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -Type CodeSigningCert -Subject "U2U Code Signing"
Move-Item -Path $cert.PSPath -Destination "Cert:\CurrentUser\Root"
Set-AuthenticodeSignature -FilePath .\signedscript.ps1 -Certificate $cert
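
To double-check the result you can inspect the signature afterwards; once the certificate is trusted it should report a Valid status:

# Inspect the signature that was just applied
Get-AuthenticodeSignature -FilePath .\signedscript.ps1 | Format-List Status, StatusMessage, SignerCertificate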

Deciding when to do a differential backup

SQL Server allows for differential backups, which only back up extents modified since the last full backup. The advantage of these differential backups is that if only a small portion of your data has changed since the last full backup, you need less time and storage for your backups! The disadvantage, however, is that the time needed for a restore increases: we must first restore the full backup, then restore the last differential backup.
So when few extents have changed since the last full backup, the gain on the backup is huge, and the pain when restoring is limited. But as more and more extents are modified, the pain grows and the gain shrinks. So to decide whether we go for a full or a differential backup, we need to know the number of extents modified since the last full backup. But on SQL Server 2016 and earlier, the only easy way to figure that out was… by taking a differential backup.

SQL Server 2017 offers a nice improvement on this. The sys.dm_db_file_space_usage dynamic management view gained an extra column, modified_extent_page_count, which provides exactly this information. So if you’re willing to stick to differential backups until they reach 70% of the size of your full database, you could use this script to get the backup advice:

SELECT total_page_count, allocated_extent_page_count, modified_extent_page_count
, modified_extent_page_count * 100 / allocated_extent_page_count AS [% changed]
, CASE WHEN modified_extent_page_count * 100 / allocated_extent_page_count > 70
    THEN 'FULL'
    ELSE 'DIFFERENTIAL'
    END AS Advice
FROM sys.dm_db_file_space_usage

An example:


So with this we can improve our maintenance plan. Next step: convince Ola to include this in his maintenance script. :-)
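
For completeness: when the advice comes back as DIFFERENTIAL, the backup itself is just a regular BACKUP statement with the DIFFERENTIAL option. A sketch with a hypothetical database name and backup path:

-- Only the extents modified since the last full backup are written
BACKUP DATABASE AdventureWorksDW
TO DISK = N'D:\Backup\AdventureWorksDW_diff.bak'
WITH DIFFERENTIAL;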

Angular Language Service

One of the best extensions you can find for your Angular programming needs, especially if you're using VS Code, is the "Angular Language Service".

What is the Angular Language Service?

The Angular Language Service is a plugin built for the TypeScript [TS] language. It allows your IDE to evaluate your templates (inline templates and HTML files) at development time. In other words, you will get "true" autocompletion when using properties in an interpolation expression, and red squiggly lines if you made a mistake in your template.
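
As a small illustration (a hypothetical component, not taken from the extension's documentation): with the Language Service active, the inline template below gets completion for the name property, and a typo such as {{ nmae }} is flagged with a red squiggly.

import { Component } from '@angular/core';

@Component({
  selector: 'app-greeting',
  // The Language Service evaluates this inline template against the class below
  template: '<h1>Hello {{ name }}</h1>'
})
export class GreetingComponent {
  name = 'U2U';
}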

How does it work (without going into too much detail)?

On 27 April 2017, the TS team announced version 2.3 of the language. They stated that, together with the Angular team, they were able to create a public Language Server Plugin API. This allows people to augment their editing experience by creating plugins that make use of the Language Server.

The Language Server is internally called "tsserver", which stands for "TypeScript standalone server". It is a node executable that includes the TypeScript compiler and language services, and it exposes an API through a JSON protocol.

In other words, the Angular Language Service talks to this tsserver executable in JSON to have it compile and evaluate the templates, so that we can make use of TypeScript in those same templates! That is a great step forward for us as Angular developers - or teachers!

How do I make use of it in VS Code?

Open your extensions menu and look for "Angular Language Service". The author is Angular, so it is pretty hard to miss ;-).

What about other editors?

Some editors don't make use of tsserver when evaluating TS code, so it might be that your editor already supports template evaluation! I've found a blog post from Brian Love explaining how to install it for WebStorm and Sublime right here.

Enjoy coding!


PS: This is a full copy of my blog post at: https://diedrikdm.github.io/2017/05/01/angular-language-service.html

Watch out with calculated DateTime fields in CSOM


I recently came across a baffling issue when using CSOM to connect to a SharePoint Online list to retrieve values from a Date and Time field and another field calculated based on the first one.
To illustrate the issue, I created a dummy list with 2 columns:

  • DateOnly: Date and Time (show Date Only)
  • CalculatedDateOnly: Calculated with formula [DateOnly] + 1 (Also showing as Date Only)

 

When creating new items in this list, all looks well: the Date Only display and the calculation are done correctly.


The story in CSOM

Now, on to CSOM. We access the list with the following code:

var list = ctx.Web.Lists.GetByTitle("TestDateTime");
var item = list.GetItemById(1);
ctx.Load(item);
ctx.ExecuteQuery();

var dateOnlyValue = item.FieldValues["DateOnly"];
var calculatedDataOnlyValue = item.FieldValues["CalculatedDateOnly"];

 

The values in the dateOnlyValue and calculatedDataOnlyValue are not what you might expect:


Observe that the DateOnlyValue is –2 hours from what was originally selected. Now this is perfectly explainable: SharePoint stores DateTime values in UTC, and when using the web interface, SharePoint takes the regional settings of your site into account to convert between local time and UTC. If you use CSOM, you get the value in UTC; no problem here. However, my calculated field, which just does a +1 (add a day), is not returned in UTC, which is confusing, because I just said that CSOM returns DateTime values in UTC.

Assuming that the dateOnlyValue is UTC, you can convert this to local time as follows (the inverse, LocalTimeToUTC, also exists):

var localDateOnlyValue = ctx.Web.RegionalSettings.TimeZone.UTCToLocalTime((DateTime)item.FieldValues["DateOnly"]);
ctx.ExecuteQuery();


Now, imagine you need to forward these field values to the user or to some business process; then you might need logic that converts UTC values to local time (for the dateOnlyValue) and other logic that does nothing for the calculated field (calculatedDataOnlyValue). So basically, you have to assume that DateTime fields will be UTC and calculated ones will not be UTC. Let’s make that assumption …
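
A minimal sketch of that per-field logic, assuming the behaviour observed above (regular DateTime fields arrive as UTC, calculated DateTime fields arrive as local time):

// Regular DateTime field: returned as UTC, so convert it to the site's local time
var localResult = ctx.Web.RegionalSettings.TimeZone.UTCToLocalTime((DateTime)item.FieldValues["DateOnly"]);
ctx.ExecuteQuery();
DateTime localDateOnly = localResult.Value;

// Calculated DateTime field: already local according to the observation above, so leave it untouched
DateTime localCalculated = (DateTime)item.FieldValues["CalculatedDateOnly"];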

The story in REST

Remember the assumption from CSOM: DateTime fields will be UTC and calculated ones will not be UTC.

Let’s see if this assumption stays valid when making REST calls. The following REST call was used to retrieve the values:

/_api/lists/getbytitle('TestDateTime')/Items(1)?$select=DateOnly,CalculatedDateOnly

and the result shows the following (baffling) response:

<content type="application/xml">
	<m:properties>
		<d:DateOnly m:type="Edm.DateTime">2017-04-24T22:00:00Z</d:DateOnly> 
		<d:CalculatedDateOnly>2017-04-25T22:00:00Z</d:CalculatedDateOnly> 
	</m:properties>
</content>

Now both are in UTC!!!!!

So I will have to change my assumption to: DateTime fields will be UTC, but in CSOM calculated DateTime fields are local.

Solution

Now what about a solution that works in CSOM? If your goal is to use the values from SharePoint as they are “shown” to the users, then you’re better off using the FieldValuesAsText property on a list item.

var list = ctx.Web.Lists.GetByTitle("TestDateTime");
var item = list.GetItemById(1);
ctx.Load(item, it => it.FieldValuesAsText);
ctx.ExecuteQuery();

var dateOnlyValue = item.FieldValuesAsText["DateOnly"];
var calculatedDataOnlyValue = item.FieldValuesAsText["CalculatedDateOnly"];

 

Then both variables will contain the date values as shown in the web interface:


Using Azure VMs as remote R workspaces in R Tools for Visual Studio

Running R on your local laptop is easy: you just download one of the R distributions (CRAN or Microsoft) and kick off RGui.exe. But if you’re a Microsoft-oriented developer or data scientist, you are probably familiar with Visual Studio. So you download and install R Tools for Visual Studio (RTVS) and you can happily run R code from within Visual Studio, link it to source control, etc.
image

Remote workspaces in R Tools for Visual Studio

But this decentralized approach can lead to issues:

  • What if the volume of data increases? Since R holds its data frames in memory, we need at least 20 Gb of memory when working with e.g. a 16 Gb data set. The same remark holds for CPU power: if the R code runs on a more powerful machine it returns results faster.
  • What if the number of RTVS users increases? Do we copy the data to each user’s laptop? This makes it difficult to manage the different versions of these data sets and increases the risk of data breaches.

This is why RTVS also allows us to write R code in a local instance of Visual Studio but execute it on a remote R server, which also holds a central copy of the data sets. This remote server can be an on-premises server, but if the data scientists do not need permanent access to it, it could be cheaper to just spin up an Azure virtual machine.

Setup

When we click the Setup Remote R… menu in our Visual Studio R Tools it takes us to this webpage, which explains in detail how to set up a remote machine.
image
Unfortunately this detailed description was not detailed enough for me and I bumped into a few issues. So if you got stuck as well, read on!

Create an Azure VM

Log in to the Azure portal, click the + to create a new object and create a Windows Server 2016 Datacenter virtual machine. Stick to the Resource Manager deployment model and click Create.
When configuring the VM you can choose between SSD and HDD disks. The latter are cheaper, the former are faster if you often need to load large data sets. Also pay attention when you select your region: by placing the VM in the same region as your data you can save data transfer costs. But also be aware that the cost of a VM differs between regions: at the time of writing, the VM I used is 11% cheaper in West Europe than in North Europe.

In the next tab we must select the machine we want to create. I went for an A8m v2, which delivers 8 cores and 64 Gb of RAM at a cost of about 520 euro/month if it runs 24/7.

Azure VM settings

Before I start my machine I change two settings: one essential, one optional.

The essential setting is to give the machine a DNS name, such that we can keep addressing the machine by a fixed name, even if it gets a different IP address after a reboot:
In the essentials window of the Azure virtual machine blade, click the public IP address. This opens a dialog where we can set an optional (but in our case required) DNS name. I used the name of the VM I created; I don’t know whether that’s essential, but it did the job in my case:
image

The optional setting is the auto-shutdown option on your VM, which can save you some costs by shutting down the VM when all data scientists are asleep.
image

Configure the VM

Now we can start the virtual machine, connect via remote desktop and start following the instructions in the manual:

  1. Create a self-signed certificate. Be sure to use the DNS name we created in the previous steps. In my example the statement to run from PowerShell would be:
    New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "njremoter.westeurope.cloudapp.azure.com"
  2. Grant Full Control and Read permissions on this certificate to the NETWORK SERVICE account using certlm.msc:
    image
    image
  3. Install the R service using this installer
  4. Edit the C:\Program Files\R Remote Service for Visual Studio\1.0\Microsoft.R.Host.Broker.Config.json file and point it to the DNS name we used before. Again, on my machine this would be:
    {
      "server.urls": "https://0.0.0.0:5444",
      "security": {
        "X509CertificateName": "CN=njremoter.westeurope.cloudapp.azure.com"
      }
    }
  5. Restart the virtual machine. Verify that the two R related services start up correctly:
    image
    If the services fail to start verify the log file at C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp
  6. Last but not least we must open up port 5444. The R service installer already takes care of that in the Windows firewall, but we still need to open up the port in Azure. In the Azure portal go to the virtual machine, select Network Interfaces and click the default network interface. In the blade of this interface click the default network security group:
    image
    Create a new inbound security rule, allowing access on TCP port 5444. In my example the rule allows access from all IP addresses; for security reasons you had better limit yours, using a CIDR block, to just the IP addresses of your data scientists (see also the PowerShell sketch right after this list).
    image
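
If you prefer scripting that last step, something along these lines should work with the AzureRM PowerShell module (a sketch; the resource group name, NSG name and source address range are hypothetical):

# Add an inbound rule for TCP 5444 to the VM's network security group
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "rtvs-rg" -Name "njremoter-nsg"
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-RTVS-5444" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1010 `
    -SourceAddressPrefix "203.0.113.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "5444"
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg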

Configuring RTVS

Our server is now configured; the last (and easiest) step is to configure R Tools for Visual Studio. So, on the client machines, open up the R Tools Workspaces window:
image
Then click Add and configure the server. On my machine this would be:
image

Click Save, then click the blue connect arrow next to our remote connection:
image
Unless you added the VM to your domain, we now need to provide the credentials of one of the users we created on the server. You also get a prompt warning you that a self-signed certificate was used, which is less safe:

And from now on we are running our R scripts on this remote machine. You can see this, amongst other things, by the change of prompt: *> instead of >
image

If you have problems connecting to your server, Visual Studio may claim it couldn’t ‘ping’ your server. Don’t try a regular ping, since Azure VMs don’t respond to it. Use e.g. PsPing or a similar tool to ping a specific port, in our case 5444:
image
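
For example, using the DNS name from this walkthrough (assuming psping.exe is on your path):

psping njremoter.westeurope.cloudapp.azure.com:5444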

I hope this gets you up to speed with remote R workspaces. If you still experience issues, you can always check the GitHub repository for further help.

Creating histograms in Power BI was never easier

Since the October 2016 release, Power BI Desktop contains a binning option (under the group option). With this option it becomes super-easy to create a histogram. Let’s demonstrate:

First assume you made a data model containing the DimProduct and FactInternetSales tables of the AdventureWorksDW sample database. If we request the sum of the order quantity grouped by ListPrice, we get numbers like these:
image

But if we plot these numbers in e.g. a bar chart, it becomes hard to read due to the large number of distinct list prices:

image

This is where a histogram would come in handy. With the new release of Power BI Desktop all we have to do is right click the ListPrice column in our field well and select Group:

image

In the Groups dialog we select the bin size. Unfortunately we cannot set the number of bins, so we still need to do a bit of math for that. In our case we want to group list prices in $100 bins, so we set the Group Type to Bin and the bin size to 100. Notice this only works on columns which are numerical or of type date/datetime; other columns require the List group type.
image
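
As a quick example of that math: bin size ≈ (maximum - minimum) / desired number of bins. ListPrice in this sample runs from 0 to roughly 3,578, so a bin size of 100 gives about 36 bins across the whole range.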

As soon as we click OK, we now get a new field in our field well: ListPrice (bins):

image

By using this new field in our charts and reports, we get the data per bin. For a histogram chart, just plug it into a column chart:
image

See, that was easy!

Applying JSLink on a Dashboard page

In this article I will discuss the issues I came across when trying to apply jslink on a SharePoint dashboard page. For me, a dashboard page is a page on which you put multiple web parts in order to get a global overview of a certain entity.

In order to demonstrate this, consider that you keep track of a normal task list and an issue tracking list, which you relate to the task list by means of a lookup field. I wanted to play around with the percentage complete column, so I made sure that it was added to the default view of the tasks list.
Once this was all set up, I created a new page with a two-column layout, on which I added the tasks list and the issues list web parts, and connected them based on the lookup field that I created.

image

image

With the dashboard page created, I could now add some jslink to the tasks web part. I decided to simply style the Due Date and the % Complete. Here is the code I used for this:

var U2U = U2U || {};

U2U.RegisterTemplateOverrides = function () {
    // Fields template registration
    var overrideCtx = {
        Templates: {
            Fields: {
                'DueDate':
                    {
                        'View': U2U.RenderDueDate
                    },
                'PercentComplete':
                    {
                        'View': U2U.RenderPercentCompleteView
                    }
            }
        }
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);
};

U2U.RenderDueDate = function (ctx) {
    var value = ctx.CurrentItem[ctx.CurrentFieldSchema.Name];

    var date = new Date(value);
    var today = new Date();

    return (date < today) ? '<b>' + value + '</b>' : value;
};

U2U.RenderPercentCompleteView = function (ctx) {
    var value = ctx.CurrentItem[ctx.CurrentFieldSchema.Name];

    // .ms-TopBarBackground-bgColor
    // .ms-EmphasisBackground-bgColor

    var output = [];
    output.push('<div style="display:block; height: 20px; width: 100px;" class="ms-TopBarBackground-bgColor">');
    output.push('<span style="color: #fff; position:absolute; text-align: center; width: 100px;">');
    output.push(value);
    output.push('</span>');
    output.push('<div style="height: 100%; width: ');
    output.push(value.replace(" %", ""));
    output.push('%;" class="ms-EmphasisBackground-bgColor"></div></div>');
    return output.join('');
};

U2U.RegisterTemplateOverrides();

Notice in the code that I’m (ab)using the classes ms-TopBarBackground-bgColor and ms-EmphasisBackground-bgColor. By using these, even when the composed look of the site is changed, the task progress bar will still have the same look and feel as the page it is hosted on.

After associating this JavaScript file with the tasks web part on my dashboard page, I noticed that the task list check boxes and the strike-through on completed tasks were not working any more. Moreover, the due date in the issues list was also being styled, while I didn’t associate my JavaScript file with that web part.
Now, what’s happening here?


Fixing the original rendering of the tasklist

Now for what is happening with the task list: if you dig around in the sources of the web page (make sure the debug.js files are loaded ;-)) and compare your page without jslink to your page with jslink, you’ll find that the file hierarchytaskslist.js is the one responsible for styling the task list. On the page that is using the jslink, this file is apparently loaded “long” after your file is loaded. You can find this out by putting some breakpoints in your code and in hierarchytaskslist.js.


In order to solve this, you have to make sure hierarchytaskslist.js is registered before your code starts doing the rendering. You can achieve this by putting the next two lines before the call to SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx). I’m using the debug version, because I’m not done with my issues yet …

RegisterSod('hierarchytaskslist.js', '/_layouts/15/hierarchytaskslist.debug.js');
LoadSodByKey('hierarchytaskslist.js', null);

Elio Struyf has described a solution for this in his blog post.

After applying this change and having a look at the final result, the title now looks good, but I lost my rendering on the DueDate.


Overriding the hierarchytasklist templates

To understand what is happening now, you need to understand what the file hierarchytaskslist.js is doing. That file itself is actually doing the jslink registrations for a couple of fields in a task list, and is thus overwriting your template for DueDate. For example, this is its template registration for DueDate:

function _registerDuedateFieldTemplate() {
ULSEhz:
    ;
    var DuedateFieldContext = {
        Templates: {
            Fields: {
                'DueDate': {
                    'View': window.DuedateFieldTemplate.RenderDuedateField
                }
            },
            ListTemplateType: 171
        }
    };

    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(DuedateFieldContext);
}
ExecuteOrDelayUntilScriptLoaded(_registerDuedateFieldTemplate, 'clienttemplates.js');

You’ll notice the use of ExecuteOrDelayUntilScriptLoaded to make sure the registration does not happen until clienttemplates.js is loaded. The clienttemplates file is the so-called “engine” behind jslink, doing all the necessary things on your page. Now what we want to make sure of is that we register our template after hierarchytaskslist.js has done this, and before the engine starts rendering. So let’s just use the same trick as the hierarchytaskslist file is using. I pull the two lines for the registration of hierarchytaskslist.js out of my RegisterTemplateOverrides and at the end of the file I add the following line:

ExecuteOrDelayUntilScriptLoaded(U2U.RegisterTemplateOverrides, 'hierarchytaskslist.js');

After applying this, problem solved! Using some breakpoints, you’ll notice that the original template is registered first but then overridden by yours.


Back to the dashboard

Now, I didn’t look at my dashboard page for a while, assuming everything would be just fine. However, when I went back to the dashboard page I noticed my rendering wasn’t working at all, and neither were the original task hierarchy templates.

The main reason for this was that my dashboard page was running on a site with Minimal Download Strategy (MDS) enabled. So I fixed that as described here and replaced my last line, which waited for hierarchytaskslist.js to be loaded, with:

//Register for MDS enabled site otherwise the display template doesn't work on refresh
U2U.sitePath = window.location.pathname.substring(0, window.location.pathname.indexOf("/_layouts/15/start.aspx"));
// CSR-override for MDS enabled site
RegisterModuleInit(U2U.sitePath + "/SiteAssets/JSLINK/tasks.js", U2U.RegisterTemplateOverrides);
//CSR-override for MDS disabled site
U2U.RegisterTemplateOverrides();

But of course, since I’m not waiting for the task hierarchy templates to be loaded any more, I lose the original styling again (i.e. check boxes, strike-through title, …). I need to make sure that I wait for the task hierarchy templates to be loaded before clienttemplates.js starts doing the rendering.

The problem, however, is that you can wait for hierarchytaskslist.js to be loaded, but clienttemplates.js doesn’t wait for it. If you haven’t registered your templates before it starts rendering them, they are not applied at all. So the trick is to make sure that, before it renders, everything is loaded. If you have a look at the source of your page and the file clienttemplates.debug.js, you’ll find a function RenderListView that is called from within your page and actually starts the rendering.


So why not make sure that, from the moment clienttemplates.js is loaded, we alter the RenderListView function so that it waits for the necessary things to be loaded, in our case the task list hierarchy templates. So I removed the code for the MDS registration and replaced it as follows, fixing the dashboard!

ExecuteOrDelayUntilScriptLoaded(
	function(){
		//Register for MDS enabled site otherwise the display template doesn't work on refresh
		U2U.sitePath = window.location.pathname.substring(0, window.location.pathname.indexOf("/_layouts/15/start.aspx"));
		// CSR-override for MDS enabled site
		RegisterModuleInit(U2U.sitePath + "/SiteAssets/JSLINK/tasks.js", U2U.RegisterTemplateOverrides);
		//CSR-override for MDS disabled site
		U2U.RegisterTemplateOverrides(); 
	},
	'hierarchytaskslist.js');

// Now override the RenderListView once the ClientTemplates.JS has been called
ExecuteOrDelayUntilScriptLoaded(
    function () {
        //Take a copy of the existing definition of RenderListView
        var originalRenderListView = RenderListView;

        //Now redefine RenderListView with our override
        RenderListView = function (ctx, webPartID) {
			ExecuteOrDelayUntilScriptLoaded(
				function(){
					// Call the original RenderListView
					originalRenderListView(ctx, webPartID);
				},
				'hierarchytaskslist.js');
        };
    }, 'clienttemplates.js');

 


What about that DueDate column in the issues list?

We still see the DueDate column in the issues list being styled too, which is weird. That is because the clienttemplates.js engine applies your templates to the whole page, and if your template does not limit itself to a certain list, view or other scope, it will match any column with the same name. Now, when registering your template, you can set the property ListTemplateType to 171 to make sure it only works on task lists (see the sketch below).
The property ListTemplateType will not help if you have multiple task list view web parts on your page that need different rendering for the same columns; have a look here for how you could possibly overcome this.
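
A minimal sketch of that scoping, mirroring the structure of the hierarchytaskslist registration quoted above (the full file is on GitHub, see below):

var overrideCtx = {
    Templates: {
        Fields: {
            'DueDate': { 'View': U2U.RenderDueDate },
            'PercentComplete': { 'View': U2U.RenderPercentCompleteView }
        },
        // 171 is the list template type of task lists, so this override
        // no longer touches the DueDate column in the issues list web part
        ListTemplateType: 171
    }
};
SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);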

If you want to review the complete file, you can find it on GitHub.

Seamless integration of Microsoft Word and Microsoft SharePoint Online with the new SharePoint Add-In Export to Word

BRUSSELS - June 17th 2016 - U2U today announced the release of the Microsoft SharePoint Online Add-In Export to Word, which provides better integration between Microsoft Word and Microsoft SharePoint Online. Export to Word increases users’ productivity by automating the creation of Word documents based on data stored in SharePoint: it takes your Word document and injects data coming from a SharePoint list.

The creation of Word documents containing data stored in SharePoint lists is not a trivial task in SharePoint Online. Export to Word allows you to associate your Word templates with existing SharePoint lists and to map SharePoint columns to different locations within your Word document.

“Microsoft SharePoint Online and Microsoft Word are amongst the most used products within our organization. U2U has provided us with an easy to use product that brings them closer together,” said Cliff De Clerck, Managing Partner at MediPortal. “The addition of Export to Word had a positive impact on the productivity of our users and allowed us to automate the generation of our Word documents.”

Export to Word is now available for free in the Office Store. Go to your SharePoint Online environment and install the app in your SharePoint sites.

“Customers have told us that Export to Word was the app they were looking for to automate their document creation,” said Jan Tielens, Technical Evangelist at Microsoft Belgium & Luxembourg. “The addition of applications like Export to Word to the Office Store has proven again the power of the new SharePoint Add-in model.”

U2U organizes training for developers and IT professionals on Visual Studio, SharePoint, Microsoft SQL Server, Microsoft Dynamics CRM, Microsoft Exchange Server, Windows Server, business intelligence, mobile and web technologies.

More info at: http://wordit.u2u.be

What’s new in SharePoint 2016 CSOM (DocumentSets)

With SharePoint 2016 RC available, it’s interesting to see what new additions we have in the CSOM and/or REST APIs. Doing some research into the differences between the SharePoint 2013 RTM CSOM and the SharePoint 2016 RC CSOM, I noticed several. This post will describe the differences related to DocumentSets.

In SharePoint 2013, the only thing you could do with document sets from within CSOM was to create one, given a parent folder, name and the contenttype id of the document set. You weren’t able to do anything else.

So what’s new? You’re now also able to do the following things:

  • Create/alter the contenttype definition of a documentset contenttype.
  • Get a document set based on a listitem/folder.
  • Export a document set to a zip file (I mean OpenXML file).
  • Import a document set from a zip file (again … OpenXML file).

image

DocumentSetTemplate

There is a new class available, DocumentSetTemplate, that allows you to change the definition of a documentset contenttype. It gives you access to the SharedFields, DefaultDocuments, AllowedContentTypes, …
You can use it as follows:

var documentSetTemplate = DocumentSetTemplate.GetDocumentSetTemplate(ctx, ct);
documentSetTemplate.AllowedContentTypes.Add(ct_document.Id);
documentSetTemplate.DefaultDocuments.Add("template.odt", ct_document.Id, bytes);
documentSetTemplate.SharedFields.Add(field_customer);
documentSetTemplate.WelcomePageFields.Add(field_customer);
documentSetTemplate.Update(true);
ctx.ExecuteQuery();

Where ct and ct_document are respectively a documentset contenttype and a document contenttype.
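
For illustration, ct and ct_document could be retrieved like this (a sketch, using the same ctx client context as above; 0x0120D520 is the built-in Document Set base contenttype id, substitute your own derived contenttype where needed):

var ct = ctx.Web.ContentTypes.GetById("0x0120D520");      // Document Set contenttype
var ct_document = ctx.Web.ContentTypes.GetById("0x0101"); // built-in Document contenttype
ctx.Load(ct);
ctx.Load(ct_document);
ctx.ExecuteQuery();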

Unfortunately, you’re still not able to directly set the default document set view, nor alter the URL of the docsethomepage.aspx.

Export Document Set

You can now export a documentset to an OpenXML (zip) file as follows:

ListItem item = GetDocumentSet(...); // DocumentSet item
ctx.Load(item, it => it.Folder);
ctx.ExecuteQuery();

var documentSet = DocumentSet.GetDocumentSet(ctx, item.Folder);
ctx.Load(documentSet);
ctx.ExecuteQuery();

var stream = documentSet.ExportDocumentSet();
ctx.ExecuteQuery();
using(FileStream fs = new FileStream("docset.zip", FileMode.Create, FileAccess.Write, FileShare.None))
{
    stream.Value.CopyTo(fs);
}

Result:

image

Import Document Set

You can now import an exported document set as follows:

using(FileStream fs = new FileStream("docset.zip", FileMode.Open, FileAccess.Read, FileShare.None))
{
    DocumentSet docSet = DocumentSet.ImportDocumentSet(ctx, fs, "Imported", list.RootFolder, ct.Id);
    ctx.ExecuteQuery();
}

Result:

image

Announcing the SharePoint Add-in “Export to Word”

Today, we’re glad to announce the FREE SharePoint Add-in “Export to Word”, developed by the team at U2U. This add-in fixes one of those issues that you’ll probably have come across in SharePoint, namely generating Word documents based on SharePoint list items. A bit like a mail merge for list items!

This functionality has never been available out of the box for regular SharePoint list items. You could achieve something like it using Quick Parts in Word, but that only works for document libraries.

Now, what does the add-in allow you to do? It allows you to configure, per SharePoint list, one or more Word templates. You can even upload your own templates! The add-in then allows you to link fields from your selected list to the content of the template. Once your template is completely designed, you can just go to the SharePoint list and export any item to Word using your template. You can even export multiple items in bulk!


Do you want to know more about this add-in? Just go to the official site.
Do you want to install it from the Office Store? You can find the add-in here.

For a quick start guide on how to work with the add-in, have a look at the following video:

 

Do note that this is the first version of the add-in, and we want your feedback to further extend and improve it. You can contact us via the usual channels.