Peter Himschoot

The new Content-Security-Policy HTTP response header helps you reduce XSS risks on modern browsers by declaring which dynamic resources are allowed to load.

For example, with the CSP header you can block inline scripts from executing, effectively stopping simple XSS attacks.

What is the Content Security Policy

HTTP headers are used by servers and browsers to talk to one another. For example, the server can tell the browser what kind of content it is sending using the Content-Type and Content-Length headers. Every web page is built up from different content sources: the HTML comes from the server, but your styles might come from a CDN such as the Bootstrap CDN. The new Content-Security-Policy header is used by the server to tell the browser which content sources it may use, for example:

Content-Security-Policy:default-src 'self'; style-src 'self' https://ajax.aspnetcdn.com

This header tells the browser to use HTML only from the server itself, and styles only from the server and the aspnetcdn server. Browsers that support CSP will not use any other content sources.

But wait! There is more. You can also tell the browser not to load your content into another page, protecting against Clickjacking. You can use the frame-ancestors directive for that:

Content-Security-Policy:default-src 'self'; script-src 'self' https://ajax.aspnetcdn.com ; style-src 'self' https://ajax.aspnetcdn.com; frame-ancestors 'none';

All the different kinds of content sources and directives can be found here.

Using inline scripts with CSP

Maybe you are using an inline script, such as the snippet for Google Analytics or Microsoft Application Insights. Normally CSP blocks any inline script or style. So how can you use your inline script?

One option would be to use the 'unsafe-inline' content source, but this allows any inline script to execute! Luckily there is another way.

Using hashes to allow inline scripts and styles

When CSP is enabled and you have an inline script on your page, the browser will not execute it, and you will find a message about it in your browser's Console window. For example, in Chrome:

Refused to execute inline script because it violates the following Content Security Policy directive

Chrome also suggests a way to get around this, and even displays a hash for the inline script:

Either the 'unsafe-inline' keyword, a hash ('sha256-e3wuJEA9ZnrbftKXWc68bpGC5pLCehsGKmy02Qh9h74='), or a nonce ('nonce-...') is required to enable inline execution.

So the solution is to include this hash value in your content sources:

default-src 'self'; script-src 'self' https://ajax.aspnetcdn.com 'sha256-gKHd+pSZOJ3MwBsFalomyNobAcinjJ44ArqbIKlcniQ='; style-src 'self' https://ajax.aspnetcdn.com 'sha256-pTnn8NGuYdfLn7/v3BQ2pYxjz73VjHU2Wkr6HjgUgVU='; frame-ancestors 'none';

As you can see in the example above, you can also use this for inline styles.
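If you want to compute such a hash yourself (for example as part of a build step), it is just the base64-encoded SHA-256 digest of the exact bytes between the tags. Here is a minimal, illustrative sketch in Python (not part of any of the libraries mentioned in this post):

```python
import base64
import hashlib

def csp_hash(inline_source: str) -> str:
    # CSP hashes the exact bytes between <script>...</script> (or <style>),
    # so whitespace and newlines matter.
    digest = hashlib.sha256(inline_source.encode("utf-8")).digest()
    return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"

print(csp_hash("alert('Use the Nonce!');"))
```

Paste the resulting 'sha256-...' value into the script-src (or style-src) directive, and remember to recompute it whenever the inline content changes.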

Using nonces to allow inline scripts and styles

The major disadvantage of the hash is that you need to recalculate and update the hash value whenever you change the script or style. So if you update the script (or generate it dynamically) you will want to use a nonce.

A nonce is a 'number used only once'.

For each request you need to generate a unique nonce for each inline script and inline style, and include it in the CSP header:

Content-Security-Policy:default-src 'self'; script-src 'self' https://ajax.aspnetcdn.com 'sha256-gKHd+pSZOJ3MwBsFalomyNobAcinjJ44ArqbIKlcniQ=' 'nonce-1LCV8O37L47QVufyugd6rqoebY+OAQGq8iajMbdy3B8='; style-src 'self' https://ajax.aspnetcdn.com 'sha256-pTnn8NGuYdfLn7/v3BQ2pYxjz73VjHU2Wkr6HjgUgVU=' 'nonce-ZUqNLKpiwM9Hru6BjlIx6DtREfGXO2c38CCzMAW6TQ0='; frame-ancestors 'none';

You also need to add a nonce attribute to your script and style tags, matching the nonce from the header.

<script nonce="1LCV8O37L47QVufyugd6rqoebY+OAQGq8iajMbdy3B8=">
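Conceptually the nonce itself is nothing exotic: cryptographically secure random bytes, base64-encoded, regenerated on every request. An illustrative sketch in Python (any language with a secure random source works the same way; this is not the implementation of the packages discussed below):

```python
import base64
import secrets

def make_nonce() -> str:
    # 32 cryptographically random bytes, base64-encoded; never reuse a nonce
    return base64.b64encode(secrets.token_bytes(32)).decode("ascii")

nonce = make_nonce()
header = "Content-Security-Policy: script-src 'self' 'nonce-{0}'".format(nonce)
```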

To make adding the CSP header easy in .NET Core I have built two NuGet packages: U2U.AspNetCore.Security.Headers and U2U.AspNetCore.Security.Headers.TagHelpers.

<PackageReference Include="U2U.AspNetCore.Security.Headers" Version="1.1.0" />
<PackageReference Include="U2U.AspNetCore.Security.Headers.TagHelpers" Version="1.1.0" /> 

These packages allow you to add headers to your response, such as the CSP header. Call UseResponseHeaders in your Startup.Configure method:

app.UseResponseHeaders(builder =>
{
    ...
});

You can set any header like this:

builder.SetHeader("SomeHeader", "SomeValue")

You can set the CSP header using the SetContentSecurityPolicy method:

builder.SetContentSecurityPolicy(new ContentSecurityPolicy()
{
    ...
});

Now select the directives you need with their content sources:

DefaultSrc = new List<string> {
    ContentSecurityPolicy.Source.Self
},
ScriptSrc = new List<string> {
    ContentSecurityPolicy.Source.Self,
    "https://ajax.aspnetcdn.com",
    "'sha256-gKHd+pSZOJ3MwBsFalomyNobAcinjJ44ArqbIKlcniQ='"
},
StyleSrc = new List<string> {
    ContentSecurityPolicy.Source.Self,
    "https://ajax.aspnetcdn.com",
    "'sha256-pTnn8NGuYdfLn7/v3BQ2pYxjz73VjHU2Wkr6HjgUgVU='"
}

Using nonces means that you need to generate a cryptographically random nonce for each request, and attach it to the header and to the script or style tag. This package makes that easy through a nonce taghelper.

If you're not familiar with taghelpers, click this link

First of all enable nonces in Startup.Configure:

builder.SetContentSecurityPolicy(new ContentSecurityPolicy()
{
    SupportNonces = true,
    ...
});

Next, add the nonce taghelper to your views. The easiest way is to add the following to _ViewImports.cshtml:

@addTagHelper *, U2U.AspNetCore.Security.Headers.TagHelpers

Now look for your inline script and style tag(s) and add the nonce attribute:

<script nonce="true">alert('Use the Nonce!');</script>

Start your website and use the browser debugger to look at the CSP header:

Content-Security-Policy:default-src 'self';
script-src 'self' https://ajax.aspnetcdn.com
'nonce-Gl9JnGKKw9+0+fThsPtVdYtraPLwxWDtB4Qq7qMKH0w=';
style-src 'self' https://ajax.aspnetcdn.com  'nonce-KX0fql/urMHxnZGnDqNoyOljycR/e8nNv2bsjk//sS8='; 

Your script tags should also include the nonce value:

<script nonce="Gl9JnGKKw9+0+fThsPtVdYtraPLwxWDtB4Qq7qMKH0w=">
</script>
<script nonce="stjl3RNNOKDytWwDlWb8Rr2FGmNAmEdykWaCCPc10TQ=">
</script>

Does your browser support CSP?

The easiest way to find out is to visit CSP-Browser-Test

But generally, if you are using the latest version of a modern browser, it should support CSP.

So Visual Studio Team Services now has an automatic deployment option, where you can check in your changes to source control and have the application deployed to a development/testing/production environment automatically.

But how do you cope with each environment's different configuration? Read on...

What we want

As you are developing locally you probably want to develop on a local database. The easiest way to do this is to have everything in web.config (I am using a web project as an example here, but things are similar for other project types). This way developers can quickly change things locally, for easy testing with different databases, and even see how things work with live (I hope cloned from production) data.

But when deploying to a test environment (and other environments such as production) you want to use a different web.config, preferably NOT containing any production secrets (like the connection string to the production database).

VSTS release management makes this very easy.

How To

Let's say that I have application settings and connection strings that I want to give a different value in development, QA and production:

<appSettings>
  ...
</appSettings>
<connectionStrings>
  ...
</connectionStrings>
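For illustration, the contents might look like this; the key and name values here are hypothetical placeholders, with the local development values that will be substituted per environment:

```xml
<appSettings>
  <add key="Environment" value="Development" />
</appSettings>
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=(localdb)\MSSQLLocalDB;Database=MyAppDev;Integrated Security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```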

I have already created a Build definition in VSTS, and I am using it to initiate a release with VSTS release management.

In my release definition I have three environments, one for dev, QA and production.

For each environment, click the ellipsis (...) button and choose Configure variables.

Add variables whose keys match the keys from appSettings (or the names from connectionStrings).

Look for the File Transforms & Variable Substitution Options section and make sure you check the XML variable substitution checkbox.

This will make the deployment task replace any key in appSettings and connectionStrings with your variable values.

That's it!

The excitement was there again, in San Francisco, where Microsoft displayed their latest innovations to us, humble developers.

So what do I remember from /Build? That Microsoft is really going cross-platform. You can still develop on Windows with Visual Studio, but now you can also create .NET applications and web sites on Linux and Mac on .NET Core using the brand new Visual Studio Code editor. To enable this they've built a new execution environment (DNX).

For building ASP.NET 5 web sites Microsoft has integrated bower (a JavaScript package manager), gulp (a JavaScript task runner, for example to minify your .css and .js files) and grunt (another task runner) into Visual Studio. What this means for us developers is that you can choose: you can develop on Windows with Visual Studio, or you can develop on Windows/Mac/Linux with your choice of tools. You can mix, someone using Visual Studio, someone not... it's up to you.

On the Windows 10 front, Microsoft now has a unified stack for building apps on desktop, tablet, phone, ... Of course not every device has the same capabilities, which used to mean conditional compilation. Now, in Windows 10, there are libraries that allow you to check whether some capability (for example a camera) is present; if it is not, the library still has stubs for the methods, except they don't do anything. This means one binary for all devices. But this was already common knowledge.

Closing the "app gap": Microsoft showed a demo with a simple iOS game being recompiled into a Windows Phone app. Yes, take an Objective-C application (no Swift support) and recompile it to a Windows 10 app! And you can do the same for an Android app! Amazing! The question remains: how far can you take this? Are all standard iOS/Android libraries supported? Time will tell. I really hope that Microsoft can make all of this work.

Run your web site as a Windows 10 app: Microsoft also demonstrated that you can take your web site (I think this will only work for ASP.NET-based web sites), wrap it into an app, and then modify the web site to take advantage of Windows 10 features.

What really excited me at /Build was Windows 10 Continuum. Just imagine, you walk into your office, take out your phone and place it on your desktop. The keyboard, mouse and screen on your desktop connect to your phone, allowing you to continue working on your phone, but now with a full desktop experience. Later, going home, you take your phone with you and you can continue working on it on the train, except of course now you have the small screen experience. No more carrying around a bulky device!

Project Spartan, which is an internal name, now got its real name: Microsoft Edge. This new browser is available in the latest Windows 10 insider build. I ran it against html5test and got a score of 390. It looks like they still have some work to do, but hey, this is preliminary software...

Team Build has been redesigned. Anyone who has ever needed to customize Team Build will testify that using XAML-based Workflow Foundation to describe the build process was far from simple, and adding custom steps was even harder... Now Microsoft has made it really simple to customize builds using tasks:

What is also really neat is the ability to compare the build definitions, so you can figure out what was modified in that build definition:

Microsoft has done a lot of work to integrate Docker into Azure. If you have no idea what Docker is, look here. In a nutshell, it allows you to take your code, wrap it into a container and run that container anywhere... You can then connect your container with other containers to make things happen. You can also build workflows from containers...

Companies with a lot of small databases will be happy to learn that they can now save costs with Azure SQL Database elastic databases. I think (sorry) that it allows you to pool your databases on one database server (still nicely separated per tenant) and configure the minimal CPU requirements for each database. I'm not that into database stuff, so more about this here.

The star of the show: HoloLens. This kind of -- almost science fiction -- hardware makes a lot of people drool. Of course! Allowing people to move around inside a virtual environment and interact with it. Architects, product designers, game builders, the applications -- once they become available -- will open up a whole new world.
Some people had the luck to try it out for themselves, I was not one of them :(
But my colleague was, and his major comment was that it did not track where you were looking, it tracked where HoloLens was looking. I think they will have to work on iris tracking -- hoping the HoloLens device won't cost as much as a jet fighter's helmet :)

At Techorama I gave a presentation about Active Directory Authentication Libraries (ADAL) and how you can easily use these to add authentication to your mobile apps.

The session slides can be found here

If you want an excellent introduction into Windows Azure I can recommend “Windows Azure: Step by Step” from Roberto Brunetti. This book will teach you the basic components of Windows Azure and how to build an application with them. It will introduce you to Azure Compute, Azure Storage and Azure AppFabric Servicebus; in clear and easy-to-follow step-by-step instructions you build applications that use the different Azure features. In my opinion this is the best way to learn.

Unfortunately this book doesn’t have space to delve deeper into some subjects (or you would need a >1000 pages book), so if you want to learn more about all the other things I recommend our Azure training.

Windows Azure deploys your azure web or worker role in the cloud, on a machine with Windows Server 2008 and .NET 4 pre-installed. But what if you need an additional requirement? What if you need to install some performance counter, or if you need some other piece of software like the media encoder? Then you can use a startup task to get the job done. In this blog post you will create a simple web role using ASP.NET MVC 3, then add a startup task to ensure MVC 3 is also installed on the Azure instance. For this walkthrough you’ll need Visual Studio 2010 and ASP.NET MVC 3. You’ll also need the standalone MVC 3 installer, which you can find here.

Step 1: Create the Azure solution.

Start by creating a new Cloud project, call it UsingStartupTasks.

Click Ok. Don’t add any role just yet, so click Ok in the next screen. MVC 3 is not available from the “New Windows Azure project” dialog, so we’ll need to use another way to get an ASP.NET project in Azure…

Now add a new ASP.NET MVC 3 project, calling it HelloMVC3.

Select the Internet Application template, leave the rest to its defaults, then press Ok.

Right-click the Roles folder beneath your cloud project and select Add->Web Role Project in Solution

Select the HelloMVC3 project in the next screen and hit Ok.

Add a new folder StartupTasks to your MVC project and add the MVC installer AspNetMVC3Setup.exe to it. Open notepad.exe (don’t add the following file using Visual Studio because it will add a Byte Order Mark and the Azure runtime doesn’t like that) and create a new batch file called installmvc.cmd in the StartupTasks folder. To add it to the Visual Studio project first click on the Show All Files button in the solution explorer, and then right-click the installmvc.cmd file and select Include In Project. Do the same for the AspNetMVC3Setup.exe installer.

We’ll use this batch file to execute the installer as follows: enter following in installmvc.cmd:

%~dp0AspNetMVC3Setup.exe /q /log %~dp0mvc3_install.htm
exit /b 0

The %~dp0 expands to the directory containing the batch file (here, the Azure instance's local copy of the StartupTasks folder). So the first line runs a silent (quiet) install of the standalone MVC 3 installer, writing any install problems to a log file called mvc3_install.htm, and the second line returns a success exit code.

Make sure both files have a build action of “none” and Copy to Output Directory set to “Copy Always”.

Editing the Service definition file

Finally you need to open the ServiceDefinition.csdef file and add the task to it:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="UsingStartupTasks" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="HelloMVC3">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      …
    </Imports>
    <Startup>
      <Task commandLine="StartupTasks\installmvc.cmd"
            executionContext="elevated"
            taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>

Deploying to Azure

Right-click on the Cloud project and select Publish… The Deploy Windows Azure project dialog opens:

Select your credentials, environment and storage account. You may need to do some setup for this to work…

Optionally you can configure remote desktop connections in case something went wrong, this will make it easier to see if MVC 3 was indeed installed.

Click on Ok and wait…

Deployment in Visual Studio should start:

Wait some more till complete (because the startup tasks are executing this will take a long time):

Click on the Url. You should now see the MVC 3 screen!

In my next blog post I will show you how to turn this startup task into an azure startup plugin, which will make it easier to re-use this startup task.

In my previous post I looked at getting started with table storage, in this one we will create a table for our entities and store them. As you’ll see, quite easy!

So, to store an entity in table storage you start by creating a TableServiceEntity derived class (recap from previous post):

public class MessageEntity : TableServiceEntity
{
    public MessageEntity() { }

    public MessageEntity(string partitionKey, string rowKey, string message)
        : base(partitionKey, rowKey)
    {
        Message = message;
    }

    public string Message { get; set; }
}

You also need a table class, this time deriving from TableServiceContext:

public class MessageContext : TableServiceContext
{
    public const string MessageTable = "Messages";

    public MessageContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
    }
}

TableServiceContext requires a base address and credentials, and since our class derives from it we need a constructor taking the same arguments. I also have a const string for the table name.

If the table doesn’t exist yet you should create it. This is easy, just add the code to create the table in the MessageContext’s static constructor:

static MessageContext()
{
    var tableClient = MyStorageAccount.Instance.CreateCloudTableClient();
    tableClient.CreateTableIfNotExist(MessageTable);
}

A static constructor is automatically called when you use the type. Note that I use the MyStorageAccount class, which uses the same static constructor trick to initialize the storage account.

public static class MyStorageAccount
{
    public static string DataConnection = "DataConnection";

    public static CloudStorageAccount Instance
    {
        get
        {
            return CloudStorageAccount.FromConfigurationSetting(DataConnection);
        }
    }

    static MyStorageAccount()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher(
            (config, setter) =>
            {
                setter(
                    RoleEnvironment.IsAvailable
                        ? RoleEnvironment.GetConfigurationSettingValue(config)
                        : ConfigurationManager.AppSettings[config]
                );

                RoleEnvironment.Changing += (_, changes) =>
                {
                    if (changes.Changes
                        .OfType<RoleEnvironmentConfigurationSettingChange>()
                        .Any(change => change.ConfigurationSettingName == config))
                    {
                        if (!setter(RoleEnvironment.GetConfigurationSettingValue(config)))
                        {
                            RoleEnvironment.RequestRecycle();
                        }
                    }
                };
            });
    }
}

public static MessageEntity CreateMessage(string message)
{
    return new MessageEntity(MessageTable, Guid.NewGuid().ToString(), message);
}

public void AddMessage(MessageEntity message)
{
    this.AddObject(MessageTable, message);
    this.SaveChanges();
}

The CreateMessage method creates a new MessageEntity instance, with the same partition key (I don’t expect to store a lot of messages), a unique Guid as the row key, and of course the message. The AddMessage method adds this entity to the table, and then calls SaveChanges to send the new row to the table. This mechanism uses the same concepts as WCF Data Services.

In the previous post we created a web site with a textbox and a button. Implement the button’s click event as follows:

protected void postButton_Click(object sender, EventArgs e)
{
    string message = messageText.Text;
    var msg = MessageContext.CreateMessage(message);
    context.AddMessage(msg);
}

This will allow you to add messages to storage.

Before you can run this sample, you also need to setup the connection. Double-click the CloudMessages project beneath the Roles folder.

This open the project’s configuration window. Select the Settings tab and add a “DataConnection” setting, select “Connection String” as the type and then select your preferred storage account. In the beginning it is best to use development storage, and that is what I did here:

After running the web site you are of course wondering if your messages were actually added. So let’s add some code and UI to display the messages in the table.

Start by adding the following property to MessageContext:

public IQueryable<MessageEntity> Messages
{
    get { return CreateQuery<MessageEntity>(MessageTable); }
}

This property returns an IQueryable<MessageEntity>, which is then used by LINQ for writing queries. The actual query is performed in our web page class. But first we need to add some UI to display the messages. Add a repeater control beneath the TextBox and Button:

<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
  <p>
    <asp:TextBox ID="messageText" runat="server" Width="396px"></asp:TextBox>
    <asp:Button ID="postButton" runat="server" OnClick="postButton_Click" Text="Post message" />
  </p>
  <p>
    <asp:Repeater ID="messageList" runat="server">
      <ItemTemplate>
        <p>
          <%# ((MessagesLib.MessageEntity) Container.DataItem).Message %>
        </p>
      </ItemTemplate>
    </asp:Repeater>
  </p>
</asp:Content>


Now that we can display the messages, let’s add a LoadMessages method below the click event handler of the page:

private void LoadMessages()
{
    var query = from msg in context.Messages
                select msg;
    messageList.DataSource = query.ToList()
        .OrderBy(m => m.Timestamp)
        .Take(10);
    messageList.DataBind();
}

Call this method in the Load event of the page:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        LoadMessages();
    }
}

And again in the button’s click event:

protected void postButton_Click(object sender, EventArgs e)
{
    string message = messageText.Text;
    var msg = MessageContext.CreateMessage(message);
    context.AddMessage(msg);
    LoadMessages();
}

Run it. Add some messages, and see them listed (only the first 10 messages will be displayed; change the query as you like).

Windows Azure storage gives you several persistent and durable storage options. In this blog post I want to look at Table storage (which I prefer to call Entity storage because you can store any mix of entities in these tables; you could store products AND customers in the same table). For this walkthrough you’ll need the Windows Azure SDK and your machine set up for development.

Start Visual Studio 2010 and create a new Azure project called MessageServiceWithTables:

In the New Windows Azure Project dialog select the ASP.NET Web Role and press the > button, then rename the project to CloudMessages:

Replace the content of default.aspx with the following:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="CloudMessages._Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
  <p>
    <asp:TextBox ID="messageText" runat="server" Width="396px"></asp:TextBox>
    <asp:Button ID="postButton" runat="server" OnClick="postButton_Click" Text="Post message" />
  </p>
</asp:Content>

2. Creating the table store entity classes

Add a new Class Library project to your solution, and call it MessagesLib. Delete class1.cs. Add a new class called MessageEntity.

2.1 Creating the entity class

We want to derive this class from TableServiceEntity, but first we need to add a couple of references. So select Add Reference… on the library project.

Browse to Program Files\Windows Azure SDK\v1.4\ref and select following libraries (or simply select them all):

You also need to add a reference to System.Data.Services.Client (I’m using the Power Tools, so my Add Reference dialog looks different; excuse me, better!):

Now you’re ready to add the MessageEntity class deriving from the TableServiceEntity base class.

public class MessageEntity : TableServiceEntity
{
    public MessageEntity() { }

    public MessageEntity(string partitionKey, string rowKey, string message)
        : base(partitionKey, rowKey)
    {
        Message = message;
    }

    public string Message { get; set; }
}

This base class has three properties used by table storage: the partition key, the row key and the timestamp:

The partition key is used as follows: all entities sharing the same partition key share the same storage device; they are kept together. This makes querying these objects faster. On the other hand, entities with different partition keys can be stored on different machines, allowing queries to be distributed over these machines when there are many entities. So choosing the partition key is a tricky thing, and there are no automated tools to help you here. Some people will use buckets (like from 0 to 9) and evenly distribute their entities over all buckets.
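The bucket idea from the paragraph above can be sketched in a few lines. This is an illustrative Python sketch (the function name and the use of SHA-256 are my own choices, not an Azure API): derive a stable bucket number from the row key, so entities spread evenly over a fixed set of partitions.

```python
import hashlib

def bucket_partition_key(row_key: str, buckets: int = 10) -> str:
    # Hash the row key and map it onto one of `buckets` partitions
    # (0..9 by default); the same row key always lands in the same bucket.
    digest = hashlib.sha256(row_key.encode("utf-8")).digest()
    return str(digest[0] % buckets)
```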

The row key makes the entity unique and the timestamp is used for concurrency checking (optimistic concurrency).

So, what do we need to store? Since we just want to store messages we add a single Message property and constructors for easy instantiation.

In the next blog post we’ll be looking at creating the table in table storage and inserting new data…

1. Introducing IntelliTrace

How do engineers figure out what caused a plane crash? One of the things they use is the black-box recording, which recorded all major data from the plane prior to the crash. This recording allows them to step back in time and analyze step by step what happened. Microsoft Visual Studio 2010 Ultimate also has a black box for your code, called IntelliTrace. While your code is running, IntelliTrace writes a log file (called an iTrace file), and you can analyze this using Visual Studio Ultimate. Windows Azure also allows you to enable IntelliTrace on the server running your code, which is ideal for figuring out why your code is crashing on the server, especially early code (because you cannot attach the debugger).

In the next part you’ll walkthrough this process. But first you need to download the AzureAndIntelliTrace solution.

2. Deploying the solution

We’ll start by deploying the solution, so open the AzureAndIntelliTrace solution with Visual Studio. Right-click the Azure project and choose Publish… The Deploy Windows Azure project dialog should open.

Make sure you check the “Enable IntelliTrace for .NET 4 roles” checkbox.

Let’s have a look at settings, so click the “Settings…” link. Open the Modules tab:

You can just leave everything at its default settings, but you could remove the StorageClient library if you wanted to use IntelliTrace to track down a storage problem…

Wait until the deployment says “Busy…”. If something goes wrong during the startup phase of your role instance with IntelliTrace enabled, Azure will keep the role busy so you can download the iTrace file.

So you’re waiting for this:

Then it is time to download the iTrace file. You can do that from the Server Explorer window. Open the Windows Azure Compute tree item until you reach the instance (note the Busy state!):

The instance name will also mention whether or not it is IntelliTrace enabled.

Now you can right click the instance to download the iTrace file:

Wait for download to complete, the iTrace file should open automatically in Visual Studio:

Scroll down until you reach the Exception Data section. You should see that you got a FileNotFoundException, caused because it couldn’t find the Dependencies assembly:

3. Fixing the dependency problem

This kind of problem is easily solved, but first we need to stop this deployment. Go back to the Windows Azure Activity Log and right-click the deployment. Choose “Cancel and remove”.

The problem is that we have a reference to the dependencies assembly, but when deployed to Azure it is not copied onto the instance. Go back to the solution and open the DebugThis project. Open the References folder and select the Dependencies assembly.

In the properties window set the “Copy Local” property to true.

Try redeploying again. Now we will have another problem, so wait for the instance to go Busy again…

Download the iTrace file again. Scroll down to the exceptions section, select the FormatException and click the Start Debugging button. IntelliTrace will put you on the offending line.

It is easy to see that the string.Format is missing an argument…

You can start the debugger by clicking the “Set Debugger Context Here” button in the gutter.

Now you can step back through your code using IntelliTrace…

As you can see, IntelliTrace is great for figuring out this kind of problem, especially if your code works in the compute emulator, but doesn’t on the real Azure server instance…