Diederik Krols

The XAML Brewer

A Radial Gauge for Universal Windows Apps

This article presents a modern radial gauge custom control, hosted in a Portable Class Library for Universal Windows apps. Visual Studio 2013 Update 2 comes with the concept of Universal Apps that can run on Windows PCs, tablets, phones, and the Xbox. For an introduction to this new breed of apps, check this crystal clear article by Jeff Prosise. I did not develop this custom control from scratch: I just migrated my own Radial Gauge for WinRT, which was designed by Arturo Toledo and is available through NuGet as part of the WinRT XAML Toolkit. Here's how the Universal Gauge looks in both the emulator and the simulator:

universal

I started by creating a new solution with a blank C# Universal App. The template creates three projects:

  • a Windows Store app project,
  • a Windows Phone project, and
  • a so-called Shared project, that contains all common source files and assets.

I didn’t want to create the custom control as a bunch of shared source files, so I added an empty Portable Class Library. Visual Studio 2013 Update 2 contains a project template for such a Universal PCL:

ClassLibrary

In the class library I created a new custom control. And yes, there’s again a template for that:

CustomControl

For more information on building custom controls from scratch, you may want to read this article of mine. It seems to apply to Universal apps too.

I copied the XAML style in the Themes folder and the C# class over from the Windows Phone version of the Radial Gauge, because that's my most recent version. I just had to (re-)adapt a couple of namespaces, and tada: the radial gauge custom control was operational in less than 15 minutes, with a single code base for all XAML platforms (well, almost; I'm not sure about WPF).

While playing with the source, I added a new property, TickSpacing, as suggested by Dennis Almond in a blog post comment. Thanks, Dennis!

/// <summary>
/// Gets or sets the tick spacing, in units.
/// </summary>
public int TickSpacing
{
    get { return (int)GetValue(TickSpacingProperty); }
    set { SetValue(TickSpacingProperty, value); }
} 
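The CLR wrapper above is backed by a regular dependency property registration. The sketch below is an illustration only: the default value and the change callback name are my assumptions, not necessarily what the actual control uses (check the source in the sample project):

```csharp
// Hypothetical registration sketch. The default of 10 matches the old
// "10 zones" behavior, but the real control may use another value.
public static readonly DependencyProperty TickSpacingProperty =
    DependencyProperty.Register(
        "TickSpacing",
        typeof(int),
        typeof(RadialGauge),
        new PropertyMetadata(10, OnTickSpacingChanged));

private static void OnTickSpacingChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    // Redraw the scale when the spacing changes.
    // 'RedrawScale' is a placeholder for whatever redraw routine the control has.
    ((RadialGauge)d).RedrawScale();
}
```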

The TickSpacing property lets you specify the distance between the ticks. I used to divide the scale into 10 zones, but now you can choose your own interval for the ticks. Look at the slider-bound gauge on the right: it's divided into 5 zones:

universal2

Here’s the updated list of configurable properties:

  • Minimum: minimum value on the scale (double)
  • Maximum: maximum value on the scale (double)
  • Value: the value to represent (double)
  • ValueStringFormat: StringFormat to apply to the displayed value (string)
  • Unit: unit measure to display (string)
  • TickSpacing: spacing -in value units- between ticks (int)
  • NeedleBrush: color of the needle (Brush)
  • TickBrush: color of the outer ticks (Brush)
  • ScaleWidth: thickness of the scale in pixels – relative to the control’s default size (double)
  • ScaleBrush: background color of the scale (Brush)
  • ScaleTickBrush: color of the ticks on the scale (Brush)
  • TrailBrush: color of the trail following the needle (Brush)
  • ValueBrush: color of the value text (Brush)
  • UnitBrush: color of the unit measure text (Brush)

And the taxonomy:

taxonomy

As an example, here’s Daffy’s gauge from the sample project:

<controls:RadialGauge Value="60"
                      Unit="Quacks"
                      NeedleBrush="#FFCC2B33"
                      ScaleBrush="Transparent"
                      TrailBrush="Black"
                      TickBrush="#FFFFAA00"
                      ValueBrush="White"
                      UnitBrush="White"
                      ScaleTickBrush="DimGray" />

And here’s the data bound gauge, featuring TickSpacing:

<controls:RadialGauge Minimum="{Binding Min}"
                      Maximum="{Binding Max}"
                      Value="{Binding Value, Mode=TwoWay}"
                      TickSpacing="100"
                      Unit="Bindings"
                      NeedleBrush="White"
                      ScaleBrush="LightGray"
                      TrailBrush="OrangeRed"
                      TickBrush="LightGray"
                      ValueBrush="White"
                      UnitBrush="White"
                      ScaleTickBrush="Transparent"
                      ScaleWidth="5" />

Let’s come back to the Universal app. The Store app and the Phone app have something in common: they’re absolutely empty. It shouldn't come as a surprise that the ViewModel and the BindableBase classes are easily sharable, since they're simple C# classes. But the whole user interface is also shared between the platform-specific apps: it's Hub-based and entirely built with common Store/Phone controls. So all the code is shared: binary through the PCL, or as source in the Shared project. Needless to say, I like this approach a lot…

Here’s the full project, it was built with Visual Studio 2013 Update 2 (RC): U2UC.WinUni.CustomControlSample.zip (303.6KB)

Enjoy!

XAML Brewer

Using OneDrive files in Windows Platform Apps – Part Deux

This article presents a handful of building blocks for some more advanced OneDrive use cases in Windows 8 apps, like:

  • transparently switching between a local folder and a OneDrive working folder,
  • synchronizing a local folder with a OneDrive folder,
  • sharing a OneDrive folder across devices,
  • sharing a OneDrive folder across apps, or even
  • sharing a OneDrive folder across platforms (Store app - Phone app - side loaded Enterprise app).

It elaborates on my previous article, which showed how to access a OneDrive folder from a Windows 8 Store app from a purely technical point of view. That code is now hardened and refactored into a more reusable model. Although all building blocks are implemented and tested, not all of the mentioned scenarios are implemented in the attached sample app. The sample app simulates a Store app that has OneDrive capability as an option (e.g. as an in-app purchase) and that can switch back to the local folder whenever the user feels like it (e.g. to work offline). This is how it looks:

OneDrive

The sample app allows you to do some basic file operations, and comes with a manual switch to toggle between the local folder and a working folder on your OneDrive. File and folder comparison and synchronization are not elaborated, but all constituents are in the source code.

The object model contains the following classes:

  • FileSystemBase: a base class that encapsulates the basic file system operations, regardless of where the working folder lives:
    • enumerating the files in the working folder,
    • saving a file,
    • reading a file, and
    • deleting a file.
  • IFile: an interface that contains the file properties that the app is interested in:
    • name of the file,
    • size of the file (useful for synchronization),
    • modification date of the file (useful for synchronization).
  • Device: a class that represents the physical device:
    • hardware identifier, and
    • internet connectivity.

Here’s the UML class diagram of the API:

API

The FileSystemBase class abstracts the file manipulations. It’s an abstract class with concrete virtual asynchronous methods that each throw an exception. I know that looks weird, but it’s the best way to force child classes to implement the expected asynchronous behavior (static, abstract, interface, and async don’t really work together in a class definition):

/// <summary>
/// Base Class for File Systems.
/// </summary>
public abstract class FileSystemBase
{
    /// <summary>
    /// Returns the list of Files in the Working Folder.
    /// </summary>
    public async virtual Task<List<IFile>> Files()
    {
        throw new NotImplementedException();
    }

    /// <summary>
    /// Saves the specified content into the Working Folder.
    /// </summary>
    public async virtual Task Save(string content, string fileName)
    {
        throw new NotImplementedException();
    }

    /// <summary>
    /// Returns the content of the specified file in the Working Folder.
    /// </summary>
    public async virtual Task<string> Read(string fileName)
    {
        throw new NotImplementedException();
    }

    /// <summary>
    /// Deletes the specified file from the Working Folder.
    /// </summary>
    public async virtual Task Delete(string fileName)
    {
        throw new NotImplementedException();
    }
}

Both concrete child classes, LocalDrive and OneDrive, have their own implementation of the virtual methods. The OneDrive class is of course a bit more complex than the LocalDrive class, since it needs a login procedure and requires an extra identifier for the working folder and its files. Check the sample app for the source code; I'm not repeating it in this article, since it’s just a rehash of my previous blog post.
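Just to give an impression of the simpler side, here's a rough sketch of what a LocalDrive child class could look like, built on the WinRT storage API. This is an illustration under my own assumptions, not the sample's actual code, and it only covers three of the four operations (the Files() enumeration would additionally need a mapping to IFile):

```csharp
using System.Threading.Tasks;
using Windows.Storage;

// Hypothetical sketch of a LocalDrive; only the FileSystemBase overrides
// are given, the singleton plumbing is an assumption.
public class LocalDrive : FileSystemBase
{
    private static readonly LocalDrive current = new LocalDrive();

    public static LocalDrive Current
    {
        get { return current; }
    }

    public override async Task Save(string content, string fileName)
    {
        StorageFolder folder = ApplicationData.Current.LocalFolder;
        StorageFile file = await folder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(file, content);
    }

    public override async Task<string> Read(string fileName)
    {
        StorageFolder folder = ApplicationData.Current.LocalFolder;
        StorageFile file = await folder.GetFileAsync(fileName);
        return await FileIO.ReadTextAsync(file);
    }

    public override async Task Delete(string fileName)
    {
        StorageFolder folder = ApplicationData.Current.LocalFolder;
        StorageFile file = await folder.GetFileAsync(fileName);
        await file.DeleteAsync();
    }
}
```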

The main viewmodel of the app uses a field to refer to the file system:

private FileSystemBase currentDrive;

The app will not show command buttons as long as it’s not connected to a file system. So I added an IsReady property to the viewmodel:

/// <summary>
/// Gets a value indicating whether this instance is ready (i.e. connected to a file system).
/// </summary>
public bool IsReady
{
    get { return this.currentDrive != null; }
}

Here’s the app in its waiting mode; it may take some time to connect to OneDrive the first time:

WaitUntilReady

The last used file system is stored in the roaming settings, so it can be shared between devices:

/// <summary>
/// Gets or sets a value indicating whether we're using OneDrive or Local Folder.
/// </summary>
public bool UseOneDrive
{
    get { return this.useOneDrive; }

    set
    {
        if (value != this.useOneDrive)
        {
            if (value)
            {
                this.TryEnableOneDrive();
            }
            else
            {
                this.currentDrive = LocalDrive.Current;
                this.SetProperty(ref this.useOneDrive, value);
                ApplicationData.Current.RoamingSettings.Values["UseOneDrive"] = value;
                this.OnPropertyChanged("IsReady");
            }
        }
    }
}

When the user switches to OneDrive mode, we try to activate it. In case of a problem (e.g. the user did not consent, or the drive cannot be accessed), we switch back to local mode. The OneDrive initialization code is called from strictly synchronous code - a property setter and a constructor. It executes asynchronously and finishes with property change notifications. The XAML bindings will do the rest:

private void TryEnableOneDrive()
{
    bool success = true;
    CoreWindow.GetForCurrentThread().Dispatcher.RunAsync
        (
            CoreDispatcherPriority.Normal,
            async () =>
            {
                FileSystemBase fileSystem = null;

                try
                {
                    fileSystem = await OneDrive.GetCurrent();
                }
                catch (Exception)
                {
                    success = false;
                }
                finally
                {
                    if (fileSystem == null || !OneDrive.IsLoggedIn)
                    {
                        // Something went wrong, switch to local.
                        success = false;
                        this.currentDrive = LocalDrive.Current;
                    }
                    else
                    {
                        this.currentDrive = fileSystem;
                    }

                    this.useOneDrive = success;
                    ApplicationData.Current.RoamingSettings.Values["UseOneDrive"] = success;

                    // Need to explicitly notify to reset toggle button on error.
                    this.OnPropertyChanged("UseOneDrive");
                    this.OnPropertyChanged("IsReady");
                }
            }
        );
}

Here’s how the code is called from the constructor of the viewmodel:

// The setting may not exist yet on a very first run.
this.useOneDrive = (ApplicationData.Current.RoamingSettings.Values["UseOneDrive"] as bool?) ?? false;

if (this.useOneDrive)
{
    this.TryEnableOneDrive();
}
else
{
    this.currentDrive = LocalDrive.Current;
}

In that same constructor we also check whether the app has been used from another device lately, because this might trigger a synchronization routine. The use case that I have in mind here is an app that always saves locally but uploads to the user’s OneDrive at regular intervals. Such an app would want to be informed that the OneDrive folder was updated by another device:

if (ApplicationData.Current.RoamingSettings.Values["HardwareId"] != null)
{
    string previous = (string)ApplicationData.Current.RoamingSettings.Values["HardwareId"];

    if (previous != Device.Ashwid)
    {
        this.ShowToast("You seem to have used this app from another machine!", "ms-appx:///Assets/Warning.png");
    }
}

ApplicationData.Current.RoamingSettings.Values["HardwareId"] = Device.Ashwid;

Here’s that code in action:

OtherHardware

For checking the device id, a GUID in local settings would do the job in most scenarios. But the ASHWID allows you to share the working folder between apps on the same device (you would just have to override the default working folder name for this):

/// <summary>
/// Gets the Application Specific Hardware Identifier.
/// </summary>
/// <remarks>
/// Due to hardware drift, the returned value may change over time. 
/// See http://msdn.microsoft.com/en-us/library/windows/apps/jj553431.aspx. 
/// </remarks>
public static string Ashwid
{
    get
    {
        HardwareToken hwToken = HardwareIdentification.GetPackageSpecificToken(null);
        IBuffer hwID = hwToken.Id;
        byte[] hwIDBytes = hwID.ToArray();
        return hwIDBytes.Select(b => b.ToString()).Aggregate((b, next) => b + "," + next);
    }
}

Most of the file operations are ignorant of the whereabouts of the working folder:

private async void ReadFolder_Executed()
{
    this.files.Clear();
    foreach (var file in await this.currentDrive.Files())
    {
        this.files.Add(file);
    }
}

private async void ReadFile_Executed()
{
    if (this.selectedFile != null)
    {
        this.SelectedText = await this.currentDrive.Read(this.selectedFile.Name);
    }
}

private async void DeleteFile_Executed()
{
    if (this.selectedFile != null)
    {
        var task = this.currentDrive.Delete(this.selectedFile.Name);

        try
        {
            await task;

            this.ShowToast("File deleted.");
        }
        catch (Exception ex)
        {
            this.ShowToast("There was an error while deleting.", "ms-appx:///Assets/Warning.png");
        }

        this.ReadFolderCommand.Execute(null);
    }
}

While saving a file to OneDrive you can easily pull the Internet cable or disable the Wi-Fi, so the Save method was the ideal candidate to test some extra exception handling. If something goes wrong while saving, you may want to check the connection to the Internet. Unfortunately there is no way to ask either the LiveConnectClient or the LiveConnectSession whether the connection is still available. Actually it’s even worse: you can re-login successfully without a connection, so you end up with a "false positive". Fortunately you can examine your Internet connectivity in other ways:

/// <summary>
/// Gets a value indicating whether this device is currently connected to the Internet.
/// </summary>
public static bool IsConnected
{
    get
    {
        try
        {
            return NetworkInformation.GetInternetConnectionProfile().GetNetworkConnectivityLevel() >= NetworkConnectivityLevel.InternetAccess;
        }
        catch (Exception)
        {
            return false;
        }
    }
}

In a production app, it would make sense to automatically switch to local mode after a failed Save operation against OneDrive. Apparently the LiveConnectClient enters an undetectable corrupt state after an unsuccessful BackgroundUploadAsync. The next time you try to save a file (and there’s still no Internet connection) the Live client throws an exception that you can’t catch. I’m sorry, I didn’t find a workaround for this yet. Anyway, the app will die on you ungracefully in this particular scenario :-(

ClientException

Here’s the whole Save method (the one in the main view model):

private async void SaveFile_Executed()
{
    if (this.selectedFile != null)
    {
        var task = this.currentDrive.Save(this.selectedText, this.selectedFile.Name);

        try
        {
            await task;

            this.ShowToast("File saved.");
        }
        catch (Exception ex)
        {
            if (this.useOneDrive && !Device.IsConnected)
            {
                this.ShowToast("Internet connection is lost. You may want to switch to local storage.", "ms-appx:///Assets/Warning.png");
            }
            else
            {
                this.ShowToast("There was an error while saving.", "ms-appx:///Assets/Warning.png");
            }
        }
    }
}

Here’s one of the toasts that notify the user of a problem:

InternetWarning

Here’s the sample app; it has the “Transparent FileSystem API” in its Models folder. The code was written in Visual Studio 2013 Update 2 RC, but I did not use any of the new Shared Project/Universal App features yet: U2UC.WinRT.OneDriveSync.zip (1.4MB)

Don’t forget to associate the project with an app of yours, or it won’t work:

AssociateStoreApp

Enjoy!

Diederik

Using OneDrive files from a Windows 8 Store app

This article explains how to let a Windows Store app manage a list of files in a working folder on the user’s OneDrive, without continuously opening file pickers. Some apps need to store more personal user data than the roaming folder can handle. A folder on the user’s OneDrive is a nice place to store that data -e.g. as flat files containing serialized business objects- and share it across his devices. Your users installed your app through the Store, so they all have a Microsoft account. And every Microsoft account comes with a OneDrive folder somewhere in the Cloud. So why not make use of it?

I made a little app that shows you how to log on to OneDrive, create a working folder for your app, upload a file, enumerate the files in the folder, and read the content of a file. Here’s how it looks:

onedrive_commands

An HTTP client and some JSON magic would suffice to communicate with the OneDrive services directly, but the Live SDK for Windows, Windows Phone and .NET comes with an object model that does this for you.

You can either install it locally and then reference it from your project:

Live_SDK_reference

Or you can get it all through NuGet:

LiveSDK_ Nuget

Under the hood this SDK of course still calls the REST service through HTTP, so you have to activate the Internet capability for your app (and make sure you publish a privacy policy):

Capabilities

Before you can use the API, your project needs to be associated with an app that is defined (not necessarily published) in the Store. This will update the manifest, and add an association file:

Associate

If you don’t associate your project with a Store app, then you may expect an exception:

App_not_configured

Before your user can access his OneDrive through your app, he or she needs to be authenticated. The following code calls the LiveAuthClient.LoginAsync method, which takes the list of scopes as a parameter. The scopes for this particular call include single sign-on, and read and write access to the OneDrive:

private async void SignIn_Executed()
{
    if (!this.isSignedIn)
    {
        try
        {
            LiveAuthClient auth = new LiveAuthClient();
            var loginResult = await auth.LoginAsync(new string[] { "wl.signin", "wl.skydrive", "wl.skydrive_update" });
            this.client = new LiveConnectClient(loginResult.Session);
            this.isSignedIn = (loginResult.Status == LiveConnectSessionStatus.Connected);
            await this.FetchfolderId();
        }
        catch (LiveAuthException ex)
        {
            Debug.WriteLine("Exception during sign-in: {0}", ex.Message);
        }
        catch (Exception ex)
        {
            // Get the code monkey's attention.
            Debugger.Break();
        }
    }
}

When the call executes, the system sign-in UI opens, unless the user has already signed into his Microsoft account and given consent for the app to use the requested scopes. In most situations, your end user is already logged in and will never have to type his user id and password, and he would see this consent screen only once:

Consent

The app then needs to create a working folder. I decided to just use the full name of the app itself as the name of the folder:

public string FolderName
{
    get { return Package.Current.Id.Name; }
}

Here’s how to create the folder -in the root of the user’s OneDrive- using a LiveConnectClient.PostAsync call:

private async void CreateFolder_Executed()
{
    try
    {
        // The overload with a String expects JSON, so this does not work:
        // LiveOperationResult lor = await client.PostAsync("me/skydrive", Package.Current.Id.Name);

        // The overload with a Dictionary accepts initializers:
        LiveOperationResult lor = await client.PostAsync("me/skydrive", new Dictionary<string, object>() { { "name", this.FolderName } });
        dynamic result = lor.Result;
        string name = result.name;
        string id = result.id;
        this.FolderId = id;
        Debug.WriteLine("Created '{0}' with id '{1}'", name, id);
    }
    catch (LiveConnectException ex)
    {
        if (ex.HResult == -2146233088)
        {
            Debug.WriteLine("The folder already existed.");
        }
        else
        {
            Debug.WriteLine("Exception during folder creation: {0}", ex.Message);
        }
    }
    catch (Exception ex)
    {
        // Get the code monkey's attention.
        Debugger.Break();
    }
}

The app needs to remember the id of the folder, because it is needed in further calls. Therefore we store it in the roaming settings:

ApplicationDataContainer settings = ApplicationData.Current.RoamingSettings;
public string FolderId
{
    get { return this.settings.Values["FolderId"] as string; } // Null when no id was stored yet.
    private set { this.settings.Values["FolderId"] = value; }
}

In case you lose the folder id, you can fetch it with a LiveConnectClient.GetAsync call:

private async Task FetchfolderId()
{
    LiveOperationResult lor = await client.GetAsync("me/skydrive/files");
    dynamic result = lor.Result;
    this.FolderId = string.Empty;
    foreach (dynamic file in result.data)
    {
        if (file.type == "folder" && file.name == this.FolderName)
        {
            this.FolderId = file.id;
        }
    }
}

You can add files to the working folder with LiveConnectClient.BackgroundUploadAsync:

private async Task SaveAsFile(string content, string fileName)
{
    // String to UTF-8 Array
    byte[] byteArray = Encoding.UTF8.GetBytes(content);
    // Array to Stream
    MemoryStream stream = new MemoryStream(byteArray);
    // Managed Stream to Store Stream to File
    await client.BackgroundUploadAsync(
        this.FolderId,
        fileName,
        stream.AsInputStream(),
        OverwriteOption.Overwrite);
}

.NET developers would be tempted to apply Unicode (UTF-16) encoding. Just hold your horses and stick to UTF-8. To convince you that it covers your needs, I added some French (containing accents and other decorations) and Chinese (containing whatever the Bing translator gave me) texts in the source code.
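If you want to convince yourself that UTF-8 round-trips non-ASCII text correctly, a standalone snippet like this (not from the sample app) does the job:

```csharp
using System.Text;

// French accents and Chinese characters survive a UTF-8 round trip.
string text = "Café crème 中文";
byte[] utf8 = Encoding.UTF8.GetBytes(text);
string roundTrip = Encoding.UTF8.GetString(utf8, 0, utf8.Length);
// roundTrip is identical to text; UTF-8 is also typically more compact
// than UTF-16 for mostly-Latin content.
```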

After clicking the ‘Save Files’ button in the sample app, the folder and its content become visible in the user’s File Explorer:

FileManager

Using LiveConnectClient.GetAsync you can read the folder’s content to enumerate the list of files in it:

private async void OpenFolder_Executed()
{
    try
    {
        // Don't forget '/files' at the end.
        LiveOperationResult lor = await client.GetAsync(this.FolderId + @"/files");
        dynamic result = lor.Result;
        this.files.Clear();
        foreach (dynamic file in result.data)
        {
            if (file.type == "file")
            {
                string name = file.name;
                string id = file.id;
                this.files.Add(new OneDriveFile() { Name = name, Id = id });
                Debug.WriteLine("Detected a file with name '{0}' and id '{1}'.", name, id);
            }
        }
    }
    catch (LiveConnectException ex)
    {
        Debug.WriteLine("Exception during folder opening: {0}", ex.Message);
    }
    catch (Exception ex)
    {
        // Get the code monkey's attention.
        Debugger.Break();
    }
}

The OneDriveFile class in this code snippet does not come from the API, but is just a lightweight custom class. My sample app is only interested in the name and id of each file, but the API has a lot more to offer:

/// <summary>
/// Represents a File on my OneDrive.
/// </summary>
public class OneDriveFile
{
    public string Name { get; set; }
    public string Id { get; set; }
}
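If your app needs more metadata (for synchronization, say), the JSON returned by GetAsync also carries fields such as the file size and the last update time, so the class could be extended along these lines. Which fields to pick, and the property names, are my assumptions here:

```csharp
/// <summary>
/// Represents a File on my OneDrive, with some extra metadata.
/// </summary>
public class OneDriveFile
{
    public string Name { get; set; }
    public string Id { get; set; }

    // Extra fields available in the Live API's JSON response:
    public long Size { get; set; }              // maps to file.size, in bytes
    public DateTimeOffset Updated { get; set; } // maps to file.updated_time
}
```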

After that OpenFolder call, the sample app displays the list of files:

onedrive_folder

With a file’s id, we can fetch its content through a LiveConnectClient.BackgroundDownloadAsync call:

private async void ReadFile_Executed()
{
    if (this.selectedFile != null)
    {
        // Don't forget '/content' at the end.
        LiveDownloadOperationResult ldor = await client.BackgroundDownloadAsync(this.selectedFile.Id + @"/content");
        // Store Stream to Managed Stream.
        var stream = ldor.Stream.AsStreamForRead(0);
        StreamReader reader = new StreamReader(stream);
        // Stream to UTF-8 string.
        this.SelectedText = reader.ReadToEnd();
    }
}

Here’s the result in the sample app:

onedrive_file

For the sake of completeness: LiveConnectClient also hosts asynchronous methods to move, copy, and delete files and folders.

This solution has many advantages:

  • You can store more data than the app’s roaming folder can handle.
  • The data is accessible across devices.
  • The end user is not confronted with logon screens or file pickers.
  • You don’t have to provide Cloud storage of your own.

There are some drawbacks too:

  • You’ll have to deal with the occasional latency, especially when uploading files to OneDrive.
  • If the client is not always connected, you might need a fallback mechanism to local storage (which uses a different file API).

Here’s the full source code of the sample app, it was created with Visual Studio 2013 for Windows 8.1. I cleared the app store association, so you’ll have to hook it to your own account: U2UC.WinRT.OneDriveSample.zip (5.1MB).

Enjoy!

Diederik

A Floating Behavior for Windows 8 Store apps

In my previous article I introduced a Floating Control for Windows Store apps, and hinted that it could be rewritten as a Behavior. Well, that’s exactly what I did. This article describes the Floating Behavior: it allows a ContentControl to be dragged around the screen through mouse or touch, while optionally keeping it within its parent and/or on screen. Here’s the Behavior in action; any similarity with the app from my previous article *is* intended:

FloatingBehavior

Blend Behaviors are very popular in WPF and Silverlight. They are classes that encapsulate interactive behavior that can be attached to visual elements, generally implemented by registering event handlers on the associated element. Behaviors were missing in WinRT until they were introduced with Windows 8.1 and Visual Studio 2013. For a more detailed introduction to Behaviors in WinRT 8.1, I refer to this article by Mark Smith.

In WinRT a Behavior is a class that implements the IBehavior interface. The classic Behavior<T> does not exist on this platform. But don’t worry: if you want to upgrade one of your legacy behaviors you can easily resurrect Behavior<T> yourself. There’s an example by Fons Sonnemans right here.
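The resurrection boils down to a small generic base class on top of IBehavior. Here's a minimal sketch of the pattern (Fons' version may differ in details):

```csharp
using Microsoft.Xaml.Interactivity;
using Windows.UI.Xaml;

// Minimal Behavior<T> sketch for WinRT 8.1; derived classes override
// OnAttached/OnDetaching, just like in WPF and Silverlight.
public abstract class Behavior<T> : DependencyObject, IBehavior
    where T : DependencyObject
{
    public T AssociatedObject { get; private set; }

    DependencyObject IBehavior.AssociatedObject
    {
        get { return this.AssociatedObject; }
    }

    public void Attach(DependencyObject associatedObject)
    {
        this.AssociatedObject = (T)associatedObject;
        this.OnAttached();
    }

    public void Detach()
    {
        this.OnDetaching();
        this.AssociatedObject = null;
    }

    protected virtual void OnAttached() { }
    protected virtual void OnDetaching() { }
}
```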

Here’s the class declaration for FloatingBehavior:

/// <summary>
/// Adds Floating Behavior to a ContentControl.
/// </summary>
public class FloatingBehavior : DependencyObject, IBehavior
{
    // ...
}

Apart from the mandatory interface implementations, the class comes with the exact same dependency properties (IsBoundByParent and IsBoundByScreen) and position calculations as the Floating Control.

Here’s how to connect the Behavior to a ContentControl in XAML:

<ContentControl>
    <interactivity:Interaction.Behaviors>
        <behaviors:FloatingBehavior IsBoundByParent="True" />
    </interactivity:Interaction.Behaviors>
    <!-- Content Here -->
</ContentControl>
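For this markup to compile, the page element needs the corresponding namespace declarations. The interactivity namespace comes from the Behaviors SDK; the behaviors prefix points at your own namespace, so the one below is just a placeholder:

```xml
xmlns:interactivity="using:Microsoft.Xaml.Interactivity"
xmlns:behaviors="using:MyApp.Behaviors"
```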

Before you can define or use a Behavior, you need to reference the Behaviors SDK that comes with Blend:

BehaviorsSDK

When a Behavior is attached to a XAML element, the Attach method is called. Herein we check if the Behavior is attached to a ContentControl. If that’s the case, we register an event handler for the Loaded event; if not, we complain:

private ContentControl contentControl;

/// <summary>
/// Attaches to the specified object.
/// </summary>
public void Attach(DependencyObject associatedObject)
{
    this.contentControl = associatedObject as ContentControl;
    if (this.contentControl == null)
    {
        throw new Exception("Floating Behavior only applies to ContentControl.");
    }
    else
    {
        this.contentControl.Loaded += ContentControl_Loaded;
    }
}

It’s in that Loaded event that we register the core handlers for ManipulationDelta and SizeChanged. These two actually define the Floating behavior, and are explained in the previous article. Before we hook up these event handlers, we need to cram a Canvas and a Border between the ContentControl and its parent. These are necessary for the position adjustment calculations. The FloatingControl carries these decorations in its style template, but the FloatingBehavior needs to do the plumbing in C#. And it’s actually a tedious operation: you have to pull the ContentControl out of its parent, wrap it in a Border in a Canvas, and then plug it back into the parent, which may retrigger the Loaded event. The pull-out and plug-in operations inconveniently depend on the parent’s type. I added implementations for Panel, ContentControl, and Border [why is Border not a ContentControl?], but I may be missing some potential host controls here:

/// <summary>
/// Handles the Loaded event of the ContentControl.
/// </summary>
private void ContentControl_Loaded(object sender, RoutedEventArgs e)
{
    // Make sure this only runs once.
    // The Loaded event is also fired when the contentcontrol is moved in the Visual Tree
    this.contentControl.Loaded -= ContentControl_Loaded;

    var parent = this.contentControl.Parent;
    if (parent is Panel)
    {
        var panel = parent as Panel;
        int i = panel.Children.IndexOf(this.contentControl);
        panel.Children.Remove(this.contentControl);
        panel.Children.Insert(i, this.Decorated);
    }
    else if (parent is ContentControl)
    {
        var cc = parent as ContentControl;
        cc.Content = null;
        cc.Content = this.Decorated;
    }
    else if (parent is Border)
    {
        var border = parent as Border;
        border.Child = null;
        border.Child = this.Decorated;
    }
    else
    {
        throw new Exception("Unexpected parent, please call a code monkey.");
    }

    this.frame = GetClosestParentWithSize(this.border);

    // No parent.
    if (this.frame == null)
    {
        // We probably never get here.
        return;
    }

    this.frame.SizeChanged += Floating_SizeChanged;
}

Wrapping the control is simple. The Decorated property returns a Canvas:

/// <summary>
/// Initializes and returns the decorated control.
/// </summary>
private Canvas Decorated
{
    get
    {
        // Canvas
        var canvas = new Canvas();
        canvas.Height = 0;
        canvas.Width = 0;
        canvas.VerticalAlignment = VerticalAlignment.Top;
        canvas.HorizontalAlignment = HorizontalAlignment.Left;

        // Border
        this.border = new Border();
        this.border.ManipulationMode = ManipulationModes.TranslateX | ManipulationModes.TranslateY | ManipulationModes.TranslateInertia;
        this.border.ManipulationDelta += this.Border_ManipulationDelta;

        // Move Canvas properties from control to border.
        Canvas.SetLeft(border, Canvas.GetLeft(this.contentControl));
        Canvas.SetLeft(this.contentControl, 0);
        Canvas.SetTop(border, Canvas.GetTop(this.contentControl));
        Canvas.SetTop(this.contentControl, 0);

        // Move Margin to border.
        this.border.Padding = this.contentControl.Margin;
        this.contentControl.Margin = new Thickness(0);

        // Connect the dots
        this.border.Child = this.contentControl as UIElement;
        canvas.Children.Add(this.border);

        return canvas;
    }
}

When the Behavior is removed from the ContentControl, the Detach method from IBehavior is called. That’s where we remove the event handlers:

/// <summary>
/// Detaches this instance from its associated object.
/// </summary>
public void Detach()
{
    this.contentControl.Loaded -= ContentControl_Loaded;
    this.frame.SizeChanged -= Floating_SizeChanged;
}

The floating behavior only applies to ContentControl so I’m ignoring the AssociatedObject property in my code since it is not strongly typed. Here’s the implementation, just for the sake of completeness:

/// <summary>
/// Gets the <see cref="T:Windows.UI.Xaml.DependencyObject" /> to which the <seealso cref="T:Microsoft.Xaml.Interactivity.IBehavior" /> is attached.
/// </summary>
/// <remarks>Not used. We prefer the strongly typed contentControl field.</remarks>
public DependencyObject AssociatedObject
{
    get
    {
        return this.contentControl;
    }
}

Here’s the Behavior in action when the app is resized: the restrained controls nicely stay within the green rectangle, or on screen:

FloatingBehaviorResized

I personally prefer the clean implementation of ‘floating’ as a Control over this Behavior version. But this example certainly proves that the new Windows 8.1 Store app Behaviors are capable of doing more complex things than you might have expected.

Here’s the source code, it was written in Visual Studio 2013 for Windows 8.1. The Behavior is in its own project: U2UC.WinRT.FloatingBehaviorSample.zip (736.5KB)

Enjoy!

Diederik

A Floating Control for Windows 8 Store apps

Windows 8 Store apps need to run on a huge number of screen resolutions. That makes positioning your controls not always an easy task. So why not let the end user decide where a control should be placed? This article describes how to build a XAML and C# ContentControl that can be dragged around (and off) the screen by using the mouse or touch. The control comes with dependency properties to optionally keep it on the screen or within the rectangle occupied by its parent control. These boundaries apply not only when the control is manipulated, but also when its parent resizes (e.g. when you open multiple apps or when the screen is rotated).

Here’s a screenshot of the attached sample app. The main page contains some instances of the so-called Floating control. Two of these are bound to their parent in the visual tree (the yellow rectangle), one is bound by the screen, and the remaining one is entirely free to go where you send it:

floating

I created the Floating control as a custom control that inherits from ContentControl, but I think it could be built as a behavior too. Here’s how to use it in your XAML:

<controls:Floating IsBoundByParent="True">
    <!-- Your content comes here -->
</controls:Floating>
<controls:Floating IsBoundByScreen="True">
    <!-- Your content comes here -->
</controls:Floating>

The default style of the Floating control is defined in the Themes\Generic.xaml file:

<!-- Floating Control Style -->
<Style TargetType="local:Floating">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="local:Floating">
                <!-- This Canvas never covers other controls -->
                <Canvas Background="Transparent"
                        Height="0"
                        Width="0"
                        VerticalAlignment="Top"
                        HorizontalAlignment="Left">
                    <!-- This Border handles the dragging -->
                    <Border x:Name="PART_Border"
                            ManipulationMode="TranslateX, TranslateY, TranslateInertia"  >
                        <ContentPresenter />
                    </Border>
                </Canvas>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

The heart of the control is a ContentPresenter that will contain whatever you put in it. It is hosted in a Border that responds to translation manipulations with inertia. That Border moves around within a Canvas that constitutes the outside of the Floating control. That Canvas has a zero height and width, so that it doesn’t cover other controls (e.g. other Floating controls within the same parent).

A Canvas doesn’t clip its content to its bounds, so the Border doesn’t really care about its parent being sizeless: it can be dragged around to everywhere. Unless we apply some restrictions: the Floating control comes with the boundary properties IsBoundByParent and IsBoundByScreen. These are defined as dependency properties:

/// <summary>
/// A Content Control that can be dragged around.
/// </summary>
[TemplatePart(Name = BorderPartName, Type = typeof(Border))]
public class Floating : ContentControl
{
    private const string BorderPartName = "PART_Border";

    public static readonly DependencyProperty IsBoundByParentProperty =
        DependencyProperty.Register("IsBoundByParent", typeof(bool), typeof(Floating), new PropertyMetadata(false));

    public static readonly DependencyProperty IsBoundByScreenProperty =
        DependencyProperty.Register("IsBoundByScreen", typeof(bool), typeof(Floating), new PropertyMetadata(false));

    private Border border;

    /// <summary>
    /// Initializes a new instance of the <see cref="Floating"/> class.
    /// </summary>
    public Floating()
    {
        this.DefaultStyleKey = typeof(Floating);
    }

    /// <summary>
    /// Gets or sets a value indicating whether the control is bound by its parent size.
    /// </summary>
    public bool IsBoundByParent
    {
        get { return (bool)GetValue(IsBoundByParentProperty); }
        set { SetValue(IsBoundByParentProperty, value); }
    }

    /// <summary>
    /// Gets or sets a value indicating whether the control is bound by the screen size.
    /// </summary>
    public bool IsBoundByScreen
    {
        get { return (bool)GetValue(IsBoundByScreenProperty); }
        set { SetValue(IsBoundByScreenProperty, value); }
    }
}

The control’s main job is to calculate the physical position of the Border against its parent Canvas. When the control is moved or when the parent resizes, it will adjust the Canvas.Left and Canvas.Top attached properties of the Border. An alternative approach would be to apply and configure a translation to that Border.
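That translation-based alternative could look roughly like this (a sketch, not the control’s actual code; it assumes the same border template part and replaces the Canvas-based positioning):

```csharp
// Sketch of the alternative: translate the border with a RenderTransform
// instead of adjusting its Canvas.Left/Top attached properties. The border
// field and the ManipulationDelta wiring are the same as in the control below.
private TranslateTransform translation;

private void Border_ManipulationDelta_Translate(object sender, ManipulationDeltaRoutedEventArgs e)
{
    if (this.translation == null)
    {
        this.translation = new TranslateTransform();
        this.border.RenderTransform = this.translation;
    }

    this.translation.X += e.Delta.Translation.X;
    this.translation.Y += e.Delta.Translation.Y;
}
```

The boundary checks would then have to be expressed in transform coordinates, which is one reason to stick to the attached Canvas properties.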

In the OnApplyTemplate we look for the Border in the style template, and register an event handler for ManipulationDelta:

protected override void OnApplyTemplate()
{
    // Border
    this.border = this.GetTemplateChild(BorderPartName) as Border;
    if (this.border != null)
    {
        this.border.ManipulationDelta += this.Border_ManipulationDelta;
    }
    else
    {
        // Exception
        throw new Exception("Floating Control Style has no Border.");
    }

    this.Loaded += Floating_Loaded;
}

In that same method we also apply some adjustments to drastically simplify the calculations. The Floating control may be hosted in a Canvas with a Top and Left, or it may be defined with a Margin around it. Since we’re controlling the position of the Border, not the Floating, I decided to let the Border take over these settings. Canvas properties are stolen from the Floating control, and the Margin outside the Floating is transformed into a Padding inside the Border:

// Move Canvas properties from control to border.
Canvas.SetLeft(this.border, Canvas.GetLeft(this));
Canvas.SetLeft(this, 0);
Canvas.SetTop(this.border, Canvas.GetTop(this));
Canvas.SetTop(this, 0);

// Move Margin to border.
this.border.Padding = this.Margin;
this.Margin = new Thickness(0);

When the control is loaded, we look up the parent to hook an event handler for the SizeChanged event:

private void Floating_Loaded(object sender, RoutedEventArgs e)
{
    FrameworkElement el = GetClosestParentWithSize(this);
    if (el == null)
    {
        return;
    }

    el.SizeChanged += Floating_SizeChanged;
}

Observe that we don’t just look for a resize of the control itself or its direct parent. That’s because these could have an actual height and width of zero – and hence would be ignored by SizeChanged. That seems to happen very often to Grid and Canvas controls, typical parents for a Floating. So we’re actually looking for the closest parent with a real size:

/// <summary>
/// Gets the closest parent with a real size.
/// </summary>
private FrameworkElement GetClosestParentWithSize(FrameworkElement element)
{
    while (element != null && (element.ActualHeight == 0 || element.ActualWidth == 0))
    {
        element = element.Parent as FrameworkElement;
    }

    return element;
}

When the Border is moved around, we calculate its desired position, and then adjust it so it stays within the boundaries:

private void Border_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    var left = Canvas.GetLeft(this.border) + e.Delta.Translation.X;
    var top = Canvas.GetTop(this.border) + e.Delta.Translation.Y;

    Rect rect = new Rect(left, top, this.border.ActualWidth, this.border.ActualHeight);
    AdjustCanvasPosition(rect);
}

When the parent is resized, we apply the same logic to the current position of the Border:

private void Floating_SizeChanged(object sender, SizeChangedEventArgs e)
{
    var left = Canvas.GetLeft(this.border);
    var top = Canvas.GetTop(this.border);

    Rect rect = new Rect(left, top, this.border.ActualWidth, this.border.ActualHeight);
    AdjustCanvasPosition(rect);
}

If one or both of the boundary properties is set, then we may need to apply a correction to the Canvas.Top and Canvas.Left of the Border. That’s what the following methods do:

/// <summary>
/// Adjusts the canvas position according to the IsBoundBy* properties.
/// </summary>
private void AdjustCanvasPosition(Rect rect)
{
    // No boundaries
    if (!this.IsBoundByParent && !this.IsBoundByScreen)
    {
        Canvas.SetLeft(this.border, rect.Left);
        Canvas.SetTop(this.border, rect.Top);

        return;
    }

    FrameworkElement el = GetClosestParentWithSize(this);

    // No parent
    if (el == null)
    {
        // We probably never get here.
        return;
    }

    var position = new Point(rect.Left, rect.Top);

    if (this.IsBoundByParent)
    {
        Rect parentRect = new Rect(0, 0, el.ActualWidth, el.ActualHeight);
        position = AdjustedPosition(rect, parentRect);
    }

    if (this.IsBoundByScreen)
    {
        var ttv = el.TransformToVisual(Window.Current.Content);
        var topLeft = ttv.TransformPoint(new Point(0, 0));
        Rect parentRect = new Rect(topLeft.X, topLeft.Y, Window.Current.Bounds.Width - topLeft.X, Window.Current.Bounds.Height - topLeft.Y);
        position = AdjustedPosition(rect, parentRect);
    }

    // Set new position
    Canvas.SetLeft(this.border, position.X);
    Canvas.SetTop(this.border, position.Y);
}

/// <summary>
/// Returns the adjusted top-left position of a rectangle, so that it stays within a parent rectangle.
/// </summary>
private Point AdjustedPosition(Rect rect, Rect parentRect)
{
    var left = rect.Left;
    var top = rect.Top;

    if (left < -parentRect.Left)
    {
        left = -parentRect.Left;
    }
    else if (left + rect.Width > parentRect.Width)
    {
        left = parentRect.Width - rect.Width;
    }

    if (top < -parentRect.Top)
    {
        top = -parentRect.Top;
    }
    else if (top + rect.Height > parentRect.Height)
    {
        top = parentRect.Height - rect.Height;
    }

    return new Point(left, top);
}

Here’s what happens when the app is resized or rotated: the bound Floating controls remain inside the box and/or on screen:

floating_resized

floating_rotated

Here’s the full source code. The Floating control is immediately reusable, since it lives in its own project: U2UC.WinRT.FloatingSample.zip (1.9MB)

Enjoy!

Diederik

LexDB performance tuning in a Windows 8 Store app

This article explains how to create fast queries against a LexDB database in a Windows 8 Store app, and how to keep these queries fast. LexDB is a lightweight, in-process object database engine. It is written in C# and can be used in .NET, Silverlight, Windows Phone, Windows Store, and Xamarin projects. For an introduction to using the engine in a Windows 8 Store app, I refer to a previous blog article of mine.

LexDB is an object database; it is not as relational as e.g. SQLite. But it still comes with the possibility of indexing, and it has commands to reorganize the stored data and release storage. This article zooms in on these features. Here’s a screenshot of the attached sample app. It lets you set the size of the Person table to measure against, and it comes with buttons to trigger a database reorganization (when you notice that query performance goes down) and to start measuring a SELECT statement using different indexes:

lex_screenshot

The app is of course a port of my previous blog post on SQLite. It stores the same business object (Person):

/// <summary>
/// Represents a person.
/// </summary>
internal class Person
{
    /// <summary>
    /// Gets or sets the identifier.
    /// </summary>
    public int Id { get; set; }

    /// <summary>
    /// Gets or sets the name.
    /// </summary>
    public string Name { get; set; }

    /// <summary>
    /// Gets or sets the description.
    /// </summary>
    public string Description { get; set; }

    /// <summary>
    /// Gets or sets the status.
    /// </summary>
    public int Status { get; set; }

    /// <summary>
    /// Gets or sets the day of birth.
    /// </summary>
    public DateTime BirthDay { get; set; }

    /// <summary>
    /// Gets or sets the picture.
    /// </summary>
    public byte[] Picture { get; set; }
}

When I migrated the database code from SQLite to LexDB I immediately noticed that SELECT statements in LexDB are pretty fast, but INSERT and DELETE statements are an order of magnitude slower, and run slower as the table size increases. That’s definitely something to consider, but as long as you stick to asynchronous calls, and as long as your app does not do any bulk operations, it’s nothing to really worry about. The performance of INSERT and DELETE statements surely had an impact on my sample app: the original app simply recreated the table when the target size was changed. With LexDB this isn’t an option anymore: the app now adds only the missing objects, or removes them. This allows you to gradually build up a large test table. So when you set a new size for the table, please use relatively small increments (depending on your hardware): you’d be amazed how long it takes to insert 1000 new objects into the table!
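The incremental resize logic boils down to comparing the current row count with the target size and only saving or deleting the difference. A sketch (the Save and DeleteByKeys overloads are my reading of the LexDB API, and CreateTestPerson is a hypothetical factory for test data):

```csharp
// Grow or shrink the Person table to the requested size, instead of
// recreating it. Sketch only: CreateTestPerson is a hypothetical helper.
public static void ResizeTable(int targetSize)
{
    var table = db.Table<Person>();
    var current = table.Count();

    if (current < targetSize)
    {
        // Insert only the missing objects, in a single call.
        var newcomers = Enumerable.Range(current, targetSize - current)
                                  .Select(i => CreateTestPerson(i))
                                  .ToList();
        table.Save(newcomers);
    }
    else if (current > targetSize)
    {
        // Remove the surplus objects by primary key.
        var surplusKeys = table.LoadAll()
                               .Take(current - targetSize)
                               .Select(p => p.Id)
                               .ToList();
        table.DeleteByKeys(surplusKeys);
    }
}
```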

Here’s the initialization of the database. It logs the whereabouts of the physical files so you can monitor these. And it creates a table with two indexes: an index only on name, and a ‘covering’ index on name and status:

static Dal()
{
    // Reveal the location of the database folder
    Debug.WriteLine(string.Format(@"Databases are stored at {0}\Lex.Db\.", ApplicationData.Current.LocalFolder.Path));

    // Create database
    db = new DbInstance("storage");

    // Define table mapping
    db.Map<Person>().Automap(p => p.Id, true).WithIndex("ix_Person_Name", p => p.Name).WithIndex("ix_Person_Name_Status", p => p.Name, p => p.Status);

    // Initialize database
    db.Initialize();
}

Here’s the benchmark query that retrieves a filtered list of persons:

public static List<Person> GetPersons()
{
    return db.Table<Person>().Where(p => p.Name == "James" && p.Status == 2).ToList();
}

Since LexDB is an object database, any query against the Person table will return a list of fully populated Person instances. There’s no way to do any projection to return only Id, Name, and Status. [Maybe there is a way, but I didn’t find it. After all, the official documentation is a short series of blog posts and the source code.] All query plans will eventually end up in the base table.
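If you want a lighter result shape anyway, the only option seems to be projecting in memory, after the full objects have been materialized (a sketch; PersonSummary is an illustrative DTO, not part of the sample):

```csharp
// LexDB always materializes full Person objects; project to a lighter
// shape afterwards, in memory, with plain LINQ-to-objects.
public class PersonSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Status { get; set; }
}

public static List<PersonSummary> GetPersonSummaries()
{
    return db.Table<Person>()
             .Where(p => p.Name == "James" && p.Status == 2)
             .Select(p => new PersonSummary { Id = p.Id, Name = p.Name, Status = p.Status })
             .ToList();
}
```

This saves nothing on the database side - the full rows are still read - but it keeps the heavy Picture blobs out of the objects you hand to the rest of the app.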

If you want to use an index for a query, then you have to tell it to the engine upfront:

public static List<Person> GetPersonsIndex()
{
    // Horrible performance
    // return db.Table<Person>().IndexQueryByKey("ix_Person_Name", "James").ToLazyList().Where(p => p.Value.Status == 2).Select(p => p.Value).ToList();

    return db.Table<Person>().IndexQueryByKey("ix_Person_Name", "James").ToList().Where(p => p.Status == 2).ToList();
}

The IndexQueryByKey returns the primary keys of the requested objects (WHERE name=?), and the query plan continues in the base table (file) to filter out the remaining objects (WHERE status=?). That’s why I didn’t notice any performance improvements: in most cases, the raw query ran even faster.

lex_slow_index

So unless you’re looking for a very scarce value, a regular index on a LexDB table will NOT be very helpful. The same is of course true in SQL Server: indexes with low selectivity will be ignored. But in a relational database it’s the engine that decides whether or not to use an index; here the decision is up to you.

So let’s verify the impact of a covering index. Mind that the term ‘covering’ here only applies to the WHERE-clause, since there’s no way to skip the pass through the base table:

public static List<Person> GetPersonsCoveringIndex()
{
    return db.Table<Person>().IndexQueryByKey("ix_Person_Name_Status", "James", 2).ToList();
}

As you see in the above app screenshots, the ‘covered’ query runs faster than the original query in all cases. But on the other hand: the difference is not that significant and will probably not be noticed by the end users. I can imagine that the real added value of these indexes will appear in more complex queries (e.g. when joining multiple tables).

Let’s jump to the administrative part. The fragmentation caused by INSERT, UPDATE and DELETE statements has a bad influence on indexes: they get gradually slower over time. In most relational databases this is a relatively smooth process. It can be stopped by reorganizing or rebuilding the index. In LexDB this also happens, but the degradation is a less than smooth process. If you add and remove a block of, say, 1000 objects a couple of times, you’ll observe only subtle changes in the response time of the SELECT statements. Then very suddenly the basic SELECT statement runs about ten times slower, while the indexed queries continue to do their job with the same response time.

Lex_before_fragmentation

At that moment, it’s time to rebuild the base table, and optionally release storage:

public static void Reorganize()
{
    Debug.WriteLine("Before compacting:");
    LogPersonInfo();

    // Reorganizes the file (huge impact on performance).
    db.Table<Person>().Compact();

    // For the sake of completeness.
    db.Flush<Person>();
    db.Flush();

    Debug.WriteLine("After compacting:");
    LogPersonInfo();
}

private static void LogPersonInfo()
{
    var info = db.Table<Person>().GetInfo();
    Debug.WriteLine(string.Format("* Person Data Size: {0}", info.DataSize));
    Debug.WriteLine(string.Format("* Person Effective Data Size: {0}", info.EffectiveDataSize));
    Debug.WriteLine(string.Format("* Person Index Size: {0}", info.IndexSize));
}

Immediately, everything is back to normal:

Lex_after_fragmentation

Compacting a fragmented table is a very rapid operation (less than a second) and it has an immediate result on the queries.

LexDB_Compacting

Compacting the data is something you may want to do on start-up, e.g. in an extended splash screen routine.
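Such a start-up compaction could be as simple as pushing the existing Reorganize call onto a background thread while the extended splash screen is showing. A sketch (the method name and the MainPage navigation target are assumptions, not the sample's code):

```csharp
// Sketch: run the existing Dal.Reorganize() off the UI thread while the
// extended splash screen is visible. The splash screen plumbing is omitted.
private async Task RunStartupMaintenanceAsync()
{
    await Task.Run(() => Dal.Reorganize());

    // Only navigate to the main page once the maintenance is done.
    var rootFrame = Window.Current.Content as Frame;
    if (rootFrame != null)
    {
        rootFrame.Navigate(typeof(MainPage));
    }
}
```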

Even though LexDB is not as advanced as SQLite, it comes with the necessary infrastructure to get some speed and maintain it.

That’s all for today. Here’s the code, it was written in Visual Studio 2013 for Windows 8.1: U2UC.WinRT.LexDbIndexing.zip (852.7KB)

Enjoy!

Diederik

SQLite performance tuning in a Windows 8 Store app

This article explains how to monitor and optimize a SQLite query in a Windows 8 Store app by adding indexes and/or rewriting the query. I’ll be using the WinRT SQLite wrapper from the Visual Studio Gallery. I assume that you know how to install and use it, but feel free to check a previous blog post of mine for an introduction. As usual I created a small app. It allows you to select the number of records to insert in a table of Persons. The app then measures the execution time of a SELECT statement against that table, using four different strategies. Here’s what the app looks like:

screenshot_02032014_084315

This is the definition of Person, a more or less representative business model class:

/// <summary>
/// Represents a person.
/// </summary>
internal class Person
{
    /// <summary>
    /// Gets or sets the identifier.
    /// </summary>
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    /// <summary>
    /// Gets or sets the name.
    /// </summary>
    [MaxLength(64)]
    public string Name { get; set; }

    /// <summary>
    /// Gets or sets the description.
    /// </summary>
    public string Description { get; set; }

    /// <summary>
    /// Gets or sets the status.
    /// </summary>
    /// <remarks>Is an enum in the viewmodel.</remarks>
    public int Status { get; set; }

    /// <summary>
    /// Gets or sets the day of birth.
    /// </summary>
    public DateTime BirthDay { get; set; }

    /// <summary>
    /// Gets or sets the picture.
    /// </summary>
    /// <remarks>Is a blob in the database.</remarks>
    public byte[] Picture { get; set; }
}

The query that we’re going to monitor uses projection (not all columns are fetched – no SELECT *) as well as filtering (not all rows are fetched – there’s a WHERE clause). We’re only interested in the Id, Name, and Status columns of the persons named James with a status of 2. I started with the following query:

public static List<Person> GetPersonsOriginal()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        var result = from p in db.Table<Person>()
                     where p.Name == "James" && p.Status == 2
                     select new Person() { Id = p.Id, Name = p.Name, Status = p.Status };

        return result.ToList();
    }
}

To my big surprise I noticed that the non-selected fields (e.g. Description and Image) were filled too in the result set. The projection step was clearly not executed:

allfields

Next, I tried to return an anonymous type:

public static List<object> GetPersonsAnonymous()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        var result = from p in db.Table<Person>()
                     where p.Name == "James" && p.Status == 2
                     select new { Id = p.Id, Name = p.Name, Status = p.Status };

        return result.ToList<object>();
    }
}

Unfortunately it threw an exception:

anonymous

I should have known: the WinRT wrapper relies heavily on LINQ but the SQLite runtime itself is written in unmanaged C, so it’s allergic to anonymous .NET classes.

It turns out that when using the SQLite wrapper to run a query that returns a result set, you have to provide the signature of that result. The LINQ query calls Query<T> in the SQLite class and needs to provide the type parameter. Since I just needed a couple of columns, I created a Result class to host the query’s return values:

internal class Result
{
    /// <summary>
    /// Gets or sets the identifier.
    /// </summary>
    public int Id { get; set; }

    /// <summary>
    /// Gets or sets the name.
    /// </summary>
    public string Name { get; set; }

    /// <summary>
    /// Gets or sets the status.
    /// </summary>
    public int Status { get; set; }
}

Here’s what the second version of the query looks like:

public static List<Result> GetPersons()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        var result = from p in db.Table<Person>()
                     where p.Name == "James" && p.Status == 2
                     select new Result() { Id = p.Id, Name = p.Name, Status = p.Status };

        return result.ToList();
    }
}

I must admit that -even for a very large table- the results come back very rapidly. I guess that most apps do not require any SQLite performance tuning. Asynchronous SELECT statements will suffice, and these are provided by the WinRT wrapper. SQLite is so fast because it is a genuine relational database: its data is not simply serialized, but stored and (proactively) cached in fixed-size pages that are hooked into B-trees (for indexes) or B+-trees (for tables). Here’s an overview of the internal mechanisms, the illustration comes out of the definitive guide to SQLite.

SQLiteArchitecture

As a first optimization, I decorated the database with an index on the Name column:

public static void CreateIndex()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        db.Execute("CREATE INDEX `ix_person_name` ON `Person` (`Name` ASC)");
    }
}

The query ran about three times faster. That’s not bad.

Then I added an index on all requested fields, a so-called covering index. Theoretically this is the fastest way to get the data: everything is found in the index itself, there’s no need to read the table. Here’s the covering index definition:

public static void CreateCoveringIndex()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        db.Execute("CREATE INDEX `ix_person_name_status` ON `Person` (`Name` ASC, `Status` ASC)");
    }
}

On average, this query runs four times faster than the original. It’s getting better.

Then I activated the tracing and started to add the extra measurements, e.g. on the creation of the indexes. The database trace revealed the query for the so-called covering index:

select * from "Person" where (("Name" = ?) and ("Status" = ?))

That’s still a SELECT *, so the LINQ-based query was actually selecting ALL of the columns. It turns out that the SQLite wrapper calls Query<T> with the table type (Person) as T. I expected it would use the projected type (Result). That’s the reason why all fields were populated in the very first query. I wanted to measure a real covering index, so I decided to call Query<T> myself, with a custom query (SQL-based rather than LINQ-based) that fetches only the projected columns:

public static List<Result> GetPersons2()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        var result = db.Query<Result>("select Id, Name, Status from Person where ((Name = ?) and (Status = ?))", "James", 2);

        return result.ToList();
    }
}

[Instead of providing the Result type, I think I could have created a TableMapping programmatically. TableMapping smells like SQLite’s version of a strongly typed ADO.NET dataset. When I come to think of it: it would be nice to have a designer for this in Visual Studio.]

Anyway, this last query runs about five times faster than the original one, and the trace was not showing a SELECT * anymore:

select Id, Name, Status from Person where ((Name = ?) and (Status = ?))

But still I wanted to double check the used strategy: SQLite has the “EXPLAIN QUERY PLAN” syntax to reveal the actual query plan. After some failed attempts, I decided to apply the same approach as for the custom SELECT query: define a type for the result set, and call Query<T>. Neither in SQLite nor in the wrapper did I find a way to discover the signature of the result. It would be nice to have something like SET FMTONLY from SQL Server. Fortunately I found a description of the results in the official SQLite documentation. Here’s the structure of a SQLite query plan line:

public class QueryPlanLine
{
    public int selectid { get; set; }

    public int order { get; set; }

    public int from { get; set; }

    public string detail { get; set; }
}

Here’s a method to log a query plan:

public static void GetQueryPlan(string query)
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        // Get query plan
        List<QueryPlanLine> queryPlan = db.Query<QueryPlanLine>(string.Format("explain query plan {0}", query));
        foreach (var line in queryPlan)
        {
            Debug.WriteLine(line.detail);
        }
    };
}

Here’s the list of different query plans for the four tested strategies: it ranges from a table scan to a covering index:

tracing

Achievement unlocked! :-)

Here’s the app again after it went through the different optimizations. It is clearly getting faster at each step:

screenshot_02032014_084453

Just remember that in a busy database the indexes need to be rebuilt/reorganized from time to time. That can be done in SQLite with the REINDEX statement. Here’s how to rebuild the covering index:

public static void ReorganizeCoveringIndex()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        db.Execute("REINDEX `ix_person_name_status`");
    }
}

I guess you could build an auto-tuning mechanism that discovers slow indexes with the ANALYZE statement, and then rebuilds these. But it’s probably easier to rebuild the whole database from time to time *and* release disk space whilst doing that. The magic word in SQLite for that is: VACUUM.

public static void RebuildDatabase()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        db.Execute("VACUUM");
    }
}
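For completeness, the ANALYZE statement mentioned above follows the same pattern as the other maintenance methods (a sketch; acting on the gathered statistics is left as an exercise):

```csharp
public static void AnalyzeDatabase()
{
    using (var db = new SQLiteConnection(DbPath))
    {
        // Activate Tracing
        db.Trace = true;

        // Gathers statistics about tables and indexes into sqlite_stat1,
        // where the query planner (and you) can inspect them.
        db.Execute("ANALYZE");
    }
}
```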

Although SQLite is impressively fast, I was able to make a simple query on a single table more than 5 times faster with just a small effort. I assume that you could achieve even better results on a complex join. You can easily performance tune SQLite queries by creating indexes and/or taking control of the SELECT statement yourself.

Here’s the code, it was written in Visual Studio 2013 for Windows 8.1: U2UC.WinRT.SQLiteIndexing.zip (257.7KB)

Enjoy!

Diederik

Choosing the right serialization engine for your Windows Store app

Most Windows 8 Store apps spend a significant amount of time saving and retrieving local data. The local file system is often used as the main storage. But even apps that come with server-side storage often need local storage: to host a cache for when there’s no network available, or whilst the server data is still downloading. If you don’t use a third-party local database (like SQLite) then you have to manage the persistence (i.e. the serialization and deserialization of your objects) yourself. This article introduces you to the 4 main serialization engines that are available to Windows 8 Store apps: the 3 native ones (XmlSerializer, DataContractSerializer, and DataContractJsonSerializer), together with one popular serializer that’s not in the framework: the Json.NET serializer.

I built a little benchmarking app to compare these serializers. It measures the duration and validates the result of a workflow that consists of serializing, locally saving, and deserializing a list of business objects. The app lets you select the number of objects to be processed, the location where the file should be saved (Local Folder or Roaming Folder), and whether or not to compare the deserialized objects with the original ones.

Here’s how the app looks:
serializers
The Item class represents the business object. It has properties of different types: strings, datetimes, an enumeration, a link to another business object (called SubItem), and a calculated property that we don’t want to serialize. My intention is to use this app for testing real app models, by replacing this Item class with the appropriate model class(es). An extension class contains methods to generate test data for the instances, and to compare the content of two instances (overriding Equals might be an alternative).
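Stripped of its serialization attributes, the shape of that business object is roughly this. The property names are assumptions pieced together from the snippets further down; the Priority enumeration and SubItem class are shown here in minimal hypothetical form just to keep the sketch self-contained.

```csharp
using System;

public enum Priority { Low, Normal, High }

public class SubItem
{
    public string Name { get; set; }
}

public class Item
{
    public string Name { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
    public Priority Priority { get; set; }  // an enumeration
    public SubItem SubItem { get; set; }    // link to another business object

    // Calculated - should not be serialized.
    public TimeSpan Duration
    {
        get { return this.End - this.Start; }
    }
}
```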

We’re testing 4 candidate serialization engines here, but maybe I’m missing some. So to be able to extend the list of engines in the benchmark, I hooked them into a framework. The app talks to the serializers through an abstract base class:

Abstract Serialization Base
/// <summary>
/// Abstract base class for serializers.
/// </summary>
/// <typeparam name="T">The type of the instance.</typeparam>
public abstract class AbstractSerializationBase<T> where T : class
{
    /// <summary>
    /// Serializes the specified instance.
    /// </summary>
    /// <param name="instance">The instance.</param>
    /// <returns>The size of the serialized instance, in KB.</returns>
    public abstract Task<int> Serialize(T instance);

    /// <summary>
    /// Deserializes the instance.
    /// </summary>
    /// <returns>The instance.</returns>
    public abstract Task<T> Deserialize();

    /// <summary>
    /// Gets or sets the name of the file.
    /// </summary>
    public string FileName { get; set; }

    /// <summary>
    /// Gets or sets the folder.
    /// </summary>
    public StorageFolder Folder { get; set; }
}

Each engine gets a concrete subclass. Here’s a short overview of the technologies:

XmlSerializer

Here’s how to serialize and deserialize with the XmlSerializer:

XmlSerializer
public override async Task<int> Serialize(T instance)
{
    XmlSerializer serializer = new XmlSerializer(typeof(T));
    StringWriter stringWriter = new StringWriter();
    serializer.Serialize(stringWriter, instance);
    string content = stringWriter.ToString();
    StorageFile file = await this.Folder.CreateFileAsync(this.FileName, CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, content);

    return content.Length / 1024;
}

public override async Task<T> Deserialize()
{
    StorageFile file = await this.Folder.GetFileAsync(this.FileName);
    string content = await FileIO.ReadTextAsync(file);
    XmlSerializer serializer = new XmlSerializer(typeof(T));

    return (T)serializer.Deserialize(new StringReader(content));
}


The XmlSerializer saves all public properties in the object graph, except the ones that are decorated with XmlIgnore:

XmlIgnore Attribute
// Calculated - should not be serialized
[XmlIgnore]
public TimeSpan Duration
{
    get
    {
        return this.End - this.Start;
    }
}

 

The XML that is produced by this serializer can be shaped through attributes, but that is outside the scope of this article. The XmlSerializer is the only one that does NOT fire the methods that are flagged with the OnSerializing, OnSerialized, OnDeserializing and/or OnDeserialized attributes. That may be a showstopper in some scenarios. Also notice that the XmlSerializer by default removes insignificant white space, even in element content. If you don’t like that, you can change the setting. It’s not a configuration of the serializer itself: you have to add an extra field in each class to be serialized. Here’s a snippet from the Item and SubItem classes:

Setting 'xml:space' value
// Tell the XmlSerializer to preserve white space.
[XmlAttribute("xml:space")]
public string Space = "preserve";
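For reference, the On[De]Serializ[ing|ed] methods that the XmlSerializer skips are just void methods with a StreamingContext parameter, decorated with one of the four attributes from System.Runtime.Serialization. A minimal sketch:

```csharp
using System.Diagnostics;
using System.Runtime.Serialization;

[DataContract]
public class Item
{
    [DataMember]
    public string Name { get; set; }

    // Fires for the DataContractSerializer, the JsonSerializer,
    // and Json.NET - but NOT for the XmlSerializer.
    [OnSerializing]
    internal void OnSerializingMethod(StreamingContext context)
    {
        Debug.WriteLine("Serializing {0} ...", this.Name);
    }

    [OnDeserialized]
    internal void OnDeserializedMethod(StreamingContext context)
    {
        Debug.WriteLine("Deserialized {0}.", this.Name);
    }
}
```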


The XmlSerializer is not the fastest, nor does it generate the smallest files. But in every scenario that I went through, it has beaten all the other engines hands down when it came to deserialization. On average, the XmlSerializer deserializes twice as fast as the competition. So when your app needs to read a large amount of data from local storage at startup, then you should choose this one.

DataContractSerializer

The DataContractSerializer is the one used by WCF. Here’s how to serialize and deserialize with it:

DataContractSerializer
public override async Task<int> Serialize(T instance)
{
    DataContractSerializer serializer = new DataContractSerializer(typeof(T));
    string content = string.Empty;
    using (var stream = new MemoryStream())
    {
        serializer.WriteObject(stream, instance);
        stream.Position = 0;
        content = new StreamReader(stream).ReadToEnd();
    }

    StorageFile file = await this.Folder.CreateFileAsync(this.FileName, CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, content);

    return content.Length / 1024;
}

public override async Task<T> Deserialize()
{
    StorageFile file = await this.Folder.GetFileAsync(this.FileName);
    var inputStream = await file.OpenReadAsync();
    DataContractSerializer serializer = new DataContractSerializer(typeof(T));

    return (T)serializer.ReadObject(inputStream.AsStreamForRead());
}

 

It will serialize all public properties that are decorated with the DataMember attribute:

DataMember Attribute
[DataMember]
public string Name { get; set; }


During the process, it will fire the OnSerializing, OnSerialized, OnDeserializing and OnDeserialized methods. It serializes the fastest. The files are smaller than the ones produced by the XmlSerializer, but not significantly.

JsonSerializer

Here’s how to serialize and deserialize with the JsonSerializer:

Native JsonSerializer
public override async Task<int> Serialize(T instance)
{
    var serializer = new DataContractJsonSerializer(instance.GetType());
    string content = string.Empty;
    using (MemoryStream stream = new MemoryStream())
    {
        serializer.WriteObject(stream, instance);
        stream.Position = 0;
        content = new StreamReader(stream).ReadToEnd();
    }

    StorageFile file = await this.Folder.CreateFileAsync(this.FileName, CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, content);

    return content.Length / 1024;
}

public override async Task<T> Deserialize()
{
    StorageFile file = await this.Folder.GetFileAsync(this.FileName);
    string content = await FileIO.ReadTextAsync(file);
    var bytes = Encoding.Unicode.GetBytes(content);
    var serializer = new DataContractJsonSerializer(typeof(T));

    return (T)serializer.ReadObject(new MemoryStream(bytes));
}


It uses the same DataMember attribute as the DataContractSerializer to flag the properties to be serialized. During the process, it will fire the OnSerializing, OnSerialized, OnDeserializing and OnDeserialized methods. It serializes a bit faster than the XmlSerializer, and the saved files are undoubtedly smaller (although not always significantly). If you save in the Roaming Folder (with its limited storage) you should consider using Json (but keep on reading: there’s another Json serializer in the benchmark). Unfortunately the JsonSerializer is by far the slowest when it comes to deserialization, and it crashes on uninitialized DateTime values. The .NET default of DateTime.MinValue is beyond its range:
serializer_json_datetime

So one way or another you have to make sure that your DateTime values remain in the Json range. This is what I did in the constructor of the business class:

Json DateTime Range
public Item()
{
    // To support Json serialization
    this.Start = DateTime.MinValue.ToUniversalTime();
    this.End = DateTime.MinValue.ToUniversalTime();
}

I’m definitely not feeling comfortable with this.

Json.NET Serializer

Before you can use this serializer, you have to add a reference to the Json.NET assembly (e.g. through NuGet). Here’s how to serialize and deserialize with it:

NewtonSoft Json.NET Serializer
public override async Task<int> Serialize(T instance)
{
    string content = string.Empty;
    var serializer = new JsonSerializer();
    // Lots of possible configurations:
    // serializer.PreserveReferencesHandling = PreserveReferencesHandling.All;
    // Nice for debugging:
    // content = JsonConvert.SerializeObject(instance, Formatting.Indented);
    content = JsonConvert.SerializeObject(instance);
    StorageFile file = await this.Folder.CreateFileAsync(this.FileName, CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, content);

    return content.Length / 1024;
}

public override async Task<T> Deserialize()
{
    StorageFile file = await this.Folder.GetFileAsync(this.FileName);
    string content = await FileIO.ReadTextAsync(file);

    return JsonConvert.DeserializeObject<T>(content);
}

When defining the properties to be serialized, you have the choice between opting in (serialized properties must be tagged with JsonProperty OR DataMember) or opting out (all public properties are serialized, except the ones flagged with JsonIgnore). Here’s an extract from the Item class again:

Json Attributes
[JsonObject(MemberSerialization.OptIn)]
public class Item
{
    [JsonProperty]
    public DateTime Start { get; set; }

    // Calculated - should not be serialized
    [JsonIgnore]
    public TimeSpan Duration
    {
        get
        {
            return this.End - this.Start;
        }
    }

    // ...
}

During the process, the Json.NET serializer will fire the OnSerializing, OnSerialized, OnDeserializing and OnDeserialized methods. It’s fast and it generates the smallest files. It’s also the most configurable: it comes with a lot of extra attributes and configuration settings. All of that comes with a price of course: the Newtonsoft.Json.dll adds more than 400KB to your app package. If you ask me, that’s a small price…
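To give an idea of that configurability, here’s a sketch with a few of the available JsonSerializerSettings. Which of these make sense depends entirely on your model; this fragment assumes it’s dropped into the Serialize/Deserialize pair from the snippet above, so 'instance' and the type parameter T come from there.

```csharp
var settings = new JsonSerializerSettings
{
    // Keep object references, so cyclic graphs don't blow up.
    PreserveReferencesHandling = PreserveReferencesHandling.Objects,
    // Skip null properties to shave some bytes off the file.
    NullValueHandling = NullValueHandling.Ignore,
    // Human-readable output, handy while debugging.
    Formatting = Formatting.Indented
};

string content = JsonConvert.SerializeObject(instance, settings);
T roundTripped = JsonConvert.DeserializeObject<T>(content, settings);
```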

Conclusions

Of course you have to test these serialization engines against your own data and workload. But here are at least some general observations:

  • For small amounts of data, it doesn’t matter which technology you use. But adding Json.NET would just make your package larger.
  • Try to stick to one technology in your app. If you’re already using Json to fetch your server data, then use the same serializer to save locally.
  • If you’re dealing with large amounts of data, prepare to handle OutOfMemory exceptions. Unsurprisingly these are thrown when you run out of memory:
    serializer_out_of_memory
  • But an OutOfMemory exception is also thrown when you run out of storage. I didn’t find any documentation of Local Storage limitations, but I do get exceptions when trying to allocate more than 100MB:
    serializer_out_of_storage
  • If on startup you need to deserialize a large amount of data, prefer the XmlSerializer.
  • If you need one or more of the On[De]Serializ[ing][ed] methods, then don’t use the XmlSerializer.
  • If you need to store and retrieve local data, but none of the serialization engines covers your requirements, then normalization and indexing is what you need. Well, it’s time for a real database then.

Code

Here’s the full code for the sample app. It was written in Visual Studio 2013, for Windows 8.1: U2UConsult.WinRT.SerializationSample.zip (3MB)

Enjoy,
Diederik

Using the Windows 8.1 Hub as an ItemsControl

This article presents a Windows 8.1 Hub control with ItemsSource and ItemTemplate properties, making it easily bindable and more MVVM-friendly. The Hub control has become the main host on the startup screen of many Windows Store apps: it’s flexible but still presents a standard look-and-feel with the title and section headers at the appropriate location, it nicely scrolls horizontally, and it comes with semantic zoom capabilities (well, at least with some help). Although it visually presents a list of items, it’s not an ItemsControl: the Hub control expects you to provide its sections and their corresponding data template more or less manually. Let’s solve that issue and create a Hub control that’s closer to an ItemsControl:

I initially started by hooking up attached properties to the existing Hub class, but then I discovered that this class is not sealed. So I created a subclass, named ItemsHub:

public class ItemsHub : Hub
{
	// ...
}

Then I just added some of the missing properties as dependency properties (using the propdp snippet): ItemTemplate as a DataTemplate, and ItemsSource as an IList.

public DataTemplate ItemTemplate
{
    get { return (DataTemplate)GetValue(ItemTemplateProperty); }
    set { SetValue(ItemTemplateProperty, value); }
}

public static readonly DependencyProperty ItemTemplateProperty =
    DependencyProperty.Register("ItemTemplate", typeof(DataTemplate), typeof(ItemsHub), new PropertyMetadata(null, ItemTemplateChanged));

public IList ItemsSource
{
    get { return (IList)GetValue(ItemsSourceProperty); }
    set { SetValue(ItemsSourceProperty, value); }
}

public static readonly DependencyProperty ItemsSourceProperty =
    DependencyProperty.Register("ItemsSource", typeof(IList), typeof(ItemsHub), new PropertyMetadata(null, ItemsSourceChanged));

When ItemTemplate is assigned or changed, we iterate over all Hub sections to apply the template to each of them:

private static void ItemTemplateChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    ItemsHub hub = d as ItemsHub;
    if (hub != null)
    {
        DataTemplate template = e.NewValue as DataTemplate;
        if (template != null)
        {
            // Apply template
            foreach (var section in hub.Sections)
            {
                section.ContentTemplate = template;
            }
        }
    }
}

When ItemsSource is assigned or changed, we repopulate the sections and their headers from the source IList, and re-apply the data template (you should not make assumptions on the order in which the dependency properties are assigned):

private static void ItemsSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    ItemsHub hub = d as ItemsHub;
    if (hub != null)
    {
        IList items = e.NewValue as IList;
        if (items != null)
        {
            hub.Sections.Clear();
            foreach (var item in items)
            {
                HubSection section = new HubSection();
                section.DataContext = item;
                section.Header = item;
                DataTemplate template = hub.ItemTemplate;
                section.ContentTemplate = template;
                hub.Sections.Add(section);
            }
        }
    }
}

Instead of defining a HeaderPath property, or creating a HeaderTemplate, I decided to fall back on the default template (a text block) and to assign the whole item to the section header. Now all you need to do to show a decent header is override the ToString method in the (View)Model class:

public override string ToString()
{
    return this.Name;
}

Here’s how to create an ItemsHub control in XAML and define the bindings to its new properties in a lightweight MVVM style:

<Page.DataContext>
    <local:MainPageViewModel />
</Page.DataContext>

<Page.Resources>
    <DataTemplate x:Key="DataTemplate">
        <Image Source="{Binding Image}" />
    </DataTemplate>
</Page.Resources>

<Grid>
    <local:ItemsHub Header="Hub ItemsControl Sample"
                    ItemTemplate="{StaticResource DataTemplate}"
                    ItemsSource="{Binding Manuals}" />
</Grid>

Here’s the result in the attached sample app. Each Hub section represents a business object (an Ikea Instruction Manual) from a collection in the ViewModel:

I focused on the properties that make the most sense in MVVM apps, but -as I mentioned- the framework Hub class is not sealed. So you can use this same technique to add other useful properties like Items and DataTemplateSelector.
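For example, an ItemTemplateSelector property would follow exactly the same propdp pattern as the ItemTemplate above. This is just a sketch of the property declaration; actually wiring the selector into the HubSection population is left out:

```csharp
public DataTemplateSelector ItemTemplateSelector
{
    get { return (DataTemplateSelector)GetValue(ItemTemplateSelectorProperty); }
    set { SetValue(ItemTemplateSelectorProperty, value); }
}

public static readonly DependencyProperty ItemTemplateSelectorProperty =
    DependencyProperty.Register("ItemTemplateSelector", typeof(DataTemplateSelector), typeof(ItemsHub), new PropertyMetadata(null));
```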

Here’s the full code, it was written with Visual Studio 2013 for Windows 8.1: U2UConsult.WinRT.HubItemsControl.zip (759.57 kb)

Enjoy!
Diederik

First time MVP

It is with great pride that I announce that I was presented with the Microsoft Most Valuable Professional Award for the very first time. For the next 12 months, I’m an MVP in the Client Development category, which has an impressively strong Belgian representation.

I would like to thank Microsoft for this unique privilege, and the Belgian MEET Team for the inspiring peer pressure of the last two years.

Community rocks!
Diederik