Using Model-View-ViewModel with WPF

In this blog post I want to show my way of implementing the Model-View-ViewModel (MVVM) pattern for WPF. I hope it serves as a simple example for those of you who want to start using it. The advantage of MVVM is that the view, which is a WPF thing, doesn't contain any code. Instead the view uses data binding to bind itself to the data, and to the functionality through commands. Because the model itself should not depend on any specific technology, we use a viewmodel, which is an adapter, to add things like extra data and commands to the model. So again, the view doesn't contain any code; the viewmodel does. This makes it possible to download the view from a database, so you can vary the view per company/user, or update the view very easily. The second advantage is that it makes it very easy to test your functionality using unit tests, which I think is the most important advantage...

Use INotifyPropertyChanged

So let's get started. First of all you should understand INotifyPropertyChanged, and to appreciate its importance, you should understand how data binding works in WPF. In WPF any control can data bind any of its dependency properties to any data through the data binding mechanism. The control doesn't need to know about the data object (actually it can't know the type of the data object, because the control is usually written several years before the data object :)). Dependency properties have a special feature: you can register for the changed event of a dependency property. So when someone changes the control's property, the data binding object is notified and can then update the data object's property. INotifyPropertyChanged does the same thing in the other direction. A data binding object in WPF knows about this interface and will query the data object for it. When the data object implements it, the data binding object registers itself for the PropertyChanged event and updates the control's property on changes. This way you get two-way data binding: when one side changes, the other side gets updated as well. Likewise, you should use ObservableCollection<T> for any collections, so these can notify controls that the list has changed.

Implementing INotifyPropertyChanged is simple. Start by defining a base class that implements the interface:

```csharp
public class Entity : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public virtual void RaisePropertyChanged(string propName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propName));
    }
}
```

INotifyPropertyChanged requires you to implement the PropertyChanged event and to raise it each time a property changes. This base class does it all... Any object requiring it can then derive from this base class; both model and viewmodel should derive from it and raise the PropertyChanged event when they change.
In this example I'm going to use a Person class:

```csharp
public class Person : Entity
{
    public class Properties
    {
        public const string Name = "Name";
        public const string Age = "Age";
    }

    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                RaisePropertyChanged(Person.Properties.Name);
            }
        }
    }

    private int age;

    public int Age
    {
        get { return age; }
        set
        {
            if (age != value)
            {
                age = value;
                RaisePropertyChanged(Person.Properties.Age);
            }
        }
    }
}
```

There has been much discussion on how to implement the setter of your property, mainly on how to pass the property name to the event. I like to use a nested "Properties" class that contains a const string for each property name. This makes it easy to refactor my properties, since there are no more loose strings. Other people like to use reflection, but I hate the runtime overhead this brings. This technique also makes it easier to register for PropertyChanged events from other objects: if each class has a nested Properties class, you can easily find the name of each property using IntelliSense.

The Model

Your model should be exactly that: the representation of your business data, such as lists of things. In my example the model is very simple; it contains a list of Person objects. In real life these would be several lists of interrelated objects, such as products, customers, their orders, etc... The model can also have methods implementing business functionality. Any changes to the model should trigger events notifying any controls of changes...

```csharp
public class PeopleModel /* : Entity */
{
    private ObservableCollection<Person> people
        = new ObservableCollection<Person>()
    {
        // Let's add some data to make it easier to verify data binding in the view
        new Person { Name = "Wesley", Age = 31 },
        new Person { Name = "Dimitri", Age = 28 },
        new Person { Name = "Gert", Age = 24 },
        new Person { Name = "Peter", Age = 24 }
    };

    public ObservableCollection<Person> People
    {
        get { return people; }
    }

    /* and many more other collections and objects */
}
```

Please note that normally the model would also implement INotifyPropertyChanged, but because this model doesn't need it, I haven't gone to the trouble... Uncommenting the Entity base class will not change any functionality in this example, so if you want, go right ahead!

The ViewModel

For me the ViewModel is an adapter that exposes only what the view needs to access. For example, the view will contain a list of people, will have a currently selected person, and will have two fields so you can create a new person. The ViewModel will use data binding to couple itself to the view, but doesn't know any specific things about the controls used.
It does however implement INotifyPropertyChanged, so changes to the ViewModel will update the view:

```csharp
public class PersonVM : Entity
{
    private PeopleModel model;

    public PersonVM(PeopleModel m) { model = m; }

    public ObservableCollection<Person> People
    {
        get { return model.People; }
    }

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            { name = value; RaisePropertyChanged("Name"); }
        }
    }

    private int age;
    public int Age
    {
        get { return age; }
        set
        {
            if (age != value)
            { age = value; RaisePropertyChanged("Age"); }
        }
    }

    private Person current;

    public Person Current
    {
        get { return current; }
        set
        {
            if (current != value)
            {
                current = value;
                RaisePropertyChanged("Current");
            }
        }
    }

    public void DeleteCurrentPerson()
    {
        model.People.Remove(Current);
        Current = model.People[0];
    }

    public ICommand DeleteCommand
    {
        get { return new RelayCommand(DeleteCurrentPerson); }
    }

    public ICommand AddCommand
    {
        get { return new RelayCommand(AddPerson); }
    }

    public void AddPerson()
    {
        Person newPerson = new Person { Name = this.Name, Age = this.Age };
        model.People.Add(newPerson);
        Current = newPerson;
    }
}
```

So this ViewModel has a People property, which is the list we're going to work with; a Current property to give us access to the currently selected person (we'll data bind the selected item property of the list control to it); and a Name and Age property for entering new people. These last two will be data bound to two text boxes, so the user can enter data in the text boxes, and the commands (which live at the viewmodel level) can access these properties, again without knowing which controls are being used.

NOTE: RaisePropertyChanged is called here without the PersonVM.Properties technique. That is because for this demo I also created a code snippet to make it easier to type in properties that implement INotifyPropertyChanged. It's very easy to create your own; I've actually uploaded a little video on how to do this.

The View

If you'd ask me what the best thing in WPF is, I would say DataTemplate! Our UI is going to data bind to the ViewModel, and we're going to associate the View with it using a DataTemplate. WPF will automatically select our view when it data binds to our ViewModel.
You can do this as follows:

```xml
<Window x:Class="ModelViewViewModelLab.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:viewModels="clr-namespace:ModelViewViewModelLab.ViewModels"
        xmlns:views="clr-namespace:ModelViewViewModelLab.Views"
        Title="Window1" Height="300" Width="300">
  <Window.Resources>
    <!-- for each kind of viewmodel we map it to a view -->
    <DataTemplate DataType="{x:Type viewModels:PersonVM }">
      <views:PersonView />
    </DataTemplate>
  </Window.Resources>
  <Grid>
    <ContentControl Content="{Binding}" />
  </Grid>
</Window>
```

This way the view will be created with its DataContext set to the view model, so we can use DataContext-relative data binding:

```xml
<UserControl x:Class="ModelViewViewModelLab.Views.PersonView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:model="clr-namespace:ModelViewViewModelLab.Models"
             Height="300" Width="300">
  <UserControl.Resources>
    <DataTemplate DataType="{x:Type model:Person}">
      <StackPanel Orientation="Vertical">
        <StackPanel Orientation="Horizontal">
          <TextBox Text="{Binding Name}" />
          <TextBlock Text=" - " />
          <TextBox Text="{Binding Age}" />
        </StackPanel>
        <ProgressBar Width="100" Height="16" Value="{Binding Age}" />
      </StackPanel>
    </DataTemplate>
  </UserControl.Resources>
  <Grid>
    <Grid.ColumnDefinitions>
      <ColumnDefinition />
      <ColumnDefinition />
    </Grid.ColumnDefinitions>
    <!-- People listbox -->
    <ListBox ItemsSource="{Binding People}"
             Name="theList"
             SelectedItem="{Binding Current}" />
    <!-- Command part -->
    <StackPanel Orientation="Vertical" Grid.Column="1">
      <ContentControl Content="{Binding Current}" />
      <!-- New person -->
      <TextBox Text="{Binding Name}" />
      <TextBox Text="{Binding Age}" />
      <Button Content="Add" Command="{Binding AddCommand, Mode=OneWay}" />
      <Button Content="Delete" Command="{Binding DeleteCommand, Mode=OneWay}"/>
    </StackPanel>
  </Grid>
</UserControl>
```

Note the buttons in the view; these are data bound to the RelayCommand instances on the view model. Clicking them executes the associated command.

```csharp
public class RelayCommand : ICommand
{
    Action action;

    public RelayCommand(Action execute)
    {
        action = execute;
    }

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public event EventHandler CanExecuteChanged;

    public void Execute(object parameter)
    {
        action();
    }
}
```

The Commands

RelayCommand is a (too) simple implementation of the command pattern in WPF, but it allows us to easily execute a method on the model or viewmodel. Our viewmodel uses it to expose two commands, add and delete:

```csharp
public ICommand DeleteCommand
{
    get { return new RelayCommand(DeleteCurrentPerson); }
}

public ICommand AddCommand
{
    get { return new RelayCommand(AddPerson); }
}
```

Thanks to delegates this is easy! The commands each execute a method on the viewmodel:

```csharp
public void AddPerson()
{
    Person newPerson = new Person { Name = this.Name, Age = this.Age };
    model.People.Add(newPerson);
    Current = newPerson;
}
```

AddPerson requires a couple of inputs, the name and age of the new person to add; it gets these from the Name and Age properties on the ViewModel. How did these get filled in? The view has a TextBox data bound to each!

```csharp
public void DeleteCurrentPerson()
{
    model.People.Remove(Current);
    Current = model.People[0];
}
```

DeleteCurrentPerson removes the currently selected person from the list of people. Again, the Current property is data bound to the SelectedItem property of the listbox, so when you select something from the listbox, the Current property is automatically updated. After deleting the current person, we select the first person in the list. Careful: this gives a nice bug when the last person is removed from the list, because model.People[0] no longer exists; fixing this should be easy (see the sketch below)...

So, MVVM. That's it. Of course you could add a lot using other patterns, but that is for another time...
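To close the loop on the unit-testing advantage I mentioned at the start, here is a minimal sketch of my own (assuming MSTest; the test class and its names are illustrations, not part of the original sample) of how the viewmodel can be exercised without instantiating a single WPF control:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PersonVMTests
{
    [TestMethod]
    public void AddPerson_AddsToModelAndSelectsNewPerson()
    {
        // Arrange: a viewmodel over a fresh model, no view involved.
        PersonVM vm = new PersonVM(new PeopleModel());
        vm.Name = "Ann";
        vm.Age = 30;
        int countBefore = vm.People.Count;

        // Act: this is exactly what AddCommand executes.
        vm.AddPerson();

        // Assert: the model changed and the new person became current.
        Assert.AreEqual(countBefore + 1, vm.People.Count);
        Assert.AreEqual("Ann", vm.Current.Name);
    }
}
```

And one possible fix for the empty-list bug mentioned above is to reselect only when people remain (again my own variation, assuming a null Current simply clears the detail part of the view):

```csharp
public void DeleteCurrentPerson()
{
    model.People.Remove(Current);
    // Guard against removing the last person from the list.
    Current = model.People.Count > 0 ? model.People[0] : null;
}
```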

Configuring your WCF and WF4 services using AppFabric

Configuring your services

Normally I configure my services in Visual Studio (typing in the configuration as XML) or using the WCF Service Configuration tool. AppFabric also allows you to configure your services, directly from IIS, making it a nice integrated experience! The difference is that AppFabric exposes more of the things an IT pro needs to look at the health of an application. Developers are more interested in making it work; production is more interested in keeping it working... <grin> You can set up a persistence and a tracking store (actually databases) to make your services trackable and durable. This will also make it easier (or less hard) to see why your service is no longer functioning the way it should. Of course you can still set up System.Diagnostics tracing, but that again is more for developers.

Some preparations are needed

AppFabric uses the net.pipe protocol to manage your services (through standard endpoints), so you might need to enable this in IIS. Select your site, then select Edit Bindings... The bindings window should open. If net.pipe is not listed, hit the Add... button, select net.pipe and use * for the binding information. Then go to your service and select Advanced Settings... In the dialog that opens, add net.pipe to the list of Enabled Protocols.

You might also need to run the AppFabric system-level configuration. To do this, go to Start->All Programs->Windows Server AppFabric->Configure AppFabric. The configuration utility should launch; hit Next. Here you can configure the monitoring and persistence databases. Check the Set Monitoring configuration checkbox, then select the account you want to use and the provider (there is one default provider, which will store everything in a SQL Server database). Hit the Configure... button and configure as follows (replacing Peter-PC with your domain/machine name). Hit OK and check the results; you might get a 'the database already exists' kind of warning, in which case simply continue... Then continue through the rest of the wizard...

Configuring a WCF service

In IIS, select your site or service (most of this can also be done at other levels), and then in the actions pane select "Manage WCF and WF Services->Configure...". This opens the "Configure WCF and WF for Site" dialog. Check "Enable metadata over HTTP" to set the ServiceMetadata behavior. Over to the Monitoring tab: keep the checkbox checked if you want monitoring records written to the monitoring database, and use the level to change from monitoring everything to monitoring nothing... You can also configure the usual WCF tracing and message logging here. The Throttling tab allows you to limit the number of requests and service instances, while the Security tab allows you to set or change the service certificate. A later post will be about configuring WCF and Workflow services...

Windows Server AppFabric Beta 2: Deploying services

Microsoft released Visual Studio 2010 RC a while ago, but unfortunately this broke Windows Server AppFabric beta 1. Luckily, on March 1 Microsoft released beta 2, which works with VS 2010 RC. I've installed it and will now try to show you a couple of things.

So what is AppFabric? To be honest, there is another AppFabric, the one for Azure, and that is not the one I am talking about. So what is Windows Server AppFabric? AppFabric makes installing, administering, monitoring and fixing problems in WCF 4 services (yes, only starting at .NET 4!) a lot easier by extending IIS and WAS (the Windows Process Activation Service, which is used to host non-HTTP WCF services in IIS). It also adds a distributed caching mechanism (also known as project Velocity) to make it easier to scale ASP.NET applications and WCF services. If you're familiar with BizTalk 2006/2009, you'll know that the BizTalk Administration application shows you each BizTalk application's health: what went wrong, how many instances were executed, etc... AppFabric gives you the same, but now for WCF and WF 4 services. After installing AppFabric, IIS shows these new icons: AppFabric Dashboard, Endpoints and Services. The dashboard shows you the current status of your services (running, stopped, with errors, etc...), while Endpoints and Services allow you to list and configure the endpoints and services.

Deploying using AppFabric

AppFabric Hosting Services provides easier deployment of services. The first thing you need to do is package your WCF service. Go to the project properties and select the new Package/Publish tab. Here you can select where to create the package (and whether you want it as a .zip file) and how to deploy it in IIS. Next, create the deployment package in Visual Studio 2010. Now we can import the application using IIS; this opens the Import Application Package dialog. Use the Browse button to open the package .zip file, hit Next, and Next again. Note that the service name is taken from the package properties. Hit Next again, and hopefully your services will deploy successfully. To verify this, go to your site in IIS and click on Services; you should see your service listed (for example, I have three services running here). You can also click on Endpoints to see the list of endpoints. Because of WCF default endpoints you get four different endpoints per service; you can modify which types of default endpoints you want, but that is not what I'll be showing you here.

A note on using your own application pool with AppFabric

During my experiments I created a new application pool for my services. When testing my service, I would always get the following error: HTTP Error 503. The service is unavailable. At first I thought the solution would be easy: my application pool was stopped, so starting it should fix the problem. But it didn't, so I investigated a little further. It seems that my new application pool tried to use .NET 4 version 21006 (beta 2?). I could see this in the Event Viewer: The worker process failed to pre-load .Net Runtime version v4.0.21006. I think something didn't (un)install during my migration to .NET 4 RC. So I'm now using the ASP.NET 4 application pool...

When being lazy is (finally) good

In this blog post I want to talk about .NET 4's new Lazy<T> class. First of all, why would you need something called Lazy? Take data access, for example: when you load a row from a parent table in a database, should you load the child rows immediately, or delay loading them until they're required? Some systems delay-load automatically; others load all they can (but what happens when the child rows have other relations to grandchild rows, etc...?). This kind of delayed loading of data is just what Lazy<T> (or Lazy(Of T) in VB.NET) supports. It's a great type to use when you have an object which is very expensive to create, and you only want to create it on first use.

Let's start with an example. Say you have this big-ass class:

```csharp
class BigAndExpensive
{
    string s = "";

    public string GetTheData()
    {
        return s;
    }

    public BigAndExpensive()
    {
        Console.WriteLine("BigAndExpensive is being created...");
        for (int i = 0; i < 10000; i++)
            s = s + ".";
        Console.WriteLine("BigAndExpensive is finally created...");
    }
}
```

As you can see, creating one is very expensive: all that string concatenation allocates thousands of ever-growing temporary strings, triggering a lot of garbage collects. Let's create an instance of this class without and then with Lazy<T>, and look at the performance:

```csharp
BigAndExpensive be;
Lazy<BigAndExpensive> lbe;

using (new MeasureDuration("Not using Lazy evaluation"))
{
    be = new BigAndExpensive();
}
using (new MeasureDuration("Accessing non-lazy object's method"))
{
    string s = be.GetTheData();
}
using (new MeasureDuration("Using Lazy evaluation"))
{
    lbe = new Lazy<BigAndExpensive>(false);
}
using (new MeasureDuration("Accessing lazy object's method"))
{
    string s = lbe.Value.GetTheData();
}
using (new MeasureDuration("Again accessing lazy object's method"))
{
    string s = lbe.Value.GetTheData();
}
```

To use the Lazy<T> object you have to get its Value property. When the lazily loaded value hasn't been created yet, accessing Value will create it. The MeasureDuration class is a little timer taking advantage of the using statement:

```csharp
class MeasureDuration : IDisposable
{
    Stopwatch sw;
    string what;

    public MeasureDuration(string what)
    {
        this.what = what;
        sw = new Stopwatch();
        sw.Start();
    }

    public void Dispose()
    {
        sw.Stop();
        Console.WriteLine("Measured duration of -{0}- took {1} ticks ({2} ms)",
            what, sw.ElapsedTicks, sw.ElapsedMilliseconds);
    }
}
```

The output I get on my machine shows that creating the Lazy object is very fast, but of course, as you would expect, using it the first time is just as expensive as before, due to the creation process. Using it the second time is again very fast.

Now go back to the code and look for the Lazy<T> constructor. Change the false argument to true:

```csharp
lbe = new Lazy<BigAndExpensive>(true);
```

This makes the instantiation of the actual instance thread-safe. That means it will be a little slower, but only during construction. Is it worth the price? If you're using multiple threads, YES YES YES!
Now let's see what happens when many threads access an unprotected Lazy object (never be lazy AND unprotected :)). This is the code (I've replaced the AutoResetEvent I first used with a CountdownEvent, new in .NET 4, which cannot lose signals when several threads signal in quick succession):

```csharp
private static void UsingLazyObjectsFromMultipleThreads()
{
    Lazy<BigAndExpensive> createMeOncePlease = new Lazy<BigAndExpensive>(isThreadSafe: false);

    ManualResetEvent youMayBegin = new ManualResetEvent(false);
    CountdownEvent done = new CountdownEvent(20);

    // create a lot of threads that will use our object all at once
    for (int i = 0; i < 20; i++)
    {
        Thread t = new Thread(() =>
        {
            youMayBegin.WaitOne();
            Console.WriteLine("Thread {0} getting data", Thread.CurrentThread.ManagedThreadId);
            using (new MeasureDuration("Multithreading"))
                createMeOncePlease.Value.GetTheData();
            done.Signal();
        });
        t.Start();
    }
    youMayBegin.Set();
    // wait for all threads to complete
    done.Wait();
}
```

I've used the named argument feature of C# 4.0 here; in this case it makes the code a lot clearer, doesn't it? So what does the code do? It creates 20 threads which all first wait for the "youMayBegin" event, so all threads start running at the same time. Each thread then accesses the "createMeOncePlease" lazy instance, so some of them will start to create the instance (because it hasn't been created yet). Finally each thread signals that it's done, so the main thread can stop too.

So let's run the code (making sure isThreadSafe is set to false). The result is bad. Very bad. Instead of calling the constructor of my very expensive object once, it calls it several times. Why? Think about a possible thread-unsafe implementation of Lazy:

```csharp
class Lazy<T> where T : class, new()
{
    T instance = null;

    public T Value
    {
        get
        {
            if (instance == null)
                instance = new T();
            return instance;
        }
    }
}
```

When you run the if statement on multiple threads, each will evaluate it to true, and each will then create an object and overwrite instance's value. So what is the solution? Simply pass true for the isThreadSafe argument. Running the code once more, my expensive object only gets created once. Good. But why are all the calls still so expensive? Because when we access Value, only one thread is allowed to create the instance, and the other Value calls have to wait for the first one to complete. If you insert another round of calls to Value, you'll see they are very fast.

If you only need initialization to be thread-safe, or only access to the object, you can also use the constructor taking a LazyThreadSafetyMode enumeration:

```csharp
None = 0,
PublicationOnly = 1,
ExecutionAndPublication = 2
```

What if your expensive class requires special construction, for example through a specific constructor? Then you can use another constructor of Lazy<T>, one that takes a delegate (Func<T>), so you can create your object your way:

```csharp
Lazy<BigAndExpensive> createMeOncePlease =
    new Lazy<BigAndExpensive>(() => new BigAndExpensive());
```
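Both options can be combined. As a quick sketch of my own (not from the original post, but using the documented Lazy<T>(Func<T>, LazyThreadSafetyMode) constructor and the BigAndExpensive class from above; LazyThreadSafetyMode lives in System.Threading):

```csharp
// Factory delegate plus an explicit thread-safety mode.
// ExecutionAndPublication: the factory runs exactly once; other threads block.
// PublicationOnly would instead let several threads race to create an
// instance, but only the first result to finish gets published and kept.
Lazy<BigAndExpensive> createMeOncePlease =
    new Lazy<BigAndExpensive>(() => new BigAndExpensive(),
                              LazyThreadSafetyMode.ExecutionAndPublication);
```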

Gated Check In with Multiple Build Definitions

As promised in my previous blog post on gated check-in, in this post I'll discuss using multiple builds with gated check-in. So what happens when you check in (using gated check-in) and there are multiple build definitions targeting the solution? Gated check-in will then let you choose between the different build definitions; for the moment, Team Build does not allow you to filter the build definitions that are shown. In my example I've created a second build definition that doesn't run tests, so you can select this one when you need to check in code without running tests...

TechDays 2010

I'm happy to say I'll be speaking at TechDays 2010 in Belgium and DevDays 2010 in the Netherlands. In Belgium I'll be doing one session on "What is new in WCF 4" and one session on "Workflow Foundation 4". In the Netherlands I'll do one session on "What is new in WCF 4" and one on "Developing for Windows 7 with the Windows API Code Pack".

Planning, running and measuring tests with Visual Studio 2010 Ultimate and Test and Lab Manager

So what does Visual Studio 2010 bring for testers? A whole lot! Especially the new test environment, where you can create a test plan to validate the quality of the software you're building. A test plan is a collection of test cases which you can then run. While running, the system keeps track of a whole lot of things, including code coverage and IntelliTrace information. And finally, the application allows you to examine the combined results of all tests, to see how your development effort is doing.

So again: a test plan is a collection of test suites, used to test a certain iteration in your project. A test suite is a collection of test cases to test a certain part of your project, and a test case is a test for a project feature. A test case is basically a UI test running in a specific environment. To model this, Test and Lab Manager also allows you to define a test configuration, which is a certain environment for your code, for example Windows 7 with IE8, or Windows XP SP2 with IE7.

Running Test and Lab Manager

The first time you run this application it will ask for your Team Foundation Server (TFS), and after that for the Project Collection and Team Project. Next it will ask you for a test plan; since there are none yet, you will have to create a new one. Then click "Select plan >". This opens the test plan. If you want, you can change some of the properties of this test plan by clicking on the Properties tab: here you can change the iteration, for example, or the test settings.

With test settings you can change how your tests will be executed. For example, in the Local Test Run settings you can change the diagnostic data adapters, which record data from your test run: an action recording, event log entries, etc... You can also use them to emulate certain environments, for example running low on memory. You can even create your own data adapter. The first page allows you to change the name, and to choose between running a manual or automated test. On the second page you can define the roles; you define a role for each tier used to run tests and collect data. In the local settings you can only have one role, your own physical machine; if you want to use roles, you will also have to define environments. Next you define what kind of data you want to gather. For example, the Action Recording records each action taken during the test, making it easy for another person (typically a developer) to understand what happened while the test ran; the Action Log is a text version of this recording. IntelliTrace allows a developer to load the IntelliTrace log and step back through the code, to see how the error came to be.

Go back to the Contents tab. Now we're ready to test some of our user stories (from now on I will be using user stories, but this could just as easily be called a requirement or anything else that denotes a certain needed functionality of your software system). Test cases are grouped into test suites. Per test plan you have a default test suite, but you can also create a test suite for a specific user story, copy a test suite from another test plan, create a nested suite, or create one from a query. If you select "Add requirement to plan" you will need to select a user story. This user story is now associated with this test suite, which allows reporting to figure out which tests have run for which user story. Now we're ready to add a test case, so click the New button.
The New Test Case dialog should show. Add the four steps (please note that we're doing this for the sake of the demo; normally your requirement should not be to crash :)). Save it. Click on the Tested User Stories tab: the user story should be there. Save and close. A couple of other things you can do now are assigning the test case to a tester and/or changing the configurations.

Let's run the test. First open the Test tab; from here you can see all test cases for a test suite. Select the test case and click on the Run button. The Test Runner should open. Check the Create action recording box and hit Start Test. Do as the test case says, and try to use a repeatable way to start the application (for example using the Start menu). The first and second steps should not be a problem, so click the drop-down boxes to the right of each step to indicate success. Now make the third step fail. You'll need to enter a comment on why it failed. Then click the End Test hyperlink; you'll see the Test Runner gather some information.

At the top of the Test Runner window you'll see a toolbar; click on the Create Bug button. You'll see that most of the fields are automatically filled in. You might want to change the severity, for example, and maybe assign it to a certain tester, but that is all... Click Save. Check out the information included in the System Info and All Links tabs, then close it. Remember how I asked you to start the application in some repeatable way? Well, try and click on the Play button: your test should replay again. This makes it easier to re-test later.

Coming back to Test and Lab Manager: the test case failed, so the UI now updates to show this. If you have more than one test, the bar is colored according to how many tests succeeded, failed or are still in progress. Later you can come back and verify whether a bug still exists by using the Verify Bugs tab. Go back to the Plan tab and open the Properties; now you should see an overview.

Back to Visual Studio, so close Test and Lab Manager. Go to the Team Explorer and open the My Bugs query. You should see the bug we created before. Open it, go to All Links, and open the IntelliTrace log (.tdlog). With it you can now replay the bug, just like in my previous post on IntelliTrace. If this doesn't work, go back to the test settings. You might also like the video feature, so you can replay the user's actions!

WCF and large messages

This week I've been training a couple of people on how to use .NET, WCF 4, Entity Framework 4 and other technologies to build an enterprise application. One of the things we did was return all rows from a table, and this table contains about 2500 rows. We're using the Entity Framework 4 self-tracking entities, which also serialize all changes made to the objects. And I kept getting this error:

```
The underlying connection was closed: The connection was closed unexpectedly.

Server stack trace:
   at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
```

At first I thought it had something to do with the maximum message size, another kind of error you get when sending large messages:

```
The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

Server stack trace:
   at System.ServiceModel.Channels.HttpInput.ThrowMaxReceivedMessageSizeExceeded()
   at System.ServiceModel.Channels.HttpInput.GetMessageBuffer()
   at System.ServiceModel.Channels.HttpInput.ReadBufferedMessage(Stream inputStream)
```

This one is easy to fix (although a little confusing, because you have to configure the binding on the receiving side of the message, which most of the time is the client). But doing this didn't help. So what was it? I knew the size of the message couldn't be the problem, because I'd sent way bigger messages before. Maybe there was something in the contents that made the DataContractSerializer crash? Checking this is easy: I wrote a little code that made the serializer write everything to a stream and watched what happened. It worked fine. Hmmm. What could it be? So I went over the list of properties of the DataContractSerializer. It has a MaxItemsInObjectGraph property. Maybe that was it, but how could I change this number? Looking at the behaviors I found it. When you send a large number of objects, you have to increase this number, which is easy. On the server side you use the DataContractSerializer service behavior and set its value to a large enough number; on the client side you use the DataContractSerializer endpoint behavior. That fixed my problem.
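For completeness, here is a sketch of the same two fixes applied in code rather than through the configuration dialogs. The contract and entity below are hypothetical stand-ins of my own, but ChannelFactory, BasicHttpBinding and DataContractSerializerOperationBehavior are the real WCF types involved:

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Description;

[DataContract]
public class Person { /* members elided; hypothetical entity */ }

[ServiceContract]
public interface IPeopleService // hypothetical contract, for illustration
{
    [OperationContract]
    Person[] GetAllPeople();
}

public static class ClientFactory
{
    public static IPeopleService Create(string url)
    {
        // Fix 1: allow messages larger than the 65536-byte default.
        BasicHttpBinding binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 4 * 1024 * 1024 // 4 MB; pick what you need
        };

        ChannelFactory<IPeopleService> factory =
            new ChannelFactory<IPeopleService>(binding, new EndpointAddress(url));

        // Fix 2: raise MaxItemsInObjectGraph so large object graphs serialize.
        foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
        {
            DataContractSerializerOperationBehavior dcs =
                op.Behaviors.Find<DataContractSerializerOperationBehavior>();
            if (dcs != null)
                dcs.MaxItemsInObjectGraph = int.MaxValue;
        }

        return factory.CreateChannel();
    }
}
```

The service side needs the matching serializer setting too, via the dataContractSerializer service behavior the post mentions.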

Using the Visual Studio 2010 Historical Debugger to save and reproduce bugs

Visual Studio 2010 Ultimate now has IntelliTrace, a historical debugging feature. IntelliTrace keeps track of everything your code has been doing, and then, when a bug shows up, you can back-track to its possible cause. You might think this is nothing new, but don't forget it also tracks the value of every variable at each moment in time. With the usual debugger you see the latest value, not the value at that moment. You can enable it (and it is enabled by default) by going into Tools->Options->IntelliTrace->General.

If you want to try this, build a simple Windows Forms application with a Button and a CheckBox. Then add code as follows:

```vb
Public Class Form1

    Private message As String
    Private flag As Boolean

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        flag = FlagCheckBox.Checked
        Doh()
    End Sub

    Private Sub Doh()
        If flag = True Then
            message = "This should work"
        Else
            message = Nothing
        End If
        flag = True
        ShowMessage(message)
    End Sub

    Private Sub ShowMessage(ByVal message As String)
        ' Throws an exception if message is empty
        Dim len As Integer = message.Length
    End Sub

End Class
```

Run it with the checkbox unchecked. This should cause the application to stop in the debugger. See those special arrows in the gutter (the gutter is the section where you click to add/remove breakpoints)? These allow you to backtrack. You should also have an IntelliTrace window (two views are available) from which you can jump directly to one of your code's stack frames.

Let's try stepping back to see what happened. Press the back arrow, or better yet, use the keyboard shortcut (on my machine that is Ctrl+Shift+F11). You should now be able to step back in time; for example, look at how my IDE looks when stopped at the ShowMessage method. IntelliTrace keeps track of the values of the arguments and such at each point in time. For example, a little higher in the code I changed the flag variable to true, while the bug is caused by it being false before; IntelliTrace will show me this:

```vb
Private Sub Doh()
    If flag = True Then
        message = "This should work"
    Else
        message = Nothing
    End If
    flag = True
    ShowMessage(message)
End Sub
```

IntelliTrace saves all of this information in a file (now with an extension called .iTrace, notice the naming convention <GRIN>), and because you can open this file afterwards, all kinds of scenarios can now be implemented. For example, you can send the file to a colleague so he/she can help you debug the problem. So where is this file? Go to Tools->Options->IntelliTrace->Advanced and copy the directory for later. Open this directory in file explorer and copy the file to wherever you like. You will have to stop debugging first, otherwise Visual Studio keeps a lock on the file; but don't stop Visual Studio itself, because VS throws away the log file when it quits.

When you open the file again, Visual Studio will show the log's summary. Open the Threads list and look for the thread you want to see (probably the main thread). Double-click it, and with a little patience the debugger will open (make sure Visual Studio can find the sources). Or even better: Visual Studio Test and Lab Manager 2010 can do this for you, so even if you have a non-technical user who is testing some application and finds a bug, the IntelliTrace file will be attached automatically!

Building an Enterprise Application with Entity Framework 4

The first version of Entity Framework (the one that shipped with .NET 3.5 SP1) was a bit of a disappointment when it came to supporting enterprise applications. For me the major reason was the fact that entities used by EF had to derive from a class which is part of EF, thus coding the EF requirement into your Business Logic Layer (BLL) and presentation layer. EF 4 is still under development, but they're already compensating for a lot of this with their support for POCOs (Plain Old CLR Objects) and self-tracking objects.

POCO vs. Self-Tracking

A self-tracking object is an object that carries state you can check to see what has happened to the object (unmodified, new, modified, deleted). The advantage of this kind of object is simplicity for the user, because the object does all the tracking. Of course this means more work building the object itself, but that can be solved using T4 templates. A POCO is really nothing more than a data carrier object, without any tracking support. Simplicity means maximum portability, especially if you use DataContracts. For the rest of this post I'll be using self-tracking objects, generated through EF 4. I'll also be using the EF Feature CTP 2.

Using EF 4 to generate the self-tracking objects

Start by creating a new WinForms project (or WPF, Silverlight, whatever). Add another library project for your data access layer (DAL) and another one for your entities. Normally I would also add a business logic layer (BLL), but for simplicity I'll leave it out for now. Now add a new Entity Data Model to your DAL project. Select the Northwind database, then select the Categories and Products tables; this gives you the model. Please note that my tables/entities each have an extra column, the Version column. This is a timestamp column used to detect concurrent updates; to tell EF to use this column for concurrency, set its Concurrency Mode property to Fixed. This is typically the best way to handle concurrent updates.

Right-click your entity model's background, then select the Add Code Generation Item... menu choice. This alternative code generation adds two T4 templates (using the .tt extension) to the DAL project. ProductsModel.Context.tt is an EF-dependent template, so leave it in the DAL project, but ProductsModel.Types.tt contains the EF-independent types, which actually are self-tracking entities. Move this template to the Entities project. Watch out: your project will not build until you set the following references (diagram made with Visual Studio's new UML Component Diagram). The diagram also includes the BLL layer; our solution doesn't, but if you want, feel free! If you're using VB.NET, you should also add the Product.Entities namespace to the list of imports of the DAL project.

Now you're ready to implement the DAL layer, so add a ProductsDAL class as follows:

```vb
Public Class ProductsDAL

    Public Function GetListOfCategories() As List(Of Category)
        Using db As New NorthwindEntities
            Return db.Categories.ToList()
        End Using
    End Function

    Public Sub UpdateCategory(ByVal cat As Category)
        Using db As New NorthwindEntities
            db.Categories.ApplyChanges(cat)
            db.SaveChanges()
        End Using
    End Sub

End Class
```

Now let's add some controls and data bind them with Windows Forms. For this I use the Data Sources window; open it and add another data source.
Select an object data source, then select your Category or Product entity, which you should find in the Products.Entities project. Your data source window should now display your entities. Right-click Category and select Customize... from the drop-down list; now you can select ListBox as the control to use. Drag the Category entity to your form to create a ListBox and a BindingSource. Add two buttons, a Load and a Save button. Implement the Load button to retrieve the list of categories from the DAL:

```vb
Dim dal As New ProductsDAL
CategoryBindingSource.DataSource = dal.GetListOfCategories()
```

And implement the Save button as follows:

```vb
Dim dal As New ProductsDAL
Dim cat As Category = TryCast(CategoryBindingSource.Current, Category)
dal.UpdateCategory(cat)
```

Run the solution and click Load. The listbox should fill with categories (you might first want to copy the connection string from the DAL project's .config to the form project's .config). Note the "Change Tracking Enabled" checkbox: check it if you want to update an object; this enables the self-tracking. Make a change, then click Save. This should update the database.

Now open two instances of this application. Load in both, then change the same record in both (with tracking enabled). Save in both: the second Save should fail because of the concurrency check. Done!
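As a closing sketch of my own (in C#, while the post's sample is VB; TryUpdateCategory is a name I made up, and NorthwindEntities, Category and ApplyChanges are the generated self-tracking types used above), the failing second Save surfaces as an OptimisticConcurrencyException, which the DAL could catch and translate for the UI:

```csharp
using System;
using System.Data; // OptimisticConcurrencyException (System.Data.Entity assembly)

public class ProductsDal
{
    // Sketch: translate EF's concurrency failure into a friendlier result.
    public bool TryUpdateCategory(Category cat, out string error)
    {
        using (var db = new NorthwindEntities())
        {
            try
            {
                db.Categories.ApplyChanges(cat);
                db.SaveChanges();
                error = null;
                return true;
            }
            catch (OptimisticConcurrencyException)
            {
                // Someone else changed the row since we loaded it (the Version
                // timestamp no longer matches). Let the caller decide whether
                // to reload and retry or to discard the pending changes.
                error = "The category was modified by another user; reload and try again.";
                return false;
            }
        }
    }
}
```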