Building a declarative WCF service using Workflow Foundation 4 and Content Based Correlation 29 March 2010 Peter-Himschoot WF 4, VS2010, .NET Development

This blog post accompanies my session on Workflow Foundation 4 programming during the Belgian Tech Days (actually developers and IT-pro days :)). During this session I built a WCF service using Workflow Foundation 4, and I want to show you how to do this on your own… In the first part you’ll learn how to create a simple FlowChart workflow and test it, and in the second part you’ll learn how to set up correlation so multiple players can play the game…

Preparing the lab

This lab starts with a new project, so start Visual Studio 2010 and create a new workflow service project (call it NumberGuessingGame): This creates a new solution with a workflow project. Remove Service1.xamlx; we’re going to add a new service with a better name. Right-click the project and add a new item. Select the WCF Workflow Service template and name it GuessIt.xamlx.

Creating the initial FlowChart

This adds a workflow service with a sequential activity containing a Receive and a Send activity. Delete the sequential activity, leaving the workflow empty. Open the toolbox and drag a FlowChart activity onto the workflow designer. It should look like this now: Next drag a Receive activity onto the designer, below Start. Name it Start A New Game. Select the Receive activity and enter the OperationName, ServiceContractName and CanCreateInstance properties in the properties window: Next right-click the Receive activity and select the Create SendReply option from the drop-down menu. This adds a Send activity to the workflow. The Send is coupled to the Receive activity through its Request property: Now connect the activities:

Adding variables

Now, in the workflow designer open the variables window and add a new variable called player of type String. This will hold the player name received through the “Start A New Game” Receive activity. You will also see a variable of type CorrelationHandle. This is used to connect several Receive and Send activities through correlation. Rename it to gameHandle. We’ll use this handle later to set up content-based correlation. Now add two new variables, theNumber and guess, both of type Int32. The first is the number the user needs to guess, so you need to initialize it to a random number between 0 and 100. Use the Random type to do this: Go back to the first Receive activity. Click on the Content property. This opens the Content Definition window. Select the player variable as the message data (and set the message type to String): Do the same for the Send activity, but now use the following expression (of type String):

player + " guess the number between 0 and 100"

You might also get validation errors because you renamed the correlation handle to gameHandle. Change the CorrelationInitializers property to use gameHandle (click on the … button).

Testing the service

Press F5 in Visual Studio. The WCF Test Client will start and automatically fetch metadata from the service. Double-click the Begin method. This opens a new tab allowing you to invoke the Begin operation on the service. Enter your name and click Invoke. In the bottom area you should see the result of invoking the Begin method. This concludes the first step.

Adding the data contracts

Let’s add a couple of data contracts to the project, one for a game and another for a guess.
Note that both contain the player name; this way our service will be able to distinguish between multiple games, as long as each player uses a unique player name:

[DataContract]
public class Game
{
    [DataMember]
    public string PlayerName { get; set; }

    [DataMember]
    public string Message { get; set; }
}

[DataContract]
public class Guess
{
    [DataMember(IsRequired = true)]
    public string PlayerName { get; set; }

    [DataMember(IsRequired = true)]
    public int Try { get; set; }
}

Make the “Send Game Started” send activity use a Game instance as the result. To do this you will need to add another variable of type Game, and initialize it to a new instance: Now add two Assign activities after the Begin receive activity, and assign appropriate values to the game:

Adding the guessing logic

Add another variable, calling it guess, of type Guess. Then add another Receive activity after the Send activity, but now with an operation name called Guess, taking the guess variable as its content. Check whether the guess was right using a decision shape. If so, congratulate the user. If not, decide whether the guess was too small or too large. Confirm this back to the user using three SendReply activities (create each one by right-clicking the Guess receive activity and selecting Create SendReply). Here is an example of the result:

Adding Content-based Correlation

Of course we want a single player to keep playing the same game, but what if multiple players are playing on the same server? We need correlation. In this part we’re going to correlate the Guess activity to the Begin activity using the player’s name. That is why both our data contracts contain the player’s name (by coincidence the properties have the same name, but this is not required). To do correlation we need a correlation handle. This handle will act like a key to a workflow instance. When a request comes in, we’re going to find this key through part of the message, in our case the player’s name. So, if you haven’t done so already, add a new variable to the workflow called gameHandle of type CorrelationHandle. Then make sure the first SendReply activity initializes this handle by setting its CorrelationInitializers like this: When the workflow sends this message, we’re saying that the key of the gameHandle is the player’s name. So when another message arrives, the workflow runtime can check whether the message contains a valid key (again the player’s name), use this key to find the right workflow instance, and then send the message to it. The next step is to set the Guess receive activity’s CorrelatesOn like this: Now you should be able to play a game, making sure that the first and following messages all have the same player name. You can also start two or more games, each with their own player name!
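If you want to exercise the correlation outside of the WCF Test Client, a small console client makes it easy to see. Treat the sketch below as an assumption: the contract name, operation names, address and binding all depend on what you entered in the designer (here I assume Begin returns a Game, Guess returns a string reply, and the service is hosted at a local address); the safest way to get the real contract is simply to Add Service Reference against the running service.

using System;
using System.ServiceModel;

// Hypothetical client-side contract; the real ServiceContractName and
// OperationNames are whatever you typed into the Receive activities.
[ServiceContract(Name = "IGuessIt")]
public interface IGuessIt
{
    [OperationContract]
    Game Begin(string player);

    [OperationContract(Name = "Guess")]
    string MakeGuess(Guess guess);
}

class GameClient
{
    static void Main()
    {
        // Address and binding are assumptions; point them at wherever GuessIt.xamlx is hosted.
        var factory = new ChannelFactory<IGuessIt>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:1234/GuessIt.xamlx"));
        IGuessIt game = factory.CreateChannel();

        Console.WriteLine(game.Begin("Peter").Message);

        // Correlation is keyed on the player name, so every guess must carry
        // the same PlayerName that started the game.
        Console.WriteLine(game.MakeGuess(new Guess { PlayerName = "Peter", Try = 50 }));
        Console.WriteLine(game.MakeGuess(new Guess { PlayerName = "Peter", Try = 25 }));

        factory.Close();
    }
}

Starting a second client with a different player name should spin up a second workflow instance, while a guess with an unknown player name will not correlate to any instance at all.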
ObservableCollection<T> now part of .NET 4 (No need to reference WPF) 22 March 2010 Peter-Himschoot .NET Development, VS2010, WPF/Silverlight

ObservableCollection<T> is a generic collection added as part of WPF and Silverlight. WinForms has BindingList<T>. So writing code that targets both WinForms and WPF would mean using BindingList<T> (the common type), and writing code targeting WPF and Silverlight would mean ObservableCollection<T>. So there was no way to write code that targets all three platforms. Luckily, in .NET 4 we can use ObservableCollection<T> anywhere because Microsoft made it part of System.dll. Nice! Two others were moved as well: ReadOnlyObservableCollection<T> and INotifyCollectionChanged. To double-check that I could use these collections outside WPF projects I created a simple console application using them:

class Program
{
    static void Main(string[] args)
    {
        ObservableCollection<string> noWpf = new ObservableCollection<string> { "Hello", "World" };
        INotifyCollectionChanged watchCollection = noWpf as INotifyCollectionChanged;

        if (watchCollection != null)
        {
            watchCollection.CollectionChanged += (sender, e) => { Console.WriteLine("Collection action = {0}", e.Action); };
        }

        noWpf.Add("Love it!");
    }
}

Compiles. Runs. Could I use it in WinForms? So I created a simple WinForms application like this:

public partial class Form1 : Form
{
    ObservableCollection<string> noWpf;

    public Form1()
    {
        InitializeComponent();

        noWpf = new ObservableCollection<string> { "Hello", "World" };
        var bs = new BindingSource() { DataSource = noWpf };
        bs.ListChanged += (sender, e) => { MessageBox.Show(e.ListChangedType.ToString()); };
        listBox1.DataSource = bs;
    }

    private void button1_Click(object sender, EventArgs e)
    {
        noWpf.Add("Test");
    }
}

But when I click the button, which adds a new element to the observable collection, the listbox doesn’t update. It looks like WinForms data binding doesn’t support ObservableCollection…
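If you still want the ListBox to pick up changes, one workaround (a sketch, not the only option) is to bridge the WPF-style notification to WinForms yourself: handle CollectionChanged and reset the BindingSource by hand.

public Form1()
{
    InitializeComponent();

    noWpf = new ObservableCollection<string> { "Hello", "World" };
    var bs = new BindingSource() { DataSource = noWpf };

    // BindingSource listens for IBindingList/ListChanged, not for
    // INotifyCollectionChanged, so forward the notification manually.
    noWpf.CollectionChanged += (sender, e) => bs.ResetBindings(false);

    listBox1.DataSource = bs;
}

For WinForms-only code, simply switching the field to BindingList<string> avoids the problem altogether, at the cost of the cross-platform sharing discussed above.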
Configuring your WCF and WF4 services using AppFabric 19 March 2010 Peter-Himschoot VS2010, WCF, WF 4, .NET Development, AppFabric

Configuring your services

Normally I configure my services using Visual Studio (typing in the configuration as XML) or using the WCF Service Configuration tool. AppFabric also allows you to configure your services, directly from IIS (making it a nice integrated experience!). The difference is that AppFabric exposes more of the things an IT pro needs to look at the health of an application… Developers are more interested in making it work; production is more interested in keeping it working… <grin> You can set up a persistence and a tracking store (actually databases) to make your services trackable and durable. This will also make it easier (or less hard) to see why your service is no longer functioning the way it should. Of course you can still set up System.Diagnostics tracing, but that again is more for developers.

Some preparations are needed

AppFabric uses the net.pipe protocol to manage your services (through standard endpoints), so you might need to enable this in IIS. Select your site, then select Edit Bindings… The following window should open: If net.pipe is not listed, hit the Add… button and select net.pipe. Use * for the binding information. Then go to your service and select Advanced Settings… The following dialog should open: Add net.pipe to the list of Enabled Protocols. You might also need to run the AppFabric system-level configuration. To do this, go to Start->All Programs->Windows Server AppFabric->Configure AppFabric. The configuration utility should launch: Hit Next: Here you can configure the monitoring and persistence databases. Check the Set Monitoring configuration check-box, select the account you want to use and the provider (there is one default, which will store everything in a SQL Server database): Hit the Configure… button. Configure as follows (replacing Peter-PC with your domain/machine name): Hit OK. Check the results. You might get a ‘the database already exists’ kind of warning. Simply continue… Continue through the wizard…

Configuring a WCF service

In IIS, select your site or service (most of this can also be done at other levels), and then in the actions pane select “Manage WCF and WF Service->Configure…” This opens the “Configure WCF and WF for Site” dialog: Check “Enable metadata over HTTP” to set the serviceMetadata behavior. Over to the Monitoring tab: Keep the checkbox checked if you want monitoring records written to the monitoring database. Using the monitoring level you can go from monitoring everything to monitoring nothing… You can also configure the usual WCF tracing and message logging here. The Throttling tab allows you to limit the number of requests and service instances, while the Security tab allows you to set or change the service certificate. A later post will be about configuring WCF and Workflow services…
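For reference, those dialogs end up writing ordinary WCF configuration into the service’s web.config. As a rough sketch (the exact elements AppFabric writes may differ), checking “Enable metadata over HTTP” corresponds to something like:

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <!-- In .NET 4 an unnamed behavior applies to all services in the application -->
      <behavior>
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>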
Windows Server AppFabric Beta 2: Deploying services 14 March 2010 Peter-Himschoot AppFabric, VS2010, WCF, WF 4, .NET Development

Microsoft released Visual Studio 2010 RC a while ago, but unfortunately this broke Windows Server AppFabric beta 1. Luckily, on March 1 Microsoft released beta 2, which works with VS 2010 RC. I’ve installed it and will now try to show you a couple of things.

So what is AppFabric?

To be honest, there is another AppFabric, the one for Azure, and that is not the one I am talking about. What is Windows Server AppFabric? AppFabric makes installing, administering, monitoring and fixing problems in WCF 4 services (yes, only starting at .NET 4) a lot easier by extending IIS and WAS (Windows Activation Services, which are used to host non-HTTP WCF services in IIS). It also adds a distributed caching mechanism (also known as Project Velocity) to make it easier to scale ASP.NET and WCF services. If you’re familiar with BizTalk 2006/2009, you’ll know that the BizTalk Administration application shows you each BizTalk application’s health, what went wrong, how many were executed, etc… AppFabric gives you the same, but now for WCF and WF 4 services. Look at this screen shot: As you can see, after installing AppFabric, IIS now shows these new icons: AppFabric Dashboard, Endpoints and Services. The dashboard will show you the current status of your services (running, stopped, with errors, etc…). Endpoints and Services allow you to list and configure the endpoints and services.

Deploying using AppFabric

AppFabric Hosting Services provides easier deployment of services. The first thing you need to do is package your WCF service. To do this, go to the project properties and select the new Package/Publish tab: You can now select where to create the package (and whether you want it as a .zip file) and how to deploy it in IIS. Now you need to create the deployment package in Visual Studio 2010: Now we can import the application using IIS: This will open the Import Application Package dialog. Use the browse button to open the package .zip file: Hit Next: And Next again: Note that the service name is taken from the package properties. Hit Next again, and hopefully your services will deploy successfully. To verify this, you can go to your site in IIS: And click on Services. You should see your service listed (for example I have three services running here): You can also click on Endpoints to see the list of endpoints: Because of WCF default endpoints you get 4 different endpoints per service; you can modify which types of default endpoints you want, but that is not what I’ll be showing you here.

A note on using your own application pool with AppFabric

During my experiments I created a new AppPool for my services. When testing my service, I would always get the following error: HTTP Error 503. The service is unavailable. First I thought the solution would be easy: my application pool was stopped, so starting it should fix the problem. But it didn’t. So I investigated a little further. It seems that my new application pool tried to use .NET 4 version 21006 (beta 2?). I could see this in the Event Viewer: The worker process failed to pre-load .Net Runtime version v4.0.21006. I think something didn’t (un)install during my migration to .NET 4 RC. So I’m now using the ASP.NET 4 application pool…
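If you prefer the command line over the IDE for the packaging step, the same Web Deploy package can be built with MSBuild’s Package target and pushed with the generated .deploy.cmd script. Treat this as a sketch: the project name is a placeholder and the exact output paths depend on your build configuration.

rem Build the Web Deploy package (lands under obj\Debug\Package by default)
msbuild MyWorkflowService.csproj /p:Configuration=Debug /t:Package

rem Deploy it to IIS; /T does a trial run, /Y performs the actual deployment
obj\Debug\Package\MyWorkflowService.deploy.cmd /Y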
Team System 2010: Easier project management with Team Project Collections 03 March 2010 Peter-Himschoot Team System, VS2010

Team System 2010 introduces the concept of team project collections (TPC). A team project collection is, as the name says, a collection of team projects which can be managed as a unit. You can back up, move, delete, etc… each collection individually. Each collection also has its own unique work item IDs, changesets, etc… Team project collections also change the way Team Foundation Server stores its data. Before, it would use a bunch of databases; now everything belonging to a collection is stored in a single database, one database per collection. You can easily find the database because it’s called Tfs_<CollectionName>. You’ll also find the Tfs_Configuration database containing all configured project collections (and, depending on your installation, a database for Analysis Services):

Project collections also solve a problem some of you might have encountered: TFS 2008 has an upper limit of 255 team projects. Now with TPCs you just add another collection when you reach the limit (I don’t know the limit of projects per collection, but I would assume it is around the same…). A TPC can also easily be moved to another team server/farm, or to another SQL Server on the same farm, as long as you keep it on the same edition of SQL Server (Enterprise, Express, …). The documentation states you cannot move it to another edition.

The way to do this is to first open the Team Foundation Administration Console and select the Team Project Collections tree item. To the right you should see all your TPCs. Here you can also create new TPCs, but that should be obvious. So to move a TPC you should first stop the collection. You’ll be asked for a reason: And then you detach the collection: Next you go through a verification step: And then you click Complete: Now the TPC is no longer connected to TFS, but it is still there in SQL Server. So now you detach the database in SQL Server and move it to another SQL Server instance or TFS farm. You might first need to restart a couple of services, such as the build service. After attaching the database in SQL Server, we need to attach the collection to TFS: go back to the Team Foundation Administration Console and click the “Attach Collection” button. Now choose your SQL Server instance, and you’ll see all candidate databases: Hit Next if you want to change properties such as the name/description: And Next again to see an overview. To complete attaching, hit Verify to make sure everything is in order: And then hit Attach. Et voilà! Now the TPC should be in the list: You can also split the projects in a single TPC into multiple TPCs (but not merge them, so be careful), but that will be for a later post.
When being lazy is (finally) good 28 February 2010 Peter-Himschoot VS2010, .NET Development

In this blog post I want to talk about .NET 4’s new Lazy<T> class. First of all, why would you need something called Lazy? You can use it for data access, for example: when you load a row from a database parent table, do you need to load the child rows automatically, or delay until they’re required? Some systems will delay loading automatically, others load all they can (but what then when the child rows have other relations to grandchild rows, etc…). This kind of delayed loading of data is just what Lazy<T> (or Lazy(Of T) when using VB.NET) supports. It’s a great type to use when you have an object which is very expensive to create, and you only want to create it on first use. Let’s start with an example; let’s say you have this big-ass class:

class BigAndExpensive
{
    string s = "";

    public string GetTheData()
    {
        return s;
    }

    public BigAndExpensive()
    {
        Console.WriteLine("BigAndExpensive is being created...");
        for (int i = 0; i < 10000; i++)
            s = s + ".";
        Console.WriteLine("BigAndExpensive is finally created...");
    }
}

As you can see, creating one is very expensive (all that repeated string concatenation allocates a lot of temporary memory, triggering a lot of garbage collections). Let’s create an instance of this class without and then with Lazy<T>, and look at the performance:

BigAndExpensive be;
Lazy<BigAndExpensive> lbe;

using (new MeasureDuration("Not using Lazy evaluation"))
{
    be = new BigAndExpensive();
}
using (new MeasureDuration("Accessing non-lazy object's method"))
{
    string s = be.GetTheData();
}
using (new MeasureDuration("Using Lazy evaluation"))
{
    lbe = new Lazy<BigAndExpensive>(false);
}
using (new MeasureDuration("Accessing lazy object's method"))
{
    string s = lbe.Value.GetTheData();
}
using (new MeasureDuration("Again accessing lazy object's method"))
{
    string s = lbe.Value.GetTheData();
}

In order to use the Lazy<T> object you have to get its Value property. When the lazily created value doesn’t exist yet, accessing Value will create it. The MeasureDuration class is a little timer taking advantage of the using statement:

class MeasureDuration : IDisposable
{
    Stopwatch sw;
    string what;

    public MeasureDuration(string what)
    {
        this.what = what;
        sw = new Stopwatch();
        sw.Start();
    }

    public void Dispose()
    {
        sw.Stop();
        Console.WriteLine("Measured duration of -{0}- took {1} ticks ({2} ms)"
          , what, sw.ElapsedTicks, sw.ElapsedMilliseconds);
    }
}

The output I get on my machine looks like this: As you can see, creating the Lazy object is very fast, but as you would expect, using it the first time is just as expensive as before, because that is when the instance gets created. Using it the second time is again very fast. Now go back to the code and look for the Lazy<T> constructor. Change the false argument to true:

lbe = new Lazy<BigAndExpensive>(true);

This makes the instantiation of the actual instance thread-safe. It will be a little slower, but only during construction. Is it worth the price? If you’re using multiple threads YES YES YES!
Now let’s try to see what happens when many threads access an unprotected Lazy object (never be lazy AND unprotected :)). This is the code:

private static void UsingLazyObjectsFromMultipleThreads()
{
    Lazy<BigAndExpensive> createMeOncePlease = new Lazy<BigAndExpensive>(isThreadSafe: false);

    ManualResetEvent youMayBegin = new ManualResetEvent(false);
    AutoResetEvent done = new AutoResetEvent(false);

    // create a lot of threads that will use our object all at once
    for (int i = 0; i < 20; i++)
    {
        Thread t = new Thread(() =>
        {
            youMayBegin.WaitOne();
            Console.WriteLine("Thread {0} getting data", Thread.CurrentThread.ManagedThreadId);
            using (new MeasureDuration("Multithreading"))
                createMeOncePlease.Value.GetTheData();
            done.Set();
        });
        t.Start();
    }
    youMayBegin.Set();
    // wait for all threads to complete
    for (int i = 0; i < 20; i++)
        done.WaitOne();
}

I’ve used the named argument feature of C# 4.0 here. In this case it makes the code a lot clearer, doesn’t it? So what does the code do? It creates 20 threads which all first wait for the “youMayBegin” event. This way all threads start running at the same time. Then they each access the “createMeOncePlease” lazy instance, so some of them will start to create the instance (because it hasn’t been created yet). Finally they each signal that they’re done so the main thread can stop too. So let’s run the code (making sure isThreadSafe is set to false). I get this: This is bad. Very bad. Instead of calling the constructor of my very expensive object once, it calls it several times. Why? Think about a possible thread-unsafe implementation of Lazy:

class Lazy<T> where T : class, new()
{
    T instance = null;

    public T Value
    {
        get
        {
            if (instance == null)
                instance = new T();
            return instance;
        }
    }
}

When you run the if statement on multiple threads, each can evaluate it to true, and then each will create an object and overwrite instance’s value. So what is the solution? Simply pass true for the isThreadSafe argument. Running this code once more looks like this on my machine: Good. My expensive object only gets created once. But why are the calls still so expensive? That is because when we access Value, only one thread is allowed to create the instance, while the other Value calls have to wait for the first one to complete. If you insert another call using Value you’ll see it is very fast. If you only need initialization to be thread-safe, or only access to the object in a thread-safe way, you can also use the constructor taking a LazyThreadSafetyMode enumeration:

None = 0,
PublicationOnly = 1,
ExecutionAndPublication = 2

What if your expensive class requires special construction, like a constructor with parameters? Then you can use another constructor of Lazy<T>, one that takes a delegate (Func<T>) so you can create your object your way.

Lazy<BigAndExpensive> createMeOncePlease =
    new Lazy<BigAndExpensive>(() => new BigAndExpensive());
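As a small sketch combining those last two options, here is the same lazy instance built with a factory delegate plus an explicit LazyThreadSafetyMode instead of the boolean isThreadSafe flag:

using System;
using System.Threading;

// ExecutionAndPublication: only one thread runs the factory; all other threads
// block until the single instance is published (equivalent to isThreadSafe: true).
// PublicationOnly: several threads may run the factory concurrently, but only
// the first result to be published is kept; the others are discarded.
Lazy<BigAndExpensive> createMeOncePlease =
    new Lazy<BigAndExpensive>(
        () => new BigAndExpensive(),
        LazyThreadSafetyMode.ExecutionAndPublication);

Console.WriteLine(createMeOncePlease.IsValueCreated);            // False until first use
Console.WriteLine(createMeOncePlease.Value.GetTheData().Length); // forces creation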
IntelliSense improvements in Visual Studio 2010 24 February 2010 Peter-Himschoot VS2010

What developer today can live without IntelliSense? Of course I mean developers who have used IntelliSense before (if you don’t know something, how can you miss it?). However, finding a member in Visual Studio 2008 requires you to know the first letters of the class/method/… I’m quite sure you sometimes know a class contains a certain word, but can’t remember the beginning. The new and improved IntelliSense in Visual Studio 2010 allows you to see any member containing a certain substring. For example, when you type “opt”, you’ll get this: But it gets even better. Try typing “AD”. Because .NET uses Pascal casing for all members, when you type the capital letters it will show you which members contain those same capital letters (in the same order): Love it! This also works in other places, for example in the Navigate To window! With the following code,

class WebDeveloper
{
}

class Program
{
    static void Main(string[] args)
    {
        WebDeveloper dev1 = new WebDeveloper();
    }
}

opening the Navigate To window (use Ctrl+comma, for example) and typing WD in the search box will show this: Oh, and selecting the WebDeveloper class will automatically highlight every other use of it: Cool! And don’t you hate it when you start typing a method name you haven’t declared yet? By default Visual Studio will list a couple of suggestions, and when you commit (by pressing space or “(”) Visual Studio inserts its own suggestion (and then you need to undo (Ctrl+Z) to make Visual Studio keep whatever you were typing). So for example, when you have a variable s of type string and you type s.c, IntelliSense will show the following. If you now press space, by default you’ll get s.Clone. But now there is also suggestion mode for auto-completion. In this mode Visual Studio will not insert its own suggestion, but will keep your typing instead. If you do want to insert the suggested member, simply press TAB instead of space or the open bracket. From now on this will be my default mode of working…
Gated Check In with Multiple Build Definitions 05 February 2010 Peter-Himschoot Team System, VS2010, .NET Development As promised in my previous blog on Gated Check In, in this blog I’ll discuss using multiple builds with gated check in. So what happens when you check in (using gated check in) and there are multiple build definitions targeting the solution? Well, Gated Check In will then allow you to choose between the different build definitions; for the moment Team Build does not allow you to filter the build definitions that are shown: In my example I’ve created a second build definition that doesn’t run tests, so you can select this one when you need to check in code without running tests…
Techdays 2010 26 January 2010 Peter-Himschoot .NET Development, VS2010, WCF, WF 4 I’m happy to say I’ll be speaking at TechDays 2010 in Belgium and DevDays 2010 in the Netherlands. In Belgium I’ll be doing one session on “What is new in WCF 4” and one session on “Workflow Foundation 4”. In the Netherlands I’ll do one session on “What is new in WCF 4” and one on “Developing for Windows 7 with the Windows API code pack”.
Planning, Running and measuring tests with Visual Studio 2010 Ultimate and Test and Lab Manager 24 January 2010 Peter-Himschoot VS2010, Team System, .NET Development

So what does Visual Studio 2010 bring for testers? A whole lot! Especially the new test environment, where you can create a test plan to validate the quality of the software you’re building. A test plan is a collection of test cases, which you can then run. While running, the system keeps track of a whole lot of things, including code coverage and IntelliTrace information. And finally this application allows you to examine the combined results of all tests, to see how your development effort is doing. So again: a test plan is a collection of test suites, used to test a certain iteration in your project. A test suite is a collection of test cases to test a certain part of your project, and a test case is a test for a project feature. A test case is basically a UI test running in a specific environment. To model this, Test and Lab Manager also allows you to define a test configuration, which is a certain environment for your code, for example Windows 7 with IE8, or Windows XP SP2 with IE7.

Running Test and Lab Manager

The first time you run this application it will ask for your Team Foundation Server (TFS) and, after that, for the project collection and team project: OK, next it will ask you for a test plan; since there are none, you will have to create a new one: Then click “Select plan >”. This opens the test plan. If you want, you can change some of the properties of this test plan by clicking on the Properties tab: Here you can change the iteration, for example, or the test settings. With test settings you can change how your tests will be executed. For example, in the Local Test Run settings you can change the diagnostic data adapters, which record data from your test run: for example an action recording, event log entries, etc… You can also use them to emulate certain environments, for example running low on memory. You can even create your own data adapter. The first page allows you to change the name, and to choose between running a manual or automated test. On the second page you can define the roles. You define a role for each tier used to run tests and collect data. With the local settings you can only have one role, your own physical machine. If you want to use roles, you will also have to define environments. Here you can define what kind of data you want to gather. For example, the Action Recording will record each action taken during the test, making it easy for another person (typically a developer) to understand what happened while running the test. The Action Log is a text version of the recording. IntelliTrace will allow a developer to load the IntelliTrace log and step back through the code, to see how the error came to be. Go back to the Contents tab. Now we’re ready to test some of our user stories (from now on I will be using user stories, but this could just as easily be called a requirement or anything else that denotes a certain needed functionality of your software system). Test cases are grouped into test suites. Per test plan you have a default test suite, but you can also create a test suite for a specific user story, copy a test suite from another test plan, create a nested suite, or create one from a query: If you select “Add requirement to plan” you will need to select a user story. This user story is now associated with this test suite.
This will allow reporting to figure out which tests have run for which user story. Now we’re ready to add a test case, so click the New button. The New Test Case dialog should show: Add the four steps (please note that we’re doing this for the sake of the demo; normally your requirement should not be to crash :) ). Save it. Click on the Tested User Stories tab: the user story should be there. Save and close. A couple of other things you can do now are to assign it to a tester and/or change the configurations: Let’s run the test. First open the Test tab: From here you can see all test cases for a test suite: Select the test case and click on the Run button. The Test Runner should open: Check the “Create action recording” box and hit Start Test. Do as the test case says, and try to use a repeatable way to start the application (for example using the Start menu). The first and second steps should not be a problem, so click the drop-down boxes to the right of each step to indicate success: Now make the third step fail. You’ll need to enter a comment explaining why it failed. Then click the End Test hyperlink button. You’ll see the Test Runner gather some information. At the top of the Test Runner window you’ll see a toolbar. Click on the Create Bug button: You’ll see that most of the fields are automatically filled in. You might want to change the severity, for example, and maybe assign it to a certain tester, but that is all… Click Save. Check out the information included in the System Info and All Links tabs. Close it. Remember me asking you to start the application in some repeatable way? Well, try and click the play button. Your test should replay again. This makes it easier to re-test later. Coming back to Test and Lab Manager: the test case failed, so the UI updates to show this: If you have more than one test, the bar will be colored depending on how many succeeded, failed or are still in progress. Later you can come back and verify whether a bug still exists by using the Verify Bugs tab: Go back to the Plan tab, then open the properties. Now you should see an overview: Back to Visual Studio, so close Test and Lab Manager. Go to the Team Explorer and open the My Bugs query. You should see the bug we created before. Open it: Go to All Links, and open the IntelliTrace log (.tdlog). With it you can now replay the bug, just like in my previous post on IntelliTrace. If this doesn’t work, go back to the test settings. You might also like the Video feature, so you can replay the user’s actions!