Configuring your WCF and WF 4 services using AppFabric
19 March 2010 Peter-Himschoot VS2010, WCF, WF 4, .NET Development, AppFabric

Configuring your services

Normally I configure my services using Visual Studio (typing in the configuration as XML) or using the WCF Service Configuration tool. AppFabric also allows you to configure your services, directly from IIS (making it a nice integrated experience!). The difference is that AppFabric exposes the things an IT pro needs to check the health of an application. Developers are more interested in making it work; production is more interested in keeping it working… <grin>

You can set up a persistence and a tracking store (actually databases) to make your services trackable and durable. This will also make it easier (or less hard) to see why your service is no longer functioning the way it should. Of course you can still set up System.Diagnostics tracing, but that again is more for developers.

Some preparations are needed

AppFabric uses the net.pipe protocol to manage your services (through standard endpoints), so you might need to enable this in IIS. Select your site, then select Edit Bindings… The following window should open: If net.pipe is not listed, hit the Add… button and select net.pipe. Use * for Binding information. Then go to your service and select Advanced Settings… The dialog should open: Add net.pipe to the list of Enabled Protocols.

You might also need to run the AppFabric system-level configuration. To do this, go to Start->All Programs->Windows Server AppFabric->Configure AppFabric. The configuration utility should launch: Hit Next: Here you can configure the monitoring and persistence databases. Check the Set Monitoring configuration checkbox, then select the account you want to use and the provider (there is one default, which will store everything in a SQL Server database). Hit the Configure… button. Configure as follows (replacing Peter-PC with your domain/machine name): Hit OK. Check the results. You might get a ‘the database already exists’ kind of warning. Simply continue… Continue through the wizard…

Configuring a WCF service

In IIS, select your site or service (most of this can also be done at other levels), and then in the actions pane select “Manage WCF and WF Services->Configure…” This opens the “Configure WCF and WF for Site” dialog: Check “Enable metadata over HTTP” to set the ServiceMetadata behavior. Over to the Monitoring tab: keep the checkbox checked if you want monitoring records written to the monitoring database. Using the level you can change from monitoring everything to nothing… You can also configure the usual WCF tracing and message logging here. The Throttling tab allows you to limit the number of requests and service instances, while the Security tab allows you to set or change the service certificate. A later post will be about configuring WCF and Workflow services…
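If you prefer scripting the net.pipe setup over clicking through the IIS dialogs, the same two steps can be done with appcmd. A hedged sketch; “Default Web Site” and “MyService” are placeholders for your own site and application names:

    rem Add a net.pipe binding to the site
    %windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.pipe',bindingInformation='*']

    rem Enable the net.pipe protocol on the application
    %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyService" /enabledProtocols:http,net.pipe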
Windows Server AppFabric Beta 2: Deploying services
14 March 2010 Peter-Himschoot AppFabric, VS2010, WCF, WF 4, .NET Development

Microsoft released Visual Studio 2010 RC a while ago, but unfortunately this broke Windows Server AppFabric Beta 1. Luckily, on March 1 MS released Beta 2, which works with VS 2010 RC. I’ve installed it and will now try to show you a couple of things.

So what is AppFabric?

To be honest, there is another AppFabric, the one for Azure, and that is not the one I am talking about. So what is Windows Server AppFabric? AppFabric makes installing, administering, monitoring and fixing problems in WCF 4 services (yes, only starting at .NET 4!) a lot easier by extending IIS and WAS (Windows Process Activation Service, which is used to host non-HTTP WCF services in IIS). It also adds a distributed caching mechanism (also known as Project Velocity) to make it easier to scale ASP.NET and WCF services.

If you’re familiar with BizTalk 2006/2009, you’ll know that the BizTalk Administration application shows you each BizTalk application’s health, what went wrong, how many were executed, etc… AppFabric gives you the same, but now for WCF and WF 4 services. Look at this screen shot: As you can see, after installing AppFabric IIS now shows these new icons: AppFabric Dashboard, Endpoints and Services. The dashboard shows you the current status of your services (running, stopped, with errors, etc…). Endpoints and Services allow you to list and configure the endpoints and services.

Deploying using AppFabric

AppFabric Hosting Services provides easier deployment of services. The first thing you need to do is package your WCF service. To do this, go to the project properties and select the new Package/Publish tab: You can now select where to create the package (and whether you want it as a .zip file) and how to deploy it in IIS. Now you need to create the deployment package in Visual Studio 2010: Now we can import the application using IIS: This will open the Import Application Package dialog. Use the browse button to open the package .zip file: Hit Next: And Next again: Note that the service name is taken from the package properties. Hit Next again and hopefully your services will deploy successfully. To verify this, you can go to your site in IIS: and click on Services. You should see your service listed (for example, I have three services running here): You can also click on Endpoints to see the list of endpoints: Because of WCF default endpoints you get 4 different endpoints per service; you can modify which types of default endpoints you want, but that is not what I’ll be showing you here.

A note on using your own application pool with AppFabric

During my experiments I created a new AppPool for my services. When testing my service, I would always get the following error:

HTTP Error 503. The service is unavailable.

First I thought the solution would be easy: my application pool was stopped, so starting it should fix the problem. But it didn’t. So I investigated a little further. It seems that my new application pool tried to use .NET 4 version 21006 (Beta 2?). I could see this in the Event Viewer:

The worker process failed to pre-load .Net Runtime version v4.0.21006.

I think something didn’t (un)install during my migration to .NET 4 RC. So I’m now using the ASP.NET 4 application pool…
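As an aside: the package Visual Studio produces is a regular Web Deploy package, so you can also deploy it from the command line instead of through the IIS import dialog. A hedged sketch, assuming Web Deploy (msdeploy.exe) is installed and the package is called MyService.zip; run with -whatif first to preview what would change:

    msdeploy.exe -verb:sync -source:package="C:\packages\MyService.zip" -dest:auto -whatif
    msdeploy.exe -verb:sync -source:package="C:\packages\MyService.zip" -dest:auto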
When being lazy is (finally) good
28 February 2010 Peter-Himschoot VS2010, .NET Development

In this blog post I want to talk about .NET 4’s new Lazy<T> class. First of all, why would you need something called Lazy? You can use it for data access, for example, when you load a row from a database parent table: should you load the child rows automatically, or delay until they’re required? Some systems will delay-load automatically, others load all they can (but what then when the child rows have other relations to grandchild rows, etc…). This kind of delayed loading of data is just what Lazy<T> (or Lazy(Of T) when using VB.NET) supports. It’s a great type to use when you have an object which is very expensive to create, and you only want to create it on first use.

Let’s start with an example; let’s say you have this big-ass class:

class BigAndExpensive
{
  string s = "";

  public string GetTheData()
  {
    return s;
  }

  public BigAndExpensive()
  {
    Console.WriteLine("BigAndExpensive is being created...");
    for (int i = 0; i < 10000; i++)
      s = s + ".";
    Console.WriteLine("BigAndExpensive is finally created...");
  }
}

As you can see, creating one is very expensive: the string concatenation allocates thousands of ever-larger temporary strings, triggering a lot of garbage collects. Let’s create an instance of this class without, then with Lazy<T> and look at the performance:

BigAndExpensive be;
Lazy<BigAndExpensive> lbe;

using (new MeasureDuration("Not using Lazy evaluation"))
{
  be = new BigAndExpensive();
}
using (new MeasureDuration("Accessing non-lazy object's method"))
{
  string s = be.GetTheData();
}
using (new MeasureDuration("Using Lazy evaluation"))
{
  lbe = new Lazy<BigAndExpensive>(false);
}
using (new MeasureDuration("Accessing lazy object's method"))
{
  string s = lbe.Value.GetTheData();
}
using (new MeasureDuration("Again accessing lazy object's method"))
{
  string s = lbe.Value.GetTheData();
}

In order to use the Lazy<T> object you have to get its Value property. When the lazily loaded value hasn’t been created yet, accessing Value will create it. The MeasureDuration class is a little timer taking advantage of the using statement:

class MeasureDuration : IDisposable
{
  Stopwatch sw;
  string what;

  public MeasureDuration(string what)
  {
    this.what = what;
    sw = new Stopwatch();
    sw.Start();
  }

  public void Dispose()
  {
    sw.Stop();
    Console.WriteLine("Measured duration of -{0}- took {1} ticks ({2} ms)"
      , what, sw.ElapsedTicks, sw.ElapsedMilliseconds);
  }
}

The output I get on my machine looks like this: As you can see, creating a Lazy object is very fast, but of course, as you would expect, using it the first time is just as expensive as before, because that is when the instance actually gets created. Using it the second time is again very fast.

Now go back to the code and look for the Lazy<T> constructor. Change the false argument to true:

lbe = new Lazy<BigAndExpensive>(true);

This makes the instantiation of the actual instance thread-safe. That means it will be a little slower, but only during construction. Is it worth the price? If you’re using multiple threads: YES YES YES!
Now let’s try to see what happens when many threads access an unprotected Lazy object (never be lazy AND unprotected :)). This is the code:

private static void UsingLazyObjectsFromMultipleThreads()
{
  Lazy<BigAndExpensive> createMeOncePlease = new Lazy<BigAndExpensive>(isThreadSafe: false);

  ManualResetEvent youMayBegin = new ManualResetEvent(false);
  AutoResetEvent done = new AutoResetEvent(false);

  // create a lot of threads that will use our object all at once
  for (int i = 0; i < 20; i++)
  {
    Thread t = new Thread(() =>
    {
      youMayBegin.WaitOne();
      Console.WriteLine("Thread {0} getting data", Thread.CurrentThread.ManagedThreadId);
      using (new MeasureDuration("Multithreading"))
        createMeOncePlease.Value.GetTheData();
      done.Set();
    });
    t.Start();
  }
  youMayBegin.Set();
  // wait for all threads to complete
  for (int i = 0; i < 20; i++)
    done.WaitOne();
}

I’ve used the named argument feature of C# 4.0 here; in this case it makes the code a lot clearer, doesn’t it? So what does the code do? It creates 20 threads which all first wait for the “youMayBegin” event, so that all threads start running at the same time. Then they each access the “createMeOncePlease” lazy instance, so some of them will start to create the instance (because it hasn’t been created yet). Finally each thread signals that it is done, so the main thread can stop too.

So let’s run the code (making sure isThreadSafe is set to false). I get this: This is bad. Very bad. Instead of calling the constructor of my very expensive object once, it calls it several times. Why? Think about a possible thread-unsafe implementation of Lazy:

class Lazy<T> where T : class, new()
{
  T instance = null;

  public T Value
  {
    get
    {
      if (instance == null)
        instance = new T();
      return instance;
    }
  }
}

When you run the if statement on multiple threads, each will evaluate it to true, then each will create an object and overwrite instance’s value. So what is the solution? Simply pass true for the isThreadSafe argument. Running the code once more looks like this on my machine: Good. My expensive object only gets created once. But why are the calls still so expensive? Because when we access Value, only one thread is allowed to create the instance, while the other Value calls have to wait for the first one to complete. If you insert another round of calls to Value you’ll see they are very fast. If you only need initialization to be thread-safe, or only access to the object, you can also use the constructor taking a LazyThreadSafetyMode enumeration:

None = 0,
PublicationOnly = 1,
ExecutionAndPublication = 2

What if your expensive class requires special construction, like a specific constructor? Then you can use yet another constructor of Lazy<T>, one that takes a delegate (Func<T>), so you can create your object your way:

Lazy<BigAndExpensive> createMeOncePlease =
  new Lazy<BigAndExpensive>(() => new BigAndExpensive());
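The factory delegate and the thread-safety mode can also be combined. A minimal sketch (note that the enum values map onto the bool overload used above: true is ExecutionAndPublication, false is None):

using System;
using System.Threading;

class LazyModesDemo
{
  static void Main()
  {
    // ExecutionAndPublication: the factory runs exactly once,
    // other threads block until the instance is published.
    var expensive = new Lazy<BigAndExpensive>(
      () => new BigAndExpensive(),
      LazyThreadSafetyMode.ExecutionAndPublication);

    // PublicationOnly would instead let several threads race the factory,
    // but only the first instance to finish is published to all of them.
    Console.WriteLine(expensive.IsValueCreated);            // False: nothing created yet
    Console.WriteLine(expensive.Value.GetTheData().Length); // creates the instance here
  }
}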
Gated Check In with Multiple Build Definitions
05 February 2010 Peter-Himschoot Team System, VS2010, .NET Development

As promised in my previous blog on Gated Check In, in this blog I’ll discuss using multiple builds with gated check in. So what happens when you check in (using gated check in) and there are multiple build definitions targeting the solution? Well, Gated Check In will then allow you to choose between the different build definitions; for the moment Team Build does not allow you to filter the build definitions that are shown: In my example I’ve created a second build definition that doesn’t run tests, so you can select this one when you need to check in code without running tests…
Techdays 2010
26 January 2010 Peter-Himschoot .NET Development, VS2010, WCF, WF 4

I’m happy to say I’ll be speaking at TechDays 2010 in Belgium and DevDays 2010 in the Netherlands. In Belgium I’ll be doing one session on “What is new in WCF 4” and one session on “Workflow Foundation 4”. In the Netherlands I’ll do one session on “What is new in WCF 4” and one on “Developing for Windows 7 with the Windows API code pack”.
Planning, Running and Measuring Tests with Visual Studio 2010 Ultimate and Test and Lab Manager
24 January 2010 Peter-Himschoot VS2010, Team System, .NET Development

So what does Visual Studio 2010 bring for testers? A whole lot! Especially the new test environment, where you can create a test plan to validate the quality of the software you’re building. A test plan is a collection of test cases, which you can then run. While the tests run, the system keeps track of a whole lot of things, including code coverage and IntelliTrace information. And finally the application allows you to examine the combined results of all tests, to see how your development effort is doing.

So again: a test plan is a collection of test suites, used to test a certain iteration of your project. A test suite is a collection of test cases testing a certain part of your project, and a test case is a test for a project feature. A test case is basically a UI test running in a specific environment. To model this, Test and Lab Manager also allows you to define a test configuration, which is a certain environment for your code, for example Windows 7 with IE8, or Windows XP SP2 with IE7.

Running Test and Lab Manager

The first time you run this application it will ask for your Team Foundation Server (TFS) and, after that, for the Project Collection and Team Project: Next it will ask you for a test plan; since there are none yet, you will have to create a new one: Then click “Select plan >”. This opens the test plan. If you want, you can change some of the properties of this test plan by clicking on the Properties tab: Here you can change the iteration, for example, or the test settings.

With test settings you can change how your tests will be executed. For example, in the Local Test Run settings you can change the diagnostic data adapters, which record data from your test run: an action recording, event log entries, etc… You can also use them to emulate certain environments, for example running low on memory. You can even create your own data adapter. The first page allows you to change the name, and choose between running a manual or automated test. On the second page you can define the roles: you define a role for each tier used to run tests and collect data. With the local settings you can only have one role, your own physical machine; if you want to use more roles, you will also have to define environments. Next you can define what kind of data you want to gather. For example, the Action Recording records each action taken during the test, making it easy for another person (typically a developer) to understand what happened while the test ran. The Action Log is a text version of that recording. IntelliTrace allows a developer to load the IntelliTrace log and step back through the code, to see how the error came to be.

Go back to the Contents tab. Now we’re ready to test some of our user stories (from now on I’ll be using User Stories, but this could just as easily be called a requirement or anything else that denotes a certain needed functionality of your software system). Test cases are grouped into test suites. Per test plan you have a default test suite, but you can also create a test suite for a specific user story, copy a test suite from another test plan, create a nested suite, or create one from a query: If you select “Add requirement to plan” you will need to select a user story. This user story is now associated with this test suite.
This allows reporting to figure out which tests have run for which user story. Now we’re ready to add a test case, so click the New button. The New Test Case dialog should show: Add the four steps (please note that we’re doing this for the sake of the demo; normally your requirement should not be to crash :) ). Save it. Click on the Tested User Stories tab: the user story should be there. Save and close. A couple of other things you can do now are assigning it to a tester and/or changing the configurations:

Let’s run the test. First open the Test tab: From here you can see all test cases for a test suite: Select the test case and click the Run button. The Test Runner should open: Check the Create action recording checkbox and hit Start Test. Do as the test case says, and try to use a repeatable way to start the application (for example using the Start menu). The first and second step should not be a problem, so click the drop-down boxes to the right of each step to indicate success: Now make the third step fail. You’ll need to enter a comment explaining why it failed. Then click the End Test hyperlink. You’ll see Test Center gather some information.

At the top of the Test Runner window you’ll see a toolbar. Click on the Create Bug button: You’ll see that most of the fields are filled in automatically. You might want to change the Severity, for example, and maybe assign it to a certain tester, but that is all… Click Save. Check out the information included in the System Info and All Links tabs. Close it.

Remember me asking you to start the application in some repeatable way? Well, try clicking the play button. Your test should replay again. This makes it easier to re-test later.

Coming back to Test and Lab Manager: the test case failed, so the UI updates to show this: If you have more than one test, the bar is colored depending on how many succeeded, failed or are still in progress. Later you can come back and verify whether a bug still exists by using the Verify Bugs tab: Go back to the Plan tab, then open the properties. Now you should see an overview:

Back to Visual Studio, so close Test and Lab Manager. Go to the Team Explorer and open the My Bugs query. You should see the bug we created before. Open it: Go to All Links, and open the IntelliTrace log (.tdlog). With it you can now replay the bug, just like in my previous post on IntelliTrace. If this doesn’t work, go back to the test settings. You might also like the Video feature, so you can replay the user’s actions!
WCF and large messages
23 January 2010 Peter-Himschoot WCF, VS2010, .NET Development, Entity Framework

This week I’ve been training a couple of people on how to use .NET, WCF 4, Entity Framework 4 and other technologies to build an enterprise application. One of the things we did was return all rows from a table, and this table contains about 2500 rows. We’re using the Entity Framework 4 self-tracking entities, which also serialize all changes made to the objects. And I kept getting this error:

The underlying connection was closed: The connection was closed unexpectedly.

Server stack trace:
at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)

At first I thought it had something to do with the maximum message size, another kind of error you get when sending large messages:

The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

Server stack trace:
at System.ServiceModel.Channels.HttpInput.ThrowMaxReceivedMessageSizeExceeded()
at System.ServiceModel.Channels.HttpInput.GetMessageBuffer()
at System.ServiceModel.Channels.HttpInput.ReadBufferedMessage(Stream inputStream)

That one is easy to fix (although a little confusing, because you have to configure the binding on the receiving side of the message, which is most of the time the client). But doing this didn’t help. So what was it? I knew the size of the message couldn’t be the problem, because I’d sent way bigger messages before. Maybe there was something in the contents that made the DataContractSerializer crash? Checking this is easy: I wrote a little code that makes the serializer write everything to a stream and watched what happened. Works fine. Hmmm. What could it be?

So I went over the list of properties of the DataContractSerializer. It has a MaxItemsInObjectGraph property. Maybe that was it, but how do I change this number? Looking at the behaviors I found it. When you send a large number of objects you have to increase this number, which is easy. On the server side you use the dataContractSerializer service behavior and set its value to a large enough number; on the client side you use the dataContractSerializer endpoint behavior. That fixed my problem.
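The original screenshots of the configuration are gone, so here is a hedged reconstruction of both settings in config; the binding and behavior names are placeholders, and you still have to wire them to your endpoints via bindingConfiguration/behaviorConfiguration:

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- receiving side (usually the client) -->
      <binding name="LargeMessages" maxReceivedMessageSize="10485760" />
    </basicHttpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="LargeGraph">
        <!-- the default is 65536 items per object graph -->
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
    <endpointBehaviors>
      <behavior name="LargeGraphClient">
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>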
Using the Visual Studio 2010 Historical Debugger to save and reproduce bugs
22 January 2010 Peter-Himschoot .NET Development, VS2010

Visual Studio 2010 Ultimate now has IntelliTrace, a historical debugging feature. IntelliTrace keeps track of everything your code has been doing and then, when a bug shows up, you can back-track to the possible cause of the bug. You might think this is nothing new, but don’t forget it also tracks the value of every variable at each moment in time. With the usual debugger you see the latest value, not the value at that moment. You can enable it (it is enabled by default) by going into Options->IntelliTrace->General.

If you want to try this, build a simple Windows Forms application with a Button and a CheckBox. Then add code as follows:

Public Class Form1

    Private message As String
    Private flag As Boolean

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        flag = FlagCheckBox.Checked
        Doh()
    End Sub

    Private Sub Doh()
        If flag = True Then
            message = "This should work"
        Else
            message = Nothing
        End If
        flag = True
        ShowMessage(message)
    End Sub

    Private Sub ShowMessage(ByVal message As String)

        ' Throws an exception if message is Nothing
        Dim len As Integer = message.Length

    End Sub
End Class

Run it with the checkbox unchecked. This should cause the application to stop in the debugger: Your code should look like this: See those special arrows in the gutter (the gutter is the section where you click to add/remove breakpoints)? These allow you to backtrack. You should also have an IntelliTrace window looking like this (two views are available): With these you can jump directly to one of your code’s stack frames.

Let’s try stepping back to see what happened. Press the back arrow, or better yet, use the keyboard shortcut (on my machine that is Ctrl+Shift+F11): You should now be able to step back in time; for example, look at how my IDE looks when stopped at the ShowMessage method: IntelliTrace keeps track of the values of the arguments and such at each point in time. For example, a little higher in the code I changed the flag variable to true, while the bug is caused by it having been false before. IntelliTrace will show me this.

Private Sub Doh()
    If flag = True Then
        message = "This should work"
    Else
        message = Nothing
    End If
    flag = True
    ShowMessage(message)
End Sub

IntelliTrace saves all of this information in a file (with the extension .iTrace, notice the naming convention <GRIN>), and because you can open this file afterwards, all kinds of scenarios can now be implemented. You can send the file to a colleague so he/she can help you debug the problem. So where is this file? Go to Tools->Options->IntelliTrace->Advanced and copy the directory for later: Open this directory in the file explorer and copy the file to wherever you like. You will have to stop debugging first, otherwise Visual Studio keeps a lock on the file. Don’t close Visual Studio though, because VS throws away the log file when it quits.

When you open the file again, Visual Studio shows the log’s summary: Open the Threads list and look for the thread you want to see (probably the main thread): Double-click it and, with a little patience, the debugger will open (make sure Visual Studio can find the sources). Or better even: Visual Studio Test and Lab Manager 2010 can do this for you.
So even if a non-technical user testing some application finds a bug, the IntelliTrace file will be attached automatically!
Building an Enterprise Application with Entity Framework 4
20 January 2010 Peter-Himschoot .NET Development, VS2010, Entity Framework

Entity Framework 3.5 was a bit of a disappointment when it came to supporting enterprise applications. For me the major reason was the fact that entities used by EF had to derive from a class which is part of EF, thus baking the EF dependency into your Business Logic Layer (BLL) and presentation layer. EF 4 is still under development, but it already compensates for a lot of this with its support for POCOs (Plain Old CLR Objects) and self-tracking objects.

POCO vs. Self-Tracking

A self-tracking object is an object that carries state in which you can check what has happened to the object (unmodified, new, modified, deleted). The advantage of this kind of object is simplicity for the user, because the object does all the tracking. Of course this means more work building the object itself, but that can be solved using T4 templates. A POCO is really nothing more than a data carrier object, without any tracking support. Simplicity means maximum portability, especially if you use DataContracts. For the rest of this post I’ll be using self-tracking objects, generated through EF 4. I’ll also be using the EF Feature CTP 2.

Using EF 4 to generate the self-tracking objects

Start by creating a new WinForms project (or WPF, Silverlight, whatever). Add another library project for your data access layer (DAL) and another one for your entities: Normally I would also add a business logic layer (BLL), but for simplicity I’ll leave it out for now. Now add a new Entity Data Model to your DAL project. Select the Northwind database, then select the Categories and Products tables: This way you end up with this model: Please note that my tables/entities each have an extra column, the Version column. This is a timestamp column used to detect concurrent updates. To tell EF to use this column for concurrency checking, set its Concurrency Mode property to Fixed. This is typically the best way to handle concurrent updates.

Right-click your entity model’s background, then select the Add Code Generation Item… menu choice: This alternative code generation will add two T4 templates to the DAL project (using the .tt extension). ProductsModel.Context.tt is an EF-dependent template, so leave it in the DAL project, but ProductsModel.Types.tt contains the EF-independent types which actually are the self-tracking entities. Move this template to the Entities project: Watch out: your project will not build until you set the following references (diagram made with Visual Studio’s new UML Component Diagram): This diagram also includes the BLL layer; our solution doesn’t, but if you want, feel free! If you’re using VB.NET, you should also add the Products.Entities namespace to the list of imports of the DAL project: Now you’re ready to implement the DAL layer, so add a ProductsDAL class as follows:

Public Class ProductsDAL

    Public Function GetListOfCategories() As List(Of Category)
        Using db As New NorthwindEntities
            Return db.Categories.ToList()
        End Using
    End Function

    Public Sub UpdateCategory(ByVal cat As Category)
        Using db As New NorthwindEntities
            db.Categories.ApplyChanges(cat)
            db.SaveChanges()
        End Using
    End Sub

End Class

Now let’s add some controls and data-bind them with Windows Forms. For this I use the Data Sources window. Open it and add another data source.
Select an object data source: Then select your Category or Product entity, which you should find in the Products.Entities project: Your data source window should now display your entities: Right-click Category and select Customize… from the drop-down list. Now you can select ListBox as the control to use. Drag the Category entity to your form to create a listbox and a binding source: Add two buttons, a Load and a Save button. Implement the Load button to retrieve the list of categories from the DAL:

Dim dal As New ProductsDAL
CategoryBindingSource.DataSource = dal.GetListOfCategories()

And implement the Save button as follows:

Dim dal As New ProductsDAL
Dim cat As Category = TryCast(CategoryBindingSource.Current, Category)
dal.UpdateCategory(cat)

Run the solution and click Load. The listbox should fill with categories and the window should look like this (you might first have to copy the connection string from the DAL project’s .config to the form’s .config): Note the “Change Tracking Enabled” checkbox. Check it if you want to update an object; this enables the self-tracking. Make a change, then click Save. This should update the database.

Open two instances of this application. Load in both, then change the same record in both (with tracking enabled). Save both. The second save should fail because of the concurrency checking. Done!
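That failing save surfaces as an OptimisticConcurrencyException, because the Version timestamp sent with the update no longer matches the row in the database. A minimal sketch (in C#, where the walkthrough uses VB.NET) of how UpdateCategory could detect this; the handling strategy is an assumption, not part of the original walkthrough:

using System.Data; // home of OptimisticConcurrencyException

public class ProductsDal
{
    public void UpdateCategory(Category cat)
    {
        using (var db = new NorthwindEntities())
        {
            db.Categories.ApplyChanges(cat); // self-tracking entities extension method
            try
            {
                db.SaveChanges();
            }
            catch (OptimisticConcurrencyException)
            {
                // Someone else changed this row after we loaded it:
                // reload the entity and let the user merge or retry.
                throw;
            }
        }
    }
}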
Using the Visual Studio 2010 layer diagram to verify your solution
17 January 2010 Peter-Himschoot .NET Development, Team System

Visual Studio 2010 adds a whole new series of modeling tools to the product, including UML modeling. In this post I want to discuss the layer diagram and its relation to building enterprise applications.

The Layer Diagram

Visual Studio 2010 now comes with new modeling tools, which you can reach through the Architecture menu of Visual Studio 2010 (I’m using the Ultimate edition): This opens the Add New Diagram dialog: Select Layer Diagram. If you don’t yet have a modeling project in your solution, another dialog opens so you can name your new modeling project. You should now see an empty layer diagram, so let’s add some layers. Open the toolbox: Double-click the Layer item in the toolbox (this locks the toolbox into adding layers; if you single-click it you need to go back to the toolbox for each layer) and click the layer diagram three times to add three layers, stacked vertically. Name the top one Presentation, the middle one BLL (Business Logic Layer) and the bottom one DAL (Data Access Layer).

So now we have three layers; let’s add a couple of dependencies. I want the presentation layer to use the BLL, but not the reverse, so let’s model this by adding a dependency from the Presentation layer to the BLL. Let’s also do the same from the BLL to the DAL:

The Enterprise Application Skeleton

Having a layer diagram is nice, but how can it help you with your development? The layer diagram can verify that your solution and projects comply with it. As an example I want to use the Enterprise Skeleton solution I use for enterprise development. In its simple version there are four projects (excluding the modeling project): one for the presentation layer, one for the BLL and one for the DAL. The fourth project contains the data carrier objects that travel between layers; the DAL is responsible for retrieving and storing these objects, the BLL checks these objects against the business rules, and the presentation layer displays them. Let’s use this solution with the layer diagram.

Adding layer verification to your solution

Open Visual Studio with this kind of solution and with your layer diagram (part of a modeling project in your solution). Now drag and drop each project onto its appropriate layer. Drop Products.Common onto the DAL layer (this is a mistake, which we’ll soon discover with the verification feature). Each time you do this a counter appears on the layer, counting the number of projects mapped to it… However, you cannot see here which project is mapped to which layer. Back to the Architecture menu: open “Windows –> Layer Explorer”, which displays the mappings: If you select a specific layer in the layer diagram, this window shows only the projects mapped to it. You can do the same by right-clicking a layer and selecting View Links.

Validating the solution

Right-click the layer diagram and select Validate Architecture: Visual Studio starts to analyze your code and may return a bunch of validation errors: Open the Error List window to see these errors: From here you can do several things: create a work item to correct the error, suppress the error, or investigate it: The screenshot illustrates the last action. With it you can go to the method or type involved in the validation error.
The problem we are facing here is that I put the Products.Common project into the DAL layer. The presentation layer also needs the types in this project, but we don’t allow the presentation layer to reference the DAL. We could add this dependency (please don’t), or we can fix the problem.

Fixing the problem

Add another layer to the layer diagram and call it Common. This layer will contain all types common to the other layers, so add dependencies from the other layers to the Common layer. Finally, drag the Products.Common project onto the Common layer. One more thing: we need to remove the Products.Common project from the DAL layer. Select that layer; in the Layer Explorer you should see two projects. Remove the common project. Now validate again. You should have 0 errors.

Automating Validation

This validation is a powerful tool to prevent layering mistakes from creeping into your solution. So can we run it automatically whenever we build? Yes we can! One way is to set the Validate Architecture property of the modeling project to true. Then the layer diagram is verified each time you build the solution. This can take some time for each build, so it is better to do this using Team Build. There you can enable validation using the /p:ValidateArchitecture=true option. To do this, open your build definition and set the “MSBuild Arguments” property like this (see the command-line sketch at the end of this post):

Generating dependencies from your solution

Another thing you can do is generate the layer dependencies from your solution. Right-click the layer diagram after adding your layers (and mapping your projects to them), then select the Generate Dependencies menu item.

What else is there?

What else can you do with the layer diagram? Look at the previous screen shot. If you’re connected to VSTS you can link and create work items.
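For completeness, the same validation can be triggered from a plain MSBuild command line, which is essentially what the Team Build “MSBuild Arguments” setting boils down to. A hedged sketch, with the solution name as a placeholder:

msbuild MySolution.sln /p:ValidateArchitecture=true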