Getting Started Developing on Azure: Creating the Hosted Service and Storage Account
20 June 2011 Peter-Himschoot .NET Development, Azure, VS2010

This is the next post in the series on Getting Started developing on Azure with Visual Studio 2010, following Installing the Azure Management Certificates. Starting to develop on Azure with Visual Studio can be a lot to digest the first time. Getting Visual Studio ready, including installing the management certificates and so on, is not a simple task (the first time, anyway), which is why I made this little walk-through series on starting to develop on Azure. In the first part you build and run your first project on the Azure Compute Emulator, the local test version of the cloud. In the second part you get Visual Studio ready to publish your solution directly to the cloud by installing the management certificates, and in this part you create the hosted service and storage account needed to actually deploy from Visual Studio.

1.3 Creating the Hosted Service

Go back to the Azure Management Portal (windows.azure.com) and click the Hosted Services tab. To deploy we also need a hosted service, so click the New Hosted Service button to create one. This opens the Create a New Hosted Service dialog. Enter a name for your service and a URL prefix. The prefix must be globally unique, so try adding your company name or something else unique to you. If the chosen name is already taken, you will be notified. Now we need to choose in which data center the service should be deployed. We can also do this using an affinity group. An affinity group is an easy way to keep everything together: services can communicate with one another more efficiently (and at lower cost) if they run in the same data center. So choose affinity group and select Create a new affinity group… from the drop-down list.
Enter a name and a data center; best take one near your expected customers. North Europe is the data center in Amsterdam, which is closest to where I live, so I took that one. Feel free to take another. Click OK to create the affinity group. Back in the Create a New Hosted Service dialog, we'll deploy using Visual Studio, so select Do not deploy before you click OK. Now click OK.

1.1.5 Creating the Storage Account

Finally, before we can deploy we also need to create a storage account. Visual Studio will upload the package to storage and then instruct Azure to deploy it from there. Go back to the Management Portal (windows.azure.com) and open the Storage Accounts tab. Click the New Storage Account button. This opens the Create a New Storage Account dialog. Enter a unique (lowercase only) URL and use the same affinity group you created in the previous step. Hit OK.

1.1.6 Ready to deploy the web site

Now we are ready to deploy! First we need to remove the local development trace listener from the web.config, because it is not available in the cloud, but leave the diagnostic monitor listener:

Code Snippet
<listeners>
  <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
       name="AzureDiagnostics">
    <filter type="" />
  </add>
</listeners>

Go back to Visual Studio, right-click your Azure project, and select Publish… Now select your hosted service and storage account, and hit OK (you might need to cancel and re-open this window to refresh it). Deployment should start, and after a little while you should see the Windows Azure Activity Log. Wait until this is complete; it may take several minutes or longer depending on bandwidth. Click the Web Site URL, and the site should open.

1.1.7 Using Server Explorer

Open Server Explorer. With the Azure SDK tools installed, you can look here at Windows Azure Compute and Windows Azure Storage.
Click the Windows Azure Compute tree item, then select Add Deployment Environment… Select your management certificate to list all your hosted services. Open the tree item and select Staging or Production. Click OK. From now on you can look at this environment from the Visual Studio Server Explorer. You can try the same for storage. Later we will look at how we can use this to debug applications with IntelliTrace. The end. My next post (next Wednesday) is going to be on remote debugging worker roles running in the cloud…
Installing the Azure Management Certificates
16 June 2011 Peter-Himschoot .NET Development, Azure, VS2010

This is the next post in the series on Getting Started developing on Azure with Visual Studio 2010. Starting to develop on Azure with Visual Studio can be a lot to digest the first time. Getting Visual Studio ready, including installing the management certificates and so on, is not a simple task (the first time, anyway), which is why I made this little walk-through series on starting to develop on Azure. In the first part you build and run your first project on the Azure Compute Emulator, the local test version of the cloud. In this part you get Visual Studio ready to publish your solution directly to the cloud by installing the management certificates, and in part 3 you create the hosted service and storage account to actually deploy from Visual Studio.

1.2 Deploying your solution to the cloud

Now we're ready to deploy to the actual cloud environment. Start by logging on to the Azure Management Portal at http://windows.azure.com. Open the Hosted Services tab and select Management Certificates. If you just started with Azure development you will not have any certificate here. To make Visual Studio integrate with the management portal (actually with the management APIs), we need to upload a certificate here so your Visual Studio can publish projects to the cloud. This is easily done from Visual Studio.

1.1.3 Installing the Management Certificate

Go back to Visual Studio with the cloud project open. Right-click the cloud project and select Publish… The Deploy Windows Azure project dialog should open. With this dialog you can publish your project. The first option only creates the package, after which you have to deploy it yourself using the Management Portal. The second option deploys it for you. In the first drop-down you need to select a management certificate, or create a new one. Select Add… from the Credentials drop-down.
The Windows Azure Project Management Authentication dialog opens. Select <Create…> from the first drop-down and enter a friendly name for your certificate; I call mine AzureManagment. Now click the View button. This allows us to export the certificate's public key using the Details tab: click the Copy to File button. This opens the Certificate Export wizard. Click Next. Choose "Do not export the private key". Click Next twice, then choose a filename for the certificate. Hit Next, then Finish. Your certificate should now be exported. Go back to the management portal (windows.azure.com), to the Management Certificates tab. Click the Add Certificate button and select the file you just exported. Click OK and wait for the import to complete. Your certificate should be added, and now you need to copy the subscription ID back to Visual Studio. It is right there in the properties window of the certificate. So copy it, go back to Visual Studio where the Windows Azure Project Management Authentication dialog is waiting, paste the subscription ID and give it a name. Click OK. In the next blog post we will add a hosted service and storage account and deploy the solution…
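Just to illustrate what the certificate and subscription ID are used for under the covers: Visual Studio calls the Windows Azure Service Management REST API with the certificate as a client credential. A minimal sketch of such a call, listing your hosted services, could look like the following. The subscription ID and thumbprint are placeholders you'd fill in yourself, and the x-ms-version value is one of the API versions available at the time of writing.

```csharp
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListHostedServices
{
    static void Main()
    {
        // Placeholders: use your own subscription ID and certificate thumbprint.
        string subscriptionId = "<your-subscription-id>";
        string thumbprint = "<your-certificate-thumbprint>";

        // Find the management certificate in the CurrentUser\My store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
        store.Close();

        // Call the Service Management API over HTTPS with the client certificate.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");
        request.ClientCertificates.Add(cert);
        request.Headers.Add("x-ms-version", "2011-02-25");

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response is an XML list of your hosted services.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```

You never need to write this yourself for the walk-through; it just shows why the portal needs your public key while Visual Studio keeps the private key.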
Getting started developing on Azure with Visual Studio 2010
14 June 2011 Peter-Himschoot Azure, VS2010, .NET Development

Starting to develop on Azure with Visual Studio can be a lot to digest the first time. Getting Visual Studio ready, including installing the management certificates and so on, is not a simple task (the first time, anyway), which is why I made this little walk-through series on starting to develop on Azure. In this part you build and run your first project on the Azure Compute Emulator, the local test version of the cloud. In part 2 you get Visual Studio ready to publish your solution directly to the cloud by installing the management certificates, and in part 3 you create the hosted service and storage account to actually deploy from Visual Studio.

1.1 Azure Lab – Getting started developing

In this walk-through you will learn how to develop on Azure with Visual Studio 2010. To follow along you need the latest Azure SDK with the Visual Studio tools installed (SDK 1.4 at the time of writing), and you should also have a valid Azure account.

1.1.1 Creating the Azure project

Start Visual Studio 2010 and create a new cloud project, calling it MyFirstCloudProject. Click OK. The New Windows Azure Project dialog should open. Click the ASP.NET Web Role and click the > button. Then rename the project by clicking the rename button and call it MyFirstWebRole. Click OK.
Add a button to the web form and implement its click event as follows:

Code Snippet
System.Diagnostics.Trace.WriteLine("Hello from Azure!");

To enable tracing to the Compute Emulator, add the following configuration to your web.config (the second listener, DevFabricListener, is the part you add):

Code Snippet
<listeners>
  <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
       name="AzureDiagnostics">
    <filter type="" />
  </add>
  <add type="Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener, Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
       name="DevFabricListener">
    <filter type="" />
  </add>
</listeners>

Open the Solution Explorer and make sure that the Azure project is the startup project. Run your solution by pressing F5. Visual Studio will package your code and upload it to the Compute Emulator. If it is not already running, the Compute Emulator (the little Azure icon in the system tray) will also be started. Right-click the Azure icon in the system tray and choose "Show Compute Emulator UI". After a while your browser will display the web site and the emulator will display the log output. This window shows you output as your web role gets started, and so on. You can add some more by clicking the button; this should write something to the emulator. Try placing a breakpoint on the button's event handler and click the button. Visual Studio should stop on the breakpoint. If your code were a little more complicated, this would allow you to debug it. In the next blog we will look at installing the management certificates to run this same code in the cloud, deploying it with Visual Studio 2010.
Creating and Using Custom Performance Counters in Windows Azure
20 May 2011 Peter-Himschoot Azure, .NET Development, VS2010

Building software, especially software running on servers, requires some way to look "inside" the running application. Using the debugger is one way, but you cannot use a debugger on production applications. A better way is to use performance counters. These give you a way to see things like how hard the CPU is working, but also how many orders have been processed by your system. The first kind of counter is provided by the system; the latter you can build yourself. Before Azure SDK 1.3 you couldn't create your own performance counters, because your code doesn't get write access to the region of the registry where custom performance counters are registered. But with elevated startup tasks this is easy. In this blog post I will show you how to create a startup task that creates custom performance counters, and how to use them in your role. Commence by creating a new Visual Studio cloud project and add a single worker role; we'll use this role to illustrate using a performance counter. Then add another project, this time a console project (call it InstallPerfCounters); we'll use this console application as the startup task.
Implement the InstallPerfCounters console project as follows:

Code Snippet
class Program
{
  static void Main(string[] args)
  {
    const string categoryName = "U2U";
    const string categoryHelp = "U2U demo counters";

    if (!PerformanceCounterCategory.Exists(categoryName))
    {
      var counters = new System.Diagnostics.CounterCreationDataCollection();
      counters.Add(new CounterCreationData
      {
        CounterName = "# secs/running",
        CounterHelp = "How long has this been running",
        CounterType = PerformanceCounterType.NumberOfItems32
      });
      var category = PerformanceCounterCategory.Create(categoryName, categoryHelp,
        PerformanceCounterCategoryType.MultiInstance, counters);
    }
  }
}

This is the same kind of code you would use anywhere else to create new performance counters. Now we need to install this as a startup task. Add a folder called startup to the worker role project. We need to add two files here: the executable of the console project we just made, and a command file. To copy the executable, let's first make sure we're using the release build at all times. Open the configuration manager and select Release as the configuration. Build your project, ensuring everything compiles nicely. Now right-click the startup folder of the worker role project and select Add Existing Item… Browse to the release folder of the console project, select the executable and choose Add As Link from the drop-down. This adds the executable to the startup folder. Select it, and set Copy to Output Directory to Copy Always in the Properties window. Now we are ready to add the command file. Don't use Visual Studio for this, because it will add a byte order mark, which is not supported by Azure. The easiest way is to right-click the startup folder and select "Open Folder in Windows Explorer". Then right-click in the folder's contents and add a new text document. Rename it to installcmd.cmd. Go back to Visual Studio.
In Solution Explorer select "Show All Files". The installcmd.cmd file should appear; right-click it and select "Include in Project". Edit it to the following contents:

Code Snippet
%~dp0InstallPerfCounters.exe /q /log %~dp0pc_install.htm
exit /b 0

Now open the ServiceDefinition.csdef file of your cloud project and add a startup task:

Code Snippet
<Startup>
  <Task commandLine="startup\installcmd.cmd" executionContext="elevated" taskType="simple" />
</Startup>

This takes care of installing the performance counter. Now let's use it in our worker role. First we need to create the performance counter instance, and then update it. In this simple example we'll make the counter increment once each second. So implement the worker role's Run method as follows:

Code Snippet
public override void Run()
{
  // This is a sample worker implementation. Replace with your logic.
  Trace.TraceInformation("UsingPerfCounters entry point called");
  const string categoryName = "U2U";
  PerformanceCounter secsRunning = new PerformanceCounter()
  {
    CategoryName = categoryName,
    CounterName = "# secs/running",
    MachineName = ".", /* current machine */
    InstanceName = Environment.MachineName,
    ReadOnly = false
  };
  var counterExists = PerformanceCounterCategory.Exists(categoryName);
  while (true)
  {
    Thread.Sleep(TimeSpan.FromSeconds(1));
    if (counterExists)
    {
      secsRunning.Increment();
    }
    Trace.WriteLine("Working", "Information");
  }
}

Publish this solution to Azure, not forgetting to turn on Remote Desktop. Also note that I turn on IntelliTrace, which is great for debugging those nasty deployment problems… When publishing completes you can remote desktop into the instance and use PerfMon to look at your custom performance counter. Or you can use Azure Diagnostics…
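If you'd rather read the counter from code than from PerfMon (say, from a small diagnostic console you run over remote desktop), a read-only PerformanceCounter works too. A minimal sketch, using the same category, counter and instance names as created above:

```csharp
using System;
using System.Diagnostics;

class ReadCounter
{
    static void Main()
    {
        // Read the custom counter created by the startup task.
        // The instance name matches what the worker role used: the machine name.
        var secsRunning = new PerformanceCounter(
            "U2U",              // category
            "# secs/running",   // counter
            Environment.MachineName,
            readOnly: true);

        Console.WriteLine("Current value: {0}", secsRunning.NextValue());
    }
}
```

This only works on the machine where the counter category was installed, of course, which on Azure means on the role instance itself.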
Silverlight and the Windows Azure AppFabric Service Bus
08 February 2011 Peter-Himschoot Azure, WPF/Silverlight, .NET Development, WCF

This blog post will show you how to allow a Silverlight application to call a service over the Windows Azure AppFabric Service Bus. The problem you need to solve is that Silverlight will look for a "clientaccesspolicy.xml" at the root URI of the service. When I tried this myself I couldn't find any "how to" on the topic, so I decided to turn it into a blog post. If anyone else has blogged this already: sorry, I am just a bad internet searcher. So, you've just built a nice Silverlight application that uses a WCF service you're hosting locally. You've done all the steps to make it work on your machine, including the "clientaccesspolicy.xml" to enable cross-domain communication. The only thing is that you want to keep hosting the service locally and/or move it to another machine without updating the Silverlight client. You've heard that the Windows Azure Service Bus allows you to do this more easily, so you decide to use it. This is your current service configuration (notice the localhost address!):

Code Snippet
<service name="SilverlightAndAppFabric.TheService">
  <endpoint name="HELLO"
            address="http://localhost:1234/rest"
            behaviorConfiguration="REST"
            binding="webHttpBinding"
            bindingConfiguration="default"
            contract="SilverlightAndAppFabric.IHello" />
</service>

What you now need to do is move it to the AppFabric Service Bus. This is easy. Of course you need a subscription for Windows Azure, and you need to set up the AppFabric Service Bus; look elsewhere for that, there's lots of material around.
Then you change the address, binding and behavior like this. You need an endpoint behavior, because your service needs to authenticate to the service bus (so they can send you the bill):

Code Snippet
<endpointBehaviors>
  <behavior name="REST">
    <webHttp />
    <transportClientEndpointBehavior>
      <clientCredentials>
        <sharedSecret issuerName="owner" issuerSecret="---your secret key here please---" />
      </clientCredentials>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>

You (might) need a binding configuration to allow clients to access your service anonymously:

Code Snippet
<webHttpRelayBinding>
  <binding name="default">
    <security relayClientAuthenticationType="None" />
  </binding>
</webHttpRelayBinding>

And of course you need to change the endpoint to use the webHttpRelayBinding:

Code Snippet
<endpoint name="HELLO"
          address="https://u2utraining.servicebus.windows.net/rest"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IHello" />

This should do the trick: when you try the REST service using Internet Explorer you get back the intended result. Now you update the address in your Silverlight application to use the service bus endpoint. This is the old call:

Code Snippet
wc.DownloadStringAsync(new Uri("http://localhost:1234/rest/hello"));

And you change it to:

Code Snippet
wc.DownloadStringAsync(new Uri("https://u2utraining.servicebus.windows.net/rest/hello"));

Please note the switch to https and the service bus address. You run your Silverlight client and it fails with some strange security error! The problem is that Silverlight will try to fetch the clientaccesspolicy.xml file from your new address. Since this is now the service bus, that will not work. To solve it you simply add another REST endpoint that returns the clientaccesspolicy file from this URI.
Start with the service contract:

Code Snippet
[ServiceContract]
public interface IClientAccessPolicy
{
  [OperationContract]
  [WebGet(UriTemplate = "clientaccesspolicy.xml")]
  Message GetPolicyFile();
}

Implement it (note that on the service side you set the content type on the outgoing response):

Code Snippet
public Message GetPolicyFile()
{
  WebOperationContext.Current.OutgoingResponse.ContentType = "text/xml";
  using (FileStream stream = File.Open("clientaccesspolicy.xml", FileMode.Open))
  {
    using (XmlReader xmlReader = XmlReader.Create(stream))
    {
      Message m = Message.CreateMessage(MessageVersion.None, "", xmlReader);
      using (MessageBuffer buffer = m.CreateBufferedCopy(1000))
      {
        return buffer.CreateMessage();
      }
    }
  }
}

And make sure it returns the right policy. This is what gave me a lot of headache, so here it is:

Code Snippet
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="http://*"/>
        <domain uri="https://*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Pay special attention to the allow-from element: by default it allows SOAP calls, not REST calls. For the details, read the documentation; you might want to tighten it anyway. Now add a similar REST endpoint, making sure the clientaccesspolicy is served at the root level:

Code Snippet
<endpoint name="CLIENTACCESSPOLICY"
          address="https://u2utraining.servicebus.windows.net"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IClientAccessPolicy" />

Done! A working example (you will have to change the client credentials to your own) can be downloaded from the U2U site here.
Managing your TFS work items
19 January 2011 Peter-Himschoot .NET Development, Team System

Telerik just released a new version of their free Work Item Manager (WIM), which allows you to work with work items in yet another way.
Azure Inter-role communication using callback instead of queues
20 December 2010 Peter-Himschoot Azure, WCF, VS2010, .NET Development

I'm currently playing with Azure and the Azure Training Kit, and I learned something cool today. When you work with Azure you can set up multiple worker roles for your Azure application. If you want to make these roles talk to one another you can use the queuing mechanism that is part of Azure, but you can also use WCF's duplex contract mechanism. Imagine you want to build a chat application using Azure and WCF. In this case you define a worker role that exposes a duplex contract like this:

Code Snippet
[ServiceContract(
  Namespace = "urn:WindowsAzurePlatformKit:Labs:AzureTalk:2009:10",
  CallbackContract = typeof(IClientNotification),
  SessionMode = SessionMode.Required)]
public interface IChatService
{
  /// <summary>
  /// Called by a client to announce it is connected at this chat endpoint.
  /// </summary>
  /// <param name="userName">The user name of the client.</param>
  /// <returns>The ClientInformation object for the new session.</returns>
  [OperationContract(IsInitiating = true)]
  ClientInformation Register(string userName);

  /// <summary>
  /// Sends a message to a user.
  /// </summary>
  /// <param name="message">The message to send.</param>
  /// <param name="sessionId">The recipient's session ID.</param>
  [OperationContract(IsInitiating = false)]
  void SendMessage(string message, string sessionId);

  /// <summary>
  /// Returns a list of connected clients.
  /// </summary>
  /// <returns>The list of active sessions.</returns>
  [OperationContract(IsInitiating = false)]
  IEnumerable<ClientInformation> GetConnectedClients();
}

The IClientNotification interface is the callback interface, which is implemented by the client of the service.
The client calls the Register method on the server, which can then keep track of the client via the callback interface:

Code Snippet
IClientNotification callback =
  OperationContext.Current.GetCallbackChannel<IClientNotification>();

If you host the service as a single instance, all clients register with the same service, so this service can easily track each client. But if you grow and start using multiple instances of your service in Azure, each client registers with a single instance, so each instance knows only about its own clients. If two clients, registered to different instances, want to communicate, the service instances have to handle that communication between them. The solution is easy: make the instances also expose the IClientNotification callback contract over internal endpoints, so they can communicate with each other:

Code Snippet
public class ChatService : IChatService, IClientNotification

Of course each service instance has to be able to find the other instances. You can do this with the RoleEnvironment class from your worker role class:

Code Snippet
var current = RoleEnvironment.CurrentRoleInstance;
var endPoints = current.Role.Instances
  .Where(instance => instance != current)
  .Select(instance => instance.InstanceEndpoints["NotificationService"]);

This does require the worker role to define an internal endpoint, which you can do in Visual Studio 2010: open the worker role's properties, go to the Endpoints tab and enter an internal endpoint. The rest of the code is straightforward (if you're comfortable with WCF, that is) and can be found in the Azure Training Kit (look for the Worker Role Communication lab).
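To give an idea of that last step without spoiling the lab: once each instance knows its peers' internal endpoints, it can open a channel to each of them and forward a message. The sketch below is my own illustration, not the lab's code; it assumes a TCP internal endpoint named NotificationService, and the DeliverMessage operation is a hypothetical stand-in for whatever the callback contract actually defines.

```csharp
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

// Sketch: forward a chat message to every other instance of this role.
// Assumes each instance hosts IClientNotification on its internal
// "NotificationService" endpoint over net.tcp.
public static class PeerNotifier
{
    public static void NotifyPeers(string message, string sessionId)
    {
        var current = RoleEnvironment.CurrentRoleInstance;
        var binding = new NetTcpBinding(SecurityMode.None);

        foreach (var instance in current.Role.Instances)
        {
            if (instance == current) continue; // skip ourselves

            var endpoint = instance.InstanceEndpoints["NotificationService"];
            var address = new EndpointAddress(
                string.Format("net.tcp://{0}/NotificationService", endpoint.IPEndpoint));

            var factory = new ChannelFactory<IClientNotification>(binding, address);
            IClientNotification peer = factory.CreateChannel();
            try
            {
                peer.DeliverMessage(message, sessionId); // hypothetical operation
            }
            finally
            {
                ((IClientChannel)peer).Close();
                factory.Close();
            }
        }
    }
}
```

The point is simply that internal endpoints are plain WCF endpoints: ChannelFactory works against them exactly as it would on-premises.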
Looking at generated queries for LINQ to SQL and EF with Visual Studio 2010 IntelliTrace
14 November 2010 Peter-Himschoot VS2010, .NET Development

When you use LINQ to SQL or Entity Framework you might wonder from time to time what the actual SQL is that the runtime generated. Both frameworks have their own way of letting you look at this, but it requires some extra code, and of course you shouldn't forget to remove that code afterwards. Visual Studio 2010 Ultimate gives you another way to look at the generated SQL, without having to change the code. When you turn on IntelliTrace, the IntelliTrace log will show you the SQL. For example, running some LINQ to SQL code shows up as ADO.NET entries containing the actual query being executed. Note that older queries also show up in here, so if you forgot to look at the query at the time of execution, you can scroll back to it later!
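For comparison, the "extra code" alternatives mentioned above look roughly like this; the context and query are placeholders you'd have in your own project. LINQ to SQL exposes a Log property on the DataContext, and Entity Framework lets you ask an ObjectQuery for its store command text:

```csharp
using System;
using System.Data.Linq;     // LINQ to SQL: DataContext
using System.Data.Objects;  // Entity Framework: ObjectQuery<T>

static class ShowGeneratedSql
{
    // Both techniques write the generated SQL somewhere visible,
    // but both mean touching the code, unlike the IntelliTrace log.
    static void Demo(DataContext db, ObjectQuery<int> query)
    {
        // LINQ to SQL: echo every generated SQL statement to the console.
        db.Log = Console.Out;

        // Entity Framework: print the store command for this query.
        Console.WriteLine(query.ToTraceString());
    }
}
```

With IntelliTrace neither line is needed, which is exactly the appeal.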
How to implement INotifyPropertyChanged without strings
14 June 2010 Peter-Himschoot .NET Development

INotifyPropertyChanged is an interface that is very important for proper data binding, and it is heavily used in the MVVM pattern. However, to implement it you have to raise the PropertyChanged event whenever a data-bound property changes value, and you have to pass the name of the property to it. This results in string-based programming, which is generally not good for maintenance. Some people create a RaisePropertyChanged method in a base class and then invoke this method in each property setter. Again this is not ideal, because you cannot always derive from this base class, especially if you already have a base class. In this post I want to show you an alternative way of implementing INotifyPropertyChanged that doesn't require a base class (you simply implement INotifyPropertyChanged on each class), with a nice, string-less way of raising the PropertyChanged event. For example, look at this class (note that the class also declares the PropertyChanged event itself, as the interface requires):

Code Snippet
public class Customer : INotifyPropertyChanged
{
  private string firstName;

  public string FirstName
  {
    get { return firstName; }
    set
    {
      if (!object.Equals(value, firstName))
      {
        firstName = value;
        PropertyChanged.Raise(this, o => o.FirstName);
      }
    }
  }

  public event PropertyChangedEventHandler PropertyChanged;
}

As you can see, PropertyChanged.Raise does all the work: you pass in the sender and the property using a lambda expression. Simple, and no strings.
To make things even simpler, I use a code snippet that implements the property directly for me:

Code Snippet
<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippet Format="1.0.0"
             xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <Header>
    <Title>NPC</Title>
    <Author>Peter</Author>
    <Shortcut>npc</Shortcut>
    <Description>NotifyPropertyChangedProperty</Description>
    <SnippetTypes>
      <SnippetType>SurroundsWith</SnippetType>
      <SnippetType>Expansion</SnippetType>
    </SnippetTypes>
  </Header>
  <Snippet>
    <Declarations>
      <Literal>
        <ID>type</ID>
        <Default>int</Default>
      </Literal>
      <Literal>
        <ID>field</ID>
        <Default>prop</Default>
      </Literal>
      <Literal>
        <ID>name</ID>
        <Default>Prop</Default>
      </Literal>
    </Declarations>
    <Code Language="CSharp">
      <![CDATA[
      [System.Diagnostics.DebuggerBrowsable(System.Diagnostics.DebuggerBrowsableState.Never)]
      private $type$ $field$;

      public $type$ $name$ {
        get { return $field$; }
        set {
          if( ! object.Equals( value, $field$ ) )
          {
            $field$ = value;
            PropertyChanged.Raise( this, o => o.$name$ );
          }
        }
      }
      ]]>
    </Code>
  </Snippet>
</CodeSnippet>

So how does it work? Here is the definition of the Raise extension method:

Code Snippet
public static void Raise<T, P>(
  this PropertyChangedEventHandler pc,
  T source,
  Expression<Func<T, P>> pe)
{
  if (pc != null)
  {
    pc.Invoke(source,
      new PropertyChangedEventArgs(((MemberExpression)pe.Body).Member.Name));
  }
}

This extension method has three arguments; the first two should be clear. The last is a LINQ expression. When you invoke the method, the compiler converts the lambda expression o => o.FirstName into a tree of objects and passes it as this argument. That tree is then traversed to find the MemberExpression, which contains the name of the property.
So the overhead is the creation and traversal of this small tree of objects. That makes it a little slower than passing a string, but the code is compile-time safe and supported by Visual Studio refactoring. This technique is generally known as static reflection.
Pex and Code Contracts
10 April 2010 Peter-Himschoot VS2010, .NET Development

I'm currently experimenting with Pex, Moles and Code Contracts, and I wondered what effect code contracts have on Pex tests. So I built this simple piece of code:

Code Snippet
public class Demo
{
  public int Foo(int a, int b)
  {
    if (a < 0)
      return a;
    else if (a > 0)
      return b;
    else
      return a + b;
  }
}

Then I let Pex generate its parameterized unit tests (PUTs). This generates the following test:

Code Snippet
[PexClass(typeof(Demo))]
[PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
[PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
[TestClass]
public partial class DemoTests
{
  /// <summary>Test stub for Foo(Int32, Int32)</summary>
  [PexMethod]
  public int Foo(
    [PexAssumeUnderTest]Demo target,
    int a,
    int b)
  {
    int result = target.Foo(a, b);
    return result;
    // TODO: add assertions to method DemoTests.Foo(Demo, Int32, Int32)
  }
}

I just leave this code as it is, right-click the Foo method of the DemoTests class, and choose "Run Pex Explorations". As you can see, the exploration calls my code with negative, zero and positive values. What happens when I add a contract stating that a should be positive?

Code Snippet
public class Demo
{
  public int Foo(int a, int b)
  {
    Contract.Requires(a > 0);
    if (a < 0)
      return a;
    else if (a > 0)
      return b;
    else
      return a + b;
  }
}

Only the Contract.Requires line was added, but when I run the Pex explorations again, Pex no longer explores negative numbers, because it can deduce from the contract not to even try. What if I use a contract to state that b should be negative?

Code Snippet
Contract.Requires(b < 0);

Again Pex sees this and explores my code with a negative b. One more.
When I change my code to do a little more with b, like this:

Code Snippet
public int Foo(int a, int b)
{
  Contract.Requires(a > 0);
  Contract.Requires(b < 0 || b > 10);
  if (a < 0)
    return a;
  else if (a > 0)
  {
    if (b > a)
      return a;
    else
      return b;
  }
  else
    return a + b;
}

and run the explorations again, the exploration inputs respect both contracts. So: you can guide Pex by supplying contracts on the arguments of your methods.