Silverlight and the Windows Azure AppFabric Service Bus

This blog post shows you how to let a Silverlight application call a service over the Windows Azure AppFabric Service Bus. The problem you need to solve is that Silverlight will look for a "clientaccesspolicy.xml" file at the root URI of the service. When I tried this myself I couldn't find any "how to" on the topic, so I decided to turn it into a blog post. If anyone else has already blogged this, sorry, I am just a bad internet searcher.

So, you've just built a nice Silverlight application that uses a WCF service you're hosting locally. You've done all the steps to make it work on your machine, including adding the "clientaccesspolicy.xml" to enable cross-domain communication. The only thing is that you want to keep hosting the service locally and/or move it to another machine without updating the Silverlight client. You've heard that the Windows Azure AppFabric Service Bus allows you to do this more easily, so you decide to use it. This is your current service configuration (notice the localhost address!):

<service name="SilverlightAndAppFabric.TheService" >
  <endpoint name="HELLO"
            address="http://localhost:1234/rest"
            behaviorConfiguration="REST"
            binding="webHttpBinding"
            bindingConfiguration="default"
            contract="SilverlightAndAppFabric.IHello" />
</service>

What you now need to do is move it to the AppFabric Service Bus. This is easy. Of course you first need a Windows Azure subscription and a Service Bus namespace; there is plenty of documentation on that elsewhere. Then you change the address, binding and behavior as follows.

You need an endpoint behavior, because your service has to authenticate to the Service Bus (so they can send you the bill):

<endpointBehaviors>
  <behavior name="REST">
    <webHttp />
    <transportClientEndpointBehavior>
      <clientCredentials>
        <sharedSecret
          issuerName="owner"
          issuerSecret="---your secret key here please---" />
      </clientCredentials>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>

You (might) need a binding configuration to allow clients to access your service anonymously:

<webHttpRelayBinding>
  <binding name="default" >
    <security relayClientAuthenticationType="None">
    </security>
  </binding>
</webHttpRelayBinding>

And of course you need to change the endpoint to use the webHttpRelayBinding:

<endpoint name="HELLO"
          address="https://u2utraining.servicebus.windows.net/rest"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IHello" />

This should do the trick, and indeed, when you try the REST service using Internet Explorer you get back the intended result. Now you update the address in your Silverlight application to use the Service Bus endpoint. This is the old call:

wc.DownloadStringAsync(new Uri("http://localhost:1234/rest/hello"));

And you change it to:

wc.DownloadStringAsync(new Uri("https://u2utraining.servicebus.windows.net/rest/hello"));

Please note the switch to https and the Service Bus address. You run your Silverlight client and it fails with some strange security error! The problem is that Silverlight will try to fetch the clientaccesspolicy.xml file from the root of your new address (https://u2utraining.servicebus.windows.net/clientaccesspolicy.xml). Since that root is now the Service Bus, this will not work.
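A quick aside before fixing that: the IHello contract behind these endpoints is not shown in the post. A minimal sketch of what it might look like follows; the operation name, URI template and return type are my assumptions, the real contract is in the downloadable sample.

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IHello
{
    // Handles GET requests to .../rest/hello and returns a simple greeting.
    [OperationContract]
    [WebGet(UriTemplate = "hello")]
    string Hello();
}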
To solve it you simply add another REST endpoint that returns the client access policy from this root URI. Start with the service contract:

[ServiceContract]
public interface IClientAccessPolicy
{
  [OperationContract]
  [WebGet(UriTemplate = "clientaccesspolicy.xml")]
  Message GetPolicyFile();
}

Implement it (note that you set the content type on the outgoing response):

public Message GetPolicyFile()
{
  WebOperationContext.Current.OutgoingResponse.ContentType = "text/xml";

  using (FileStream stream = File.Open("clientaccesspolicy.xml", FileMode.Open))
  {
    using (XmlReader xmlReader = XmlReader.Create(stream))
    {
      Message m = Message.CreateMessage(MessageVersion.None, "", xmlReader);
      using (MessageBuffer buffer = m.CreateBufferedCopy(1000))
      {
        return buffer.CreateMessage();
      }
    }
  }
}

And make sure it returns the right policy. This is what gave me a lot of headache, so here it is:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="http://*"/>
        <domain uri="https://*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Pay special attention to the allow-from element: by default it allows SOAP calls, not REST calls. For the details, read the documentation; you might want to tighten it anyway. Now add a similar REST endpoint, making sure the client access policy is served at the root level:

<endpoint name="CLIENTACCESSPOLICY"
          address="https://u2utraining.servicebus.windows.net"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IClientAccessPolicy" />

Done! A working example (you will have to change the client credentials to your own) can be downloaded from the U2U site here.
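If you want to self-host the service while trying this out, a minimal console host might look like the sketch below. This is an assumption on my part (the downloadable sample may host it differently), and it presumes TheService implements both IHello and IClientAccessPolicy so the two endpoints from the configuration above share one host.

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Both the HELLO and CLIENTACCESSPOLICY endpoints are read from the config file.
        using (var host = new ServiceHost(typeof(SilverlightAndAppFabric.TheService)))
        {
            host.Open();
            Console.WriteLine("Listening on the Service Bus. Press ENTER to stop.");
            Console.ReadLine();
        }
    }
}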

Azure Inter-role communication using callback instead of queues

I'm currently playing with Azure and the Azure Training Kit, and I learned something cool today. When you work with Azure you can set up multiple worker roles for your Azure application. If you want these roles to talk to one another you can use the queuing mechanism that is part of Azure, but you can also use WCF's duplex (callback) mechanism. Imagine you want to build a chat application using Azure and WCF. In this case you define a worker role that exposes a duplex contract like this:

[ServiceContract(
    Namespace = "urn:WindowsAzurePlatformKit:Labs:AzureTalk:2009:10",
    CallbackContract = typeof(IClientNotification),
    SessionMode = SessionMode.Required)]
public interface IChatService
{
  /// <summary>
  /// Called by a client to announce it is connected at this chat endpoint.
  /// </summary>
  /// <param name="userName">The user name of the client.</param>
  /// <returns>The ClientInformation object for the new session.</returns>
  [OperationContract(IsInitiating = true)]
  ClientInformation Register(string userName);

  /// <summary>
  /// Sends a message to a user.
  /// </summary>
  /// <param name="message">The message to send.</param>
  /// <param name="sessionId">The recipient's session ID.</param>
  [OperationContract(IsInitiating = false)]
  void SendMessage(string message, string sessionId);

  /// <summary>
  /// Returns a list of connected clients.
  /// </summary>
  /// <returns>The list of active sessions.</returns>
  [OperationContract(IsInitiating = false)]
  IEnumerable<ClientInformation> GetConnectedClients();
}

The IClientNotification interface is the callback interface, which is implemented by the client of the service. The client calls the Register method on the server, which can then keep track of the client through the callback channel:

IClientNotification callback =
  OperationContext.Current.GetCallbackChannel<IClientNotification>();

If you host the service as a single instance, all clients register with the same service, so that service can easily track each client. But if you grow and start using multiple instances of your service in Azure, each client registers with a single instance, so each instance only knows about its own clients. If two clients that registered with different instances want to communicate, the service instances have to relay the communication between them. The solution is easy: make the instances also expose the IClientNotification callback contract on internal endpoints, so they can talk to each other:

public class ChatService : IChatService, IClientNotification

Of course each service instance has to be able to find the other instances. You can do this with the RoleEnvironment class in your worker role:

var current = RoleEnvironment.CurrentRoleInstance;
var endPoints = current.Role.Instances
                .Where(instance => instance != current)
                .Select(instance => instance.InstanceEndpoints["NotificationService"]);

This does require the worker role to define an internal endpoint. You can do this in Visual Studio 2010: select the worker role's properties, go to the Endpoints tab and add an internal endpoint. The rest of the code is straightforward (if you're comfortable with WCF, that is) and can be found in the Azure Training Kit (look for the Worker Role Communication lab).
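To make this a bit more concrete, here is a sketch of how an instance might host the callback contract on that internal endpoint, for example from the worker role's OnStart. The callback operation, the binding and the endpoint name are my assumptions; the actual code is in the Worker Role Communication lab.

using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical shape of the callback contract (the real one is defined in the lab).
[ServiceContract(Namespace = "urn:WindowsAzurePlatformKit:Labs:AzureTalk:2009:10")]
public interface IClientNotification
{
    [OperationContract(IsOneWay = true)]
    void DeliverMessage(string message, string fromSessionId);
}

public class WorkerRoleHost
{
    public static ServiceHost OpenNotificationHost()
    {
        // The internal endpoint defined on the worker role's Endpoints tab.
        RoleInstanceEndpoint internalEndpoint =
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["NotificationService"];

        var host = new ServiceHost(typeof(ChatService));
        host.AddServiceEndpoint(
            typeof(IClientNotification),
            new NetTcpBinding(SecurityMode.None),
            string.Format("net.tcp://{0}/NotificationService", internalEndpoint.IPEndpoint));
        host.Open();
        return host;
    }
}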

Looking at generated queries for LINQ 2 SQL and EF with Visual Studio 2010 IntelliTrace

When you use LINQ to SQL or Entity Framework you might wonder from time to time what the actual SQL is that the runtime generated. Both frameworks have their own way of letting you look at this, but they require some extra code, and of course you shouldn't forget to remove that code again. Visual Studio 2010 Ultimate gives you another way to look at the generated SQL, without having to change the code at all. When you turn on IntelliTrace, the IntelliTrace log will show you the SQL: the ADO.NET entries show the actual query being executed. Note that older queries also show up in there, so if you forgot to look at the query at the time of execution, you can scroll back to it later!
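For comparison, the code-based approaches mentioned above look roughly like this (a sketch; the NorthwindDataContext and NorthwindEntities context names are made up):

// LINQ to SQL: route the generated SQL to the console (or any TextWriter).
var db = new NorthwindDataContext();
db.Log = Console.Out;
var londonCustomers = db.Customers.Where(c => c.City == "London").ToList();

// Entity Framework 4: cast the query to ObjectQuery (System.Data.Objects)
// and ask for its trace string.
var context = new NorthwindEntities();
var query = context.Customers.Where(c => c.City == "London");
Console.WriteLine(((ObjectQuery)query).ToTraceString());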

How to implement INotifyPropertyChanged without strings

INotifyPropertyChanged is an interface that is very important for proper data binding and is heavily used in the MVVM pattern. However, to implement it you have to raise the PropertyChanged event whenever a data-bound property changes value, and you have to pass the name of the property to it. This results in string-based programming, which is generally bad for maintenance. Some people create a RaisePropertyChanged method in a base class and then invoke this method in each property setter. Again this is not ideal, because you cannot always derive from that base class, especially if you already have a base class. In this post I want to show you an alternative way of implementing INotifyPropertyChanged that doesn't require a base class (you simply implement INotifyPropertyChanged on each class), with a nice, string-less way of raising the PropertyChanged event. For example, look at this class:

public class Customer : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  private string firstName;

  public string FirstName
  {
    get { return firstName; }
    set
    {
      if (!object.Equals(value, firstName))
      {
        firstName = value;
        PropertyChanged.Raise(this, o => o.FirstName);
      }
    }
  }
}

As you can see, PropertyChanged.Raise does all the work, and you pass in the sender and the property using a lambda expression. Simple, and no strings. To make things even simpler, I use a code snippet that implements the property for me:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippet Format="1.0.0"
             xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <Header>
    <Title>NPC</Title>
    <Author>Peter</Author>
    <Shortcut>npc</Shortcut>
    <Description>NotifyPropertyChangedProperty</Description>
    <SnippetTypes>
      <SnippetType>SurroundsWith</SnippetType>
      <SnippetType>Expansion</SnippetType>
    </SnippetTypes>
  </Header>
  <Snippet>
    <Declarations>
      <Literal>
        <ID>type</ID>
        <Default>int</Default>
      </Literal>
      <Literal>
        <ID>field</ID>
        <Default>prop</Default>
      </Literal>
      <Literal>
        <ID>name</ID>
        <Default>Prop</Default>
      </Literal>
    </Declarations>
    <Code Language="CSharp">
      <![CDATA[
      [System.Diagnostics.DebuggerBrowsable(System.Diagnostics.DebuggerBrowsableState.Never)]
      private $type$ $field$;

      public $type$ $name$ {
        get { return $field$; }
        set {
          if( ! object.Equals( value, $field$ ) )
          {
            $field$ = value;
            PropertyChanged.Raise( this, o => o.$name$ );
          }
        }
      }
      ]]>
    </Code>
  </Snippet>
</CodeSnippet>

So how does it work? Here is the definition of the Raise extension method:

public static void Raise<T, P>(
  this PropertyChangedEventHandler pc
  , T source
  , Expression<Func<T, P>> pe)
{
  if (pc != null)
  {
    pc.Invoke(source,
      new PropertyChangedEventArgs(((MemberExpression)pe.Body).Member.Name));
  }
}

This extension method has three arguments; the first two should be clear. The last one is a LINQ expression. When you invoke the method, the compiler converts the lambda expression o => o.FirstName into a tree of objects and passes that tree as the argument. The tree is then traversed to find the MemberExpression, which contains the name of the property. So the overhead is the creation and traversal of this small tree of objects: a little slower than passing a string, but the code is compile-time safe and supported by Visual Studio refactoring.
This technique is generally known as static reflection.
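As a quick sanity check, here is a minimal usage example for the Customer class above (a sketch; the output comments assume a console application):

var customer = new Customer();
customer.PropertyChanged +=
    (sender, e) => Console.WriteLine("Changed: " + e.PropertyName);
customer.FirstName = "John";   // prints "Changed: FirstName"
customer.FirstName = "John";   // no event: the value did not change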

Pex and Code Contracts

I'm currently experimenting with Pex, Moles and Code Contracts, and I wondered what effect code contracts have on Pex tests. So I built this simple piece of code:

public class Demo
{
  public int Foo(int a, int b)
  {
    if (a < 0)
      return a;
    else if (a > 0)
      return b;
    else
      return a + b;
  }
}

Then I let Pex generate its parameterized unit tests (PUTs). This generates the following test:

[PexClass(typeof(Demo))]
[PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
[PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
[TestClass]
public partial class DemoTests
{
  /// <summary>Test stub for Foo(Int32, Int32)</summary>
  [PexMethod]
  public int Foo(
    [PexAssumeUnderTest]Demo target,
    int a,
    int b
  )
  {
    int result = target.Foo(a, b);
    return result;
    // TODO: add assertions to method DemoTests.Foo(Demo, Int32, Int32)
  }
}

I just leave the code as it is, right-click the Foo method of the DemoTests class and choose "Run Pex Explorations". The exploration results show that Pex calls my code with negative, zero and positive values. What happens when I add a contract stating that a should be positive?

public class Demo
{
  public int Foo(int a, int b)
  {
    Contract.Requires(a > 0);
    if (a < 0)
      return a;
    else if (a > 0)
      return b;
    else
      return a + b;
  }
}

Only the Contract.Requires line was added, but if I run the Pex explorations again, Pex no longer explores negative numbers for a, because it can deduce from the contract not to even try. What if I use a contract to state that b should be negative?

Contract.Requires(b < 0);

Again Pex sees this and only explores my code with negative values for b. One more: when I change my code to do a little more with b, like this:

public int Foo(int a, int b)
{
  Contract.Requires(a > 0);
  Contract.Requires(b < 0 || b > 10);
  if (a < 0)
    return a;
  else if (a > 0)
  {
    if (b > a)
      return a;
    else
      return b;
  }
  else
    return a + b;
}

and run the explorations again, the generated inputs respect both contracts. So, you can guide Pex by supplying contracts on the arguments of your methods.
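For reference, the concrete tests that Pex saves after an exploration look roughly like the sketch below; the method name, input values and assertion are illustrative (not the actual output of this run), but they follow the shape Pex generates into the partial DemoTests class:

[TestMethod]
[PexGeneratedBy(typeof(DemoTests))]
public void Foo01()
{
    Demo demo = new Demo();
    // Illustrative inputs that satisfy both contracts: a > 0 and b < 0.
    // With a > 0 and b <= a the method returns b.
    int result = this.Foo(demo, 1, -5);
    Assert.AreEqual<int>(-5, result);
}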

Overloading & Co/Contra-variance could break your code!

Co- and contra-variance were introduced in VB.NET and C# 4 to make working with certain generic types more natural ("because it should just work"). But beware: I was experimenting a bit with this and found the following possible breaking change. I started with these two classes:

C#

public class Person
{
  // ...
}

public class Vip : Person
{
  // ...
}

VB.NET

Public Class Person
  '
End Class

Public Class Vip
  Inherits Person
  '
End Class

Then I added a collection of people:

C#

public class PersonList
{
  public void Add(object obj)
  {
    // ...
  }

  public void Add(Person p)
  {
    // ...
  }

  public void Add(IEnumerable<Person> people)
  {
    foreach (Person p in people)
    {
      // ...
    }
  }
}

VB.NET

Public Class PersonList
  Public Sub Add(ByVal obj As Object)
    '
  End Sub

  Public Sub Add(ByVal person As Person)
    '
  End Sub

  Public Sub Add(ByVal list As IEnumerable(Of Person))
    '
  End Sub
End Class

Next I create a PersonList collection and add a list of Vips:

C#

class Program
{
  static void Main(string[] args)
  {
    PersonList people = new PersonList();

    List<Vip> others = new List<Vip> {
      new Vip(), new Vip()
    };

    people.Add(others);
  }
}

VB.NET

Sub Main()
  Dim people As New PersonList
  Dim others As New List(Of Vip)(New Vip() {New Vip(), New Vip()})
  people.Add(others)
End Sub

When I compile this with Visual Studio 2008, the first Add method, taking the object argument, gets called. But with Visual Studio 2010 (targeting .NET 4) the third Add method gets called, because IEnumerable<T> is now covariant and the List<Vip> converts to an IEnumerable<Person>. Ok, granted, this is NOT something you will see every day, but if you ever run into it, it can be very confusing. The same kind of problem can also occur with extension methods and user-defined conversions…
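To illustrate that last remark, here is a sketch of the same surprise with extension methods; the Process methods are made up, and Person and Vip are the classes from above:

using System;
using System.Collections.Generic;

public static class PersonExtensions
{
    public static void Process(this object item)
    {
        Console.WriteLine("object overload");
    }

    public static void Process(this IEnumerable<Person> people)
    {
        Console.WriteLine("IEnumerable<Person> overload");
    }
}

class VarianceDemo
{
    static void Run()
    {
        var others = new List<Vip> { new Vip(), new Vip() };
        // Compiled against .NET 3.5 this picks the object overload;
        // against .NET 4, covariance makes the IEnumerable<Person> overload the better match.
        others.Process();
    }
}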

Building a declarative WCF service using Workflow Foundation 4 and Content Based Correlation

This blog post accompanies my session on Workflow Foundation 4 programming during the Belgian TechDays (actually the Developer & IT Pro Days :)). During this session I built a WCF service using Workflow Foundation 4, and I want to show you how to do this on your own… In the first part you'll learn how to create a simple FlowChart workflow and test it, and in the second part you'll learn how to set up correlation so multiple players can play the game…

Preparing the lab

This lab starts with a new project, so start Visual Studio 2010 and create a new workflow service project (call it NumberGuessingGame). This creates a new solution with a workflow project. Remove Service1.xamlx; we're going to add a new service with a better name. Right-click the project and add a new item. Select the WCF Workflow Service template and name it GuessIt.xamlx.

Creating the initial FlowChart

This adds a workflow service with a sequential activity containing a Receive and a SendReply activity. Delete the sequential activity, leaving the workflow empty. Open the toolbox and drag a FlowChart activity onto the workflow designer. Next drag a Receive activity onto the designer, below Start, and name it "Start A New Game". Select the Receive activity and set its OperationName, ServiceContractName and CanCreateInstance properties in the properties window. Then right-click the Receive activity and select the Create SendReply option from the context menu. This adds a SendReply activity to the workflow; it is coupled to the Receive activity through its Request property. Now connect the activities in the flowchart.

Adding variables

In the workflow designer, open the variables window and add a new variable called player of type String. This will hold the player name received through the "Start A New Game" Receive activity. You will also see a variable of type CorrelationHandle; this is used to connect several Receive and Send activities through correlation. Rename it to gameHandle. We'll use this handle later to set up content-based correlation. Now add two new variables, theNumber and guess, both of type Int32. The first is the number the user needs to guess, so you need to initialize it to a random number between 0 and 100, using the Random type in the variable's default value expression (something like New Random().Next(0, 100)).

Go back to the first Receive activity and click its Content property. This opens the Content Definition window. Select player as the message data (and set the message type to String). Do the same for the SendReply activity, but now use the following expression (of type String):

player + " guess the number between 0 and 100"

You might also get validation errors because you renamed the correlation handle to gameHandle; change the CorrelationInitializers property to use gameHandle (click the … button).

Testing the service

Press F5 in Visual Studio. The WCF Test Client will start and automatically fetch the metadata from the service. Double-click the Begin method. This opens a new tab allowing you to invoke the Begin operation on the service. Enter your name and click Invoke. In the bottom area you should see the result of invoking the Begin method. This concludes the first step.

Adding the data contracts

Let's add a couple of data contracts to the project, one for a game and another for a guess.
Note that both contain the player name; this way our service will be able to distinguish between multiple games, as long as each player uses a unique player name:

[DataContract]
public class Game
{
  [DataMember]
  public string PlayerName { get; set; }

  [DataMember]
  public string Message { get; set; }
}

[DataContract]
public class Guess
{
  [DataMember(IsRequired = true)]
  public string PlayerName { get; set; }

  [DataMember(IsRequired = true)]
  public int Try { get; set; }
}

Make the "Send Game Started" SendReply activity use a Game instance as the result. To do this you will need to add another variable of type Game and initialize it to a new instance. Now add two Assign activities after the Begin Receive activity, and assign appropriate values to the game.

Adding the guessing logic

Add another variable, calling it guess, of type Guess. Then add another Receive activity after the SendReply activity, but now with an operation named Guess, taking the guess variable as its content. Check whether the guess was right using a decision shape. If so, congratulate the user; if not, decide whether the guess was too small or too large. Confirm this back to the user using three SendReply activities (create each one by right-clicking the Guess Receive activity and selecting Create SendReply).

Adding Content-based Correlation

Of course we want a single player to keep playing the same game, but what if multiple players are playing on the same server? We need correlation. In this part we're going to correlate the Guess activity to the Begin activity using the player's name. That is why both our data contracts contain the player's name (by coincidence the properties have the same name, but this is not required). To do correlation we need a correlation handle. This handle acts like a key to a workflow instance; when a request comes in, we find this key through part of the message, in our case the player's name. So, if you haven't done so already, add a new variable to the workflow called gameHandle of type CorrelationHandle. Then make sure the first SendReply activity initializes this handle: set its CorrelationInitializers to use gameHandle with a query that selects the player's name from the message. When the workflow sends this message, we're saying that the key of the gameHandle is the player's name. So when another message arrives, the workflow runtime can check whether the message contains a valid key (again the player's name), use that key to find the right workflow instance, and dispatch the message to it. The next step is therefore to set the Guess Receive activity's CorrelatesOn property in the same way, selecting the player's name from the incoming Guess message. Now you should be able to play a game, making sure that the first and all following messages use the same player name. You can also start two or more games, each with their own player name!
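If you prefer to exercise the finished service from code instead of the WCF Test Client, a client could look roughly like the sketch below. The contract shape, binding and address are my assumptions based on the operations described above; in practice you would add a service reference and use the generated proxy.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IGuessItClient
{
    [OperationContract]
    Game Begin(string player);

    [OperationContract]
    string Guess(Guess guess);
}

class GameClient
{
    static void Main()
    {
        var factory = new ChannelFactory<IGuessItClient>(
            new BasicHttpBinding(),
            "http://localhost:1234/GuessIt.xamlx"); // illustrative address
        IGuessItClient proxy = factory.CreateChannel();

        Game game = proxy.Begin("Peter");
        Console.WriteLine(game.Message);

        // The player name correlates this guess with the game started above.
        Console.WriteLine(proxy.Guess(new Guess { PlayerName = "Peter", Try = 50 }));
    }
}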

ObservableCollection<T> now part of .NET 4 (No need to reference WPF)

ObservableCollection<T> is a generic collection that was added as part of WPF and Silverlight, while WinForms has BindingList<T>. So writing code that targets both WinForms and WPF meant using BindingList<T> (the common denominator), and writing code that targets WPF and Silverlight meant ObservableCollection<T>; there was no way to write code that targets all three platforms. Luckily, in .NET 4 we can use ObservableCollection<T> anywhere, because Microsoft moved it into System.dll. Nice! Two others were moved there as well: ReadOnlyObservableCollection<T> and INotifyCollectionChanged. To double-check that I could use these collections outside WPF projects, I created a simple console application using them:

class Program
{
  static void Main(string[] args)
  {
    ObservableCollection<string> noWpf = new ObservableCollection<string> { "Hello", "World" };
    INotifyCollectionChanged watchCollection = noWpf as INotifyCollectionChanged;

    if (watchCollection != null)
    {
      watchCollection.CollectionChanged += (sender, e) => { Console.WriteLine("Collection action = {0}", e.Action); };
    }

    noWpf.Add("Love it!");
  }
}

Compiles. Runs. Could I use it in WinForms? So I created a simple WinForms application like this:

public partial class Form1 : Form
{
  ObservableCollection<string> noWpf;

  public Form1()
  {
    InitializeComponent();

    noWpf = new ObservableCollection<string> { "Hello", "World" };
    var bs = new BindingSource() { DataSource = noWpf };
    bs.ListChanged += (sender, e) => { MessageBox.Show(e.ListChangedType.ToString()); };
    listBox1.DataSource = bs;
  }

  private void button1_Click(object sender, EventArgs e)
  {
    noWpf.Add("Test");
  }
}

But when I click the button, which adds a new element to the observable collection, the listbox doesn't update. It looks like WinForms data binding doesn't support ObservableCollection<T>…
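If you do need to bind an ObservableCollection<T> in WinForms, one possible workaround (a sketch I have not tested in every binding scenario) is to forward the collection's change notifications to the BindingSource yourself, for example in Form1's constructor right after creating bs:

// ObservableCollection<T> raises CollectionChanged; BindingSource listens for ListChanged,
// so translate one into the other by resetting the bindings on every change.
((INotifyCollectionChanged)noWpf).CollectionChanged +=
    (sender, e) => bs.ResetBindings(false);   // false: the data changed, not the schema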

Fixing Application Pool not starting problem by editing ApplicationHost.config

While playing around with Windows Server AppFabric I created a new application pool set to .NET 4. However, this application pool would immediately throw an error when starting: "The worker process failed to pre-load .Net Runtime version v4.0.21006." A little experimentation showed that switching to the built-in application pool worked, so it looks like IIS, when creating a new application pool, configures it for the wrong .NET version. I then found that this kind of information is stored in ApplicationHost.config, but I couldn't find that file anywhere on my system with search (except for backups in the history). Hmmm. And I wasn't the only one. Using notepad (!) I could open the file directly from "C:\Windows\System32\inetsrv\config". Best of course is to first stop IIS and make a backup of the file. Then I edited the <applicationPools> section, looking for v4.0 and replacing it with v4.0.30128. And yes! That fixed it.
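For illustration, the relevant part of ApplicationHost.config looks roughly like this; the pool name is made up, and I assume the attribute involved is managedRuntimeVersion:

<applicationPools>
  <!-- Before: the version string IIS wrote when the pool was created -->
  <add name="AppFabricPool" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated" />
  <!-- After: the full version number of the installed .NET 4 runtime -->
  <add name="AppFabricPool" managedRuntimeVersion="v4.0.30128" managedPipelineMode="Integrated" />
</applicationPools>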