Note: if you’re looking for lamb curry, you came to the wrong place. This post is about C# programming techniques.
Currying is a technique, named after Haskell Curry, that transforms a function with multiple parameters into a series of functions taking one parameter each. The technique is important because it opens the door to an optimization technique called partial evaluation. Let’s look at an example.
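In fact, the transformation itself can be captured once in a small generic helper (just an illustrative sketch with names of my own choosing; in the rest of this post we will simply write the curried lambdas by hand):

using System;

internal static class CurryExtensions
{
    // Turns a function of two parameters into a chain of one-parameter functions.
    public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(this Func<T1, T2, TResult> f)
    {
        return x => y => f(x, y);
    }
}

With such a helper, any Func<double, double, double> f can be curried as f.Curry() and called as f.Curry()(x)(y).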
Let’s say you need to write a program that sums a two-dimensional function f(x,y) over a two-dimensional range, e.g. –1000 ≤ x ≤ 1000 and –1000 ≤ y ≤ 1000.
Such a two-dimensional function, assuming double parameters and result, can be represented by Func<double, double, double>, and we can sum it using the following method:
private static double Sum(Func<double, double, double> f)
{
    double sum = 0;
    for (int x = -1000; x <= 1000; x++)
    {
        for (int y = -1000; y <= 1000; y++)
        {
            sum += f(x, y);
        }
    }
    return sum;
}
We can apply this to an arbitrary function, for example:
Func<double, double, double> f = (x, y) => Math.Sin(x) * Math.Sin(y);
double result = Sum(f);
Currying this is now a simple textual transformation. Instead of defining f as Func<double, double, double>, we define it as Func<double, Func<double, double>>.
using System;

internal static class Curried
{
    public static void Main()
    {
        Func<double, Func<double, double>> f = x => y => Math.Sin(x) * Math.Sin(y);
        double result = Sum(f);
    }

    private static double Sum(Func<double, Func<double, double>> f)
    {
        double sum = 0;
        for (int x = -1000; x <= 1000; x++)
        {
            for (int y = -1000; y <= 1000; y++)
            {
                sum += f(x)(y);
            }
        }
        return sum;
    }
}
Effectively, a function that took two parameters and returned a result is replaced by a function that takes one parameter and returns a function that takes the second parameter and returns the result. It looks a lot simpler than it sounds. Instead of writing f = (x, y) => Math.Sin(x) * Math.Sin(y), we write f = x => y => Math.Sin(x) * Math.Sin(y). And when calling it, instead of writing f(x, y), we write f(x)(y). Simple.
Unfortunately, every call to f(x) now allocates a new Func<double, double> object, and that can become quite expensive. But that can be fixed easily, so here is a smarter solution:
using System;

internal static class Smarter
{
    public static void Main()
    {
        Func<double, Func<double, double>> f = x => y => Math.Sin(x) * Math.Sin(y);
        double result = Sum(f);
    }

    private static double Sum(Func<double, Func<double, double>> f)
    {
        double sum = 0;
        for (int x = -1000; x <= 1000; x++)
        {
            var fx = f(x);
            for (int y = -1000; y <= 1000; y++)
            {
                sum += fx(y);
            }
        }
        return sum;
    }
}
I ran a little benchmark on this code. The benchmark executes the main function 20 times, and measures the shortest execution time. It also measures the number of generation 0 garbage collections. This is the result:
Naive: 261 msec, 0 collections
Curried: 340 msec, 733 collections
Smarter: 254 msec, 0 collections
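(The measurement loop looks roughly like the sketch below, using Stopwatch and GC.CollectionCount; the Benchmark class, the Measure method, and the exact bookkeeping are a simplification for illustration, not the literal harness.)

using System;
using System.Diagnostics;

internal static class Benchmark
{
    // Runs the action 20 times; reports the fastest run and the number of
    // generation 0 collections observed during that run.
    public static void Measure(string name, Action action)
    {
        long best = long.MaxValue;
        int collections = 0;
        for (int i = 0; i < 20; i++)
        {
            int gen0Before = GC.CollectionCount(0);
            var stopwatch = Stopwatch.StartNew();
            action();
            stopwatch.Stop();
            if (stopwatch.ElapsedMilliseconds < best)
            {
                best = stopwatch.ElapsedMilliseconds;
                collections = GC.CollectionCount(0) - gen0Before;
            }
        }
        Console.WriteLine("{0}: {1} msec, {2} collections", name, best, collections);
    }
}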
As we can see, the curried version was initially slower due to all the memory allocations, but when we fixed that, the smarter version was as fast as the original. In fact it was just a little bit faster, though nothing to get excited about.
However, this is where partial evaluation kicks in. Currently, we are calculating the sine of x over four million times, not taking advantage of the fact that we could reuse each calculated value over two thousand times! So let’s change our definition of f as follows:
Func<double, Func<double, double>> f = x =>
{
    var sinx = Math.Sin(x);
    return y => sinx * Math.Sin(y);
};
Now the benchmark shows a completely different result:
Optimized: 143 msec, 0 collections
We went from 261 milliseconds in the original version to 143 milliseconds in this version, in fact almost dividing execution time by two! That’s because, to be precise, in the original version we had two times 2001 * 2001 = 8,008,002 Math.Sin calls, and in the optimized version we have 1 time 2001 plus 1 time 2001 * 2001 = 4,006,002 Math.Sin calls. That is a division by a factor of 1.999, yielding a total execution time reduction by a factor of 1.825 (there is some overhead of course).
Of course, the technique is very much related to loop-invariant code motion in imperative programming. For example, imagine a Sum function hardcoded for Sin(x) * Sin(y). Would you write it like this?
private static double SumSinSin()
{
    double sum = 0;
    for (int x = -1000; x <= 1000; x++)
    {
        for (int y = -1000; y <= 1000; y++)
        {
            sum += Math.Sin(x) * Math.Sin(y);
        }
    }
    return sum;
}
Of course not! At least you would move the calculation of Sin(x) out of the loop over y:
private static double SumSinSin()
{
    double sum = 0;
    for (int x = -1000; x <= 1000; x++)
    {
        var sinx = Math.Sin(x);
        for (int y = -1000; y <= 1000; y++)
        {
            sum += sinx * Math.Sin(y);
        }
    }
    return sum;
}
And that is exactly what we did, except of course that the sum function is parameterized and not hardcoded.
So when would you apply this technique? You would apply it when performance matters, you have a function that you need to call a lot, the function takes more than one parameter, one parameter varies more than another (in our example, x remained the same for a long time, while y was different on every call), and part of the function can be evaluated knowing only the value of the parameter that varies the least.
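To make that concrete with a hypothetical example outside the sine world: suppose a is a scale factor that rarely changes while b changes on every call, and the expensive part of the computation depends only on a (the name scale below is just for illustration):

// Hypothetical illustration: Math.Log(a) is the expensive, slowly varying part.
Func<double, Func<double, double>> scale = a =>
{
    var loga = Math.Log(a);   // evaluated once per distinct a
    return b => loga * b;     // cheap per-call work
};

Hoisting the call to scale(a) out of the inner loop then pays off exactly as it did for Sin(x).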
In our example above, we could go even further. For example, we could eliminate the multiplication and even the Sin(y) calculation completely in case Sin(x) is 0 (which, in our example, happens only for x == 0).
Func<double, Func<double, double>> f = x =>
{
    if (x != 0.0)
    {
        var sinx = Math.Sin(x);
        return y => sinx * Math.Sin(y);
    }
    else
    {
        return y => 0.0;
    }
};
That is not worth it in this scenario (because the special case applies to less than 0.05 % of all cases), but in some scenarios runtime algorithm specialization can be very significant.