## Friday, June 13, 2014

### Yet another lousy monad tutorial

I'm not a big fan of monads, but I understand them. They're not rocket science. Let me try to write a monad tutorial that would've helped my past self understand what the fuss was about. I like concrete explanations that start with practical examples, without any annoying metaphors, and especially without any Haskell code. So here are five examples that have something in common:

1) If a function has type A → B, and another function has type B → C, then we can "compose" them into a new function of type A → C.

2) Let's talk about functions that can return multiple values. We can model these as functions of type A → List<B>. There is a natural way to "compose" two such functions. If one function has type A → List<B>, and another function has type B → List<C>, then we can "compose" them into a new function of type A → List<C>. The "composition" works by joining together all the intermediate lists of values into one. This is similar to MapReduce, which also collects together lists of results returned by individual workers.

3) Let's talk about functions that either return a value, or fail somewhere along the way. We can model these as functions of type A → Option<B>, where Option<B> is a type that contains either a value of type B, or a special value None. There is a natural way to "compose" two such functions. If one function has type A → Option<B>, and another function has type B → Option<C>, then we can "compose" them into a new function of type A → Option<C>. The "composition" works just like regular function composition, except if the first function returns None, then it gets passed along and the second function doesn't get called. This way you can have a "happy path" in your code, and check for None only at the end.

4) Let's talk about functions that call a remote machine asynchronously. We can model these as functions of type A → Promise<B>, where Promise<B> is a type that will eventually contain a value of type B, which you can wait for. There is a natural way to "compose" two such functions. If one function has type A → Promise<B>, and another function has type B → Promise<C>, then we can "compose" them into a new function of type A → Promise<C>. The "composition" is an asynchronous operation that waits for the first promise to return a value, then calls the second function on that value. This is known in some languages as "promise pipelining". It can sometimes make remote calls faster, because you can send both calls to the remote machine in the same request.

5) Let's talk about functions that do input or output in a pure functional language, like Haskell. We can define IO<A> as the type of opaque "IO instructions" that describe how to do some IO and return a value of type A. These "instructions" might eventually be executed by the runtime, but can also be freely passed around and manipulated before that. For example, to create instructions for reading a String from standard input, we'd have a function of type Void → IO<String>, and to create instructions for writing a String to standard output, we'd have String → IO<Void>. There is a natural way to "compose" two such functions. If one function has type A → IO<B>, and another function has type B → IO<C>, then we can "compose" them into a new function of type A → IO<C>. The "composition" works by just doing the IO in sequence. Eventually the whole program returns one huge complicated IO instruction with explicit sequencing inside, which is then passed to the runtime for execution. That's how Haskell works.
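To make the pattern concrete without any pseudocode, here's a sketch of examples 2 and 3 in plain Java, using java.util.List and java.util.Optional as stand-ins for List and Option (the helper names composeList and composeOption are my own):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ComposeDemo {
    // Example 2: compose A -> List<B> with B -> List<C>,
    // joining all the intermediate lists into one.
    static <A, B, C> Function<A, List<C>> composeList(
            Function<A, List<B>> first, Function<B, List<C>> second) {
        return a -> first.apply(a).stream()
                .flatMap(b -> second.apply(b).stream())
                .collect(Collectors.toList());
    }

    // Example 3: compose A -> Optional<B> with B -> Optional<C>;
    // an empty result short-circuits past the second function.
    static <A, B, C> Function<A, Optional<C>> composeOption(
            Function<A, Optional<B>> first, Function<B, Optional<C>> second) {
        return a -> first.apply(a).flatMap(second);
    }

    public static void main(String[] args) {
        Function<Integer, List<Integer>> plusMinus = n -> List.of(n, -n);
        System.out.println(composeList(plusMinus, plusMinus).apply(1));
        // [1, -1, -1, 1]

        Function<String, Optional<Integer>> parse = s -> {
            try { return Optional.of(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return Optional.empty(); }
        };
        Function<Integer, Optional<Integer>> recip =
                n -> n == 0 ? Optional.empty() : Optional.of(100 / n);
        System.out.println(composeOption(parse, recip).apply("4"));    // Optional[25]
        System.out.println(composeOption(parse, recip).apply("oops")); // Optional.empty
    }
}
```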

Another thing to note is that each of the examples above also has a natural "identity" function, such that "composing" it with any other function F gives you F again. For ordinary function composition, it's the ordinary identity function A → A. For lists, it's the function A → List<A> that creates a single-element list. For options, it's the function A → Option<A> that takes a value and returns an option containing that value. For promises, it's the function A → Promise<A> that takes a value and makes an immediately fulfilled promise out of it. And for IO, it's the function A → IO<A> that doesn't actually do any IO.
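For instance, here's a quick spot-check of that property for the Option case, again with java.util.Optional standing in for Option (a sketch, not an exhaustive proof of the law):

```java
import java.util.Optional;
import java.util.function.Function;

public class IdentityLawDemo {
    // "Composition" for the Option case (example 3 above).
    static <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> first, Function<B, Optional<C>> second) {
        return a -> first.apply(a).flatMap(second);
    }

    public static void main(String[] args) {
        // Optional::of is the "identity": it wraps a value and does nothing else.
        Function<Integer, Optional<Integer>> half =
                n -> n % 2 == 0 ? Optional.of(n / 2) : Optional.empty();

        for (int n : new int[] {3, 8}) {
            // Composing with the identity on either side behaves exactly like half:
            System.out.println(compose(Optional::of, half).apply(n).equals(half.apply(n)));
            System.out.println(compose(half, Optional::of).apply(n).equals(half.apply(n)));
        }
        // Prints true four times.
    }
}
```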

At this point we could go all mathematical and talk about how "compose" is like number multiplication, and "identity" is like the number 1, and then go off into monoids and categories and functors and other things that are frankly boring to me. So let's not go there! Whew!

Instead, to stay more on the programming track, let's use a Java-like syntax to define an interface Monad<T> with two methods "identity" and "compose". The five examples above will correspond to five different concrete classes that implement that interface, for different choices of the type parameter T.

The main complication is that the type parameter T must not be a simple type, like String. Instead it must be itself a generic type, like List, Option or Promise. The reason is that we want to have a single implementation of Monad<Option>, not separate implementations like Monad<Option<Integer>>, Monad<Option<String>> and so on. Java and C# don't support generic types whose parameters are themselves generic types (the technical term is "higher-kinded types"), but C++ has some support for them, called "template template parameters". Some functional languages have higher-kinded types, like Haskell, while others don't have them, like ML.

Anyway, here's what it would look like in Java, if Java supported such things:

interface Monad<T> {
    <A> Function<A, T<A>> identity();
    <A, B, C> Function<A, T<C>> compose(
        Function<A, T<B>> first,
        Function<B, T<C>> second
    );
}

And here's the implementation for Option:

class OptionMonad implements Monad<Option> {
    public <A> Function<A, Option<A>> identity() {
        // Implementation omitted, figure it out
    }
    public <A, B, C> Function<A, Option<C>> compose(
        Function<A, Option<B>> first,
        Function<B, Option<C>> second
    ) {
        // Implementation omitted, do it yourself
    }
}
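Since Java does have java.util.Optional, the Option instance can also be written as real, compilable code, with Optional standing in for Option and no common Monad interface to implement (because of the higher-kinded-type problem). Spoilers ahead if you wanted to work the implementations out yourself:

```java
import java.util.Optional;
import java.util.function.Function;

public class OptionalMonad {
    public <A> Function<A, Optional<A>> identity() {
        return Optional::of;
    }

    public <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> first,
            Function<B, Optional<C>> second) {
        // If first returns empty, flatMap skips second: the "happy path".
        return a -> first.apply(a).flatMap(second);
    }

    public static void main(String[] args) {
        OptionalMonad m = new OptionalMonad();
        Function<String, Optional<Integer>> parse = s -> {
            try { return Optional.of(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return Optional.empty(); }
        };
        Function<Integer, Optional<Integer>> positive =
                n -> n > 0 ? Optional.of(n) : Optional.empty();
        System.out.println(m.compose(parse, positive).apply("42")); // Optional[42]
        System.out.println(m.compose(parse, positive).apply("-1")); // Optional.empty
        System.out.println(m.compose(parse, positive).apply("xy")); // Optional.empty
    }
}
```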

Defining Monad as an interface allows us to implement some general functionality that will work on all monads. For example, there's a well-known function "liftM" that converts a function of type A → B into a function of type List<A> → List<B>, or Promise<A> → Promise<B>, or anything else along these lines. For different monads, liftM will do different useful things, e.g. liftM on lists is just the familiar "map" function in disguise. The implementation of liftM with lambda expressions would be very short, though a little abstract:

<T, A, B> Function<T<A>, T<B>> liftM(Function<A, B> func) {
    return ta -> compose(
        (Void v) -> ta,
        a -> identity().apply(func.apply(a))
    ).apply(null);
}

Or if you don't like lambda expressions, here's a version without them:

<T, A, B> Function<T<A>, T<B>> liftM(Function<A, B> func) {
    return new Function<T<A>, T<B>>() {
        public T<B> apply(T<A> ta) {
            return compose(
                new Function<Void, T<A>>() {
                    public T<A> apply(Void v) {
                        return ta;
                    }
                },
                new Function<A, T<B>>() {
                    public T<B> apply(A a) {
                        return identity().apply(func.apply(a));
                    }
                }
            ).apply(null);
        }
    };
}
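To see that this pseudocode actually computes something sensible, here's liftM specialized to java.util.Optional, where it compiles and runs; as promised, it behaves exactly like the built-in Optional.map:

```java
import java.util.Optional;
import java.util.function.Function;

public class LiftMDemo {
    // "Composition" for the Optional case.
    static <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> first, Function<B, Optional<C>> second) {
        return a -> first.apply(a).flatMap(second);
    }

    // liftM built only from compose and the identity (Optional::of),
    // following the generic pseudocode step by step.
    static <A, B> Function<Optional<A>, Optional<B>> liftM(Function<A, B> func) {
        return ta -> compose(
                (Void v) -> ta,
                (A a) -> Optional.of(func.apply(a))
        ).apply(null);
    }

    public static void main(String[] args) {
        Function<Optional<String>, Optional<Integer>> lifted = liftM(String::length);
        System.out.println(lifted.apply(Optional.of("hello"))); // Optional[5]
        System.out.println(lifted.apply(Optional.empty()));     // Optional.empty
        // Same behavior as the built-in map:
        System.out.println(Optional.of("hello").map(String::length)); // Optional[5]
    }
}
```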

Monads first became popular as a nice way to deal with IO instructions in a pure language, treating them as first-class values that can be passed around, like in my example 5. Imperative languages don't need monads for IO, but they often face a similar problem of "option chaining" or "error chaining", like in my example 3. For example, the option types in Rust or Swift would benefit from having a "happy path" through the code and a centralized place to handle errors, which is exactly the thing that monads would give you.

Some people think that monads are about side effects, like some sort of escape hatch for Haskell. That's wrong, because you could always do IO without monads, and in fact the first version of Haskell didn't use them. In any case, only two of my five examples involve side effects. Monads are more like a design pattern for composing functions, which shows up in many places. I think the jury is still out on whether imperative/OO languages need to use monads explicitly, but it might be useful to notice them when they appear in your code implicitly.

## Thursday, June 12, 2014

### Why I don't like laziness

Some people think lazy functional programming is nicer than strict. At first I thought so as well, but now I think strict is nicer. Not for any engineering reasons, but from a purely mathematical point of view.

The reason has to do with equality. I think about the equality operation a lot; it really brings out the important issues in language design. It's become something of a litmus test for me: if some language feature is dragging down the equality operation, then it probably sucks in other ways too. For example, in Haskell you can do this:

data Natural = Zero | Successor Natural deriving Eq
omega :: Natural
omega = Successor omega

So you have this bizarre number "omega" that is not bottom, and you can pattern-match on it to your heart's content. You can compare it for equality with Zero, Successor Zero and so on, and all those comparisons will happily return False. But try to compare omega to itself, and you get an infinite loop!

That's the problem in a nutshell. Since Haskell is lazy, there are no types that support decidable equality at all! So you might as well allow "deriving Eq" for many more types, like Bool -> Bool. Sure, comparing two functions of that type is undecidable, but it's exactly as undecidable as comparing two lazy Bools in the first place :-)

Imagine for a moment that Haskell was strict. Then we would know, truly irrevocably and forever, that the types that allow "deriving Eq" are exactly the types that have an always terminating equality operation, nothing more and nothing less. In a lazy language, "deriving Eq" can't make such a powerful statement and replaces it with something wishy-washy, and that's why I think strict is nicer.

## Friday, June 6, 2014

### Developing Web applications using structured HTML diffs

The current trend in Web development is toward single-page applications, whose prevailing architecture I described in an earlier post. However, a lot of the functionality of modern webapps is conceptually pretty simple, and it's easy to imagine that the Web could have supported such applications from the start, with just one small architectural tweak to HTML.

What if clicking on a hyperlink didn't navigate to a new page by default, but instead loaded a set of changes from the server in some structured text format, like "append such-and-such HTML to the element with such-and-such ID", and applied these changes to the HTML of the current page? A subset of that functionality has always been possible with IFrames, where clicking a link would reload a separate region of the page. But in full generality, it seems to me that such an "HTML diff" protocol could be simple enough and general enough to support many use cases of current web apps.
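To make that concrete, a diff response for an add-to-cart click might look something like this (the wire format below is entirely made up for illustration; nothing like it is standardized):

```
append #cart-items
  <li>Blue widget ($4.99)</li>
replace #cart-count
  <span id="cart-count">3</span>
remove #empty-cart-notice
```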

For example, such functions as adding a product to a shopping cart, or switching from logged-out to logged-in state, or loading an additional set of blog posts at the bottom of the page, can all be expressed as HTML diffs that leave most of the page intact, and only add/delete/update the contents of specific elements. With built-in browser support for HTML diffs, many Web applications could be implemented without using JavaScript at all, using only plain hyperlinks, and everyone would be able to use their favorite server-side languages to generate the diffs on demand.

Another nice feature is that HTML diffs would work together well with CSS transitions, which can play a smooth animation whenever an element's contents change, without the need for JavaScript. For example, clicking a hyperlink on the page could smoothly expand a list of items, by loading its contents from the server and applying a diff to the current page.

To increase responsiveness, we could use ordinary HTTP caching to prefetch and cache certain diffs. In fact, some of the diffs could even be static files on the Web host, so you can implement some dynamic-looking features without any client-side or server-side coding at all! I don't expect that to be very useful, but it's certainly a nice feature to have.

Overall I'm not convinced that the approach can remove all uses of client-side scripting, because it wouldn't be able to support very interactive applications like Google Maps. But at least it might help reduce the complexity of some business webapps, by removing the need for huge stateful client-side frameworks like Angular or Ember. Also surprisingly, I haven't been able to find any previous descriptions of this idea, though I'm sure someone must have come up with it before.

## Tuesday, June 3, 2014

### Programming languages should have at most one equality operation per type

In many programming languages there are several ways to check things for equality. For example, Java has == for primitive equality and object identity, and equals() as an overridable equality operation. Other industrial languages like Python or C# have the same approach, and Common Lisp and Scheme have four equality operations each (!)

I think it's a mistake to have so many equality operations. In this post I'll try to argue that most languages would do well by following the example of Haskell, where some types have a single equality operation, and others have none at all.

The idea is that the equality operation should always correspond to observational equality. If it's possible to tell the two values apart by calling various functions on them, then the values are different, otherwise they are equal. For example, the integer 3 is always equal to any other integer 3, and there's no point in distinguishing their identity. On the other hand, a mutable array is not equal to another array with the same contents, because changing an element in one array doesn't affect the other one.

More generally, observational equality means that immutable types like integers, strings and immutable containers should have equality based on their contents, while mutable types like references or mutable containers should have equality based on identity. That approach is easy to understand and does the right thing in most cases.

As an additional complication, if a type contains functions, that means observational equality is undecidable in general. Such types should have no equality operation defined by default, but the user can provide one.

(It goes without saying that equality operations should be defined per type, because it makes no sense to compare values of different types. If the user needs to compare the integer 3 and the floating point number 3, they should use an explicit conversion first.)

At this point some people are saying, why shouldn't we compare mutable containers based on their current contents? There's no harm in having a "compareContents" operation, but it shouldn't be called equality, because a container of mutable containers relies on the equality and hashes of its elements staying stable; when those change as the elements mutate, you end up with weird bugs.
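Java's collections already behave in exactly the "compareContents" way, so the weird bugs are easy to demonstrate:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UnstableEquality {
    public static void main(String[] args) {
        // ArrayList's equals()/hashCode() are based on current contents,
        // i.e. the "compareContents" behavior. Watch it break a HashSet.
        List<Integer> inner = new ArrayList<>(List.of(1, 2, 3));
        Set<List<Integer>> set = new HashSet<>();
        set.add(inner);

        System.out.println(set.contains(inner)); // true

        inner.add(4); // mutate after insertion: the hash code changes

        // The set still holds the list, but can no longer find it,
        // because it's filed under the old hash bucket.
        System.out.println(set.contains(inner)); // false
        System.out.println(set.size());          // 1
    }
}
```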

As a daring thought experiment, you could implement that approach in Java by making the following changes:

1) Remove all primitive types like int. All types should be classes and inherit from Object. Primitive types cause problems everywhere in Java. With identity comparisons gone, the compiler will be free to unbox Integers anyway.

2) Remove null from the language, and replace it with a generic class Maybe<T>. Null also causes problems everywhere, not just for equality.

3) Remove equals() from Object and move it into the generic interface Comparable<T>. Note that equals() should accept an argument of type T, not any Comparable.

4) Make == be a synonym for equals() in all cases. Remove the default implementation of ==. In all cases where using equals() would lead to a compile error, due to a type not implementing the appropriate Comparable<T>, using == should be a compile error as well.

5) Add a standard class IdentityComparable that implements Comparable<IdentityComparable>, with a final implementation of equals() based on identity. That class should be used as the base for all mutable classes, including containers and file handles.

6) For all standard immutable classes like Integer and String, implement equals() based on contents, to make == do the right thing.

7) Do all the above for hashCode() and compareTo() as well. The declarations should be in Comparable<T>, and IdentityComparable should contain final implementations based on identity.
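As a rough sketch of what steps 3 and 5 might look like (the interface and method names here are my own inventions, since real Java keeps equals() on Object and won't let us move it):

```java
// Stands in for the relocated Comparable<T> from step 3:
interface EqComparable<T> {
    boolean equalsTo(T other); // stands in for the relocated equals()
}

// Step 5: the base class for all mutable classes.
class IdentityComparable implements EqComparable<IdentityComparable> {
    // Final: mutable subclasses inherit identity equality and can't override it.
    public final boolean equalsTo(IdentityComparable other) {
        return this == other;
    }
}

// A mutable container just extends IdentityComparable:
class Cell extends IdentityComparable {
    int value;
}

public class EqualitySketch {
    public static void main(String[] args) {
        Cell a = new Cell(), b = new Cell();
        a.value = 1;
        b.value = 1;
        System.out.println(a.equalsTo(a)); // true: same object
        System.out.println(a.equalsTo(b)); // false, despite equal contents
    }
}
```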

To be clear, I don't think Java will ever adopt any approach like that. But it's interesting to see just what changes to Java are needed to fix the equality mess, and how many of these changes also make sense for other reasons.

## Monday, June 2, 2014

### If OO languages don't need variants, they don't need objects either

The usual story is that object-oriented languages don't need sum types (a.k.a. variants) because they can use the Visitor pattern instead. For example, this Haskell type:

data Either a b = Left a | Right b

can be converted to a couple of Java interfaces:

interface EitherVisitor<A, B, Result> {
Result visitLeft(A a);
Result visitRight(B b);
}

interface Either<A, B> {
<Result> Result accept(EitherVisitor<A, B, Result> visitor);
}

(Note that I'm using logji's corrected Visitor pattern, which is generic over the Result type. That's the right way to implement visitors and should be in every textbook.)
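For completeness, here's what using these interfaces actually looks like: the two constructors become factory methods that close over their data, and pattern matching becomes a visit (the helper names left, right and describe are mine):

```java
public class EitherDemo {
    interface EitherVisitor<A, B, Result> {
        Result visitLeft(A a);
        Result visitRight(B b);
    }

    interface Either<A, B> {
        <Result> Result accept(EitherVisitor<A, B, Result> visitor);
    }

    // The two "constructors" of the sum type, as factory methods:
    static <A, B> Either<A, B> left(A a) {
        return new Either<A, B>() {
            public <R> R accept(EitherVisitor<A, B, R> v) { return v.visitLeft(a); }
        };
    }

    static <A, B> Either<A, B> right(B b) {
        return new Either<A, B>() {
            public <R> R accept(EitherVisitor<A, B, R> v) { return v.visitRight(b); }
        };
    }

    // "Pattern matching" is just a visit:
    static String describe(Either<String, Integer> e) {
        return e.accept(new EitherVisitor<String, Integer, String>() {
            public String visitLeft(String s) { return "error: " + s; }
            public String visitRight(Integer n) { return "ok: " + n; }
        });
    }

    public static void main(String[] args) {
        System.out.println(describe(right(42)));    // ok: 42
        System.out.println(describe(left("boom"))); // error: boom
    }
}
```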

I think the above argument is right, and visitors can be used to replace sum types in most situations, but I also think it proves too much. Namely, you can make a similar argument to prove that object-oriented languages don't need product types (a.k.a. objects with fields) either! For example, this Haskell type:

data Pair a b = Pair a b

can be expressed with Java interfaces that contain only one method each (a.k.a. function types):

interface PairVisitor<A, B, Result> {
Result visit(A a, B b);
}

interface Pair<A, B> {
<Result> Result accept(PairVisitor<A, B, Result> visitor);
}

(With currying you could go even further, and obtain interfaces that contain a single method receiving a single argument each. I won't do it here because it makes the code less clear.)
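And here's the Pair version in use; note that the "fields" fst and snd fall out as trivial visitors (again, the helper names are mine):

```java
public class PairDemo {
    interface PairVisitor<A, B, Result> {
        Result visit(A a, B b);
    }

    interface Pair<A, B> {
        <Result> Result accept(PairVisitor<A, B, Result> visitor);
    }

    // The single "constructor": closes over the two components.
    static <A, B> Pair<A, B> pair(A a, B b) {
        return new Pair<A, B>() {
            public <R> R accept(PairVisitor<A, B, R> v) { return v.visit(a, b); }
        };
    }

    // Projections are just trivial visitors:
    static <A, B> A fst(Pair<A, B> p) { return p.accept((a, b) -> a); }
    static <A, B> B snd(Pair<A, B> p) { return p.accept((a, b) -> b); }

    public static void main(String[] args) {
        Pair<String, Integer> p = pair("answer", 42);
        System.out.println(fst(p) + " = " + snd(p)); // answer = 42
    }
}
```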

More generally, if you have only function types, you can use Church encoding to simulate pairs, lists, numbers and just about everything else. The resulting code will have reasonable types in System F, which is a subset of many existing type systems, including Java's and Haskell's.

There is, however, a subtlety. The Java interfaces shown above don't actually contain any data of types A and B, they just have methods to return that data on demand. In other words, they correspond to lazy Pair and Either. That has a couple of nontrivial consequences.

First, the thing that promises to be a Pair or Either can go into an infinite loop or throw an unchecked exception when asked to return the data. That corresponds to Haskell's bottom value, which is an inhabitant of every type.

Second, lazy types can lead to infinite data structures. For example, this Haskell type:

data List a = Empty | Append a (List a)

can be expressed with Java interfaces like this:

interface ListVisitor<A, Result> {
Result visitEmpty();
Result visitAppend(A a, List<A> tail);
}

interface List<A> {
<Result> Result accept(ListVisitor<A, Result> visitor);
}

If we encode lists in this way, there's nothing stopping a list from passing itself as its own tail into visitAppend. That means the type system cannot guarantee that the list is finite, as it can in eager functional languages like ML. Robert Harper has a well-known post arguing against lazy data structures, but Haskell programmers seem to like them.
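Here's that infinite list expressed concretely in Java: repeat builds a list that passes itself as its own tail, and everything terminates as long as we only inspect a finite prefix (the helper names repeat and take are mine):

```java
public class InfiniteListDemo {
    interface ListVisitor<A, Result> {
        Result visitEmpty();
        Result visitAppend(A a, List<A> tail);
    }

    interface List<A> {
        <Result> Result accept(ListVisitor<A, Result> visitor);
    }

    // A list that passes ITSELF as its own tail: infinitely many copies of x.
    static <A> List<A> repeat(A x) {
        return new List<A>() {
            public <R> R accept(ListVisitor<A, R> v) {
                return v.visitAppend(x, this);
            }
        };
    }

    // Safe as long as we only look at the first n elements:
    static <A> java.util.List<A> take(int n, List<A> list) {
        if (n == 0) return java.util.List.of();
        return list.accept(new ListVisitor<A, java.util.List<A>>() {
            public java.util.List<A> visitEmpty() {
                return java.util.List.of();
            }
            public java.util.List<A> visitAppend(A a, List<A> tail) {
                java.util.List<A> out = new java.util.ArrayList<>();
                out.add(a);
                out.addAll(take(n - 1, tail));
                return out;
            }
        });
    }

    public static void main(String[] args) {
        System.out.println(take(3, repeat("na"))); // [na, na, na]
    }
}
```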

On the other hand, using interfaces for sum and product types leads to an unexpected benefit for Java programmers, because they can write their own implementations that will be compatible with existing libraries. If we had a language where all types are interfaces "under the hood", we could do nice things like replacing fields with getter methods transparently, though the current Java style seems to be too clunky for that.

I guess this post turned out a little more rambling and educational than I intended. Oh well. A short summary is that both sum types and product types can be viewed as a layer of syntactic sugar over function types, and peeking below that layer can have some benefits for understanding programs.