Friday, April 18, 2014

The difference between code and data is Turing completeness

In principle there seems to be no difference between code and data, because the same sequence of bits can be treated as a sequence of numbers or a piece of code. It is more accurate to say that the same sequence of bits can be treated as code or data by different programs.

For example, we could have a program A that treats its input as a sequence of numbers and computes the sum of these numbers, and program B that treats the input as a piece of code and executes it. The important difference between programs A and B is that we can make program B carry out any possible computation by giving it a suitable input, while program A can only compute sums.
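The contrast can be sketched in Java (the class names Summer and TinyVm are made up for illustration): the same list of integers is a pile of numbers to one program and an executable program to the other.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class Summer {
    // Program A: treats the input purely as data; it can only ever compute sums.
    static int run(List<Integer> input) {
        int total = 0;
        for (int n : input) total += n;
        return total;
    }
}

class TinyVm {
    // Program B (a sketch): treats the same input as code for a tiny stack machine.
    // Opcodes: 0 = push the next value, 1 = add the top two, 2 = multiply the top two.
    static int run(List<Integer> program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int i = 0; i < program.size(); i++) {
            int op = program.get(i);
            if (op == 0) stack.push(program.get(++i));
            else if (op == 1) stack.push(stack.pop() + stack.pop());
            else if (op == 2) stack.push(stack.pop() * stack.pop());
        }
        return stack.pop();
    }
}
```

The same input [0, 2, 0, 3, 1] means 6 to Summer (a sum) but 5 to TinyVm (push 2, push 3, add). Note that TinyVm as written is still not Turing complete, since it has no branch or loop opcode, which is exactly the kind of intermediate point on the spectrum between A and B.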

On the spectrum between these two extremes, we can find programs whose behavior is more influenced by their input than program A, but easier to analyze than program B. These programs correspond to models of computation that are stronger than finite state machines, but weaker than Turing machines.

For example, if we run a syntax highlighting program on a piece of code, we probably can't make it carry out an arbitrary computation. In this way, a syntax highlighting program treats the input as data. If we give the same piece of code to a compiler, it is treated less like data and more like code, especially if the programming language it's written in allows some compile-time computation. And when we finally execute the program, it is treated as code by the interpreter or the operating system.

Now you can tell the difference between code and data in any software system. If the data can strongly influence the program used to process it, and make it carry out an arbitrary computation, then it's code.

Thursday, April 17, 2014

Scrap your interfaces

This post is inspired by Scrap your type classes by Gabriel Gonzalez. Basically I want to translate his idea into an object-oriented programming language and see if it works.

1. Comparable and Comparator

We begin by noticing a problem with OOP languages. Let's try to define a Comparable interface, for objects that we want to put in sorted collections etc.:

interface Comparable {
  int compareTo(Comparable); // Hmm, is this right?
}

class Foo implements Comparable {
  int compareTo(Comparable); // Definitely wrong
}

The problem is that we want to compare Foo to another Foo, not to any Comparable. And we want the compiler to enforce that guarantee, because that's what type safety is all about. Some might say we should add a type parameter to the Comparable interface:

interface Comparable<Foo> {
  int compareTo(Foo);
}

And since we don't want users to write something like "Bar implements Comparable<Foo>", we can use F-bounded polymorphism to prevent that:

interface Comparable<Foo extends Comparable<Foo>> {
  int compareTo(Foo);
}

That solution would work, but for purposes of this post I'd like to explore something simpler and more general. Instead of putting the comparison operation inside the class, we can use an external object that implements the Comparator interface:

interface Comparator<Foo> {
  int compare(Foo, Foo);
}

This way, the compiler makes sure that the "compare" method takes two arguments of type Foo. Another benefit of that solution is that you can have different comparators defined on the same class. That would not be possible with Comparable, because a class cannot implement the same interface in two different ways. The drawback is that now you have to pass the Comparator object around manually.
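As a sketch of the "different comparators on the same class" point (the Person class and both comparator names are invented for illustration; java.util.Comparator has exactly this shape):

```java
import java.util.Comparator;

class Person {
    final String name;
    final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
}

// Two independent orderings for the same class -- something a single
// "Person implements Comparable" declaration could never express.
class ByName implements Comparator<Person> {
    public int compare(Person a, Person b) { return a.name.compareTo(b.name); }
}

class ByAge implements Comparator<Person> {
    public int compare(Person a, Person b) { return Integer.compare(a.age, b.age); }
}
```

A sorted collection then takes whichever ordering fits the situation, e.g. new TreeSet<>(new ByAge()).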

2. Serializable and Serializer

The above solution with Comparator is an instance of a general pattern that I want to explore in this post. Let's take another example. Let's try to define an interface for objects that can be serialized and deserialized:

interface Serializable {
  ByteArray serialize();
  void deserialize(ByteArray);
}

The problem is that having "deserialize" as an instance method is not ideal, because it forces the user to construct a new Foo before deserializing into it. Ideally we'd want the deserialization code to be responsible for constructing Foo, so it can choose which constructor of Foo to call. To solve that problem, let's define an external Serializer interface, like in the previous section:

interface Serializer<Foo> {
  ByteArray serialize(Foo);
  Foo deserialize(ByteArray);
}

Note that since the class Foo doesn't need to implement Serializer<Foo> or reference it in any way, we can write different serializers for the same class, just like with Comparator.
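Here's a sketch of one possible serializer (the Point class is invented; ByteArray becomes Java's byte[], handled with java.nio.ByteBuffer):

```java
import java.nio.ByteBuffer;

interface Serializer<T> {
    byte[] serialize(T value);
    T deserialize(byte[] bytes);
}

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// The serializer, not Point itself, decides which constructor to call,
// and Point never has to mention serialization at all.
class PointSerializer implements Serializer<Point> {
    public byte[] serialize(Point p) {
        return ByteBuffer.allocate(8).putInt(p.x).putInt(p.y).array();
    }
    public Point deserialize(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new Point(buf.getInt(), buf.getInt());
    }
}
```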

3. Numeric types

For a third example, let's take arithmetic. Can we define an interface Numeric that includes all arithmetic operations, like addition and multiplication, so that users can implement it in their own classes that support arithmetic? That's difficult, because we want to stop users from adding together numbers of different types (the same problem we had with Comparable), and also because we'd like to provide zero and one as static constants. The solution is once again to pull the operations into an external interface:

interface ArithmeticOperations<Foo> {
  Foo add(Foo, Foo);
  Foo subtract(Foo, Foo);
  Foo multiply(Foo, Foo);
  Foo divide(Foo, Foo);
  Foo zero();
  Foo one();
}

That way users can implement their own numeric types and pass them to existing libraries, or even do more fancy things like tracing which arithmetic operations get called by the existing libraries (by writing a custom implementation of ArithmeticOperations<Int>).
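A sketch of the tracing idea (CountingIntOps and Polynomial are invented names): a generic routine written against the interface, plus an implementation for Integer that counts the operations performed on it.

```java
interface ArithmeticOperations<T> {
    T add(T a, T b);
    T subtract(T a, T b);
    T multiply(T a, T b);
    T divide(T a, T b);
    T zero();
    T one();
}

// Counts every arithmetic operation a library performs.
// The library only ever sees the interface.
class CountingIntOps implements ArithmeticOperations<Integer> {
    int calls = 0;
    public Integer add(Integer a, Integer b)      { calls++; return a + b; }
    public Integer subtract(Integer a, Integer b) { calls++; return a - b; }
    public Integer multiply(Integer a, Integer b) { calls++; return a * b; }
    public Integer divide(Integer a, Integer b)   { calls++; return a / b; }
    public Integer zero() { return 0; }
    public Integer one()  { return 1; }
}

// A stand-in for "existing library code": works for any T with these operations.
class Polynomial {
    // Evaluates x*x + x + 1 using only the supplied operations.
    static <T> T evalSquarePlusXPlusOne(ArithmeticOperations<T> ops, T x) {
        return ops.add(ops.add(ops.multiply(x, x), x), ops.one());
    }
}
```

Evaluating the polynomial at x = 3 returns 13 and records three operation calls, without Polynomial knowing anything about the counting.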

4. Conclusions

By now I think the common thread is clear. Using interfaces in the usual way is sometimes limited and unnatural, because when a method of class Foo is defined in some interface, it only accepts one argument of type Foo (the "this" pointer). You'll need unnatural tricks if you want it to accept two Foos, return a Foo, construct a Foo from nothing, etc. Also you can't make Foo implement the same interface in two different ways, and other people can't make Foo implement an interface that you didn't anticipate.

The "external interfaces" shown above are free from all these restrictions. You can view them as a kind of generalized virtual table for Foo, which is not attached to any individual Foo. And other people don't need to depend on Foo implementing the right interfaces, because they can write their own implementations of Comparator<Foo> or Serializer<Foo> without modifying Foo. In this way, the relationship between classes and interfaces becomes truly many-to-many and extensible by third parties.

I don't know if you should go as extreme as never having interfaces on instances at all, and always use these "external interfaces" instead. There are probably drawbacks to that. The main syntactic drawback is having to pass the implementation of the "external interface" everywhere, but maybe it can be addressed with some syntactic sugar.

Trying to come up with such syntactic sugar for an OOP language could be an interesting exercise in language design, I think. Part of the inspiration could come from functional languages like Scala or Agda, which try to solve a similar problem. I intentionally didn't put any complicated FP stuff in this post, but you can find plenty of it in these Reddit discussions: Scrap your type classes, Instances and Dictionaries.

Edit: the Reddit discussion about this post has some very nice comments.

Tuesday, April 8, 2014

Building single-page web applications

From looking at recent web frameworks like Angular.js and Ember.js, it seems like there's an emerging Right Way to build single-page web applications, which is different from the past Right Way to build web applications that relied on server-side HTML generation. In this post I'll try to outline the main ideas, so you can understand them without puzzling through some obscure syntax.

The main idea is that the client code keeps a "model" of the current state of the application, which is basically a big data structure. That model gets updated in response to user actions or to AJAX responses carrying more data. A client-side templating library receives the model as an input, renders the HTML, and updates the HTML whenever the model changes. Apart from that, the client code never touches any DOM elements directly. The only way to update the HTML is to update the model and let the templating library do its job.

Ideally you'd have a main template which includes many smaller templates, and the templating library would figure out which parts of the model changed since last time. That way the client code doesn't need to care about incremental updating, because full updates are cheap anyway.
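The model-to-view direction can be sketched even without a browser (all names here are invented, and a real templating library would produce DOM nodes rather than a string): the model is plain data, and the view is recomputed wholesale from it.

```java
import java.util.List;

// The model: a plain data structure, never touched by rendering code.
class TodoModel {
    List<String> items;
    TodoModel(List<String> items) { this.items = items; }
}

class TodoTemplate {
    // The only path from model to markup: a full re-render.
    // Client code updates the model; it never edits the output directly.
    static String render(TodoModel model) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String item : model.items) {
            html.append("<li>").append(item).append("</li>");
        }
        return html.append("</ul>").toString();
    }
}
```

After any change to model.items, the client simply calls render again; a diffing templating library would then apply only the changed parts to the real DOM.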

For handling events, like clicks, every template should be able to have a "controller" object in client code, and have special syntax for binding the methods of that object as event handlers on specific elements. I won't say much more about the organization of client code, because it doesn't seem to be quite settled yet.

The nice thing about that approach is that the client code no longer needs to use things like document.getElementById, element.className, or element.addEventListener. By using the templating library as the only way to communicate between the client code and the HTML, you eliminate most of the feeling of "brittleness". And since the templating library's only job is to render the model, which is basically a struct that can contain other structs, you don't need many fancy features in the templating library either.

If you like static typing, note that the model can use a structured type definition, like Google's Protocol Buffers. That way you can typecheck the client code that updates the model (if it's written in a statically typed language that compiles to JavaScript), the templates that render the model, and the bindings between templates and their controllers. Of course it would make sense to use the same structured types for communicating with the server as well.

To support animations and repainting, the templating library can allow you to set an optional "key" attribute on any element, to indicate that the identity of the DOM element should stay the same from one repainting to the next. That way you can use ordinary CSS transitions on elements, and they will work as expected. More complex animations should be done explicitly, by representing the intermediate states of the animation within the model.

There are many more topics to cover (human-readable URLs, authentication, i18n, logging...) but it seems like the approach can accommodate them without much trouble.