Tuesday, August 30, 2011

Language design

Programming languages are our principal tools, the things we work with every day. Most programmers (real ones) have a particular relationship with their favorite language, but they're also very curious about new languages, new paradigms (or at least paradigms they haven't tested so far), new concepts …

This is probably the reason why it is so hard to get clear answers from programmers to questions like "what is the best programming language?" or "what is the worst programming language?"

So, I'll try to make a small survey of the ideas on language design that I have found most useful when trying to classify languages. I'll also add my personal views on what makes a good language.

Semantics?
Lots of people begin classifying programming languages by execution model (compiled or not) or by paradigm (functional, object-oriented, declarative, procedural …). This is probably a good idea, but I won't do it that way.

From my own point of view, there are two major groups of programming languages: those with a practical approach and those with theoretical foundations. This is how I understand the evolution of programming languages.

The practical way follows the history of computer evolution: early computers had minimal interfaces (switches, punched cards …) and programmers wrote direct machine code. As computers became able to handle bigger programs, programmers started to use assemblers, which transform an "almost" human-readable source code into binary machine code. And the evolution continued in the direction of more abstraction: conditional jumps were replaced by if-then-else constructions, while and for loops simplified basic algorithms, and so on. This was the birth of structured programming. The basis of this evolution is to abstract recurring coding constructions behind a simpler syntax. Starting from machine code, the practical way has reached the structured and object-oriented programming paradigms.

On the other side, the theoretical way took root in mathematical models of computation like the lambda-calculus. This branch of language evolution takes a completely different direction: we start with an abstract concept, often with a very formal definition, and try to build a concrete programming language from it. The evolution is now reversed, since the goal is to find lower-level translations of abstract constructions, rather than building new syntax for known constructions.

Both evolutions produced a lot of languages: Cobol, Fortran or C/C++ for the practical way, and Lisp, Scheme, Prolog or the ML family for the theoretical way. The interesting fact is that the two ways are converging! Practical languages tend to introduce concepts with theoretical foundations (as in Java or C#), and theoretical languages are more and more able to produce efficient concrete machine code.

So, why start a classification like this? Simply because the way a language was built has a great impact on how the language can be used.

Languages with well-founded semantics are good at implementing clever algorithms and symbolic data manipulation (symbolic processing, compilers, code analysis tools …). On the other hand, more practical languages (like C) are better suited to systems programming and any other activity involving lower-level manipulation.
Dynamic or Static?
Another important discriminating criterion is the nature of the language's execution model. Rather than talking about compilation versus interpretation (or virtual machines, JIT …), I prefer to frame the distinction as dynamic versus static languages (there are compilers for dynamic languages and interpreters for static ones.)

Again, using one kind or the other is a matter of goals. Dynamic languages are often dismissed as toys or scripting languages, but this view is too restrictive, since modern dynamic languages (such as Perl, Python or Ruby) are probably as good as any other language for writing applications.

In fact, this is more a matter of program design and software conception. Dynamic languages tend to encourage programming by accretion: you build a small kernel of functionality, and then make it grow. The dynamic nature of such languages lets you modify or add behavior to existing code without a complete rewrite.

On the other hand, when you need (or already have) a well-established design, static languages are better suited: they enforce your design choices by making them mandatory. The usual benefits are earlier error detection and probably better performance.

So, to summarize: dynamic languages are better suited to prototyping or unguided programming (you know, when you code but don't really know where you're going …), while static languages offer a better context for bigger projects involving more design work. But, in my opinion, this is mostly a matter of taste!
Hidden Complexity
Now we are on the dangerous ground of what is good and bad in a programming language. Hidden complexity is about simple-looking language constructions that involve heavy computation.

We can find a good example in OCaml's operators on lists. There are two operators that are often seen as similar: :: and @. The first one is a constructor for the list type, while the second one is in fact shorthand for a function on two lists (append.) The issue is that the latter looks like a simple, inexpensive operation, while it can induce a very high complexity cost, as in the following example:

let rec rev = function
  | [] -> []
  | h::t -> (rev t) @ [h]

This function has quadratic complexity (O(n²)) because of the append operator, although this is far from obvious (if you don't count the cost of the append, the function looks linear.)
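For comparison, here is the classic linear-time rewrite using an accumulator (a sketch of essentially what the standard List.rev does): each step performs a single cheap :: cons, so the whole traversal stays O(n).

let rev l =
  let rec aux acc = function
    | [] -> acc
    | h :: t -> aux (h :: acc) t  (* one cons per element, no append *)
  in
  aux [] l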

Most of the time, hidden complexity lurks in magic operators, the kind of operators that seem to solve a lot of issues at once.

A very common source of hidden complexity is operator overloading and implicit operations. C++ is full of this kind of trap: when objects are passed by copy (that is, the default way of passing objects to a function), a special constructor is invoked, which performs a simple memory copy unless someone has redefined it ("Hey! Who added an Ackermann function to the copy constructor of this object?!")
Keywords or symbols?
Again, a matter of taste: a language's syntax can be keyword-oriented (Pascal) or symbol-oriented (C++.)

For example, a Pascal procedure looks like this:

PROCEDURE swap(VAR a: integer; VAR b: integer);
VAR
  c: integer;
BEGIN
  c := a;
  a := b;
  b := c;
END;

While in C++ we would have something like:

void swap(int &a, int &b)
{
  int c;
  c = a;
  a = b;
  b = c;
}

The C/C++ syntax relies more on symbols than on keywords ({…} versus begin … end, for example.) Which is best? I tend to prefer the symbolic way over the keyword one, but it's a matter of taste. Of course, it's also a matter of balance: you can probably find some purely symbolic languages, and you'll probably find them too cryptic (have you ever opened a sendmail.cf file?)

In fact, a language doesn't need to be purely symbolic to be cryptic; C++ syntax is a good example: an average piece of C++ code is full of symbolic details that can't be understood by fast reading, since small details can change the semantics (and I'm not even talking about operator overloading …)

Here are my own rules about syntax:
  1. one meaning for one symbol
  2. symbols should only be used for immediate operations (no hidden complexity)
  3. uncommon operations deserve keywords rather than symbols
  4. default behavior and usual operations should have the simplest syntax (or no specific syntax at all)
In my opinion, the C (not C++) syntax can be considered a good example of a language well balanced between symbols and keywords, while C++ is the seminal example of all possible mistakes in language design (at least regarding syntax.) The Pascal syntax tends to be a little verbose and has a pedantic aftertaste (it's the kind of language that prefers to say integer rather than int to be sure you understand it well, in case you've missed some point: these are teaching languages.)
Humorous classification
As a so-called conclusion, here is a draft of a programming language classification:
  • Rudimentary languages: they've been here for decades, they're probably older than you, and you don't want to use them, but sometimes you can't avoid them (BASIC, COBOL …)
  • Sedimentary languages: built by a geological process of accretion, they were designed as a stack of concepts in the hope that every case would be covered … (C++ …)
  • Alimentary languages: you learn them and use them because programming is a job, a job brings you money, and money buys you food. You probably don't like them, but in fact you don't like what you're doing with them (maybe you don't like programming at all) (Java, C#, VB, PHP …)
  • |-|4><0r languages: you need to be l33t to understand them … (Brainfuck, Whitespace …)
  • Will-be-dead-next-year-for-decades languages: we all agree that these languages are living abominations escaped from the Jurassic age, but they're still there, and still used! (COBOL, FORTRAN …)
  • Teaching languages: sometimes you feel as if the compiler is staring at you with a cold, calm face and a wooden stick in its hand, while the rest of the class holds its breath. When such a time comes, you know that you, naughty boy, haven't declared your thrown exceptions, or have used a deprecated method … (Pascal, Java …)
  • We've-got-anything-you-want languages: have you ever dreamed of a procedural, structured, functional language with object-oriented extensions, concurrent programming primitives, embedded assembly, static validation and dynamic assertion testing? Ada is there for you …

Saturday, July 30, 2011

Rule 0 - Do It!

So, I've already listed some rules about programming (do I need to remind you that the most important one is don't trust the maxims?), but I recently came up with a new one, a kind of rule 0, the rule above the rules. The rule is pretty simple:
Do It!
Simple, isn't it?

So, what do I mean by Do It? There are several topics where this rule applies; I'll cover those that seem most important to me: learning to program, fixing bugs, and rewriting/refactoring.

For the beginner:
Learning to program (or even learning a new programming language) can be separated into two parts: studying (theoretical principles, syntax and structure) and practicing.

Many beginners fall into the over-studying trap: they spend a lot of time trying to understand the inner concepts and memorize every specific syntax construction before practicing. But when it comes to coding, they make the same mistakes and face the same issues as if they hadn't spent all that time understanding how it works.

Why? Just think about it: if I explain to you what it's like to ride a bicycle, I can tell you a lot of things, even show you various videos and diagrams, but until you get yourself on a real bike and fall face-first to the ground, you won't be able to ride it.

Programming is no exception. You need to practice in order to learn. A damn good example is pointers. Pointers are a mandatory concept in many languages, and probably in many classical algorithms, so a programmer cannot avoid pointer manipulation. When facing them for the first time, they appear as strange entities with complex usage and semantics: they are values by themselves, but they also seem to be containers, and you can obtain a pointer from various operations (referencing a variable, allocating memory …). All of this comes with various syntaxes and special cases (the worst of all is to start learning pointers in different languages using different syntaxes, say Pascal and C …)

The fact is simply that you need to practice to learn. I've tried a lot of ways to explain pointer concepts to various students and never found a better one than practicing. The usual box-and-arrow diagrams always miss an important point. Explaining the idea of pointers as indices into the big array that is memory won't help much more. But a bunch of pointer and pointer-arithmetic exercises will always do the job.

Practicing also breaks another beginner's symptom: fear of difficulty. When facing a new kind of problem, measuring the real difficulty of solving it is quite hard. Trying to solve it in your head or with some useless diagram on a sheet of paper won't help; you should just give it a try. A good example is writing a memory allocator (a pretty good exercise for understanding pointers and memory management): if you have never tried it before, you can't imagine how one should do it. But as you begin to code, you'll find that there's no mystery or magic, and (if you stay on the simple path, of course; good generic and efficient allocators are not so simple) you'll see that the task is quite approachable.

So, to summarize: don't waste your time over-thinking your code; just code, do it!

Bug fixing:
When building a project, there are always two competing tasks: moving on to the next missing feature, or fixing what already exists. So, should you push the project toward some fixed milestone (achieving a functionality goal), or should you fix issues in what was already written?

My advice is to never postpone a bug fix, even a minor one. Bugs, incomplete cases and the like are bombs waiting to blow all your work up. There are several reasons not to postpone corrections:

  1. It's simpler to fix something when the code is still fresh in your head
  2. The bug fix may require deeper modifications than it seems
  3. Building a new part of a project on rotten foundations will always end in a big crash
You can extend this idea to the fixes themselves: you should not let ugly workarounds sit in your code base. Workarounds are far more pernicious than bugs, in the sense that they hide the error without (most of the time) really fixing it.

Rewriting/Refactoring:
To illustrate this part, I'll give a personal example. I'm currently working on an experimental language (mostly a C variant with a cleaner syntax and some syntactic sugar); the project began when it appeared that we couldn't escape the C language for writing kernels. We gave a lot of languages a try and ran into various issues, mainly because modern languages are oriented toward userland programming, not kernel programming. After a long and trollish discussion on our mailing list, I came up with the idea of rewriting the C syntax and extending it with various comfort features and higher-level extensions.

So, I laid down a quick version of an AST (Abstract Syntax Tree) for a kind of C language, wrote a pretty printer for it, and then a parser. When it was clear that the idea worked (and given the enthusiastic messages on the list), I went to the next step and wrote the type checking. At that point, the original AST was not completely suitable, in the sense that it was an immutable value (I was using OCaml for the project), but I found a way to circumvent the difficulties and continued along this axis.

We then began to write code generation (since we were writing a source-to-source compiler rather than a real compiler, this wasn't so hard at first.) But as the project evolved, my badly designed AST gave us more and more difficulties. And then, one day (after almost a year of code), it became clear to me that if we wanted to push the project to its end, I had to change my data structure. So, how to do it? The task seemed insurmountable, since the AST was the skeleton of the project.

I took out my favorite text editor, allowed myself five minutes of reflection, and decided that if I couldn't rewrite parsing and typing within a few days, I would give up and try to fix the original code. Guess what? A week later, 90% of the project was rewritten, and some parts were even more advanced than in the original version! Of course, the first two days were a total rush; I ended up writing 5000 lines of code in 48 hours! But thanks to the new design and to the experience gathered on the previous work, most of the task was straightforward.

Two weeks later (yes, this is a really recent story ;), our compiler is now more functional than the previous version!

What did I learn? First, that I'm still capable of writing a very big amount of code in a few days (good for my self-esteem ;). Second, that rewriting is far simpler than it may seem at first glance. There are several reasons you should give rewriting a try:
  1. The previous work wasn't a waste of time: it showed you where the difficulties are
  2. A better design and data structures will greatly simplify the rest of the code
  3. If you fail, you can still go back to the previous version
  4. There will always be external parts that can be reused
  5. If you feel the need to do it, things will only get worse if you postpone it!
Of course, rewriting is not always the right solution: this example is quite extreme, in the sense that the whole project was based on the part I wanted to change, so many things inside the project had to be rewritten.

Refactoring is, most of the time, a better way to fix bad design. The main idea is to rewrite your code chunk by chunk while still preserving the working state of the project. There are a lot of patterns for doing that (you can find good books on the subject; Marc Espie, a person I consider a real expert programmer, pointed me to "Refactoring" by Martin Fowler. You should probably read it; I will as soon as I find time for it.)

Conclusion:
So, my new rule, Do It!, is all about going straight to the point rather than wasting your time on useless preparation or postponing an important fix or rewrite. There's no better way to do something than actually doing it!

Saturday, July 9, 2011

Functional Programming?

Do you know functional programming?

Functional programming (FP) is an interesting subject for a lot of reasons. Those of us who have used (and still use) functional programming languages (FPLs) often point out two disappointing facts:

  1. Lots of programmers don't really understand what it is
  2. Despite the fact that FP and FPLs are pleasant to use and also quite efficient, they are still not widely used

Am I an FP fan? The quick answer is "not anymore"; the longer one is more complicated. I have now used OCaml for more than 10 years, and I've found it very useful and pleasant many times. But I've also found a lot of unsatisfactory points. It's hard to really explain why, but on bigger projects, the absence of constraints on global program structure often leads to unreadable code, some syntax choices are annoying ... Other FPLs haven't convinced me yet ...

So, here is my humble opinion (how I hate this expression ...)


First, I'll attempt to define FP and what an FPL should be. Right now, you should forget the erroneous idea that FP is about recursion and recursive functions!
FP arose in the early days of formal languages (before computers, in fact); more precisely, the λ-calculus is (at least for me) the starting point of FP. So what is the main characteristic of the λ-calculus? Functions as first-class values! In fact, functions are the only values in the λ-calculus!
First-class values are language entities that can be passed to (and returned by) functions. So, the λ-calculus can directly manipulate functions; this is what we call higher-order. Thus, the first requirement for an FPL is higher-order functions.
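To make this concrete, here is a minimal OCaml sketch (my own toy example) where functions are passed to and returned by other functions like any other value:

(* compose builds and returns a brand new function from two others *)
let compose f g = fun x -> f (g x)

let () =
  (* List.map takes a function as an argument *)
  let doubled = List.map (fun x -> x * 2) [1; 2; 3] in
  List.iter (Printf.printf "%d ") doubled;          (* prints: 2 4 6 *)
  let show = compose print_endline string_of_int in
  show 42                                           (* prints: 42 *)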

The λ-calculus, being a mathematical model, has no notion of mutable entities or side effects (in fact, mutation and side effects can be expressed in pure functional semantics.) Thus, languages inspired by the λ-calculus try to avoid these notions to match the original model, but this is a matter of style.

Another aspect of FP and FPLs is the notion that everything evaluates to a value: we always return something, and there's no distinction between statements and expressions as in imperative languages. This is strongly linked to the absence of side effects: you don't modify a state, you return a new value.

What about loops in FP? The fact that FP prefers recursion over loops is a consequence of everything is an expression: a loop does not return a value. But FOR loops (i.e. bounded iterations) can exist without mutable entities and thus fit in the model (we can somehow accept that the loop returns nothing, since in an FPL nothing can be something.) On the other hand, integrating WHILE loops into the semantics of a pure FPL is far more difficult (we need to express a changing state.)
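As an illustration (my own sketch, not tied to any particular language's formal semantics), here are both kinds of loop written as plain recursion in OCaml; note how the WHILE version must thread the changing state explicitly through an argument:

(* FOR-style bounded iteration: no mutable state needed *)
let rec for_loop i n body =
  if i < n then (body i; for_loop (i + 1) n body)

(* WHILE-style loop: the "changing state" becomes an argument *)
let rec while_loop cond step state =
  if cond state then while_loop cond step (step state) else state

let () =
  for_loop 0 3 (Printf.printf "iteration %d\n");
  Printf.printf "result: %d\n"
    (while_loop (fun x -> x < 100) (fun x -> x * 2) 1)  (* result: 128 *)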

So, to summarize: an FPL is a language with higher-order functions, where everything is an expression, and where we prefer to limit mutable entities and side effects. Everything else follows from these facts (partial application and curried functions, recursion rather than loops, and even lazy evaluation.)
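For instance, partial application falls out of currying for free; a quick OCaml sketch:

let add x y = x + y  (* curried: int -> int -> int *)
let incr = add 1     (* partially applied: int -> int *)
let () = Printf.printf "%d\n" (incr 41)  (* prints: 42 *)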

Are there good FPLs? This is a hard and dangerous question. I tend to prefer strongly typed FPLs, and thus my list of good FPLs will be limited to the ML family (and derived languages.) My favorite is OCaml, for various reasons: I learned FP with it, it is not too purist (in OCaml you have imperative features and even objects), it has a good module system (functors ...), and the resulting code works well. I find Haskell interesting but too extremist for my taste. Lisp and Scheme derivatives haven't convinced me (but again, I prefer typed languages ...)


Interesting FPL features: beyond the pure FP aspects, many FPLs have interesting features:

  • Pattern Matching: this is probably the most impressive feature: it lets you describe complex cases and extract values. Combined with variants, it offers the best way to traverse abstract syntax trees (see the sketch after this list.)
  • Module systems: modules are very useful for organizing your code, and some FPLs provide a very powerful framework. For example, the module system of OCaml lets you define nested modules, independent module interfaces and the powerful functors (functions over modules.)
  • Lazy evaluation: a lazy value is a mix between a function and an expression: like a function, it is not evaluated when defined, but like an expression, it is evaluated only once. This lets you define expressions that are evaluated only when needed, with the resulting value memoized. A good example is infinite lists: elements are generated only when traversed, but you don't need to recompute them on each access (also shown in the sketch below.)
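Here is a small OCaml sketch (a toy example of my own) combining two of these features: pattern matching over a variant type to traverse a tiny expression AST, and a hand-rolled lazy infinite list in which each tail is computed on first access and then memoized by Lazy.force:

(* pattern matching over a variant: a toy AST evaluator *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b

(* an infinite list whose elements are computed on demand *)
type 'a lazy_list = Cons of 'a * 'a lazy_list Lazy.t

let rec ints_from n = Cons (n, lazy (ints_from (n + 1)))

let rec nth (Cons (x, rest)) = function
  | 0 -> x
  | k -> nth (Lazy.force rest) (k - 1)

let () =
  Printf.printf "%d\n" (eval (Add (Num 1, Mul (Num 2, Num 3))));  (* 7 *)
  Printf.printf "%d\n" (nth (ints_from 0) 5)                      (* 5 *)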

Now, what about the lack of FPL users? Being myself a disappointed FPL user, I can understand their relatively small audience.

First of all, most FPLs are research projects, and as such are never quite finished: the language changes too often, and priority is put on new features rather than production needs ... Thus, it is hard to develop a long-lived and stable project using an FPL.
On the side of the languages themselves, there are also some disturbing points. Even compiled FPLs have a syntax originally designed for interpreters; thus, programs written in these syntaxes are often badly organized. For example, there's no explicit entry point in OCaml: you must carefully organize your code so that the execution flow goes directly where you want.
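The usual workaround (a common OCaml idiom, not a language requirement) is to fake an entry point: top-level bindings are evaluated in order, so the main function is called from a final unit binding at the bottom of the file.

let main () =
  print_endline "everything starts here"

(* evaluated last, so it acts as the de facto entry point *)
let () = main ()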

From a syntactic point of view, language history has shown that languages with explicitly delimited blocks and statements fare better than overly permissive ones. Most FPLs are syntactically permissive languages.

So, I think that despite being very interesting programming languages, FPLs are not a good choice, for practical reasons. These reasons are not tied to the FP model but to the implementations of this model. If we want to see FPLs regarded as being as usable as other mainstream languages some day, we need a more stable language with something like a C syntax.

Sunday, June 26, 2011

Marwan's Programming Guiding Rules

Ok, now that I have exposed my views on maxims, I can write my own! (But remember: don't trust the maxims!)

I'll start with pure coding guidelines (and keep the rest for later.)

Copy/paste is evil!

(with its corollary: copying code implies copying bugs)

This one is a classic (at least for my students): whenever you want to copy/paste some code, you should reorganize it instead. Most of the time, a function (or something similar) will be useful later, and even if there ends up being only one copy, you will probably find a better way to write it.

Let's take an example: here are two functions on linked lists, a simple add-at-front and a sorted insertion, written with copy/paste:

typedef struct s_list *t_list;
struct s_list
{
  t_list next;
  int    data;
};

t_list add(int x, t_list l)
{
  t_list t;
  t = malloc(sizeof (struct s_list));
  t->data = x;
  t->next = l;
  return t;
}

void insert(int x, t_list *l)
{
  if (*l == NULL || x < (*l)->data)
    {
      t_list t = malloc(sizeof (struct s_list));
      t->data = x;
      t->next = *l;
      *l = t;
    }
  else
    insert(x, &((*l)->next));
}

Now, the version of insert without copy/paste:

void insert(int x, t_list *l)
{
  if (*l == NULL || x < (*l)->data)
    *l = add(x,*l);
  else
    insert(x, &((*l)->next));
}
First, the latter version is shorter. But that's not all: in the second version, the reference to the size of the struct appears in one place only. If you eventually want to use a specific allocator (such as a recycling pool), you have one change to make, not two or more.

Another toy example, this time with a single copy (no need to write a function): we implement a FIFO queue using a circular list (this is only the push operation):

// with copy/paste
void push(int x, t_list *q)
{
  t_list t;
  if (*q)
    {
      t = malloc(sizeof (struct s_list));
      t->data = x;
      t->next = (*q)->next;
      (*q)->next = t;
      *q = t;
    }
  else
    {
      t = malloc(sizeof (struct s_list));
      t->data = x;
      t->next = t;
      *q = t;
    }
}

//without
void push(int x, t_list *q)
{
  t_list t;
  t = malloc(sizeof (struct s_list));
  t->data = x;
  if (*q)
    {
      t->next = (*q)->next;
      (*q)->next = t;
    }
  else
    t->next = t;
  *q = t;
}

Again, the code is smaller and simpler to verify. These two examples are very basic, and most programmers will probably use the second form intuitively, but in more complex situations, taking avoid copy/paste as a guideline may save you from a long, nightmarish bug hunt.

Any programming feature is good as long as it fits your needs.
There's no reason not to use a feature of your programming language as long as you understand it and it does the job. If it's possible, it means that somehow it was intended by the language designers for a good reason (or else they should have found a way to forbid it.)

When teaching programming, I often hear students arguing about bad practices: rules they have read on the net that they don't really understand. Most of the time, these rules only make sense in a precise context, or express a taste rather than a reasoned motivation.

The main goal of a programmer is to make the code work, not to produce shiny pieces of code or perfect models of how to code. This leads us to my next rule:

The end justifies the means.
To understand this one, we must define the end and the means. The end is your goal: what the program should do. But it's also all the requirements linked to it: should it be fast? Should it be robust and reliable? What will be its life cycle (one-shot run or high-availability permanent service)? Will it be extended? ...

The means deal with how you will produce it, and they are strongly connected to the end: don't spend too much time on code for a one-shot script, just make it work; on the other hand, remove all quick-and-dirty implementations from a production release!

Building a program is, most of the time, part of professional work you're paid for. So, this is not a contest of best programming practices; there's no jury prize for a nice attempt: you have to make it work the way your future users expect it to work. If they ask for a simple script that will save them hours of boring manipulation, they don't want to wait months until you finally find it presentable, they want it now!

The important point is to define the correct goals, and make the right efforts to achieve these goals.

This also means that any trick is fine as long as it makes your code work. Again, students sometimes seem disappointed by hacks they find in a piece of code. They see them as cheating, as if finding a way around a difficulty were inelegant and should not be tolerated!

Coding is not a game; you can have fun doing it, but there's no place for a ridiculous code of honor. You will find that satisfying users' expectations will probably force you to produce code as clean as it can be, and that you won't need arbitrary rules to guide you.

Take for example the use of goto, and the particular case of a goto into the body of a loop: goto is generally considered evil, and a goto inside a loop is probably the most heretical thing you can do. But there are some cases where this can be useful. The best-known example is Duff's Device, a smart loop-unrolling trick; take a look at it, it's worth the read. I will use a simpler example, not focused on optimization, that I find more striking.

The idea is simple: you have a list or an array and want to print each element separated by a ';'. By separated, we really mean that the separator is between elements, not after each one; thus the last element will not be followed by a ';'. There are several ways to do that; let's take a first example:

void printArray(int tab[], size_t count)
{
  size_t i;
  for (i=0; i < count; ++i)
    {
      if (i > 0) // not the first time
        printf(";");
      printf("%d",tab[i]);
    }
}
Ok, this is not the best version you can write, but it highlights the fact that the first case is handled separately. We could also have done it like this:

void printArray(int tab[], size_t count)
{
  if (count)
    {
      size_t i;
      printf("%d",tab[0]);
      for (i=1; i < count; ++i)
        printf(";%d",tab[i]);
    }
}

In this version, the special treatment of the first element is less obvious, and we need to check the count. Now, take a look at this one:

void printArray(int tab[], size_t count)
{
  if (count)
    {
      size_t  i=0;
      goto start;
      for (; i < count; ++i)
        {
          printf(";");
        start:
          printf("%i",tab[i]);
        }
    }
}

We use a goto to skip the first printf in the loop. The result is better than the first version, and it makes the special handling of the first case apparent. Of course, this is a toy example, and the goto version is not dramatically better in any way, but it illustrates the fact that dirty code can be useful and readable.

Keep it simple, stupid!
One of the classics! Prefer the simplest path. When you can't figure out yourself how your code works, it means it's too complex and probably won't work. Here are some other maxims and quotes on the subject:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it ?"  (Brian Kernighan, "The Elements of Programming Style", 2nd edition, chapter 2)
Smarter code means smarter bugs ! (me ;)
"There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies." (C.A.R. Hoare)
So, all these rules ultimately deal with the same issue: making your programs work! That's the only thing that really matters in programming.

Sunday, June 19, 2011

Reading ...

In the previous entry, I quoted Rob Pike's Notes on Programming in C.

This led me to re-read it. It's a damn good paper, well worth reading: not too long and very interesting. I found that Rob Pike and I have quite the same ideas on most of the matters discussed in his paper.

Maybe, I'm not so dumb ;)

So, if you haven't read it before, read it: Rob Pike: Notes on Programming in C

About comments

Okay, comments are one of the most obscure subjects in programming. Yes, obscure, I really mean it.

Why? Probably because there is no good way to use comments in your code. As I remember, in my student days, teachers explained to us that comments are mandatory, that good source code is made of more comment lines than code lines, and so on, but none of them explained what to put in those damn comments.

Of course, there's the running joke about the "increments i" comment, but that's all.

Now that I'm on the other side (yeah, I'm teaching programming), it was obvious that I should not make the same mistake. But how on earth should I explain to students how to comment their code, if I still don't know myself?

So, for some time I did as my teachers did, which means I avoided the subject!

But then I ran into a text by Rob Pike (about programming in general) where he makes a strange remark: "I tend to err on the side of eliminating comments, for several reasons." (Rob Pike: Notes on Programming in C.) This sentence struck me at first, but then I realized that it was the right way to manage comments.

Most of the time, comments are useless!

Why? Simply because the code is probably the best way to explain what you are trying to do!

So, should we always avoid comments? Of course not, but we should use them wisely! The first question is "why" (not when or how!):


  • You need comments to explain how to use your code
  • You need comments to describe where to find some piece of code
  • Sometimes, you need comments to explain a trick or an unusual hack
  • You may need comments to track down modifications, bug fixes and ugly workarounds
Other comments are just a waste of time.

So, the first and most important comments are what we should call "interface comments", that is, comments in header files or before definitions. These comments explain: what the function does, how to call it, and what constraints should be verified (pre-conditions, post-conditions and invariants.)
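For example, here is a minimal sketch of such an interface comment in an OCaml .mli file (the function and its contract are invented for the illustration):

(* insert x l: returns a list containing x and all elements of l.
   Pre-condition: l is sorted in increasing order.
   Post-condition: the result is sorted; l itself is unchanged. *)
val insert : int -> int list -> int list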

You can also put comments at the top of files to explain globally how things work. Take for example what you find in a lot of driver source files: a global explanation of how the device works and what the specific points are.

Then you can sometimes explain a trick. But be careful: these comments are often less informative than they seem! Explaining in natural language what a complex piece of code is doing is harder than writing the code itself. Most of the time, the reader will be able to understand what you're doing by reading the code; if not, he has no business being here ...

A good example of bad comments can be found in the source files of the GNU C library. Take a look at the definition of the strlen function. The function is quite simple; the glibc version is a little trickier than usual (it reads word by word rather than byte by byte), but not so complex. The trickiest part is the detection of the terminating NUL character inside a whole word. Ok, this can require some explanation, but the comments doubly fail at this job: first, the text is less understandable than the code; second, the comments obfuscate the reading of the code by breaking it into parts separated by big chunks of text! And the current version is more readable than the first one I saw.

So, most of the time, don't even try to comment your tricks, or put the explanation outside the code!

Commenting changes and corrections is the most difficult part. These comments are very useful, but they tend to grow bigger than they should. There are several ways to handle them, but I have found none very satisfactory. First, you can rely only on your version control system's log; in fact, you are probably already doing this. The only issue is that it is very hard to recover the history of a specific piece of code (not a whole file, but a single function or data structure.) You can also add the history to the code itself, but again, if you put it at the top of the file, you lose the connection with the code concerned, and if you put it next to the code, it breaks the natural flow of your source code.

It would be nice to have a clever integration of source files, version control and revision logs, but I haven't found such a tool.

So, as a conclusion, I will use a striking mantra (remember, you can't trust maxims!)
Comments are nothing but noise!

Sunday, June 12, 2011

"Don't trust maxims ... "

A lot of books, articles and teaching materials are full of "maxims" or "aphorisms". The intention is to strike the reader and win him over with a strong idea. Based on my own experience, I'll start this article with my own maxim:
"Don't trust maxims!"
Ok, I've ended up with the traditional "don't trust me, I'm a liar". But this contradiction serves my point: maxims are, most of the time, meaningless and of no use unless provided with the right explanations. In fact, my own maxim should read:
"Don't trust maxims, understand them!"
So much for general thoughts; what is the relation to programming?

After several years of teaching programming and computer science, I have often heard students quoting aphorisms religiously, and most of the time completely irrelevantly! Why? Because they haven't read the original sources, nor tried to understand the true meaning behind them.

The most interesting example is the famous:
"Gotos are evil!"
This is one of the famous mantras of the structured programming approach. Are gotos really evil? Probably not: there are a lot of legitimate uses of goto (basic exception mechanisms, loop escaping ...) So, why should gotos be considered evil?

Let us go back to the elder days of computing, when programming languages were no more than macro-assemblers. In those times, the goto instruction was the only way to structure programs. Have you ever read code with no functions, no procedures, no while loops? Such code tends to be as cryptic as a poem written by a drunk schizophrenic!

You don't even need to go back to the '60s. The first programming language I learned was BASIC (on a ZX81 computer); the last time I read a bunch of code I wrote in those days, it made me feel sick! It was full of indirections, stupid line numbering and intricate gotos; I was unable to understand it!

So yes, used that way, gotos are evil, but this does not mean that you should never use them. It is only a matter of code semantics; take a look at the following examples:

void user_accept()
{
  again:
  printf("confirm with : ");
  if (getchar() != 'y')
      goto again;
}

(Remark: this code behaves badly because of the newline read along with the input character ...)

This code makes sense: this is not conceptually a loop, all you want is to read a 'y'. So the goto is not evil in that case. Of course, you could have used a while loop to obtain the same result, but the meaning is the same. On the other hand, the following code is a bad use of goto:

int f(int **p)
{
  if (!(*p))
    goto getsome;
 doit:
  **p = 42;
  goto end;
 getsome:
  *p = malloc(sizeof (int));
  goto doit;
 end:
  return **p;
}

The previous code is just stupid (it is inherently stupid anyway) and its use of goto is simply misleading. Of course, we shouldn't write it this way, this is bad programming style, but it reflects the idea of the evil goto: using it in place of functions or procedures (here, we could even do without it, but this is how you should read it: jumping to a specific piece of code in order to solve some issue before returning to normal behavior.)

So, returning to our subject, this illustrates a case where a maxim is right but needs further refinement. This leads us to my last maxim for today:
"Never restrain yourself for a bad reason; any programming feature can be useful."

Thursday, June 9, 2011

New blog !

After many years of programming and teaching programming, I've accumulated a lot of reflections that I want to share.

This won't be a technical blog, but a place for me to share my vision of programming. I'll try to stay funny, and I hope that my poor English won't depreciate my thoughts.
