NL FP day 2011
Last Friday I attended the Nederlandse Functioneel Programmeren Dag (Dutch Functional Programming Day). The programme featured a wide variety of short (20 min.) talks, which made it a nice, low-barrier-to-entry opportunity to see what is currently hot in the field of functional programming, with the added benefit of being able to talk to people who get to use languages like Haskell and F# on a daily basis instead of in their spare time ☺.
Anja Niedermeier presented a talk about designing a dataflow processor using CλaSH, a functional hardware description language that borrows both its syntax and semantics from Haskell. The interesting part for me was the parallel with a question recently posed at my current project: to create some means for users to compose their own image processing pipeline from a given set of basic operations.
Doaitse Swierstra gave a talk about combining parsers for permuted structures with parsers for merged structures. An example of a parser for a permuted structure would be a parser for HTML elements, since attributes can occur in any order. An example of a parser for a merged structure would be the Utrecht Attribute Grammar System, which needs to parse different types of text in a single file; another example would be a template language like Liquid. I found it a bit too abstract to really grasp.
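To get a feel for the permutation part, here is a minimal sketch with hand-rolled combinators, nothing like Swierstra's actual library: a `permute2` combinator that accepts two pieces of input in either order but always returns them in a fixed order. The attribute strings are invented for illustration.

```haskell
import Control.Applicative (Alternative (..))

-- A bare-bones parser: consume a prefix of the input, maybe yield a result.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    pure (f a, s'')

instance Alternative Parser where
  empty = Parser $ const Nothing
  Parser p <|> Parser q = Parser $ \s -> p s <|> q s

-- Match a literal string.
string :: String -> Parser String
string w = Parser $ \s ->
  if take (length w) s == w then Just (w, drop (length w) s) else Nothing

-- Accept two pieces in either order, returning them as a fixed (a, b) pair.
permute2 :: Parser a -> Parser b -> Parser (a, b)
permute2 pa pb = ((,) <$> pa <*> pb) <|> ((\b a -> (a, b)) <$> pb <*> pa)

-- Hypothetical HTML-ish attributes that may appear in any order:
attrs :: Parser (String, String)
attrs = permute2 (string "id=x ") (string "class=y ")
```

Generalising this past two elements without re-parsing is exactly where the clever library machinery comes in.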
José Pedro Magalhães presented the way in which he and his fellow researchers used Haskell's generalized algebraic data types (GADTs) to model chord sequences. A practical application of this work could, for instance, be recognising music based on the chord sequences found. This talk was very interesting to me since it combined the field of music with programming in Haskell.
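To illustrate the flavour of the approach, here is a toy GADT I made up (the actual model by Magalhães et al. is far richer): the type index records how a progression may continue, so the type checker rejects ill-formed chord sequences such as a dominant that never resolves.

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Phantom indices tracking the harmonic state of a phrase.
data Tonic
data Dominant

-- A phrase is built backwards from its final tonic chord.
data Piece p where
  I  :: Piece Tonic                     -- a tonic chord ends the phrase
  V  :: Piece Tonic -> Piece Dominant   -- a dominant must resolve to tonic
  II :: Piece Dominant -> Piece Dominant

-- Pretty-print a phrase in playing order.
render :: Piece p -> String
render I         = "I"
render (V rest)  = "V " ++ render rest
render (II rest) = "ii " ++ render rest

-- A well-typed ii-V-I progression; `II I` would not type-check.
phrase :: Piece Dominant
phrase = II (V I)
```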
Sander van den Berg presented a way to solve a reformulation of the Expression Problem, which Philip Wadler described as ‘a salient indicator of [a programming language's] capacity for expression’. This was the only talk that featured a dynamically typed language, in this case Clojure; the language features demonstrated were protocols and datatypes. Before the lunch break Sander also demoed a Clojure program, originally coded on stage at a Java conference, that tried to ‘find’ a best estimate of the famous Mona Lisa by drawing only polygons. The program also served to demonstrate Lisp's homoiconicity: the property that code can manipulate code, because the syntax of the language can be represented as a data structure in a primitive type of the language. Lisp is not the only language considered to be homoiconic; other noteworthy examples are XSLT, PostScript and R.
Alexander Bertram gave an interesting talk about his efforts to write a Java-based R interpreter and compiler. R is a language with a long history, created and maintained mostly by statisticians and data analysts. This accounts for the large number of language features, some of which make computer scientists curl their toes: R is a functional language in which even if and break are functions, and they can even be redefined, aaah. Running R in the ‘cloud’ is the ultimate goal, to provide a solution for data-mining very large datasets.
Peter Achten presented work to combine the forces of Clean and Haskell by providing two new dialects, one of Clean and one of Haskell, that can both be used with the Clean compiler. It was interesting to learn something about the similarities and differences between the two languages. One of the differences is that Haskell uses monads to separate pure code from side-effecting code, while Clean uses a concept called uniqueness types.
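On the Haskell side, that separation shows up directly in the types; a minimal sketch (Clean would instead thread an explicitly unique `*World` value through side-effecting functions):

```haskell
-- Pure: the type promises no side effects can occur here.
double :: Int -> Int
double x = 2 * x

-- Side-effecting: effects are only available inside the IO monad,
-- and the type says so.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)
```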
Kenneth Rovers presented his work on using Haskell to create simulations of mixed signals, as encountered for instance in a phased array. The solution to the problem was very elegant: instead of trying to simulate a continuous signal with a set of values over a time series, describe the simulation with functions that depend on the parameter time. The different components of the system can then be composed using function composition.
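The core idea can be sketched in a few lines; the component names here are invented for illustration, not taken from the talk:

```haskell
-- A continuous signal is simply a function of time.
type Time     = Double
type Signal a = Time -> a

-- An input component: a 1 Hz sine wave.
source :: Signal Double
source t = sin (2 * pi * t)

-- Processing components transform signals...
amplify :: Double -> Signal Double -> Signal Double
amplify g s = (g *) . s

offset :: Double -> Signal Double -> Signal Double
offset d s = (+ d) . s

-- ...and the system is just their composition. Sampling only happens
-- at the very end, by applying the composed function to time values.
system :: Signal Double
system = offset 1 (amplify 2 source)
```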
Next up, Stefan Holdermans talked about making sharing explicit in trees, which results in graphs. He then went on to explain how, through the use of polynomial types1 and various cata- and anamorphisms, one can impress an audience. Needless to say this talk was a bit too abstract for me, even though I had my share of category theory while studying for my master's degree in mathematics. I could not see how this work would make me happy as a programmer trying to work with graphs.
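For the curious: stripped of the category theory, a catamorphism is just a structured fold. A tiny sketch on plain binary trees (the talk's graphs-with-sharing machinery goes well beyond this):

```haskell
data Tree = Leaf Int | Node Tree Tree

-- A catamorphism replaces each constructor with a function:
-- Leaf by `leaf`, Node by `node`.
cata :: (Int -> r) -> (r -> r -> r) -> Tree -> r
cata leaf _    (Leaf n)   = leaf n
cata leaf node (Node l r) = node (cata leaf node l) (cata leaf node r)

-- Summing a tree is then one line.
sumTree :: Tree -> Int
sumTree = cata id (+)
```

An anamorphism is the dual: it builds a structure up from a seed instead of tearing one down.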
Alexey Rodriguez was the last presenter of the day. The title of the talk intrigued me, especially since I have recently given up on serialization: in my opinion the time saved by not having to write a parser and generator for your data format is spent (and then some) on solving the up- and downgrade problems that come with serialization. Alexey described a framework in OCaml used at his company to solve this problem.
The drinks and dinner afterwards were interesting and well deserved, I would say. I had the opportunity to learn a little bit more about Clean and to chat about using FsCheck to test a large C# code base. After a good meal I went home tired, inspired and functionally rewired.
Of which I could not find a decent link describing them. ↩