Date: 14th October 2016

Venue: The Chancery Pavilion, Residency Road, Bengaluru.

The train that I took was late by an hour, and then I searched all around for the venue and could not find it. It’s a shame that I had to miss part of Robert Virding’s keynote. He talked about Erlang and BEAM (BEAM is the virtual machine for Erlang; it is to Erlang what the JVM is to Java). It turns out Erlang is not the only language that runs on BEAM. If you have heard of Lua, you know that it is a multi-paradigm programming language (imperative and functional styles included). Lua runs on BEAM too, through Robert’s own Luerl project. That was a real shocker to me. BEAM has a special feature called hot code loading: while an Erlang program is running, you can change its code at run time without stopping it. This is huge! You can ship bug fixes without taking down the service you are providing. I thought this was a feat only functional programming languages could achieve, but Robert said it could be done for Lua too; it just has not been implemented in the standard Lua distribution. Hey, if you are developing Lua, please try to implement this feature, at least as a non-standard extension; you will blow the minds of Lua developers. It seems that some Haskell folks once tried to give Erlang a beautiful type system like Haskell’s, but it turned out that this broke the Erlang implementation. Hot loading works only because of the dynamic type system: if Erlang were statically typed, BEAM would have no safe way to hot-load new code whose type signatures differ from those of the code already running.

The next talk was given by Brian McKenna. He talked about there being no silver bullet in functional programming, and I kind of agree with that. The part on equational reasoning was a real eye-opener for me. He argued that side effects are, on the whole, unnecessary. Someone pointed out that without them (without IO, that is) we would not be able to affect the world and the program would be useless. We all have to agree on that. Then he asked, “Other than IO, why would we need side effects?” I said that side effects make it easier to write graph-based algorithms (representing cycles in graphs), and someone else pointed out that it can be done functionally and that there are papers on it (presumably Martin Erwig’s work on inductive graphs). I agree, but performing a stunt just to write graph algorithms seems like overkill to me. I also thought side effects make memoization easier, but I know how it’s done functionally with lazy arrays, and there is a library to do just that. See the code and you can be the judge.

Python:

memo = {}  # cache of already-computed Fibonacci numbers

def fib(n):
    # Return the cached answer if we have computed it before.
    if n in memo:
        return memo[n]
    elif n <= 2:
        ans = 1
    else:
        ans = fib(n-1) + fib(n-2)
    memo[n] = ans
    return ans

Haskell:

import Data.Array

-- 'memo' is a lazy array defined in terms of itself: each cell is a
-- thunk that is forced at most once, which is what memoizes fib.
fib :: Int -> Integer
fib n = memo ! n
  where memo = listArray (1, n) (map myfib [1..n])
        myfib k
          | k <= 2    = 1
          | otherwise = memo ! (k-1) + memo ! (k-2)

I know the Haskell version is not much longer than the Python one, but there is a lot of lazy evaluation going on in it, and it is definitely not for beginners. The Python version can be understood by a noob too. I kept this to myself at the talk because I already knew how it was done.

Next we had an ice-breaking session, thanks to Naresh Jain, and I met many new programmers (functional wizards?). After that I kept introducing myself to people, and it was fun.

The next talk was given by Aloïs Cochard on a Haskell library called machines. I did not get much from it. There had to be something challenging so that the experts don’t get bored, right? I should have used the law of two feet (which states that if you are neither gaining nor contributing anything, thou shalt use thy two feet and walk to a session where you are).
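For the record, here is roughly the kind of pipeline the machines library lets you build. This is my own minimal sketch based on the Data.Machine documentation, not code from the talk:

import Data.Machine

-- 'source' feeds a list into the pipeline, 'filtered' and 'mapping'
-- are transformation stages, '~>' wires machines together, and
-- 'run' drains the whole pipeline into a list.
doubledEvens :: [Int]
doubledEvens = run (source [1..10] ~> filtered even ~> mapping (*2))
-- doubledEvens == [4,8,12,16,20]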

The next talk was given by Abdulsattar Mohammed, and it was about dependent types in Idris. I really enjoyed that talk. With dependent types you can prove the correctness of your code at compile time; the cost you pay is that compilation takes forever, which is not good, but the language can be used for prototyping, and it will kick ass there. Another thing I found interesting was an IDE for Idris: he wrote just the type signature for map, and the IDE (based on Atom) wrote the entire implementation of map! If you have used Eclipse for Java this might not seem like much, but it is much more than what Eclipse does for Java; the Idris IDE leverages the advanced type system and achieves nirvana. He explained dependent types with the example of storing an age. In Java you would use an Integer, but an Integer is not the right representation: age cannot be -1, and it cannot be greater than 150 (150 years being, he said, the hard limit on how long a human can live). To handle this in Java you write try-catch blocks and deal with the exception, and all of that happens at run time; in Idris you can catch it at compile time, though I am not entirely convinced. Someone asked what would happen if an input greater than 150 arrived at run time, and how that would have to be handled. I have the same question. Later, I asked whether dependent types are available in Haskell; he said something similar can be had with a GHC (Glasgow Haskell Compiler) extension, GADTs. The only thing that pains me is that we do not have such a good IDE for Haskell (I use Emacs, by the way, and I have no regrets).
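To make the GADTs remark concrete, here is a minimal sketch of my own (not from the talk) of a length-indexed vector in Haskell. The type tracks the length, so taking the head of an empty vector is rejected at compile time instead of crashing at run time:

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Natural numbers, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- A vector that carries its length in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Accepts only vectors of length at least one, so
-- 'vhead VNil' is a type error, not a runtime exception.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x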

The lunch was appetizing and I was stuffed. The dessert was great too.

The next talk was given by Rahul Muttineni on a crossbreed between GHC and the JVM. Enter GHCVM (now Eta): it can use the whole of Hackage as well as Java libraries through a foreign function interface. He explained the concept of lazy evaluation in detail. The GHCVM he was working on is a fork of GHC in which the C code in the RTS (runtime system) has been replaced with Java. He also showed what STG (the language of the Spineless Tagless G-machine) looks like: essentially a stripped-down Haskell consisting of little more than case and let constructs. It was introduced into GHC by Simon Peyton Jones and is one of the reasons GHC is as fast as it is today. Later, I asked Rahul how he got started working on GHC. He complained about the lack of documentation and said the #haskell IRC channel on Freenode would be helpful. He asked which part of GHC I wanted to work on. I had previously worked on the lexing and parsing stage and had done my research on running bytecode on a VM, so I said I was interested in working on converting the AST into bytecode. He said that was the most difficult part. Nevertheless, I am still interested in doing just that.
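As a quick illustration of the lazy evaluation he was explaining (my own example, not one from the talk):

-- [1..] is an infinite list, but nothing is computed until 'take'
-- demands it, so only the first five even squares are ever evaluated.
firstEvenSquares :: [Int]
firstEvenSquares = take 5 (filter even (map (^ 2) [1 ..]))
-- firstEvenSquares == [4,16,36,64,100]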

The next talk was given by Bartosz Bąbol on Scala Meta. He said that macros had been included in Scala as an experimental feature but were recently removed. Scala without macros? Ugh. Then he added that they had taken macros to the next level with Scala Meta (that is a relief). I have not used any real macros; the closest I have come is decorators in Python. He said Scala Meta is different from the legacy macros in that it can take tokens and manipulate them. He started by explaining what meta means: a joke about someone’s joke is a meta joke, data about data is metadata, and a program that works with another program is a meta program. He then showed how boilerplate code can be removed with a macro. His example: sometimes we have to try the same action a hundred times and abort with an exception if none of the attempts succeeds. This is easily done with try-catch blocks and a for loop that runs a hundred times, and he showed how to generate that code with macros (like a pro). Writing less code? I’m in! He also mentioned that if we wanted to work on Scala Meta, we could do it on a beautiful island; now that was an exceptional job offer.
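Just to pin down the retry-until-success behaviour he described, here is a minimal non-macro sketch in Haskell (my own, assuming “abort with an exception” means rethrowing the last failure; someFlakyAction is a hypothetical action):

import Control.Exception (SomeException, throwIO, try)

-- Run an IO action up to n times, returning the first success and
-- rethrowing the last exception if every attempt fails.
retry :: Int -> IO a -> IO a
retry n action = do
  result <- try action
  case result of
    Right x -> pure x
    Left e
      | n <= 1    -> throwIO (e :: SomeException)
      | otherwise -> retry (n - 1) action

-- Usage: retry 100 someFlakyAction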

The last talk was given by Viral B. Shah and Shashi Gowda. It was on Julia, a dynamically typed general-purpose programming language mostly used for crunching numbers; it is used where MATLAB or R would generally be used. They talked about the optimizations they made to the Julia compiler to make the functional subset of Julia faster, and said they had introduced APL-like arrays. On a side note, APL is one hell of a language. There is also experimental support for arrays with arbitrary (not just one-based) indices. They showed how one can use the Julia REPL to see the generated assembly code or the LLVM IR (with @code_native and @code_llvm), which was very impressive. They pulled off some stunts with matrices in Julia and showed how Julia is secretly Lisp under the hood.

The conference ended and I vanished into the horizon riding my horse.

Update:

Click here to view the video recordings of the conference on YouTube.