Some thoughts on Haskell library design
A few weeks ago, I released the first (experimental) version of my Haskell Neural Network library.
Since then, I have been thinking about the design of this library (and have received some feedback from Haskellers): how to improve it, how to let people easily plug their own stuff into it, and so on. In this post I'll try to share my thoughts on how HNN can be improved (nothing neural-network specific). I hope it will be useful to someone.
Typeclasses — “Is it really worth creating a typeclass for … ?”
Let's first put this in context. I very much like Haskell's typeclasses, and I have already used them for many things. But HNN is a project for the Haskell community; I can't just throw typeclasses into it without actually thinking about why that would be good, and why it wouldn't. Wouldn't it be overkill, and thus make Haskell code using HNN less pleasant?
In my case, it is about how lists of values are stored. I started out storing the weights and the inputs/outputs as lists of Doubles. Pretty inefficient, yes, but at that point I was prototyping rather than seriously writing the library. Then I moved to the uvector package from Hackage, which is far more efficient. But you may wonder: what happens if an even nicer, more efficient container comes along? And you'd be right to. At the moment, values are stored internally in a UArr, and I provide both a list-based interface and a UArr-based one. It is already annoying to maintain both! But at the time I couldn't look at my code with enough hindsight to sort the good from the bad and make the necessary changes in time for the 0.1 release.
A possible plan for the next version is to have a typeclass for storage-related functions, with instances for UArr and lists. I would abstract into this Storage typeclass the functions that currently have to be called differently for lists, UArr, and so on, and then only require a Storage instance wherever those functions are needed. This way, if you have some container from another part of your project, or you just want a faster or more specialized container to hold your values, it will be possible, provided you supply the necessary functions when writing the instance.
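To make the idea concrete, here is a minimal sketch of what such a typeclass could look like. The names (`Storage`, `fromList'`, `toList'`, `smap`) are illustrative assumptions, not HNN's actual API:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A hypothetical storage abstraction: anything that can hold Doubles
-- and supports the few operations the library actually needs.
class Storage s where
  fromList' :: [Double] -> s
  toList'   :: s -> [Double]
  smap      :: (Double -> Double) -> s -> s

-- The plain-list instance; a UArr instance would delegate to uvector's
-- own conversion and mapping functions in the same way.
instance Storage [Double] where
  fromList' = id
  toList'   = id
  smap      = map
```

Library functions would then take any `Storage s => s` instead of being written twice, once for lists and once for UArr.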
Now that I've shared my reasoning: yes, obviously, there will be a typeclass for value storage. I even have ideas about other uses for typeclasses, in particular one suggested by Thomas Bereknyei: a typeclass representing a computation. Why would that be interesting? Because we could then have complex computation units within neural networks, or interacting with them. I'm thinking in particular of Kohonen maps and recurrent neural networks.
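A rough sketch of that suggestion, under the assumption that a "computation unit" is anything mapping inputs to an output (the names `Compute` and `Neuron` are hypothetical, not HNN's API):

```haskell
-- A hypothetical typeclass for "things that compute": a network node
-- would no longer have to be a plain weighted-sum neuron.
class Compute c where
  compute :: c -> [Double] -> Double

-- The classic neuron is just one instance; a Kohonen unit or a
-- recurrent node could be others, and a network could mix them.
newtype Neuron = Neuron [Double]  -- its weights

instance Compute Neuron where
  compute (Neuron ws) inputs = sum (zipWith (*) ws inputs)
```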
About “exposed function granularity”
Maybe you have already wondered whether you should expose a few functions covering most of the tasks your library accomplishes, or instead provide many functions for small tasks. First, I don't think there is a universal answer to that question. It depends heavily on the "combinatorial" capacity of the underlying concepts. For example, Parsec provides many little functions because using Parsec consists of combining them into more complex parsers. With neural networks, we don't have that much combinatorial power: you create your neural network, apply training functions to it, and then use it. And the library is not (yet) mature enough to provide features like neural network combination. So what's best for HNN?
For the moment, we only have a few functions doing almost all the work, taking parameters to provide the flexibility needed to train the neural network as smoothly as we want. I think it might be interesting to provide something similar to folds over data structures for neural networks, or even to offer a graph-oriented interface (after all, that's what neural networks are) with graph traversal functions to operate on the network. The key point, whichever way I choose, is to provide higher-order functions to operate on neural networks, in addition to the simple and straightforward functions the library offers for the moment.
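To illustrate what a fold over a network might buy us, here is a minimal sketch; `Network` and `nnFold` are hypothetical names, not HNN's actual API:

```haskell
-- A hypothetical network as layers of nodes of some type n.
newtype Network n = Network [[n]]

-- A fold over all nodes, analogous to foldr on lists: users can
-- traverse the network without knowing its internal representation.
nnFold :: (n -> a -> a) -> a -> Network n -> a
nnFold f z (Network layers) = foldr f z (concat layers)

-- Usage: counting the neurons in a network, written entirely in
-- terms of the fold.
neuronCount :: Network n -> Int
neuronCount = nnFold (\_ acc -> acc + 1) 0
```

Anything expressible as a traversal (summing weights, collecting outputs, pretty-printing) then comes for free, the same way folds subsume many list functions.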
I will try to introduce some typeclasses to offer more flexibility to future users, the number-one priority being value storage. In addition, I have to provide functions that let users do nearly anything they want with a neural network, just as we have folds for lists, trees, and so on. If both of these goals are met in the near future, HNN should definitely be more interesting for serious use. Another point I have to consider is performance, since neural networks "just" compute stuff. UArr helped a lot there, but I'm pretty sure I can make further improvements on that front. I just need to reread some pages of the GHC manual 😉
I hope these thoughts will be useful to someone. At least, they have been to me.