Alp Mestanogullari's Blog

HNN news

Posted in Uncategorized by alpmestan on 2010/02/08

As the title says, HNN has moved to haskell.org and is growing.

First, the code is no longer developed on github. At least two other people are now working on HNN (Thomas Bereknyei and Jorden Mauro; thank you guys for joining the project), so we decided it was high time we had a bit more than a little github page. We now have a darcs repository, a mailing list, and a trac wiki / bug tracker.

As stated on the trac home page, we are rewriting HNN from scratch to make it more generic and more efficient. Of course, we will still keep the API as simple as possible, in keeping with one of the most important goals of the original version of HNN.

Stay tuned!


Some thoughts on Haskell library design

Posted in Uncategorized by alpmestan on 2010/01/09

Hi,

A few weeks ago, I released the first (experimental) version of my Haskell Neural Network library.

Since then, I have been thinking about the design of this library (and got some feedback from haskellers): how to improve it, how to let people easily plug their own stuff into it, and so on. In this post I’ll share my thoughts on how HNN can be improved (none of it being neural-network specific). I hope it will be useful to someone.

Typeclasses: “Is it really worth creating a typeclass for…?”

Let’s first put this in context. I like Haskell’s typeclass feature very much and have already used it for many things. But HNN is a project for the Haskell community, so I can’t just throw typeclasses into it without actually thinking about why that would be good, and why it wouldn’t. Wouldn’t it be overkill, and thus make Haskell code using HNN less pleasant?

In my case, the question arose around value storage. I started out storing the weights and the inputs/outputs in lists of Doubles. Pretty inefficient, yes, but I was prototyping rather than seriously writing the library. Then I moved to the uvector package on Hackage, which is far more efficient. But you may rightly wonder: what happens if an even nicer and more efficient container comes along? For the moment, everything is stored internally in a UArr, and I provide both a list-based interface and a UArr-based one. It is already annoying to maintain both of these! But at the time I couldn’t step back from my code with enough hindsight to sort the good from the bad and make the necessary changes before releasing version 0.1.

A possible plan for the next version is to have a typeclass gathering the storage-related functions, and to instantiate it for UArr and lists. I would abstract the functions that currently have separate list and UArr versions into that Storage typeclass, so the library functions would only require a Storage instance. This way, if you have some container in another part of your project, or if you just want a faster or more specialized container to hold your values, you can use it, provided you supply the necessary functions when instantiating the typeclass.
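To make this concrete, here is a rough sketch of what such a typeclass could look like. The class name, its methods and the instances below are all hypothetical, not the actual HNN API; they just show the shape of the abstraction:

{-# LANGUAGE FlexibleInstances #-}

import Data.Array.Vector (UArr, toU, fromU, mapU, zipWithU)

-- Hypothetical Storage class: the operations HNN would need from a container.
class Storage s where
  fromL    :: [Double] -> s                               -- build from a list
  toL      :: s -> [Double]                               -- convert back to a list
  smap     :: (Double -> Double) -> s -> s                -- map a function over the values
  szipWith :: (Double -> Double -> Double) -> s -> s -> s -- combine two containers pointwise

instance Storage [Double] where
  fromL    = id
  toL      = id
  smap     = map
  szipWith = zipWith

instance Storage (UArr Double) where
  fromL    = toU
  toL      = fromU
  smap     = mapU
  szipWith = zipWithU

A function like computeNet could then be written once, with a Storage constraint, instead of once per container.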

Now that I have shared my reasoning, yes, there will obviously be a typeclass for value storage. I also have some ideas about other uses for typeclasses, in particular one suggested by Thomas Bereknyei: a typeclass that would represent a computation. Why would that be interesting? Because we could then have complex computation units within neural networks, or interacting with neural networks. I’m particularly thinking of Kohonen maps and recurrent neural networks. A small sketch follows.
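Just to sketch that idea too (everything below is hypothetical, nothing like it exists in HNN yet): any unit that maps an input vector to an output vector could be a computation, and computations could then be chained without caring what they are underneath.

import Data.Array.Vector (UArr)

-- Hypothetical class: anything that turns an input vector into an output vector.
class Computation c where
  compute :: c -> UArr Double -> UArr Double

-- Chain two computations, whatever they are (a layer, a whole network, a Kohonen map...):
chain :: (Computation a, Computation b) => a -> b -> UArr Double -> UArr Double
chain f g = compute g . compute f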

About “exposed function granularity”

Maybe you have already wondered whether you should expose a few functions covering most of the tasks your library accomplishes, or rather provide many functions for small tasks. First, I don’t think there is a universal answer to that question. It depends heavily on the “combinatorial” capacity of the underlying concepts. For example, parsec provides many little functions because using parsec consists of combining these little functions into more complex parsers, as in the sketch below. With neural networks, we don’t have that much combinatorial power: you create your neural network, apply training functions to it, and then use it. And the library is not (yet) mature enough to provide features like neural network combination. So what’s best for HNN?
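For instance, a typical parsec fragment is nothing but tiny parsers glued into bigger ones (a minimal standalone illustration, unrelated to HNN):

import Text.Parsec
import Text.Parsec.String (Parser)

-- an integer parser, built from the tiny digit parser
number :: Parser Int
number = fmap read (many1 digit)

-- a parser for "x,y" pairs, built from number and char
pair :: Parser (Int, Int)
pair = do
  a <- number
  _ <- char ','
  b <- number
  return (a, b)

-- and a parser for many such pairs, one per line, built from pair
pairs :: Parser [(Int, Int)]
pairs = pair `sepBy` newline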

For the moment, we only have a few functions that do almost all the work, taking parameters that give the necessary flexibility to train the neural network as smoothly as we want. I think it might be interesting to provide something similar to data structure folds for neural networks, or even to consider offering a graph-oriented interface (after all, that’s what neural networks are) with graph traversal functions to operate on the network. The key point, whichever way I choose, is to provide higher-order functions operating on neural networks, in addition to the simple and straightforward functions the library currently offers. A quick sketch of the fold idea follows.
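Since an HNN 0.1 network is essentially a list of layers, a network fold could start out as simply as this (hypothetical names, not the actual API):

import AI.HNN.Neuron (Neuron)

-- Hypothetical: fold over the layers of a network, in the spirit of foldl for lists.
foldNet :: (b -> [Neuron] -> b) -> b -> [[Neuron]] -> b
foldNet = foldl

-- Example: count the neurons without knowing anything about the representation.
neuronCount :: [[Neuron]] -> Int
neuronCount = foldNet (\acc layer -> acc + length layer) 0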

Conclusions

I will try to introduce some typeclasses to offer more flexibility to future users, the number one priority being value storage. In addition, I want to provide functions that let users do nearly anything they want with a neural network, just as we have folds for lists, trees and so on. If both of these goals are met in the near future, HNN should definitely become more interesting for serious use. Another point I have to consider is offering even better performance, since neural networks “just” compute things. UArr helped a lot there, but I’m pretty sure I can make further improvements on that side of HNN. I just need to read some pages of the GHC manual again ;-)

I hope these thoughts will be useful to someone. At least, they have been to me.


HNN-0.1 has been released!

Posted in Uncategorized by alpmestan on 2009/12/23

Hi,

I just released the 0.1 version of my Haskell Neural Network library on Hackage.
Instead of writing a long blog post, I created a page on the Haskell wiki, which you can find here: HNN. It describes what HNN is, how to get it, and walks through a sample.

There is an online version of the documentation here: hnn documentation.
You can also consult hnn’s hackage page: hnn at hackage (the documentation should be generated there soon).

Here is a sample showing how you can use HNN:

module Main where

import AI.HNN.Net
import AI.HNN.Layer
import AI.HNN.Neuron
import Data.Array.Vector
import Control.Arrow
import Data.List

alpha = 0.8 :: Double     -- the learning rate
epsilon = 0.001 :: Double -- the desired upper bound on the quadratic error

layer1, layer2 :: [Neuron]

layer1 = createSigmoidLayer 4 0.5 [0.5, 0.5, 0.5]      -- the hidden layer

layer2 = createSigmoidLayer 1 0.5 [0.5, 0.4, 0.6, 0.3] -- the output layer

net = [layer1, layer2] -- the neural network

-- the trained neural network
finalnet = train alpha epsilon net [([1, 1, 1],[0]), ([1, 0, 1],[1]), ([1, 1, 0],[1]), ([1, 0, 0],[0])]

good111 = computeNet finalnet [1, 1, 1]
good101 = computeNet finalnet [1, 0, 1]
good110 = computeNet finalnet [1, 1, 0]
good100 = computeNet finalnet [1, 0, 0]

main = do
  putStrLn $ "Final neural network : \n" ++ show finalnet
  putStrLn " ---- "
  putStrLn $ "Output for [1, 1, 1] (~ 0): " ++ show good111
  putStrLn $ "Output for [1, 0, 1] (~ 1): " ++ show good101
  putStrLn $ "Output for [1, 1, 0] (~ 1): " ++ show good110
  putStrLn $ "Output for [1, 0, 0] (~ 0): " ++ show good100

Output:

$ ./xor-3inputs
Final neural network :
[[Threshold : 0.5
Weights : toU [1.30887603787326,1.7689534867644316,2.2908214981696453],Threshold : 0.5
Weights : toU [-2.4792430791673947,4.6447786039112655,-4.932860802255383],Threshold : 0.5
Weights : toU [2.613377735822592,6.793687725768354,-5.324081206358496],Threshold : 0.5
Weights : toU [-2.5134194114492585,4.730152273922408,-5.021321916827272]],[Threshold : 0.5
Weights : toU [4.525235803191061,4.994126671590998,-8.2102354168462,5.147655509585701]]]
----
Output for [1, 1, 1] (~ 0): [2.5784449476436315e-2]
Output for [1, 0, 1] (~ 1): [0.9711209812630944]
Output for [1, 1, 0] (~ 1): [0.9830499812666017]
Output for [1, 0, 0] (~ 0): [1.4605247804272069e-2]

Don’t hesitate to try it, play with it and give some feedback! For any feedback or questions, see the end of the HNN wiki page.

Thanks, and enjoy!


HNN : a Haskell Neural Network library

Posted in Uncategorized by alpmestan on 2009/12/22

Hi,

A few months ago, I started working on a neural network library in Haskell. The result wasn’t bad, but it needed some additional work. Over the past few days, I’ve worked on that code again to get a releasable and usable 0.1 version of HNN up and working. For example, the weights, inputs and outputs are now of type UArr Double (they were of type [Double] before).
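To give an idea of what that change buys, here is a small standalone illustration (not HNN code): UArr stores its Doubles unboxed, so bulk operations avoid the per-element boxing and allocation you pay with [Double].

import Data.Array.Vector

-- dot product over unboxed arrays: no boxed Double is ever allocated
dotU :: UArr Double -> UArr Double -> Double
dotU xs ys = sumU (zipWithU (*) xs ys)

main :: IO ()
main = print (dotU (toU [1, 2, 3]) (toU [0.5, 0.5, 0.5]))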

You can find the source code here: http://github.com/alpmestan/HNN.
I’ve also put together some minimal documentation, which you can find here: http://mestan.fr/haskell/hnn/.

To get the current code and test it:

$ git clone git://github.com/alpmestan/HNN.git
$ cd HNN
$ cabal configure
$ cabal build
$ cabal install (to add it to your ghc package list, etc.)
$ cabal haddock (to generate the documentation)

I plan to put this on Hackage soon, but I would like to get some feedback and reviews first.

Thank you !
