Alp Mestanogullari's Blog

scoutess, continuous integration, cabal and the Google Summer of Code

Posted in Uncategorized by alpmestan on 2012/03/21

Recently, many people have been publishing blog posts or Google+ threads about cabal(-install), dependency hells, continuous integration, and some other issues (here and there for example). You may also have noticed some attempts to work around some of these issues, like cabal-nirvana. Some of you may have also read about or even used Travis CI (they recently announced Haskell project support).


Jeremy Shaw and I have been discussing at length the many issues that make a Haskell project hard to maintain, in particular how hard it is to know when/how someone will enter dependency hell when installing your project, and how/why that person got there. Then we talked about how nice it would be to have a flexible continuous integration platform for (cabal-powered) Haskell projects. (This was before cabal-nirvana was written, and also before Travis CI’s announcement.)

So we decided to start such a project. Scoutess aims to provide the Haskell community with a powerful and viable build bot offering a few key services: dependency tracking, on-commit build testing, nightly builds handling multiple GHC versions (and dependency versions), report generation, etc. Of course, all builds will be sandboxed and many of the services will be configurable (in particular, you will be able to turn them on/off). You can find a list of features/ideas we plan to implement (or already have implemented) on this page.

In particular, you may notice that scoutess’ scope is broader than Travis CI’s. For example, if you want dependency tracking, support for multiple package and GHC versions, or automatic posting of reports somewhere, you have to handle it all yourself with Travis CI, whereas scoutess will provide this out of the box. Also, as far as I know, Travis CI is currently restricted to github (though this may not be the biggest drawback, since github hosts a fair share of the most commonly used Haskell packages). And we plan to benefit from focusing only on cabal-powered Haskell projects, using all the information we can gather from cabal files to offer much better functionality.

We have already implemented some core features and designed part of the project, but it is still far from usable. If you want to talk about the project with us, give your opinion or share some ideas, do not hesitate to drop by the #happs IRC channel on freenode and ping stepcut and/or alpounet.

Want to hack on scoutess for the Google Summer of Code?

If you are a prospective GSoC student interested in working on scoutess, we would gladly mentor you. Possible project ideas include (but are not necessarily restricted to):

  • writing the sandboxed building service around cabal-dev, virtualhenv or hs-env, and integrating it with the current code-base (for example, making it use the ‘LocalHackage’ service instead of just fetching packages over and over again)
  • writing the dependency-tracking service (which would eventually also include tracking the development repositories of the dependencies)
  • working on the report generation and posting system (but that would most likely require the build service to be implemented)

These are the three ideas I have in mind right now, but after some discussion we could find something more focused, or simply tailored to an interested student.

So feel free to ask any question about the project and/or the ideas for the GSoC in the comments, on IRC or by email at <alpmestan AT gmail DOT com>.

Note: I just created a ticket on the Haskell GSoC trac here.


A French community for Haskell

Posted in Uncategorized by alpmestan on 2011/10/09

During the past few months, there has been growing activity in the (hidden) French circles of the Haskell community. A few people, including myself, are trying to increase interest in Haskell among French-speaking communities. You may (or may not) have noticed the French translation of Learn You A Haskell, available here. We also have an IRC channel, #haskell-fr (accessible through Freenode’s webchat), and a mailing list. Please let us know if you are a French-speaking haskeller!

But we would like to take it a step further. We are currently considering the idea of a French hackathon. We have already been offered a room for 3 days in June 2012, in conjunction with a French Perl event. So one of the reasons behind this blog post is to get some feedback about this, in addition to letting people know about the French community. Who would be interested in attending such an event? For the moment, this would be in Strasbourg, France. Note that it would not necessarily be restricted to French haskellers! The format would be a pretty classical hackathon: potential talks and a lot of Haskell hacking. So please let us know if you’d be interested in attending (by commenting on this post, on the IRC channel or on the mailing list).

On a side note, these past few days I updated one of my libraries, statistics-linreg, and published a new one: kmeans-vector. It performs the k-means clustering algorithm on a list of points, and can produce “pretty” graphics like the following.

Performing k-means on 10,000 points with 4 clusters
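For the curious, here is a minimal, self-contained sketch of the k-means idea (Lloyd's algorithm) on 2D points using plain lists. This is only an illustration of the algorithm, not the kmeans-vector API:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Point = (Double, Double)

-- squared euclidean distance (enough for nearest-centroid comparisons)
dist2 :: Point -> Point -> Double
dist2 (x1, y1) (x2, y2) = (x1 - x2) ^ 2 + (y1 - y2) ^ 2

-- one Lloyd iteration: assign every point to its nearest centroid,
-- then move each centroid to the mean of its cluster
step :: [Point] -> [Point] -> [Point]
step centroids points = map mean clusters
  where
    nearest p = minimumBy (comparing (dist2 p)) centroids
    clusters  = [ [p | p <- points, nearest p == c] | c <- centroids ]
    mean ps   = (avg (map fst ps), avg (map snd ps))
    avg xs    = sum xs / fromIntegral (length xs)

-- iterate a fixed number of times (a real implementation would
-- rather stop once the centroids no longer move)
kmeans :: Int -> [Point] -> [Point] -> [Point]
kmeans 0 cs _  = cs
kmeans n cs ps = kmeans (n - 1) (step cs ps) ps
```

With two well-separated blobs of points and one initial centroid near each, a handful of iterations settles on the blob means.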


Getting GHC HEAD and LLVM working together

Posted in Uncategorized by alpmestan on 2010/03/11

Since dons’ two posts, I have seen many people on the haskell-cafe mailing list and the Haskell-related IRC channels asking how they could get GHC HEAD and the LLVM-related bits, and build them. A lot of this information is on the ghc trac, but I have been told it would be nice to have a comprehensive guide with all the necessary steps and information.

Getting, patching and building LLVM

It is as simple as executing the following commands:

$ svn co llvm
$ cd llvm
$ patch -p0 -i ~/llvm-ghc.patch
$ ./configure --enable-optimized # you probably also want to set --prefix
$ make
$ make install

where the given llvm-ghc.patch file can be downloaded here:

Getting GHC HEAD and Patching it to add the LLVM backend

First, get the latest archive matching the pattern ghc-HEAD-YYYY-MM-DD-ghc-corelibs-testsuite.tar.{gz or bz2}, the latest currently being ghc-HEAD-2009-10-23-ghc-corelibs-testsuite.tar.gz. It ships GHC with all its dependencies, including a testsuite.

Put the archive file in (on my computer) /home/alp/haskell/. Uncompress it, for example with:

$ tar xzf ghc-HEAD-2009-10-23-ghc-corelibs-testsuite.tar.gz

You should get a /home/alp/haskell/ghc/ directory. Now we have to fetch the latest patches applied to GHC since the archive you downloaded was published. To do this, simply run:

$ chmod +x ./darcs-all
$ ./darcs-all pull -a

(the chmod may not be necessary, though)
This can take some time; there may be a lot of patches to fetch and apply, so don’t worry.

If you are not interested in the LLVM backend, you can skip the rest of this section and go straight to the next one.

Once done with that, you have to download the patch that adds LLVM backend support to GHC. Just download the following file: http://www.cse.unsé (it is actually just a darcs patch, not an archive or anything else; you may want to rename it with a .patch extension), put it in ghc’s directory (/home/alp/haskell/ghc/ here) and apply the patch:

$ darcs apply ghc-llvmbackend-full.patch

(or .gz)

You just have one more patch to apply (isn’t life about applying patches?), written by Don Stewart, to be able to pass LLVM options to GHC without conflicting with other options. Just move it to your ghc directory and darcs apply it, like the previous patch. (Note: this step might not be necessary if the previous patch or the ghc repository already contains it.)

Ok, we are almost done! Now create a file named “” in (on my computer) /home/alp/haskell/ghc/mk/, and put the following content in that file:

GhcWithLlvmCodeGen = YES
GhcEnableTablesNextToCode = NO

The first, obviously, enables LLVM code generation in GHC. The second disables a feature that, if I remember correctly, currently conflicts between the LLVM and C code generators. Now, let’s build GHC, hooray!

Building and using the patched GHC non-intrusively

Once all the patches are fetched and applied, you just have to do:

$ sh boot
$ ./configure
$ make

Now, have a cup of coffee, read newspapers, try to solve P vs NP, whatever. It takes some time.

Once done with ‘make’, there should be many binaries in (on my computer) /home/alp/haskell/ghc/inplace/bin/. The ghc binary is called ghc-stage2.

To check that the LLVM backend is enabled, as explained in Don’s post, just run the following (in your ghc/inplace/bin/ directory):

$ ghc-stage2 --info

and verify that it gives you (“Have llvm code generator”,”YES”).

To make use of them properly without installing your brand new GHC HEAD system-wide, here is one possibility: add the following to your ~/.bashrc (or whatever you use; it is just about defining variables):

GHC613BIN=/home/alp/haskell/ghc/inplace/bin/ghc-stage2
GHC613PKG=/home/alp/haskell/ghc/inplace/bin/ghc-pkg

And now, when you want to install libraries via cabal for your new GHC, just run commands close to the following (to install, say, the vector package):

$ cabal install vector --with-compiler=$GHC613BIN --with-hc-pkg=$GHC613PKG

And it will build and install the vector package using GHC HEAD and its library directories (in ~/.ghc/).
Moreover, if you want to build packages with the llvm backend, just ask cabal gently:

$ cabal install vector --with-compiler=$GHC613BIN --with-hc-pkg=$GHC613PKG --ghc-options=-fllvm

You can even chain options this way, passing for example -optlo-O3. For more information on that, check Don’s posts.

But be careful: -fvia-C and -fllvm are not great friends; you may sometimes have to remove -fvia-C from the .cabal file of the package you want to install, either by editing it or via the --flags option of cabal install. For more information, see cabal help install.

Here are a few links that you may find of interest:

I hope this post will be of help and that it will lead some of you to contribute to GHC!


HNN news

Posted in Uncategorized by alpmestan on 2010/02/08

Indeed, HNN has moved to a new home and is growing.

Firstly, the code is no longer evolving on github. There are at least two other people working on HNN at the moment (Thomas Bereknyei and Jorden Mauro, thank you guys for joining the project), so we decided it was high time to have a bit more than a little github place. We now have a darcs repository, a mailing list and a trac wiki / bug-report system.

As said on the trac home page, we are rewriting HNN from scratch to make it more generic and efficient. We will of course still keep the API as simple as possible, in keeping with one of the most important goals of the original version of HNN.

Stay tuned!


Some thoughts on Haskell library design

Posted in Uncategorized by alpmestan on 2010/01/09


A few weeks ago, I released a first (experimental) version of my Haskell Neural Network library.

And since then, I have been thinking about the design of this library (and got some feedback from haskellers): how to improve it, how to let people easily plug their own code into it, etc. In this post I’ll try to share my thoughts on how HNN can be improved (nothing neural-network specific). I hope it will be useful to someone.

Typeclasses — “Is it really worth creating a typeclass for … ?”

Let’s first put this in context. I like the typeclass feature of Haskell very much, and I have already used it for many things. But hey, HNN is a project for the Haskell community; I just can’t throw typeclasses into it without actually thinking about why it would be good, and why it wouldn’t. Wouldn’t it be overkill, and thus make Haskell code using HNN less nice?

In my case, it is about how lists of values are stored. I started storing the weights and the inputs/outputs as lists of Doubles. Pretty inefficient, yeah, but I was prototyping rather than seriously writing the library. Then I moved to the uvector package from Hackage. Way more efficient. But you may wonder: “what would he do if we had an even nicer and more efficient container?” And you would be right to ask. For the moment, values are stored internally in a UArr and I provide both a list-based interface and a UArr-based one. But it is already annoying to maintain both of these! At the time, though, I couldn’t look at my code with enough hindsight to sort the good from the bad and make the necessary changes before the 0.1 release.

A possible plan for the next version is to have a typeclass representing storage-related functions, and to instantiate it for UArr and lists. I would abstract the functions that currently have to be called differently for lists, UArr and so on into this Storage typeclass, so that my functions would only require an instance of Storage. This way, if you have some container in another part of your project, or if you just want a faster or more specialized container to store your values, it will be possible, provided you supply the necessary functions when instantiating the typeclass.
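To make the idea concrete, here is a hypothetical sketch of what such a Storage typeclass could look like; the class and method names are illustrative, not HNN's actual API:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- a container of Doubles that the library knows how to drive
class Storage s where
  fromList :: [Double] -> s
  toList   :: s -> [Double]
  smap     :: (Double -> Double) -> s -> s
  szipWith :: (Double -> Double -> Double) -> s -> s -> s

-- the plain-list instance; a UArr instance would provide the same
-- operations backed by unboxed arrays
instance Storage [Double] where
  fromList = id
  toList   = id
  smap     = map
  szipWith = zipWith

-- library code then only needs to be written once, e.g. a weighted
-- sum over any storage
dot :: Storage s => s -> s -> Double
dot u v = sum (toList (szipWith (*) u v))
```

A user with a custom container would only have to write a Storage instance for it to use the whole library.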

Now that I have shared my reasoning: yes, obviously, there will be a typeclass for value storage. I even have some ideas about other uses for typeclasses, in particular one suggested by Thomas Bereknyei: a typeclass representing a computation. Why would that be interesting? So that we could have complex computation units within neural networks, or interacting with neural networks. I’m particularly thinking of Kohonen maps and recurrent neural networks.

About “exposed function granularity”

Maybe you have already wondered whether you should expose a few functions covering most of the tasks your library accomplishes, or rather provide many functions for little tasks. First, I don’t think there is a universal answer to that question. It highly depends on the “combinatorial” capacity of the underlying concepts. For example, parsec provides many little functions because using parsec consists of combining these little functions to create more complex parsers. With neural networks, we don’t have that much combinatorial power. I mean, you create your neural network, apply training functions to it, and then use it. And the library is not (yet) mature enough to provide features like neural network combination. So what’s best for HNN?

For the moment, we only have a few functions doing almost all the work, taking parameters that provide the flexibility needed to train the neural network with the smoothness we want. I think it might be interesting to provide something similar to data-structure folding for neural networks, or even to consider offering a graph-oriented interface (after all, that’s what neural networks are) with graph-traversal functions to operate on the network. The key point here is, whichever way I choose, to provide higher-order functions for operating on neural networks, in addition to the simple and straightforward functions the library has for the moment.
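As a rough illustration of what such higher-order functions could look like (assuming, hypothetically, that a network is a list of layers and each neuron carries its list of input weights; this is not HNN's actual representation):

```haskell
-- hypothetical representation: a network is a list of layers,
-- each neuron in a layer being its list of input weights
type Neuron  = [Double]
type Layer   = [Neuron]
type Network = [Layer]

-- fold a function over every weight in the network,
-- in the same spirit as foldr over a list
foldNet :: (Double -> a -> a) -> a -> Network -> a
foldNet f z net = foldr f z (concatMap concat net)

-- such a combinator immediately gives us "generic" queries:
weightCount :: Network -> Int
weightCount = foldNet (\_ n -> n + 1) 0

maxAbsWeight :: Network -> Double
maxAbsWeight = foldNet (\w m -> max (abs w) m) 0
```

The point is that users could then write their own traversals (pruning, statistics, serialization) without the library having to anticipate each of them.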


I will try to introduce some typeclasses to offer more flexibility to future users, the number one priority being value storage. In addition, I want to provide functions that let users do nearly anything they want with a neural network, just as we have folds for lists, trees and so on. If both of these goals are met in the near future, HNN should definitely become more interesting for serious use. Another point I have to consider is offering even better performance, since neural networks “just” compute things. UArr helped a lot there, but I’m pretty sure I can introduce other improvements on that front. I just need to read a few pages of the GHC manual again ;-)

I hope these thoughts will be useful to someone. At least, they have been to me.


HNN-0.1 has been released !

Posted in Uncategorized by alpmestan on 2009/12/23


I just released the 0.1 version of my Haskell Neural Network library on Hackage.
Instead of writing a long blog post, I created a page on the Haskell wiki, which you can find here: HNN. It describes what HNN is, how to get it, and shows a sample.

There is an online version of the documentation here: hnn documentation
You can also consult hnn’s hackage page: hnn at hackage (the documentation should be generated there soon)

Here is a sample showing how you can use HNN:

module Main where

import AI.HNN.Net
import AI.HNN.Layer
import AI.HNN.Neuron
import Data.Array.Vector
import Control.Arrow
import Data.List

alpha = 0.8 :: Double -- learning ratio
epsilon = 0.001 :: Double -- desired maximal bound for the quad error

layer1, layer2 :: [Neuron]

layer1 = createSigmoidLayer 4 0.5 [0.5, 0.5, 0.5] -- the hidden layer

layer2 = createSigmoidLayer 1 0.5 [0.5, 0.4, 0.6, 0.3] -- the output layer

net = [layer1, layer2] -- the neural network

finalnet = train alpha epsilon net [([1, 1, 1],[0]), ([1, 0, 1],[1]), ([1, 1, 0],[1]), ([1, 0, 0],[0])] -- the trained neural network

good111 = computeNet finalnet [1, 1, 1]
good101 = computeNet finalnet [1, 0, 1]
good110 = computeNet finalnet [1, 1, 0]
good100 = computeNet finalnet [1, 0, 0]

main = do
  putStrLn $ "Final neural network : \n" ++ show finalnet
  putStrLn " ---- "
  putStrLn $ "Output for [1, 1, 1] (~ 0): " ++ show good111
  putStrLn $ "Output for [1, 0, 1] (~ 1): " ++ show good101
  putStrLn $ "Output for [1, 1, 0] (~ 1): " ++ show good110
  putStrLn $ "Output for [1, 0, 0] (~ 0): " ++ show good100

Output:

$ ./xor-3inputs
Final neural network :
[[Threshold : 0.5
Weights : toU [1.30887603787326,1.7689534867644316,2.2908214981696453],Threshold : 0.5
Weights : toU [-2.4792430791673947,4.6447786039112655,-4.932860802255383],Threshold : 0.5
Weights : toU [2.613377735822592,6.793687725768354,-5.324081206358496],Threshold : 0.5
Weights : toU [-2.5134194114492585,4.730152273922408,-5.021321916827272]],[Threshold : 0.5
Weights : toU [4.525235803191061,4.994126671590998,-8.2102354168462,5.147655509585701]]]
Output for [1, 1, 1] (~ 0): [2.5784449476436315e-2]
Output for [1, 0, 1] (~ 1): [0.9711209812630944]
Output for [1, 1, 0] (~ 1): [0.9830499812666017]
Output for [1, 0, 0] (~ 0): [1.4605247804272069e-2]

Don’t hesitate to try it and play with it! For any feedback or questions, see the end of the HNN wiki page.

Thanks, and enjoy!


HNN : a Haskell Neural Network library

Posted in Uncategorized by alpmestan on 2009/12/22


A few months ago, I started working on a neural network library in Haskell. The result wasn’t bad, but it needed some additional work. These past few days, I’ve worked a bit on that code again to get a releasable and usable 0.1 version of HNN up and working. For example, the weights, inputs and outputs are now of type UArr Double (they were [Double] before).

You can find the source code there:
Also, I’ve built some minimal documentation. You can find it here:

To get the current code and test it:

$ git clone git://
$ cd HNN
$ cabal configure
$ cabal build
$ cabal install (to add it in your ghc package list, etc)
$ cabal haddock (to generate the documentation)

I plan to put this on Hackage soon, but I would like to get some feedback & reviews about it before.

Thank you !


Functional compile-time templates based type lists in C++

Posted in Uncategorized by alpmestan on 2009/12/03


Have you ever heard about typelists in C++? They simply consist of the functional way of defining lists, but with templates.
It looks like this:

template <typename Head, typename Tail>
struct TypeList
{
  typedef Head head;
  typedef Tail tail;
};

However, we’ll need a type representing an empty type list. Ours will be the following.

struct EmptyList { };

How do we now write metafunctions (compile-time functions, working over types rather than values; in this context, types play the role of “values”) for these type lists?

Let’s start with a metafunction computing the length of a type list:

// declaration
template <typename Typelist>
struct Length;

// the normal case: the head element of the list (a type) and the tail, which is itself a type list
template <typename H, typename T>
struct Length<TypeList<H, T> >
{
  static const int value = 1 + Length<T>::value;
};

// the terminal case: our typelist is the empty list, so we neither add 1 nor continue the recursion
template <>
struct Length<EmptyList>
{
  static const int value = 0;
};

Now, calling it on a given typelist gives the right result:

Length< TypeList<int, TypeList<char, TypeList<bool, EmptyList> > > >::value // equals 3

Do you want more? I guess you do. Let’s tackle a more complicated one: Map. It maps a type list to another one, computing the result of applying a metafunction to each type of the type list. Ok, let’s start with the declaration.

template <typename TL, template <typename> class Func>
struct Map;

Now, the general case, with a head element and a tail, consists of computing the result of applying the metafunction to the head element, and consing it onto the result of Map on the tail. It looks like this:

template <typename H, typename T, template <typename> class Func>
struct Map<TypeList<H, T>, Func>
{
  typedef TypeList< typename Func<H>::type, typename Map<T, Func>::type > type;
};

And the trivial case, on the empty list:

template <template <typename> class Func>
struct Map<EmptyList, Func>
{
  typedef EmptyList type;
};

And we’re done with Map!

Let’s see one more interesting function over type lists: Filter. It filters (really?!) the type list according to a compile-time predicate, and returns the original type list without the types that didn’t match the predicate.
Here we go !

template <typename TL, template <typename> class Pred>
struct Filter;

template <typename H, typename T, template <typename> class Pred, bool result>
struct FilterAux
{
  typedef typename Filter<T, Pred>::type type;
};

template <typename H, typename T, template <typename> class Pred>
struct FilterAux<H, T, Pred, true>
{
  typedef TypeList<H, typename Filter<T, Pred>::type> type;
};

template <typename H, typename T, template <typename> class Pred>
struct Filter<TypeList<H, T>, Pred>
{
  typedef typename FilterAux<H, T, Pred, Pred<H>::value>::type type;
};

template <template <typename> class Pred>
struct Filter<EmptyList, Pred>
{
  typedef EmptyList type;
};

This one was trickier, because we needed an auxiliary template structure to have a bool against which we could specialize, to either drop the type from the type list (when it doesn’t match the predicate) or keep it.

Now, I’ll leave the following functions as exercises:
- Repeat: takes a type and a number n, and returns a type list containing n copies of the given type
- Take: takes a type list and a number n, and returns a type list containing the first n elements of the typelist if possible, fewer otherwise
- Interleave: takes two lists, say l1 = [T1, T2, T3] and l2 = [U1, U2, U3], and returns the list [T1, U1, T2, U2, T3, U3]
- Zip: takes two lists and returns a list of component-wise pairs of the types
- ZipWith: takes two lists and a metafunction (itself taking two types and returning a type), and returns the list of component-wise applications of the given metafunction to both lists

I’ll try to post these during the upcoming days.

Good functional metaprogramming to all ;)


A little WIP Haskell editor

Posted in Uncategorized by alpmestan on 2009/10/04



Playing around Control.Concurrent and Network.Curl.Download

Posted in Uncategorized by alpmestan on 2009/09/27


I’ve been playing with Control.Concurrent and Network.Curl.Download today, wanting to write a program that would spawn threads to download web pages… It’s now done!

Here is the Haskell code, minimally commented (I think the Control.Concurrent documentation is explicit enough, and my explanations wouldn’t be better).

module Main where

import Control.Concurrent -- multithreading related functions and types
import Control.Exception
import Network.Curl.Download -- HTTP page download related functions and types
import System.IO
import System.Time

-- as described in the Control.Concurrent documentation,
-- this lets you block the main thread until all the children terminate
waitForChildren :: MVar [MVar ()] -> IO ()
waitForChildren children = do
  cs <- takeMVar children
  case cs of
    []   -> return ()
    m:ms -> do
       putMVar children ms
       takeMVar m
       waitForChildren children

-- creates a new thread within the thread synchronization mechanism
forkChild :: MVar [MVar ()] -> IO () -> IO ThreadId
forkChild children io = do
    mvar <- newEmptyMVar
    childs <- takeMVar children
    putMVar children (mvar:childs)
    forkIO (io `finally` putMVar mvar ())

-- downloads the content of the web page and then saves it into a file in the current directory
doDl url = do
  Right content <- openURIString url
  let filename = (takeWhile (/= '/') . drop 7 $ url) ++ ".html"
  writeFile filename content
-- spawns 8 threads to download the corresponding web pages and then waits for the 8 threads to terminate before exiting
main = do
  children <- newMVar []
  mapM_ (forkChild children . doDl) ["", "", "", "", "", "", "", ""]       
  waitForChildren children

Now, let’s compile it:

ghc -threaded --make Main.hs -o hsmultidl

and execute it with the -N2 option passed to the runtime system (2 cores on my computer here), asking for RTS statistics (-s option):

$ time ./hsmultidl +RTS -N2 -s
./hsmultidl +RTS -N2 -s 
      11,470,748 bytes allocated in the heap
      11,930,464 bytes copied during GC
       1,726,380 bytes maximum residency (4 sample(s))
          85,004 bytes maximum slop
               5 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:    17 collections,     0 parallel,  0.02s,  0.03s elapsed
  Generation 1:     4 collections,     1 parallel,  0.02s,  0.06s elapsed

  Parallel GC work balance: 1.00 (155513 / 155242, ideal 2)

  Task  0 (worker) :  MUT time:   0.00s  (  0.00s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task  1 (worker) :  MUT time:   0.00s  (  0.00s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task  2 (worker) :  MUT time:   0.00s  (  1.60s elapsed)
                      GC  time:   0.00s  (  0.05s elapsed)

  Task  3 (worker) :  MUT time:   0.01s  (  1.60s elapsed)
                      GC  time:   0.02s  (  0.02s elapsed)

  Task  4 (worker) :  MUT time:   0.00s  (  1.62s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task  5 (worker) :  MUT time:   0.00s  (  1.62s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task  6 (worker) :  MUT time:   0.00s  (  1.62s elapsed)
                      GC  time:   0.01s  (  0.01s elapsed)

  Task  7 (worker) :  MUT time:   0.00s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task  8 (worker) :  MUT time:   0.01s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.01s elapsed)

  Task  9 (worker) :  MUT time:   0.00s  (  1.62s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task 10 (worker) :  MUT time:   0.00s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task 11 (worker) :  MUT time:   0.00s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task 12 (worker) :  MUT time:   0.00s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  Task 13 (worker) :  MUT time:   0.00s  (  1.61s elapsed)
                      GC  time:   0.00s  (  0.00s elapsed)

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    0.02s  (  1.61s elapsed)
  GC    time    0.04s  (  0.09s elapsed)
  EXIT  time    0.00s  (  0.01s elapsed)
  Total time    0.05s  (  1.71s elapsed)

  %GC time      80.0%  (5.4% elapsed)

  Alloc rate    1,147,304,260 bytes per MUT second

  Productivity  13.3% of total user, 0.4% of total elapsed

recordMutableGen_sync: 0
gc_alloc_block_sync: 0
whitehole_spin: 0
gen[0].steps[0].sync_todo: 0
gen[0].steps[0].sync_large_objects: 0
gen[0].steps[1].sync_todo: 0
gen[0].steps[1].sync_large_objects: 0
gen[1].steps[0].sync_todo: 0
gen[1].steps[0].sync_large_objects: 0

real	0m1.714s
user	0m0.050s
sys	0m0.037s

(There isn’t a significant difference whether I activate the -N2 option or not for 8 pages, but I guess there would be for 100, 1000, … Maybe more on that soon!)
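If you want to play with the synchronization pattern without the network part, the same forkChild/waitForChildren skeleton works with any IO action. Here is a self-contained variant where each child just records a number; the MVar-list bookkeeping is the same as above:

```haskell
import Control.Concurrent
import Control.Exception (finally)

-- block until every child thread has signalled completion
waitForChildren :: MVar [MVar ()] -> IO ()
waitForChildren children = do
  cs <- takeMVar children
  case cs of
    []     -> return ()
    m : ms -> do
      putMVar children ms
      takeMVar m
      waitForChildren children

-- fork a thread and register a "done" MVar for it
forkChild :: MVar [MVar ()] -> IO () -> IO ThreadId
forkChild children io = do
  mvar   <- newEmptyMVar
  childs <- takeMVar children
  putMVar children (mvar : childs)
  forkIO (io `finally` putMVar mvar ())

-- spawn 8 children that each push a square into a shared list,
-- wait for all of them, then return the total
runChildren :: IO Int
runChildren = do
  children <- newMVar []
  results  <- newMVar []
  mapM_ (\n -> forkChild children $
                 modifyMVar_ results (return . (n * n :)))
        [1 .. 8 :: Int]
  waitForChildren children
  sum <$> readMVar results
```

A main defined as runChildren >>= print should print 204 (the sum of the squares of 1..8), whatever order the threads happened to run in.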

I’m now wondering whether it would be completely insane to use my 3D text rendering application to render the HTML code of the pages in a 3D OpenGL/GLUT context. Would it? :)

