Nothing positive going on, so let's have another listen…

This time, Richard David James, a.k.a. AFX, a.k.a. Aphex Twin. One of my absolute favourite electronic artists. There's tons to choose from. But — again — if I had to make one choice among his creations, it would be Donkey Rhubarb. That's the title track of a 1995 EP that I bought, ridiculously overpriced, via import… Worth it, of course. ;)

Posted Wed 10 Aug 2022 23:16:01 CEST Tags:
ft SYSHH#9

This time it's playful, heavy Deathcore from Canada. The band was formed in 2018, taking their name from a pretty dark manga. They have put out a number of excellent pieces in their relatively short history. This might be my favourite:

Posted Tue 12 Apr 2022 23:51:11 CEST Tags:

If you're one of those people who still use IRC, you've probably heard of the sort of meltdown that freenode has been experiencing. Despite all that, I was still on the network, partly because I had been since forever, but mostly because one of my favourite projects still has their community channel on there.

This evening I noticed that my IRC client was disconnected from freenode and has since been unable to reconnect. I had set up automatic SASL authentication for the nickname I registered on the network some — oh… I don't know — fifteen years or so ago. That authentication now fails, and therefore irssi drops the connection.

Weird, right? So I connected to freenode without authentication and asked the network's nickserv service about my previous account. Looks like it's not registered anymore. Fantastic! No prior warning. No nothing.

Strictly speaking, I have no right to the account of course. So I guess there's little point in complaining, and this post is enough of that.

Just in case somebody who knew me on freenode as ft is wondering: Should the nickname appear again in the future, you're not talking to me any more. A network like this is not worth the hassle.

Posted Tue 15 Jun 2021 23:59:32 CEST Tags:
ft SYSHH#8

In today's episode of songs you should have heard, we'll go with some post-punk. Specifically the classic Joy Division track “Love will tear us apart”. Gorgeous in every way.

Posted Wed 13 Jan 2021 22:12:40 CET Tags:

There was a talk at Remote Chaos by Michael Sperber about using syntax-parse, Racket's most advanced system for expressing syntactic extensions and one of the important cogs for implementing new programming languages on top of Racket. Anything that promotes awesome Scheme technology is cool with me. ;)

Posted Wed 06 Jan 2021 11:46:55 CET Tags:
ft SYSHH#7

In this episode of songs you should have heard, let's have some metal, shall we? In particular, the Devin Townsend Band. Some people are flooding YouTube with reaction videos to Devin's performance of Kingdom, which is a good one, but let's have this one instead:

Devin Townsend Band — Accelerated Evolution — Deadhead

Posted Mon 12 Oct 2020 02:05:01 CEST Tags:

One of the esoteric programming languages that a fair number of people have heard of is “Whitespace”. I'm sure this has nothing at all to do with jokes like its source code being so fantastically efficient when printed out. The language was actually meant as a joke (at least that's how one of its creators puts it when he mentions the language in some of his talks). But even though people are aware of the language's existence, they rarely know how it works. Let's change that, because it's really not all that hard.

At its core, the language describes operations for a stack machine that has access to a heap. The machine's only data type is integers of arbitrary size; operations that deal with characters use these integer values to encode them as well. And that's about it. The rest is just a peculiar way to encode operations and values using three ASCII whitespace characters (TAB, SPACE and LINEFEED).

Operations are grouped by function. These groups determine an operation's encoding prefix, which the language spec calls the “Instruction Modification Parameter”. Many of the operations have no arguments, as they create and consume values on the machine's stack. Some, however, do take an argument: an integer or a label. Label arguments are used by flow control operations; integer arguments are used by some of the stack manipulation operations.

Arithmetic Operation: Integer Division

Their encodings are similar: both use strings of spaces and tabs terminated by a linefeed. In labels, the spaces and tabs have no special semantics; at least not within the language specification (more on that later). In integers, tabs encode ones and spaces encode zeroes. Something to note about such integer literals is that they do not use two's complement to encode negative numbers. Instead, they use the literal's leftmost bit as a sign bit: tab means negative, space means positive. That makes encoding arbitrarily wide integers straightforward.

Integer Literals
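
To make that concrete, here's a tiny decoder sketch; in Scheme rather than the Haskell you'll see below, and purely for illustration. It expects the literal as a string of tab/space characters, with the terminating linefeed already stripped:

(define (ws-decode-integer s)
  ;; First character is the sign: tab → negative, space → positive.
  ;; Remaining characters are magnitude bits: tab → 1, space → 0.
  (let loop ((cs (cdr (string->list s))) (acc 0))
    (if (null? cs)
        (if (char=? (string-ref s 0) #\tab) (- acc) acc)
        (loop (cdr cs)
              (+ (* 2 acc) (if (char=? (car cs) #\tab) 1 0))))))

;; [Space][Tab][Space][Tab]: positive sign, bits 101 → 5
(ws-decode-integer (string #\space #\tab #\space #\tab)) → 5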

When you take a look at actual whitespace programs, you'll sometimes notice extremely long labels. With those, there seems to be a silent convention to use chunks of eight characters to encode eight bits (with the same semantics as in number literals as to which characters encode ones and zeroes), which are turned into a positive number that is then mapped to the ASCII encoding (seven bits would have sufficed, but that's not what the programs I've seen use).
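
That convention, as a sketch with the same caveats as above (it assumes the label's length is a multiple of eight):

(define (ws-label->string s)
  ;; Consume eight tab/space "bits" at a time, building one
  ;; ASCII character per chunk.
  (let loop ((cs (string->list s)) (out '()))
    (if (null? cs)
        (list->string (reverse out))
        (let chunk ((n 8) (cs cs) (acc 0))
          (if (zero? n)
              (loop cs (cons (integer->char acc) out))
              (chunk (- n 1) (cdr cs)
                     (+ (* 2 acc)
                        (if (char=? (car cs) #\tab) 1 0))))))))

;; "A" is 65, i.e. 01000001:
(ws-label->string (string #\space #\tab #\space #\space
                          #\space #\space #\space #\tab)) → "A"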

When you try to implement this language, you'll notice a couple of things your machine implementation needs: a stack, obviously, since whitespace is a stack-manipulating language; another stack used as a call stack, since the language has call and return operations; a heap, mapping addresses to integers; and finally program memory and a program counter register. You might want a jump table too, to deal with translating labels to addresses. That's not strictly required, though: you could just translate all labels to addresses before loading the program into your machine.
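
Sketched as a data structure (in Scheme again; SpaceMan's actual definitions are in Haskell, and these field names are made up), the machine state looks something like this:

(use-modules (srfi srfi-9))

(define-record-type <ws-machine>
  (make-ws-machine stack callstack heap program pc)
  ws-machine?
  (stack     ws-stack     set-ws-stack!)      ;; data stack
  (callstack ws-callstack set-ws-callstack!)  ;; return addresses for Call/Return
  (heap      ws-heap      set-ws-heap!)       ;; maps addresses to integers
  (program   ws-program)                      ;; decoded operations
  (pc        ws-pc        set-ws-pc!))        ;; program counter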

When I had dug deep enough into the language spec to figure this out, I was intrigued enough to actually do yet another implementation of the language. It's called SpaceMan and it is available at gitlab.com/ft/spaceman as well as github.com/ft/spaceman.

I've added an org-mode conversion of the original language homepage, because that one is currently only available via archive.org. When trying some of the more complex examples you can find on the net, I ran into problems: my implementation failed to even parse them. I verified my code for quite some time, until I concluded that it implemented the parser correctly. So I looked at other implementations. And it turned out most of them implement two additional stack-manipulating operations: Copy and Slide. Apparently, they were added to a later specification of the language. I couldn't find such a spec on the net, though (not that I invested a lot of time — see the update at the end of the post for a resolution to this). However, after implementing these two, spaceman could run the most elaborate examples that I could find online, like a sudoku solver. I've added those two additional operations to the included version of the language spec.

I'm using Megaparsec for parsing purposes. And with a couple of utilities put in place, writing the parser becomes rather pleasant:

-- imp parses a group's Instruction Modification Parameter prefix;
-- operation and number match an opcode's encoding, with number
-- also reading an integer argument.
stackParser :: Parser StackOperation
stackParser = do
  (      try $ imp [ space ])
  (      try $ Push  <$> number [ space ])
    <|> (try $ operation        [ linefeed, space    ] Duplicate)
    <|> (try $ operation        [ linefeed, tabular  ] Swap)
    <|> (try $ operation        [ linefeed, linefeed ] Drop)
    <|> (try $ Copy  <$> number [ tabular,  space    ])
    <|> (try $ Slide <$> number [ tabular,  linefeed ])

When implementing the language's operations, you'll find yourself facing lots of common tasks that manipulate the virtual machine. You put those common tasks into functions, of course, and like any designer of an assembly language worth their salt, you obviously give your instructions slightly cryptic three-letter names. With those, implementing the stack-manipulating operations looks like this:

-- The three-letter helpers: pci increments the program counter,
-- psh pushes values onto the stack, drp drops n values, peek
-- reads the top n values, and ref fetches the n-th stack value.
eval :: WhitespaceMachine -> StackOperation -> IO WhitespaceMachine
eval m (Push n)  = return $ pci $ psh [n] m
eval m Duplicate = return $ pci $ psh h m               where h     = peek 1 m
eval m Swap      = return $ pci $ psh [b,a] $ drp 2 m   where [a,b] = peek 2 m
eval m Drop      = return $ pci $ drp 1 m
eval m (Copy i)  = return $ pci $ psh [n] m             where n = ref  i m
eval m (Slide n) = return $ pci $ psh h $ drp (n+1) m   where h = peek 1 m

Implementing the other groups of operations looks similar. I sort of like it. Each of them would basically fit onto an overhead slide.

As it turns out, editing whitespace programs is tough work. Doing it directly is best done in a hex editor. But spaceman has a feature that makes it dump a program's syntax tree to stdout. And those program dumps are actually executable programs. So if you'd like to edit a whitespace program, you can dump it into a file, edit its AST, and then run that program to yield the changed whitespace program.

“Yet another whitespace implementation created” achievement unlocked, I guess.

[Update] Andrew Archibald informs me that if you spend a little more time looking for the language specification that includes Slide and Copy, you will find http://compsoc.dur.ac.uk/whitespace/index.php (via archive.org), which contains v0.3 of the whitespace specification, adding these operations.

Posted Sun 11 Oct 2020 21:16:20 CEST Tags:

Not too many programming languages have the concept of first-class continuations built into them. Scheme is one that does[1], so it will be our vehicle for today's adventure. Now what really is a continuation? In order to get an idea, consider the following simple expression:

(+ 5 4)

Remember that Scheme uses prefix notation[2]: the first item in a parenthesised list is the operator, which is applied to the list of arguments, namely the rest of the list. + is the sum operator, so the above expression evaluates to 9. So far, so good. How does this help with continuations? Well, a continuation is a way of bundling up “what's left to do” with respect to an arbitrary sub-expression within a program.

Imagine the perspective of the literal 5 in the expression above. If you are this five, look around you and try to come up with a function that would represent the rest of the computation after you are done evaluating[3]. So in the expression above, after 5 evaluates to five, what is left to be done in order to finally evaluate to 9? Here's a suggestion:

(define (kont v)
  (+ v 4))

Indeed, after evaluating the 5 literal, the rest of the expression can be represented as a function that takes one value and adds 4 to it. And if you stick the value of 5 into that function, it will return 9, like it must.
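
Checking that at a REPL:

(kont 5) → 9
(kont 10) → 14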

Once you realise that you need to take the perspective of a part of an expression in order to figure out what the continuation does, it is not hard to figure out these functions for arbitrarily complex expressions. It just takes a little bit of practice. But thus far, this is just a theoretical idea. How would you work with continuations? How do you get at them?

In Scheme (and also Standard ML), there is an operator called call/cc, which is short for call-with-current-continuation. Its purpose is to capture the continuation at a certain point in a program and make it available to the programmer. The name call-with-* suggests that the operator needs something it can call. That would be a function. This function has to take one argument, and call/cc will call it with the continuation it captured as the function's only argument. This function parameter is the only argument that call/cc expects, and the return value of the call/cc invocation is the value of the supplied function applied to the captured continuation. Here are simple examples that use call/cc:

(call/cc (lambda (k) 5)) → 5
(+ (call/cc (lambda (k) 5)) 4) → 9

Since these don't use the continuation at all, it is straightforward to tell the values these expressions evaluate to. Now for something more interesting: since Scheme allows us to mutate memory[4], we can just store the continuation in a parameter for later use:

(define kont #f)

(+ (call/cc (lambda (k)
              ;; Store the continuation k in the global
              ;; parameter kont that we defined before.
              (set! kont k)
              ;; Ignore the continuation otherwise, and
              ;; just return 5; so this expression boils
              ;; down to (+ 5 4); though we used call/cc
              ;; to extract the continuation from the
              ;; perspective of our value 5.
              5))
   4) → 9

;; Take a look at ‘kont’ — this is how Racket pretty prints it.
kont → #<continuation>

(kont 10) → 14

Remember when we defined a function called kont earlier? We figured out that the continuation right after evaluating the 5 is a function that adds 4 to its one and only argument. Now applying the continuation (stored in kont) to the value 10 results in 14 — just as we predicted.

The captured continuation has a special property that sets it apart from being a simple function: when it is invoked, it resurrects its execution context entirely and abandons the current one, no matter where the program went in the meantime. This property can be used to implement all sorts of useful high-level control flow facilities. A simple, easily understood one is “early exit”. Consider this:

(fold * (*) '(1 2 3 4 0 6 7 8 9 10)) → 0

This is an expression that computes the product of all values in the list that is the last argument to fold[5]. Since any integer multiplied by zero results in zero, the computation could stop at the fifth element of the input list. However, fold has no concept of exiting early. Since we have call/cc, we can easily add that:

(call/cc (lambda (return)
            (fold (lambda (x acc)
                    (if (zero? x)
                      (return 0)
                      (* x acc)))
                  (*)
                  '(1 2 3 4 0 6 7 8 9 10)))) → 0

It's the same expression as before, but it's executed inside the function that is passed to call/cc. That way, we have access to the surrounding continuation. In the call to fold we pass a function that's more complex than a direct reference to the multiplication function: fold hands it the current element of the list as its first argument and the current value of the accumulator as its second. What we need to do is straightforward: check whether the current element is zero, and if so, just invoke the surrounding continuation with 0 as its argument. As mentioned earlier, this resurrects the continuation's execution context and abandons the current one.
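
Wrapped up for reuse, it might look like this (the function name is made up; fold is SRFI-1's, as above):

(define (product/early-exit lst)
  ;; Multiply all values in lst, escaping as soon as a zero shows up.
  (call/cc (lambda (return)
             (fold (lambda (x acc)
                     (if (zero? x) (return 0) (* x acc)))
                   (*)
                   lst))))

(product/early-exit '(1 2 3 4 0 6 7 8 9 10)) → 0
(product/early-exit '(1 2 3 4 5)) → 120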

Now, in order to see if you understood continuations: at this point, other texts on the subject confront you with mind-benders like the following, which is from TSPL page 64[6]:

(((call/cc identity) identity) "Hey!")

If you evaluate this at a Scheme prompt, it will return the string "Hey!". This might be guessable, but the question is: Why? Let's analyse the expression's form first: At its most primitive, this expression can be summarised like this:

(operator operand)

The operand has the value "Hey!", trivially. And since we know the result of the final evaluation is that same value, we can (barring side effects) reason that operator has to be the identity function[7]. Since that function appears twice in the original expression, it feels like we are on the right track! A guess would be that the second incarnation of identity somehow gets transposed into the operator spot. Maybe we can just replace it with something else, like string-upcase. But if you try, you will notice that things are not quite as easy as they seemed[8]:

(((call/cc identity) string-upcase) "Hey!")
→ string-upcase: contract violation
    expected: string?
    given: #<procedure:string-upcase>

…and not "HEY!" like one might have guessed. Time to take a look at the expression that is our operand:

((call/cc identity) identity)

With our previous strategy we need to look at (op identity), put ourselves in the perspective of op and ask: What function represents the rest of the computation after I am done evaluating myself? And you can follow that strategy pretty mechanically:

(define (k v)
  (v identity))

This is what gets handed in as the argument to identity in (call/cc identity). And since that function is identity, all it does is return the continuation it is handed. Which means we end up with this:

((lambda (op) (op identity)) identity)
;; Which can be reduced to:
(identity identity)
;; Which can be further reduced to:
identity

Let's come back to our guess-work from earlier, where we naïvely used string-upcase instead of identity: it would reduce to (string-upcase string-upcase). And now all of a sudden the error message makes sense too: it says string-upcase expected a string argument, but instead got a procedure; and not just any procedure but string-upcase itself.

To summarise the above: (call/cc identity) evaluates to the captured continuation; applying that continuation to the second identity puts identity into the operator spot, so the operand expression reduces to (identity identity), which is just identity. Hence (((call/cc identity) identity) "Hey!") reduces to (identity "Hey!"), which evaluates to "Hey!".

It's a fair bit mind-bendy if you think it all through, but manageable if you follow the strategy that was presented earlier. What remains to examine is what continuations are useful for beyond clever puzzles. Well, they are a primitive that allows the implementation of a lot of high-level control structures such as early exit (as we have seen earlier), exceptions, co-routines, generators, and more. In a language with first-class continuations all these features can be implemented in a library without further language support.
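
To give a taste of that, here is a minimal sketch of an exception-like escape mechanism built from nothing but call/cc (all names are made up): try runs a thunk, handing it an escape procedure; invoking that escape abandons the rest of the thunk and makes try return the handler's result instead:

(define (try thunk handler)
  ;; Capture the continuation of the try call itself; the escape
  ;; procedure invokes it with whatever the handler produces.
  (call/cc (lambda (k)
             (thunk (lambda (condition)
                      (k (handler condition)))))))

(try (lambda (throw) (+ 1 2))
     (lambda (e) 'caught)) → 3
(try (lambda (throw) (throw 'fail) 'unreached)
     (lambda (e) (list 'caught e))) → (caught fail)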

Finally, let's not overlook that there are problems with full, unbounded continuations as well: For details on that, see [Kis12]. To alleviate most of these concerns, we might take a look at delimited continuations. That is, however, a story for another day.

Footnotes

[1] See [R4RS] p.28f
[2] See [SICP] p.8f
[3] Number literals evaluate to themselves — but in general, the subexpression that would be picked could be arbitrarily complex.
[4] I know, I know.
[5] The * symbol evaluates to the multiplication function and (*) evaluates to the identity element of multiplication, namely 1.
[6] According to the author it is “probably the most confusing Scheme program of its size”, a Confusion Density Maximum, if you will.
[7] (lambda (x) x)
[8] As we will see later, this intuition is not completely wrong, but there is a twist!

References

[SICP] Harold Abelson and Gerald J. Sussman. Structure and Interpretation of Computer Programs. 2nd ed. The MIT Press, 1996. isbn: 0262011530.

[R4RS] William Clinger, Jonathan Rees, et al. Revised⁴ Report on the Algorithmic Language Scheme. New York, NY, USA: ACM Lisp Pointers, 1991, pp. 1–55.

[TSPL] R. Kent Dybvig. The Scheme Programming Language. 4th ed. MIT Press, 2009, pp. I–XII, 1–491. isbn: 978-0-262-51298-5.

[Kis12] Oleg Kiselyov. An argument against call/cc. 2012. url: http://okmij.org/ftp/continuations/against-callcc.html.

Posted Tue 01 Oct 2019 22:23:24 CEST Tags:
ft SYSHH#6

Haven't done one of these in a while. So here goes…

Nine Inch Nails — And All That Could Have Been — Leaving Hope

NIN have a couple of amazing instrumental tracks on their résumé, some very quiet ones too, and it's hard to pick a favourite. But if pressed for an answer, I'd pick this.

Posted Sun 20 Jan 2019 16:59:26 CET Tags:

After being the most horrible package maintainer imaginable for at least the past two years, I've been putting some work into the Debian package for the MRA/MDA combination that is fdm. The package now captures a new version control snapshot from upstream (releases happen seldom; the project is pretty much in maintenance-only mode), which fixes a couple of things and adds some features, like being able to use SSL with Google's IMAP without turning off all kinds of verification.

I've been using different iterations of the new package for a while now and am experiencing zero issues in day-to-day use. So even though this is the first package update in a while, and a couple of things (like the build system) have changed upstream, I'm fairly confident this is an improvement over the previous versions of the Debian package.

Internally, the packaging is pretty much a complete rewrite. It uses modern debhelper features and implements pretty much all items on Debian's wish list (like the machine-readable debian/copyright format and stuff like that).

Thanks go out to Axel, who patiently reviewed my changes and helped me get this package out of the door before buster goes into freeze. This update would not have happened without him.

Posted Sun 20 Jan 2019 15:26:50 CET Tags:

In my day job, I'm an electrical engineer, though I'm mostly involved in writing firmware that runs on bare-metal systems. If you're designing electrical circuits for a living, or have just tinkered with them, you will have heard of E-Series preferred values. In particular, resistors, capacitors and inductors are generally available in ratings derived from these numbers. The basic idea is to fit a number of values into one order of magnitude. For example, in E3 there are three values within the same decimal power: 100, 220 and 470; the next value would be 1000, in the next decimal power. Likewise, in E12 there are twelve such values.

Roughly, E-Series follow an exponential definition: the k-th value in the En series is about 10^(k/n), scaled into the decimal power at hand. The actual series deviate from that exact mathematical expression in places, though. So for an actual implementation you either adjust the computed values to the specified ones, or you simply use the actual tables from the spec.
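
A quick sketch of that ideal definition (this is not the library's code):

;; Ideal k-th value of the En series, within the first decade.
(define (ideal-e-value n k)
  (expt 10.0 (/ k n)))

(ideal-e-value 12 1) → 1.2115…  ;; the specified E12 value is 1.2
(ideal-e-value 12 5) → 2.6102…  ;; the specified E12 value is 2.7 (!)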

Now, looking up values in tables is a boring task, especially if the tables are relatively small. Finding combinations, though, is more interesting. For example, if you'd like a resistor rated at 50Ω (a rather important value in radio frequency design), you'll notice that the exact value is not available in any of the E-Series. But it's easy to combine two 100Ω resistors in parallel to produce a value of 50Ω. …sure, in an actual design you might use 49.9Ω from E96 or E192, or use a specially made resistor of the desired rating. But you get the idea. Combining components in parallel and series circuits allows the parts from E-Series to cover lots of ground.
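
The arithmetic behind such combinations is simple enough; sketched in Scheme (these helpers are not the library's API):

(define (in-series a b)   (+ a b))
(define (in-parallel a b) (/ (* a b) (+ a b)))

(in-parallel 100 100) → 50   ;; the example from above
(in-series 47 3.3) → 50.3    ;; a close E12 match

;; Resistors and inductors combine like this; for capacitors,
;; the two formulas swap roles.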

I've written a library that implements E-Series in Scheme. Its main modules are (e-series adjacency), which allows looking up values from an E-Series in the vicinity of a given value, and (e-series combine), which produces combinations of values from a certain E-Series to approximate a given value. Finally, there's the top-level (e-series) module, which implements frontends to the other modules mentioned, making it easy to use the library at a Scheme REPL.

To see if the library finds a combination from E12 that matches 50Ω:

scheme@(guile-user)> (resistor 12 50)
 ------------+-------------------------+-------------+-------------+--------------
    Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
 ------------+-------------------------+-------------+-------------+--------------
   50.0000Ω  |   50.0000Ω (  exact  )  |   100.000Ω  |   100.000Ω  |  parallel
   50.0000Ω  |   50.0380Ω (+7.605E-4)  |   56.0000Ω  |   470.000Ω  |  parallel
   50.0000Ω  |   49.7000Ω (-6.000E-3)  |   47.0000Ω  |   2.70000Ω  |  series
   50.0000Ω  |   50.3000Ω (+6.000E-3)  |   47.0000Ω  |   3.30000Ω  |  series
 ------------+-------------------------+-------------+-------------+--------------

Those aren't all the combinations that are possible. By default, the module produces tables containing combinations that match the desired value to within 1%. Now, to see values in the vicinity of 50Ω in all E-Series, you can do this:

scheme@(guile-user)> (resistor 50)
 ---------+--------------------------+-------------+--------------------------
  Series  |           Below (Error)  |      Exact  |           Above (Error)
 ---------+--------------------------+-------------+--------------------------
    E3    |   47.0000Ω  (-6.000E-2)  |             |   100.000Ω  (+1.000E+0)
    E6    |   47.0000Ω  (-6.000E-2)  |             |   68.0000Ω  (+3.600E-1)
    E12   |   47.0000Ω  (-6.000E-2)  |             |   56.0000Ω  (+1.200E-1)
    E24   |   47.0000Ω  (-6.000E-2)  |             |   51.0000Ω  (+2.000E-2)
    E48   |   48.7000Ω  (-2.600E-2)  |             |   51.1000Ω  (+2.200E-2)
    E96   |   49.9000Ω  (-2.000E-3)  |             |   51.1000Ω  (+2.200E-2)
    E192  |   49.9000Ω  (-2.000E-3)  |             |   50.5000Ω  (+1.000E-2)
 ---------+--------------------------+-------------+--------------------------

And here you see that E96 and E192 have pretty close matches as single components.

With combinations, the library allows the user to specify arbitrarily complex predicates to choose from the generated combinations. For example, to only return parallel circuits that approximate 50Ω from E12:

scheme@(guile-user)> (resistor 12 50 #:predicate (circuit 'parallel))
...

And to limit those results to ones with an error of 0.001 or better, here's a way:

scheme@(guile-user)> (resistor 12 50 #:predicate (all-of (max-error 1e-3)
                                                         (circuit 'parallel)))
 ------------+-------------------------+-------------+-------------+--------------
    Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
 ------------+-------------------------+-------------+-------------+--------------
   50.0000Ω  |   50.0000Ω (  exact  )  |   100.000Ω  |   100.000Ω  |  parallel
   50.0000Ω  |   50.0380Ω (+7.605E-4)  |   56.0000Ω  |   470.000Ω  |  parallel
 ------------+-------------------------+-------------+-------------+--------------

There are frontends for inductors and capacitors as well, so you don't have to strain yourself too much over which combination corresponds to which circuit. To find a 9.54μF capacitor approximation from E24 with an error better than 0.002:

scheme@(guile-user)> (capacitor 24 #e9.54e-6 #:predicate (max-error #e2e-3))
 ------------+-------------------------+-------------+-------------+--------------
    Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
 ------------+-------------------------+-------------+-------------+--------------
  9.54000µF  |  9.53000µF (-1.048E-3)  |  9.10000µF  |  430.000nF  |  parallel
  9.54000µF  |  9.55102µF (+1.155E-3)  |  13.0000µF  |  36.0000µF  |  series
  9.54000µF  |  9.52381µF (-1.697E-3)  |  10.0000µF  |  200.000µF  |  series
 ------------+-------------------------+-------------+-------------+--------------

The test-suite and documentation coverage could be better, but the front-end module is easy enough to use, I think.

Posted Sun 09 Dec 2018 23:21:36 CET Tags:

At this point, Scheme is my favourite language to program in. And my preferred implementation (…there are a lot) is GNU Guile. I've written a couple of pieces of software using it: bindings for termios on POSIX systems, an implementation of linear feedback shift registers, a unit test framework that emits TAP output, a system that produces time-sheets from Redmine timing exports, an elaborate system to describe and experiment with configurable electronic parts, and a native implementation of the XMMS2 protocol and client library. The latter I've written about before.

Now with Guile there's a lot going on. The project has a small but dedicated team of developers, and they have been working on improving the system's performance by a lot. For the longest time, Guile only had an interpreter. The last version of that was 1.8 — the version I came in contact with for the first time as well. Shortly after I started to dabble, 2.0 was released, so I didn't have a lot of exposure to 1.8. Version 2.0 added a stack VM that Guile byte-compiled its code to. In 2017 the project released version 2.2, which saw the stack VM replaced by a register VM. And that improved performance dramatically. And now they are preparing a 3.0 release (a 2.9.1 beta release happened on October the 10th of this year), which adds just-in-time native code generation on x86-64 via GNU Lightning.

I used xmms2-guile to get an idea about the improvements that the different Guile versions offer. I couldn't be arsed to build myself a 1.8 release, so this won't look at that. The setup is a minimal client connecting to an XMMS2 server, pulling out the information required to display a playlist of about 15300 tracks. All of this is running on an Intel Core i7 with plenty of memory, running XMMS2 version 0.8 DrO_o (git commit: 3bd868a9).
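
The numbers below are mean wall-clock run times for that job. In spirit, each measurement boils down to something like this sketch (not the actual benchmark code; fetch-playlist stands in for the client logic):

(use-modules (srfi srfi-19))

(define (run-time thunk)
  ;; Returns the thunk's wall-clock run time in seconds.
  (let ((start (current-time time-monotonic)))
    (thunk)
    (let ((d (time-difference (current-time time-monotonic) start)))
      (+ (time-second d) (/ (time-nanosecond d) 1e9)))))

;; Tracks per second: (/ 15300 (run-time fetch-playlist))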

As a point of comparison, I ran XMMS2's command line client to do the same task. It took a mean time of 10.77 seconds. That's about 1420 tracks processed per second. I'm pretty sure the client does a little more than it needs to, but this establishes a ballpark.

With Guile 2.0 the whole affair takes 23.9 seconds. That's about 640 tracks per second. Quite a bit slower. More than twice as slow. I was sure this would be slower than the official client, but the difference is disappointing. So what about 2.2 — can it do better?

Well, in Guile 2.2 the same job takes 6.38 seconds. 2400 tracks per second! Impressive. This is why I think the official CLI client implemented in C does a little more work than it needs to: it shouldn't lose to this, and if it did, it shouldn't be behind by this much. My xmms2-guile client library is a native implementation; it doesn't wrap the official XMMS2 client library that's written in C. I'm way too lazy to do detailed timing analyses of the different libraries and Scheme implementations. The results are reproducible, though.

Now what about the upcoming Guile 3.0? I've used 2.9.1.3-1f678 to be precise. It should execute a little faster, but the 2.2 results are pretty good already. Running the same setup yields a mean time of 5.87 seconds. A bit over 2600 tracks per second.

I mean sure, this is not an in-depth analysis: there's a local network connection involved and it's a rather specific, repetitive task. Still, though. Pretty cool when a random test shows that 3.0 is four times as fast as 2.0. ;)

Posted Tue 13 Nov 2018 00:55:34 CET Tags: