
A Lisp REPL as my main shell

The shell is dead, long live the REPL!

If you enjoy this article and would like to help me keep writing, consider chipping in; every little bit helps to keep me going :)

Thank you!

Update: As of 2021-02-07, not all the code and configurations used in this presentation have been published. Should happen in the coming days, stay tuned!

Introduction video

The concepts I’m going to present in this article were featured in a presentation at FOSDEM 2021. The video demonstrates a rather unconventional paradigm: thinking of the REPL as a “shell” interface to the machine.

In this article I’m going to dive into more detail on the theory and the setup, as well as some more advanced features.

Mind the title: this is not about another system shell implemented in Lisp (such as clash or shcl); rather, I tried to approach the problem in the opposite direction by trying to bring the “shell” to a programming language REPL.

I want to emphasize that while I use Common Lisp and Emacs as support technology for my demos, the thesis of this article concerns itself with a different shell interface paradigm.

Should you dislike Emacs or Lisp, never mind: the concept presented here can be adapted using other (editor) interfaces and other programming languages (but not all, as I will explain later).

This article is bordering the format of an extensive tutorial and as such it’s rather long. The various top sections starting from SLY: A tour of a most-advanced REPL are mostly independent so feel free to read them in any order. (I’ve interlinked them whenever necessary.)

Happy reading!

Challenging the dominating paradigm

The universal computer usage paradigm, and why traditional shells are poor tools

What a computer user (and in particular a technically minded user such as a developer) does on a computer can be understood as one of three actions:

  • Data collection and filtering.
  • Data visualization (which I also call “inspection” in this article).
  • Data transformation (which I also call “processing” in this article).

Opening a file or multiple files with a program, say, to play a music album, is in effect “collection + visualization”. Shrinking a bunch of photos is “collection + processing”.

I believe that most, if not all our interactions with a computer can be summed up in those three tasks.

More interestingly, there is a feedback loop between collection and visualization, which is very common when manipulating large bunches of files from a shell: the user first displays the file list to process, then filters it, displays the filtered list, re-filters if needed, re-displays it, and so on. When the file list is finalized, the user can process it and finally visualizes the result.

This feedback loop is best represented by this simple diagram:

Figure 1: User interaction feedback loop

It turns out that traditional shells are particularly poor interactive tools to deal with such a feedback loop.

To paraphrase the example in Howard’s presentation on piper, typical “collection + processing” in the shell happens with a pipeline and some control structures:

for S in $(systemctl --all | grep openstack | sed 's/\.service.*//' | cut -c3-); do
    systemctl restart "$S"
done

In the above example, as is customary in the shell, the whole “collection + processing” step remains a black box to the user, who can visualize the data only before and after, but never at the intermediate steps.

While we are at it, shell languages from the sh family have poor control structures, which make simple things like a for loop too cumbersome to write and riddled with pitfalls.

The local minimum of terminals and shells

There is a common misconception that terminals and shells are inherently bound to each other, to the point that there is sometimes a confusion between the two.

  • A terminal, actually a terminal emulator, is a program that visually emulates the hardware of the 1970s and 1980s, such as the VT100. These tools are by definition stuck in the past.

    (A common misconception is that they are fast. Ironically, they are not: emulators emulate the physical properties of the terminal such as the baud rate, and this limits the speed at which text can be printed.)

  • A shell is a programming language interpreter, a REPL of sorts. It often embeds interactive features such as a prompt with history support.

Terminals have no reason to continue to be used, in my opinion. Note that this does not mean we shouldn’t use “textual” interfaces, quite the opposite: textual data is bliss to manipulate. But we can very well manipulate text, along with other types of data, in something other than a terminal: something faster, prettier and more powerful. (Graphical Emacs is one such example.)

In the past I’ve discussed the drawbacks of using a terminal as an interface (see my article Eshell as a main shell). I won’t go into the details again, but allow me to summarize terminal-induced limitations of traditional shells:

  • Can’t search the outputs.
  • Can’t interact with output (like opening a path under the cursor).
  • Can’t copy/paste without a mouse (barring a few hacks).
  • Can’t navigate the prompts.
  • Limited colors (barring a few hacks), formatting and rendering is convoluted and not portable.
  • Can’t render anything beside text.
  • Interface toolkits like ncurses can’t render structured widgets (to see what I mean, try selecting text in an ncurses frame: the selection will grab the whole line, beyond the frame).
  • Slow text output.
  • Poor font support (single size).
  • No per-pixel elements (e.g. cannot draw a line separator).
  • Prompts and their output cannot be folded, moved around, etc.

Readline-shells, “go-to scripts” and code composition

I was an Eshell user for a while, then I switched to M-x shell. Both these non-terminal-based shells rid me of most of the aforementioned terminal limitations, since graphical Emacs is a full-blown graphical application.

But I still wasn’t satisfied, in particular with the programming language used by the shell (I’ve used Bash, Zsh, Fish and Eshell).

I fancy the Lisp programming languages, so why not use the language I’m most comfortable with, say Common Lisp?

For a long time I tried really hard to stick to the paradigm of what I call the “readline shells”. Almost all shells are developed with the idea of running in a terminal with a readline-based prompt (if not readline, then a similar interface). Even shells as exotic as Xonsh, Ammonite Shell, Oil shell or SHCL are developed with this paradigm in mind.

This only perpetuates the problem: these shells can’t really escape the poor feedback loop I mentioned above because of the lack of interactivity options in the user interface. And they remain limited in terms of rendering, prompt navigation and manipulation, etc.

“Go-to scripting languages”

Power users and developers alike fancy having their personal “shell scripts”, usually short and simple programs that don’t warrant an official distribution, to perform everyday tasks from file processing to shell helpers.

These programs can be written in various languages, but unfortunately the choice must be restricted for practical reasons:

  • Some languages are “compile-only”, so they are not suited to scripts.
  • Some language interpreters are too slow to start (e.g. above 100ms), which makes the script prohibitive to use in a tight loop from another script.
  • Some languages are poor at process management.
  • Portability may be a concern: if a script requires libraries to be locally installed, it may hinder its use on another system, like your other machine or a friend’s computer.

Bash, POSIX sh and friends address all these points rather well, which probably explains why they are so ubiquitous (especially the point on portability).

But is that reason enough to surrender and keep using some of the worst programming languages out there today?

I believe it’s high time we challenged the status quo and stepped up in the shell game.

As we will see in the rest of this article, a required feature for a programming language to be used as a shell is meta-programmability: the ability to redefine at least part of its syntax. Here Lisp languages perform really well thanks to their homoiconicity.

Perl and Python are often cited as good alternatives: they are certainly more expressive and less limited than Bash while enjoying broad ecosystems of libraries.

Being a Lisper at heart, I’ve explored the surface of various options:

  • scsh

    scsh is an obvious choice since its name stands for the “Scheme shell”. While a Scheme, it does not seem to have a broad ecosystem of libraries, which may make it a bit limiting in practice, or force users to write most of their own. (Please correct me if I’m wrong.)

  • Guile

    Guile is a good contender: it’s reasonably fast, it has a developing ecosystem, it has good support for scripting.

  • Gauche

    Gauche’s design goal is to be a fast scripting language, so it could be ideal for go-to scripts.

  • TXR

    I am mostly ignorant about TXR, so I can only tell from the little I’ve played with it: it seemed rather slow (maybe in the ballpark of Bash), and I’m unsure about its ecosystem.

  • Racket

    At first sight, Racket’s startup time of around 100 ms on my machine seems prohibitive for scripts. But maybe there is a way to overcome this. Any Racketeer around? :)

  • Emacs Lisp

    Howard’s piper is very inspiring, but I decided not to go with Elisp which I believe is too limiting (for now) to be used as a shell language.

    Eshell cannot separate standard input from standard error. This is a blocker in my opinion, until we fix it or implement some other shell in Emacs Lisp.

    As a scripting language, Emacs Lisp is not a great choice either because emacs --script can only print to stderr.

    Besides, as of Emacs 27, Emacs Lisp has poor threading support. This is quickly limiting, in particular when it comes to process management.

    Finally Emacs Lisp has no namespacing in the language, which makes it impractical to host something as central as a shell.

  • Common Lisp

    What about Common Lisp? Some Common Lisp implementations like SBCL even have explicit support to be used for scripting (e.g. the --script flag for SBCL).

    Common Lisp even has various shell implementations, such as the aforementioned clash and SHCL.
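
    For instance, a two-line script (the file name is illustrative) that SBCL can run directly with its --script flag:

    ;; hello.lisp -- run with: sbcl --script hello.lisp
    (format t "Hello from ~a ~a!~%"
            (lisp-implementation-type) (lisp-implementation-version))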

Composition (or integration) of code

While I was exploring the above possibilities, it struck me that, after all, “go-to scripts” may not be the right approach.

Scripts are supposed to be a quintessential part of “dotfiles” by enabling the power user to extend their workflow by means of simple “apps” that can be composed together.

This last point on composition is key, because it’s done wrong! Indeed, scripts are a poor way to compose code:

  • The only interface of scripts is “argument passing”. More importantly: You can’t pass data structures to another script!
  • Each script call fires up a new system process (the interpreter). (This can be mitigated by using an interpreter daemon and having scripts start with a client, like emacsclient. But this still fires up a new system process.)
  • Script internals (functions, options, etc.) are usually not accessible to other scripts. This often results in much code duplication.

From a readline-shell to a REPL

At this point, it became clear to me that I was still trying too hard to follow the status quo. What if I started thinking the other way around? Instead of trying to bring Lisp (or your favourite programming language) to the shell, why not bring the shell to the Lisp REPL?!

Turning the problem around has many immediate benefits: it gets us (for free) top-notch programming tools and features such as a debugger, a stepper, interactive stacktraces and, very importantly as we will see below, an inspector. From there, we can implement what traditional shells specialize in: process management, convenient input-output redirection, pipelines… (It’s surprisingly not that much!)

It seems much easier this way than the other way around!

SLY: A tour of a most-advanced REPL

I chose SLY as a starting point, because it may be one of the most advanced REPLs out there, which just happens to also be running Lisp (Common Lisp).

SLY is a fork of SLIME: while very similar, SLY has a few extra features which are instrumental in the making of a shell. I’ll mention the features missing from SLIME when they are introduced below.

SLY alone may not be enough to offer a full-blown shell experience. Fear not! By combining various Emacs packages, Common Lisp libraries and other utilities, I succeeded in complementing most of the lacking features.

Now let’s review what makes SLY-on-steroids so special!

Prompt formatting

Like many shell aficionados, your favourite sport might be to customize your prompt (also known as PS1) :)

The following screenshot showcases my prompt customization in SLY:


It looks like a rather regular shell, but let’s not jump to conclusions too fast, as there are many subtle yet important features on display.

  • It’s a multi-line prompt. The first line indicates the path. The second line has two noteworthy elements:
    • The 0 is the back-reference of this prompt result (if any). We will come back to it in a while.
    • The $ is the current Common Lisp package (called “namespace” in many other programming languages), here my own “shell” package. This is great because this means that we have namespacing support right at the prompt!

Here the SLY prompt customization frees itself from cryptic shell syntax by using regular Lisp (for instance, colors are referred to by their actual names, like “green”). Instead of being just a string, my prompt can be generated dynamically with Lisp code!

  • Automatic duration reporting

    See the second to last prompt? It sleeps for 2 seconds, after which it displays a status notification about the end time and the duration.

    I’ve programmed the prompt to only display the status for commands lasting more than 1 second. This is useful to avoid cluttering the output with duration reports of 0s.

    Reporting the duration automatically is super useful; it’s actually the only sane way to do it.

    In Bash, you’d typically run time when you want to measure the duration of a command. The problem is that you need to anticipate wanting to know how long it is going to last, so you must already know that the command will take some time to complete.

    Only too often, we realize this after the fact, which prompts us to re-run the command, this time prefixed with time! Which can be a huge blocker if the command happens to take ages to complete.

  • Missing SLIME feature

    If I’m not mistaken, SLIME does not allow for customizing the prompt as of 2021-02-06, but it wouldn’t be hard to backport it.

Searching and editing

Since SLY runs in Emacs, you get all the features of a powerful text editor right at your prompt!

  • Search the whole REPL with incremental highlighting.
  • With helm-occur or similar, you can list the search matches in a new window, narrow them down, navigate them (press C-c C-f), etc.
  • Use your favourite keybindings, either CUA, Emacs-style or VI-style (with the Emacs Evil and Evil Collection packages).
  • Smart S-exp manipulation with Paredit, Lispy or similar, which is very handy to write and manipulate Lisp code smoothly and rapidly.
  • Keyboard macros.
  • Multiple cursors.

Considering how extensible and mature Emacs is, the list is endless…


Multi-REPLs

Typically, multiple shells don’t share the same underlying system process, which means that if you define a function or variable in one shell, it won’t be seen in the others.

While it can sometimes be useful to isolate shells from each other, other times I wish I could share code and data between my shell instances!

SLY supports “multi-REPLs” out of the box. When you open a new REPL, you can decide whether to start a new process or reuse an existing one.

Another benefit is that REPLs sharing an instance use the memory of only one process. More on that in the section on Size and memory usage.

Maximum flexibility, maximum power.

Window selectors

If you use the shell a lot, and if you use it both for regular shell usage and your programming projects like I do in Common Lisp, it’s only too easy to get lost between the many shell windows.

So I worked on a Helm extension called Helm Selector.


It allows you to fuzzy-search among all SLY REPLs. You can even group them by inferior Lisp process.

More importantly, you can select multiple REPLs and run an action on all of them in one go, such as deleting them or restarting them.

Notice that this “selector” also gives you the list of Lisp files and all related windows, such as the debugger and compilation results windows.

Tip: You can configure the appearance of the shell listing with various information, such as the Lisp compiler being used, the name of the window (“buffer” in Emacs parlance), etc. Example:

(defun ambrevar/helm-sly-format-connection (connection buffer)
  (let ((fstring "%s%2s  %s"))
    (format fstring
            (if (eq sly-default-connection connection)
                "*"
              " ")
            (helm-sly-connection-number connection)
            (replace-regexp-in-string
             "\\*$" ""
             (replace-regexp-in-string
              "\\*sly-mrepl for " ""
              (replace-regexp-in-string
               "\\*sly-inferior-lisp for " ""
               (buffer-name buffer)))))))
(setq helm-sly-connection-formatter #'ambrevar/helm-sly-format-connection)

Prompt navigation

An essential feature, in my opinion, is the ability to “go to a given prompt”. When the output of a command is long, it can be cumbersome to seek back to the previous prompt (it’s only worse if the output is heavily colored, since then the prompt does not stand out as much).

In SLY, you can move the cursor between the various prompts in your REPL with sly-mrepl-previous-prompt and sly-mrepl-next-prompt.

It only gets better: with helm-comint-prompts-all, you can list all prompts of all REPLs, fuzzy-search them, narrow down live and finally confirm to go to the desired prompt.

With this weapon in your hands, you won’t ever lose a prompt input nor its output again!

Tip: Since it’s not bound by default, I like to bind it to M-s f:

(define-key sly-mrepl-mode-map (kbd "M-s f") 'helm-comint-prompts-all)

Narrow to prompt

In Searching and editing we talked about searching the whole REPL. Sometimes it can be useful to restrict the search to a single output or a selection of outputs.

One way to do this is to “narrow” the REPL to the desired prompt. If I place my cursor on a prompt or its output and press C-x n d (narrow-to-defun), all other prompts and outputs disappear (only virtually), thus restricting searches and other commands to just what I see.

To display the whole buffer again, press C-x n w (widen).


Back-references

A back-reference is like an automatic variable that is assigned to the result of every prompt command.

backreferences.gif (GIF from the SLY GitHub page.)

Without back-references, you’d have to systematically store results under well-chosen names. In Bash, you could do this:

$ v1=$(command1 ...)
some output...
$ v2=$(command2 ...)
some other output

This quickly gets cumbersome. If you get the number wrong, you may skip a number (which would be confusing) or accidentally overwrite a previous result.

Automatic back-references solve all these issues.
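
In SLY, the result of prompt N can be referenced later with the #v reader syntax. An illustrative session (the prompt format is my customized one from above):

<0:$> (list 1 2 3)
(1 2 3)
<1:$> (length #v0)
3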

Back-references play a crucial role in rethinking how a shell can be used. See the following section Better than pipelines: graphs! for an introduction.

  • Missing SLIME feature

    SLIME has relative back-references with *, ** and *** to refer to the last, next to last and next next to last results respectively.

    Sadly, this makes it impossible to refer to results past the third to last one. Worse, it means that commands need to be adjusted depending on the relative position of the current prompt.

Directory switching (cd)

Some jokingly call cd the poor man’s file manager! :) It’s probably only justice: it’s slow, cumbersome and inefficient. We can do much better.

I believe everyone should be able to use their favourite file manager. Good news: with SLY, it’s possible to “change directory” to the one corresponding to what your file manager points to!

As a big fan of Helm, I use helm-find-files as a file manager. I find navigating directories with it really nice since you can fuzzy-search the directory name, no need to type the name precisely.

Together with the Helm Switch to REPL extension, pressing M-e from anywhere in helm-find-files will switch the desired REPL to the corresponding directory.


  • To go 3 parents up, I press s-f C-l C-l C-l RET.
  • To go to a previously visited directory, I can press M-p to prompt the fuzzy-searchable history.
  • With helm-locate, I can fuzzy-search any file and directory, anywhere on all my hard drives, within a finger snap. Then I press M-e to switch to the directory of the selected file.


History

I use Helm to add live, narrowing-down fuzzy completion to the history search.


Notice how matches are found regardless of the search term order.

Since SLY does not use Helm by default, I simply replace the history binding with the corresponding Helm command in my Emacs config:

(define-key sly-mrepl-mode-map (kbd "M-p") 'helm-comint-input-ring)

Completion and function signatures

In the traditional shell world, completion power is all the rage. Zsh and Fish boast amazing completion features.

With SLY, the topic is turned upside down since now you write Common Lisp, and you are given Common Lisp completion!

So you can complete Common Lisp functions, symbols, etc. You can also fuzzy-search any symbol, from a given package or any package, with sly-apropos (or helm-sly-apropos).

When calling a function, the signature is automatically displayed (thanks to eldoc) so that you know which arguments the function takes.

Common Lisp natively supports

  • positional arguments
  • optional arguments
  • key arguments

which makes it considerably nicer to use than the variety of inconsistent command-line argument conventions that programs expose in the shell.
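
A quick sketch of the three argument kinds in a single lambda list (the function and its defaults are made up for illustration):

;; Positional, optional and keyword arguments in one lambda list.
;; (Mixing &optional and &key is legal, though often discouraged in style guides.)
(defun archive (file &optional (destination "/tmp") &key (compression :gzip))
  (list file destination compression))

(archive "notes.txt")                             ; => ("notes.txt" "/tmp" :GZIP)
(archive "notes.txt" "~/backup" :compression :xz) ; => ("notes.txt" "~/backup" :XZ)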

SLY also gives you completion against filenames. Take the following:

> (sha1 "/path/to/dir/<|>")

In the above, <|> represents my cursor position. If I press Tab here, it will prompt a list of possible completions which I can fuzzy-search!

You can also trigger file completion via other means, like after a #p (which is the syntax for a pathname in Common Lisp). See this discussion.

Moreover, you don’t even have to use completion to insert a path. Instead, you can use your file manager to insert the selected file path in the REPL.

Helm supports these key bindings by default:

  • C-c i: Insert full path at point.
  • C-u C-c i: Insert short (possibly relative) path at point.
  • C-u C-u C-c i: Insert base name.

I’d recommend remapping these keyboard shortcuts considering how useful they are.

  • Missing SLIME feature

    I may be wrong, but it seems that file completion does not work in SLIME because the REPL is not based on comint-mode. That said, it wouldn’t be too hard to implement.

  • Future work

    You may still want external program completion for the times you execute programs from your REPL. For instance, in

    > #! ls -

    if you press Tab at the end of the above line, you may like to see all the arguments ls accepts.

    The good news is that it’s possible, and all the bricks are already there to support it: emacs-fish-completion and emacs-bash-completion make it possible to use both completion systems (Fish first, with a fallback to Bash if Fish has no completion to offer). It’s even possible to display the Fish completion inline documentation with helm-fish-completion.

    All that remains to be done is connect the dots together.

Interactive documentation

What most good text editors provide is interactive documentation: Point at a function and display its documentation!

SLY also has sly-apropos which searches all known symbols (functions, variables, classes, etc.). And helm-sly-apropos which does the same but with live fuzzy completion.

Bash and friends don’t have such approachable features. Man pages don’t really compete here in my opinion, especially when they prevent you from accessing the prompt! You could use two terminals side by side, one reserved for man calls… or some other niftier trick.

Inspecting and editing

A killer feature of SLY is its inspector.

As a simple example, let’s list a bunch of files:

<7:$> (finder ".")
(#<FILE 02. And the Day Turned to Fright (Eat Static Remix).mp3 {10052FF613}>
 #<FILE 04. A New Way to Say Hooray (Prometheus Remix).mp3 {1005301933}>
 #<FILE 04. Without Thought (Youth Remix).ogg {1005303B93}>
 #<FILE 06. Dorset Perception (Total Eclipse Remix).mp3 {1005305BA3}>
 #<FILE 06. Timeless E.S.P.ogg {1005307D53}>
 #<FILE 08. Aranyanyara (Abakus Mix).ogg {1005309AC3}>
 #<FILE 08. Once Upon the Sea of Blissful Awareness (Esionjim Remix).mp3 {100530B973}>
 #<FILE Dialogue of the Speakers - Back.jpg {100530E0E3}>)

Here we have a list of FILE objects. Pressing Enter on the result opens up the inspector:

#<CONS {1005301927}>
A proper list:
0: #<FILE 02. And the Day Turned to Fright (Eat Static Remix).mp3 {10052FF613}>
1: #<FILE 04. A New Way to Say Hooray (Prometheus Remix).mp3 {1005301933}>
2: #<FILE 04. Without Thought (Youth Remix).ogg {1005303B93}>
3: #<FILE 06. Dorset Perception (Total Eclipse Remix).mp3 {1005305BA3}>
4: #<FILE 06. Timeless E.S.P.ogg {1005307D53}>
5: #<FILE 08. Aranyanyara (Abakus Mix).ogg {1005309AC3}>
6: #<FILE 08. Once Upon the Sea of Blissful Awareness (Esionjim Remix).mp3 {100530B973}>
7: #<FILE Dialogue of the Speakers - Back.jpg {100530E0E3}>

Each element is recursively inspectable, which in effect allows me to navigate the whole structure of the result that was printed at the prompt!

So if I press Enter on an element, I now get:

#<FILE {100530B973}>
 Group slots by inheritance [ ]
 Sort slots alphabetically  [X]

All Slots:
[X]  ACCESS-DATE       = @2021-01-24T18:19:50.000000+01:00
[X]  CREATION-DATE     = @2021-01-24T18:19:14.000000+01:00
[ ]  GROUP-ID          = 998
[ ]  INODE             = 19738333
[ ]  KIND              = :REGULAR-FILE
[ ]  LINK-COUNT        = 1
[X]  MODIFICATION-DATE = @2020-09-07T12:44:11.000000+02:00
[ ]  PATH              = "/home/ambrevar/projects/fosdem2021/music/collection/08. Once Upon the Sea of Blissful Awareness (Esionjim Remix).mp3"
[ ]  SIZE              = 6635729
[ ]  USER-ID           = 1000

[set value]  [make unbound]

A FILE is a Common Lisp object, so here the inspector lists the object “slots” (sometimes known as “attributes” in other programming languages). The slot values are further inspectable.

Notice that I’ve selected some slots (the dates, marked with [X]). If I click on [set value], I can change the selected slot values.

  • Editing outputs and S-expressions

    We just saw that we can use the inspector to edit objects. But actually we can do that with any output or S-expression.

    Press one of:

    • p (sly-button-pretty-print) on an output,
    • C-c C-p (sly-pprint-eval-last-expression) after an S-exp,
    • C-c E (sly-edit-value) and input a symbol you want to edit,

    you’ll be shown a buffer (if it’s read-only, make it writable with C-x C-q) where you can edit the value just like any piece of text or code!

  • Future work

    The SLY inspector could allow setting slot values directly where the value is displayed, in the fashion of the Emacs customize interface.

    It could also display list elements as a table. See this discussion for more.

Tailoring the language expressiveness for the shell

In this section, we are going to enhance the Common Lisp language to increase its usability as a shell.

Executing external commands

The first and foremost feature of a shell is its ability to run commands.

Interestingly, not all programming languages can do process management properly. Thankfully, Common Lisp is rather complete in this area:

  • Execute processes directly, without relying on a shell.
  • Execute processes asynchronously.
  • Print process output live.
  • Set the standard input, standard output and standard error. Common Lisp streams can be used here.
  • Connect the output of a process to the input of another one (efficiently).
  • Collect the exit code.
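
The capabilities above can be sketched with UIOP, Common Lisp’s de-facto portability layer (a minimal example; error handling omitted):

;; Launch a process asynchronously, read its output lines,
;; then wait for it and collect the exit code.
(let ((process (uiop:launch-program '("ls" "-l") :output :stream)))
  (let ((lines (uiop:slurp-input-stream :lines (uiop:process-info-output process)))
        (status (uiop:wait-process process)))
    (values lines status)))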

The de-facto “process manager” in Common Lisp is launch-program (from the UIOP library), but its name alone is too long to type for quickly running commands.

Syntax really matters here: the raison d’être of Bash and friends, after all, is that they have the shortest syntax possible for executing a program with arguments.

The good news is that we can get within one or two characters of this conciseness in Common Lisp, thanks to the language’s meta-programming capabilities and the ability to customize its reader. This means that running ls -l can be written as

> #! ls -l

or even

> !ls -l

depending on your taste.

(In Common Lisp it’s good practice to hook reader customizations onto the # dispatch character only, so #! is preferred here.)

I wrote an example implementation of this syntax in my dotfiles.
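
As a rough idea of how this works, here is a minimal sketch of such a #! reader (the function name is illustrative; my dotfiles contain a more complete implementation):

;; Read the rest of the line after #! and expand into a shell invocation.
(defun sharp-bang-reader (stream subchar arg)
  (declare (ignore subchar arg))
  (let ((line (read-line stream nil "")))
    `(uiop:run-program ,(string-trim " " line)
                       :output t
                       :error-output t
                       :ignore-error-status t)))

(set-dispatch-macro-character #\# #\! #'sharp-bang-reader)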

SHCL has a more advanced syntax table with the possibility to interweave Bash and Common Lisp expressions.

This syntactic sugar is key to easing adoption and the transition from a traditional shell to a REPL.

Indeed, many of us still use Bash, many code snippets on the Internet rely on a sh-like shell, so it’s important to be able to run these code snippets seamlessly in our REPL as well.

This can only smooth the transition and ease the adoption.

  • Lispy command execution

    Now I have a subversive question: is shorter syntax necessarily faster to type? Maybe not!

    Remember, we are in a full-fledged, extensible text editor, so why not write a little helper to insert the right characters for us?

    (defun ambrevar/sly-insert-cmd ()
      "Convenient to call commands."
      (insert "(cmd \"\")")
      (backward-char 2)
      (when (and (boundp 'evil-state)
    	     (not (eq evil-state 'insert)))
        (call-interactively #'evil-insert)))
    (define-key sly-mrepl-mode-map (kbd "<C-return>") 'ambrevar/sly-insert-cmd)

    With the above, if I want to call ls -l, I just need to press <C-return> and then type

    ls -l

    which is only one key-press longer than with Bash, while remaining an S-expression.

    In the above function I use the cmd library which parses the arguments and does not rely on an underlying shell.

    If I want to run Bash syntax, I can replace cmd with run-program or some other convenient alternative.

  • ANSI color support

    Many shell programs emit ANSI color codes to colorize their output. SLY does not support this by default, but the fix is easy with the following snippet in my Emacs config:

    (defun ambrevar/sly-colorize-buffer (str)
      (ansi-color-apply str))
    (add-hook 'sly-mrepl-output-filter-functions 'ambrevar/sly-colorize-buffer)
  • Sudo support

    By default, sudo expects to be run in a traditional terminal. It won’t work in SLY out of the box, so to fix this you can specify an external “ASKPASS” program which will handle the password prompting.

    Since we are already in Emacs with SLY, we can use Emacs as an ASKPASS client by creating this simple emacs-askpass executable script:

    emacsclient -e '(read-passwd "sudo password: ")' | xargs

    (Thanks to /u/loafofpiecrust for this tip!)

    Then I add this setting to ~/.slynk.lisp:

    (let ((askpass (format nil "~a/.local/bin/emacs-askpass" (uiop:getenv "HOME"))))
      (when (uiop:file-exists-p askpass)
        (setf (uiop:getenv "SUDO_ASKPASS") askpass)))
  • Future work
    • Have sudo ask for password only once until the timeout expires. (Does anyone know how to do this?)
    • Parse the ^M and ^K in outputs of commands that update their output in place. (Like progress bars.)
    • Extract the SHCL reader as a separate library so that it can be used from SLY. (If it’s already doable, please let me know!)

“Visual commands” (e.g. ncurses)

In line with the previous section: while we can preserve backward compatibility with Bash commands, it would be nice if we could do the same with “visual programs” (to use Eshell terminology), such as ncurses programs like htop.

Again, this would make the transition smoother for those who would like to venture out of the traditional shell territory.

Speaking of Eshell, it has a nice workaround: it knows a list of “visual program names”, and every time the user inputs a command starting with one of those names (e.g. htop), Eshell automatically forwards the execution to a preferred terminal like Xterm, or Vterm if you’d like to stay in Emacs.

I’ve implemented a similar workaround in Common Lisp:
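In outline, it boils down to checking the command name against a known list and, on a match, launching the program in a graphical terminal instead. A minimal sketch (the variable names, the program list and the choice of xterm are illustrative, not the published implementation):

```lisp
;; Programs known to need a real terminal (ncurses and friends).
(defvar *visual-commands* '("htop" "top" "nano" "alsamixer"))

;; Command prefix used to spawn a graphical terminal.
(defvar *terminal* '("xterm" "-e"))

(defun run-visual-maybe (command &rest args)
  "Run COMMAND in a graphical terminal if it is a known visual program,
otherwise run it directly and print its output."
  (if (member command *visual-commands* :test #'string=)
      ;; launch-program does not block, so the REPL stays usable.
      (uiop:launch-program (append *terminal* (cons command args)))
      (uiop:run-program (cons command args)
                        :output *standard-output*
                        :error-output *error-output*)))
```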


All that said, my opinion is that we should ultimately not have to rely on emulators of decades-old hardware for user interfaces, and all your favourite ncurses programs should have a fancier, programmable alternative (in Lisp or any other modern language).

  • Future work

    Implement automatic ncurses detection to automatically start a “visual” program in a terminal, even when it’s not known in advance. Is it even possible?

Lispy pipelines

If we can run Bash commands directly in our Lisp REPL (for instance by prefixing them with #!), then we can run Bash pipelines.

But what about pipelines using a Lispy syntax? Still with little typing?

With the cmd- helper I wrote (which should be published at some point), it’s just as easy. To remain Common-Lispy, I decided not to use the reserved | for pipes. We could have used a single character like ! but I’ve opted for :- for various reasons. (This may change if this gets published as a library.)

As above, I’ve leveraged the editor to help me insert the extraneous characters:

(defun ambrevar/sly-insert-double-quotes ()
  "Convenient to write lists of strings, e.g. when writing a shell command line."
  (interactive)
  (while (sly-inside-string-p)          ; Exit the string at point, if any.
    (forward-char))
  (insert " \"\"")
  (backward-char)                       ; Leave point between the new quotes.
  (when (and (boundp 'evil-state)
             (not (eq evil-state 'insert)))
    (call-interactively #'evil-insert)))

(defun ambrevar/sly-insert-pipe ()
  "Convenient to write a `:-' pipe."
  (interactive)
  (while (sly-inside-string-p)          ; Exit the string at point, if any.
    (forward-char))
  (insert " :- "))

(define-key sly-mrepl-mode-map (kbd "S-SPC") 'ambrevar/sly-insert-double-quotes)
(define-key sly-mrepl-mode-map (kbd "C-S-SPC") 'ambrevar/sly-insert-pipe)
(define-key sly-mrepl-mode-map (kbd "<C-M-return>") 'ambrevar/sly-insert-cmd-) ; Defined as for `ambrevar/sly-insert-cmd'.

Now if I want to write

(cmd- "sort" "./dict"
      :- "uniq" "-c"
      :- "sort" "-nr"
      :- "head" "-3")

all I’ve got to press is


which is, again, just one key press longer than with Bash.

(And “./dict” could be completed since we have file completion.)
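For comparison, here is the same pipeline in plain shell, run on a throw-away ./dict sample (the word list is made up for the example):

```shell
# Build a toy ./dict, then run the Bash equivalent of the cmd- pipeline above.
printf 'the\ncat\nthe\ndog\nthe\ncat\n' > ./dict
top3=$(sort ./dict | uniq -c | sort -nr | head -3)
echo "$top3"   # Most frequent word first: "3 the", then "2 cat", "1 dog".
rm ./dict
```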

  • Future work

    Publish the pipe helpers and the Bash command executors as a publicly available Common Lisp library. See if SHCL can be reused.

Better than pipelines: graphs!

The above still feels very “sh-ish” as a pipeline: you need to write the whole thing in one go. It’s cumbersome to write and even harder to get right on the first try. What we lack here is a more iterative approach to sequencing the various steps of the pipeline.

Remember the diagram at the beginning? What a pipeline essentially does is “collect → process”, possibly multiple times. Crucially, it lacks any “visualization” step. While you can hack it in with a bunch of tee commands duplicating the various outputs, it’s so cumbersome to write that you’ll rarely bother unless you are writing a proper script.

This is where back-references come to play their crucial role: we can call each step separately and pass the output of one command to the input of another by using the appropriate back-reference.

This is very handy since it allows us:

  • to inspect the intermediate results at any point;
  • to correct the intermediate result before passing it on, without having to re-run the previous commands.

What’s also better than pipelines here is that the various steps don’t have to form a linear pipeline: since various sources of data can be combined at any point, it forms a graph!

Or, preferably, a directed acyclic graph (DAG). Allowing cycles in a process graph may result in hard-to-debug issues (like deadlocks or non-reproducible results).

In SLY, back-references already allow us to compose the execution of processes as a graph. For an example, see User stories.

But we could have hoped for something more declarative.

  • Future work
    • Introspectable pipelines

      It would be nice to make the intermediate input/output data of Common Lisp pipelines introspectable, but this is not implemented yet. I believe that Howard worked on a proof-of-concept of this feature with Piper.

    • Introspectable, asynchronous graphs

      The back-reference and graph paradigm allows us to process data asynchronously by passing Common Lisp streams (or CSP channels, or whatever queue you like).

      For instance, if a command takes a long time to complete, we can tell it to write its result to a stream object and then pass this stream as the input to another command, which will then read the stream data as it comes.

      But here the black-box issue strikes back: how do we inspect the data that’s going through the stream? We would need to duplicate the streams and give a stream to the user for inspection (like tee does!). We need a convenient way to do this.
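      Note that standard Common Lisp already offers a building block for this tee-like duplication: make-broadcast-stream, which forwards everything written to it to all of its component streams. A minimal sketch:

```lisp
;; A tee-like stream: everything written to TEE lands in both substreams,
;; one for further processing and one for user inspection.
(let* ((inspection (make-string-output-stream))
       (processing (make-string-output-stream))
       (tee (make-broadcast-stream inspection processing)))
  (write-string "some command output" tee)
  (list (get-output-stream-string inspection)
        (get-output-stream-string processing)))
;; => ("some command output" "some command output")
```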

      dgsh has some syntax for declaring a DAG of processes. The FOSDEM presentation shows how convenient it can be; now we would need to port it to Common Lisp.

    • GWL

      A related project is GWL (Guix Workflow Language). Integrating GWL with a Common Lisp REPL would be a killer since GWL would benefit from an over-powered shell!

Text processing, AWK and tokenizing

As we saw in Searching and editing and Inspecting and editing, we can search and edit with all the power of Emacs.

This makes most filtering tools obsolete, like grep, cut, head, etc.

Maybe you’d think that more advanced text processing programs like AWK still have their use. But even there, the Common Lisp alternative CLAWK provides both support for the original syntax and a more Lispy syntax:

(for-file-lines (filename)
  (with-fields ((name payrate hrsworked))
    (when ($> hrsworked 0)
      ($print name payrate hrsworked))))

If you are not nostalgic for AWK, maybe a dumb-simple string tokenizer will do:

(defun tokenize (string)
  "Return list of STRING lines, where each line is a list of each word."
  (mapcar (lambda (line)
	    (sera:tokens line))
	  (str:split (string #\newline) string)))

(defun token (line column lines)
  "Return token at line LINE and column COLUMN in the list of strings LINES."
  (nth column (nth line lines)))

then selecting the third column of the second line of the output of ls -l is as easy as:

(token 1 2 (tokenize ($cmd "ls -l")))

Here again, this shows how the Common Lisp language is vastly more powerful at text processing than any shell tool.

Collecting and filtering

We just saw how Common Lisp makes many Unix tools obsolete. Let’s push it further: what about the filtering Unix tools like sort, uniq, grep (for filtering and not searching this time), etc.?

Here the Common Lisp language, even without any library, has many nice functions on offer:

  • sort which accepts a key and a predicate parameter.
  • remove to remove elements matching the given item.
  • remove-if to remove elements matching a predicate.
  • remove-duplicates which, unlike uniq, also works on non-adjacent data.

And more sophisticated list/set manipulation helpers:

  • set-difference
  • union
  • intersection
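
A quick taste of these functions, playing the roles of sort(1), uniq(1) and grep -v (return values as per the standard; set-difference may return its result in any order):

```lisp
(sort (list 3 1 2) #'<)            ; => (1 2 3)
(remove-duplicates '(a b a c))     ; => (B A C): duplicates need not be adjacent.
(remove-if #'oddp '(1 2 3 4))      ; => (2 4)
(set-difference '(1 2 3 4) '(2 4)) ; => a list of 1 and 3 (order is unspecified).
```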

We will see how these helpers prove useful (and in my opinion superior to Unix tools) in the User stories section.

File recursive listing and manipulation

After Executing external commands, maybe the most common thing to do in a shell is operate on files.

Interestingly, this is something Bash is very bad at. In particular, it falls on its face as soon as it hits files with whitespace (or worse, line breaks) or if we must deal with files nested in different sub-directories.

The deep reason behind this weakness is because Bash pipelines can only pass text from process to process, and not structured data. This means that if we want to filter data, the file collection process (e.g. find) must return the file list as a string buffer: we just lost the structure of the file list. An alternative which is supported by some find versions is to use \0 as a separator, but then either the receiving program must support this, or you must use a version of xargs which also understands \0. In any case, it’s cumbersome and limiting.
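To see the breakage concretely, here is a throw-away shell session with a file name containing a space: the naive pipeline mangles it into two bogus arguments, while the \0-separated variant needs cooperation on both ends:

```shell
# Demonstrate why \0 separators are needed: a file name with a space.
dir=$(mktemp -d)
touch "$dir/a b.txt"
# Naive pipeline: xargs word-splits the name into two bogus arguments.
naive_count=$(find "$dir" -name '*.txt' | xargs -n1 echo | wc -l)
# \0-separated: both ends must support it (-print0 / -0), but the name survives.
safe_count=$(find "$dir" -name '*.txt' -print0 | xargs -0 -n1 echo | wc -l)
echo "naive=$naive_count safe=$safe_count"
rm -r "$dir"
```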

In Common Lisp, we don’t need to start a new process to do any filtering, so it doesn’t suffer from this limitation. We can use any data structure we like: lists, hash tables, sets, etc.

One library that seems to be missing from the Common Lisp ecosystem is a find alternative, coupled with a file class which would embed properties like the Unix attributes and more, in an extensible way.

So I wrote the ficle library which implements exactly this.

I’ve shown an example use in Inspecting and editing.

There are many great benefits to this finder helper compared to Unix find:

  • Predicates can be arbitrary: so I can write my own predicate, say match-date>, that matches files newer than the given period, and thus

    (finder "/path/to/dir" (match-date> 60) (match-extension "txt"))

    finds text files newer than 1 minute ago.

    By default, the predicates are joined by a logical and (meaning they must all be satisfied). If you want a logical or instead, you can make use of the disjoin higher-order function (from the Alexandria library):

    (finder "/path/to/dir" (disjoin (match-date> 60) (match-extension "txt")))

    This time we find all the text files as well as all the files newer than 1 minute.

  • It’s more readable and flexible than find! :)
  • The find syntax is hard to remember. With finder, I can complete against any function starting with match- or from the ficle namespace.
  • finder returns a list of file objects. It’s structured and inspectable.
  • If the resulting list is not exactly what I want, I can further filter this result (using a back-reference and without recalling finder) using remove and friends, or simply by editing the list by hand if that’s enough. Indeed, hand-editing of an S-expression remains “structured” thanks to the editor support (e.g. Paredit, Lispy).

The file class can be specialized against specific types of files. For instance, the mediafile class contains more information, like the MIME type and all the information returned by ffprobe (the information tool from the FFmpeg suite).

Now we can inspect the audio tags, the video codecs, the picture resolution, etc., by just using the SLY inspector. No need to even learn an API here, it’s explorable from the interface itself.
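For the curious, gathering that information boils down to shelling out to ffprobe and reading its JSON output. A hypothetical helper (not the actual mediafile implementation) might look like:

```lisp
(defun ffprobe-data (path)
  "Return ffprobe's metadata for the media file PATH, as a JSON string."
  (uiop:run-program (list "ffprobe" "-v" "quiet"
                          "-print_format" "json"
                          "-show_format" "-show_streams"
                          (uiop:native-namestring path))
                    :output '(:string :stripped t)))
```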

Emacs and web graphical widgets

With the introduction diagram, we discussed the feedback loop between data filtering and visualization. Good tools are key in order for this loop to roll smoothly.

The traditional shell practice of fumbling around with piped Unix tools until we get the result right is certainly as frustrating as it is unproductive.

Previously, we’ve shown some more efficient ways to filter data iteratively. What we haven’t talked much about yet is how to visualize this data.

We’ve shown how the SLY inspector can inspect data recursively. While extremely useful, it does not allow for other representations. For instance, interactive tables could give us an overarching view of sequences of data, such as a file list.

To work around this, I’ve implemented a small Emacs widget. Now I can call it on my file list and it displays the following tabulated list view:


Notice that I can sort each column (and reverse the sort). I can apply interactive filters to this list, for instance a filter which prompts for the file type to keep, or the date range to exclude.

In the future I’d like to be able to send the filtered and sorted list back to SLY. This is very much a work-in-progress, but it shows how we can communicate back and forth with Emacs and leverage this communication to use Emacs and its widgets as an interface to display the data.

That said, Emacs itself is also somewhat limited in terms of widgets: no real support for interactive graphs (besides maybe the chart library), limited support for pictures, no support for videos or 3D rendering.

But Emacs is not the end of it.

  • Future work

    While better integration between Emacs widgets and SLY is in the works, we need some other interfaces or even maybe other programming languages if we want better widget support.

    Web browsers are obvious contenders to this role: they are good at visualizing any kind of data (even 3D), they are widely available and portable. There are some already existing attempts at visualizing Common Lisp data interactively with a web browser:

    There are also web-less alternatives:

User stories

Shuffled playlists

I wanted to play music files and videos found recursively in sub-directories, at random.

This is surprisingly hard to do in Bash. (Exercise left to the reader :p)

With SLY, cmd, my aforementioned finder helper and Alexandria’s shuffle, it’s a one-liner:

(apply #'cmd "mpv" (shuffle (finder "/path/to/media/library/")))

Interactive pipeline / process graphs

In Better than pipelines: graphs!, we talked about an important paradigm shift in how to use a shell. Rather than writing pipelines, we can write the various steps separately and combine them as we go.

Let’s see what it gives us with a practical example. The following session is a real life task I had to do some weeks ago.

I had two versions of a game’s files; let’s call them superdiffer-v1 and superdiffer-v2.

They have many files in common, some files are new in v2, some files have been modified. I wanted to remove all identical files from superdiffer-v2, in order to make a v1-to-v2 patch.

Should be easy, right? Turns out it’s pretty tough to get right with conventional Unix tools. I tried with rsync but couldn’t figure out the right incantation.

I gave up on the shell and did it all in Common Lisp. Here follows a recording of the session, including the mistakes. I’ve interspersed it with comments to clarify what I was trying to do:

(finder "/path/to/superdiffer-v1")
;;    (#P"/path/to/superdiffer-v1/Manual.html"
;;     #P"/path/to/superdiffer-v1/Readme.txt"
;;     ; ...
;;  )

(finder "/path/to/superdiffer-v2")
;; (#P"/path/to/superdiffer-v2/ChangeLog"
;;  #P"/path/to/superdiffer-v2/Manual.html"
;;  #P"/path/to/superdiffer-v2/Readme.linux"
;; ...)

;; Optional: Here I decided to name the results with easier-to-remember names,
;; but of course I could have just kept using the back-references.
(defvar source #v1)

(defvar target #v2)

;; Collect the files' relative paths paired with their checksums.
(mapcar (lambda (p) (list p (checksum p))) (mapcar #'relative-path source))
;; ((#P"superdiffer-v1/Manual.html" "83483fc4155ec32038fc5f2f0c5c56a205bf03d6")
;;  (#P"superdiffer-v1/Readme.txt" "13b3cc1f97c5f871b550ae0ee722256ef12a8ef4")
;; ...)

(mapcar (lambda (p) (list p (checksum p))) (mapcar #'relative-path target))
;; ((#P"superdiffer-v2/ChangeLog"
;;   "094db450c8352b3a5687e63984eb59ccb89f2533")
;;  (#P"superdiffer-v2/Manual.html"
;;   "83483fc4155ec32038fc5f2f0c5c56a205bf03d6")
;;  (#P"superdiffer-v2/Readme.linux"
;;   "d2481dc94eedb19f56277124889c7130ee73d9e7")

(defvar source+checksum #v5)
(defvar target+checksum #v6)

;; Oops!  The relative path is wrong, we need to store the path relative to
;; the game directory itself.
;; No problem, since we proceed iteratively, we can fix the previous result
;; without having to start over, and without having to recompute the checksums.
(mapcar (lambda-match
	  ((list path sum) (list (relative-path path "./superdiffer-v1/") sum)))
	source+checksum)
;; ((#P"Manual.html" "83483fc4155ec32038fc5f2f0c5c56a205bf03d6")
;;  (#P"Readme.txt" "13b3cc1f97c5f871b550ae0ee722256ef12a8ef4")

(mapcar (lambda-match
	  ((list path sum) (list (relative-path path "./superdiffer-v2") sum)))
	target+checksum)
;; ((#P"ChangeLog" "094db450c8352b3a5687e63984eb59ccb89f2533")
;;  (#P"Manual.html" "83483fc4155ec32038fc5f2f0c5c56a205bf03d6")
;;  (#P"Readme.linux" "d2481dc94eedb19f56277124889c7130ee73d9e7")

;; Since we've _visually_ verified the result is correct, let's update our
;; references:
(setf source+checksum #v9)
(setf target+checksum #v10)

(set-difference target+checksum source+checksum :test #'equalp)
;; ((#P"data/libssl.so.1.0.0" "00df752e95496f3f68f3938e0478942b6d2c124f")
;;  (#P"data/libdraw.so" "1b4847cf117190e8c8de6cddabdf36c61797c2e9")
;;  (#P"data/libcrypto.so.1.0.0" "9ef2f7749ebd3d42b7c6044e2aa3d1b4732cfae3")

;; Now let's take the complement:
(set-difference target+checksum #v14 :test #'equalp)
;; ((#P"palettes/enemies/zamza7.pal" "72716fa0f509f8b1c8bd199ad22c2f30f70357fa")
;;  (#P"palettes/enemies/zamza6.pal" "3a9d2afa2d5ee1b90bdf8ff2650e13e25e3953dc")
;;  (#P"palettes/enemies/zamza5.pal" "051c1230d5beea757b9acfe70fe2867a6b9661ee")

;; Once we've visualized which data is going to be removed, we can proceed with
;; confidence!
(mapc (alex:compose #'delete-file #'first) #v15)

Hopefully the above walkthrough highlights the usefulness of back-references and list manipulation helpers (here set-difference).

This took me less than five minutes and, more importantly:

  • The logic of the process flowed naturally; I knew there would be no blockers. The cognitive effort is much lower, I believe.

    In Bash, I would have been at the mercy of the expressiveness of tools like rsync or find: if those are not able to do what I want, I could very well get stuck.

  • Writing the Lisp was faster for me than just reading the manual of find or rsync.
  • Errors that happened any time before the actual processing (the file deletion at the very end) can be corrected without having to start over.

Performance, memory usage and startup time

Startup time

The startup time of the REPL matters if you are going to fire up dozens of them.

An Xterm window running Bash starts in a fraction of a second and this is what we should be aiming for.

Personally, I found that a REPL starting in more than 1s would become frustrating over time.

The case of Common Lisp is interesting. Most implementations start up really fast by default:

$ time sbcl --no-userinit --quit

real	0m0.004s
user	0m0.000s
sys	0m0.004s

Sadly, this only gets us the language standard, which is very limited as a shell: no library management, no modern string manipulation, no regular expressions, no high-level concurrency library, etc.

If we want to add these features to the language, we must load third-party libraries, which means we must load ASDF first:

$ time sbcl --no-userinit --eval '(require :asdf)' --quit

real	0m0.094s
user	0m0.086s
sys	0m0.008s

Ouch! That alone added a significant overhead. Add to this some more libraries and you quickly end up with a REPL that’s much too slow to start.

So I worked on lisp-repl-core-dumper, a little tool that caches the initialization (technically, it “dumps a Lisp image”). Now if we use it to start an SBCL preloaded with Alexandria and Bordeaux Threads:

$ time lisp-repl-core-dumper -p 'alexandria bordeaux-threads' sbcl --quit
Running '/home/ambrevar/.cache/lisp-repl-core-directory/sbcl-2.1.0-alexandria+bordeaux-threads.image'.

real	0m0.032s
user	0m0.024s
sys	0m0.012s

Much better!

Pro-tip: SLY has an option which you can tweak to dramatically reduce the startup time of an “mrepl”:

(setq sly-connection-update-interval 0.1)

See the variable documentation for the details.

Size and memory usage

If you are going to use many shells, memory usage might be a concern. This is where “mrepls” (multiple REPLs sharing the same instance) are a life saver.

But for the times you’d like to use a different instance, or if you are going to run a REPL on a system with very low memory, you might want to save on both compiler size and memory usage. For this, different Lisp compilers can be good options.

The transitive size (package with all dependencies) of the compilers (except for ABCL) is generally very low:

$ guix size ccl
store item                                                       total    self
/gnu/store/kxiilibc62zxp0dj4ywg4gqw8nvvhp40-ccl-1.12               109.1    37.0  34.0%
/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31              38.4    36.7  33.7%
/gnu/store/01b4w3m6mp55y531kyi1g8shh722kwqm-gcc-7.5.0-lib           71.0    32.6  29.9%
/gnu/store/mmhimfwmmidf09jw1plw3aw1g1zn2nkh-bash-static-5.0.16       1.6     1.6   1.5%
/gnu/store/pwcp239kjf7lnj5i4lkdzcfcxwcfyk72-bash-minimal-5.0.16     39.4     1.0   1.0%
total: 109.1 MiB

Compared to Bash:

$ guix size bash
store item                                                       total    self
/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31              38.4    36.7  43.4%
/gnu/store/01b4w3m6mp55y531kyi1g8shh722kwqm-gcc-7.5.0-lib           71.0    32.6  38.6%
/gnu/store/87kif0bpf0anwbsaw0jvg8fyciw4sz67-bash-5.0.16             84.6     6.3   7.4%
/gnu/store/zzkly5rbfvahwqgcs7crz0ilpi7x5g5p-ncurses-6.2             76.9     5.9   7.0%
/gnu/store/mmhimfwmmidf09jw1plw3aw1g1zn2nkh-bash-static-5.0.16       1.6     1.6   1.9%
/gnu/store/knp4rkdm39ph4brkbzsp07q248nfffi1-readline-8.0.4          78.3     1.4   1.7%
total: 84.6 MiB

But then Bash can be compiled statically:

$ guix size bash-static
store item                                                       total    self
/gnu/store/bcjcd97xvh0qkvq1maqj6qab88xb30dv-bash-static-5.0.16       1.6     1.6 100.0%
total: 1.6 MiB

Some compilers like ECL can compile down to binaries, which can save on disk-usage.

There is even an attempt to generate static executable with SBCL.

In terms of memory usage, I’ve measured the memory taken at startup (doing nothing), once without loading anything, and once just loading the implementation initialization file which was set to load ASDF. All tests are done with Guix System commit 51418c32d95d8188d8877616829f26479f1135c6. The results are in Kbytes as per GNU time.

$ command time -f '%M' bash -c 'exit'
$ command time -f '%M' /gnu/store/kxiilibc62zxp0dj4ywg4gqw8nvvhp40-ccl-1.12/bin/ccl -n -e '(quit)'

$ command time -f '%M' /gnu/store/kxiilibc62zxp0dj4ywg4gqw8nvvhp40-ccl-1.12/bin/ccl -e '(quit)'
$ command time -f '%M' /gnu/store/lzfxjn036h3kis13lcc222rpwcnqazkr-ecl-20.4.24/bin/ecl --norc --eval '(quit)'

$ command time -f '%M' /gnu/store/lzfxjn036h3kis13lcc222rpwcnqazkr-ecl-20.4.24/bin/ecl --eval '(quit)'
$ command time -f '%M' sbcl --no-userinit --quit

$ command time -f '%M' sbcl --quit
$ command time -f '%M' /gnu/store/0r1x1pp2da4cilbj3y5bklr9b8y8z272-clisp-2.49-92/bin/clisp -norc -x '(quit)'

$ command time -f '%M' /gnu/store/0r1x1pp2da4cilbj3y5bklr9b8y8z272-clisp-2.49-92/bin/clisp -x '(quit)'
$ command time -f '%M' /gnu/store/16ajvwh94mrz8amghxvd9l2bz8n96pzr-abcl-1.8.0/bin/abcl --noinit --eval '(quit)'

$ command time -f '%M' /gnu/store/16ajvwh94mrz8amghxvd9l2bz8n96pzr-abcl-1.8.0/bin/abcl --eval '(quit)'

Conclusion: CLISP performs the best, ABCL the worst, ECL comes close to CLISP.

Speed

Benchmarks should always be taken with a grain of salt. The following exponential-cost Fibonacci implementation is useful to measure just a few things: function calls, memory usage and simple arithmetic (here the addition).

All tests are done with Guix System commit 51418c32d95d8188d8877616829f26479f1135c6 on an AMD Ryzen 5 2600 processor.

POSIX sh Fibonacci implementation:

fibonacci () {
  if [ $1 -lt 2 ] ; then
    echo $1
  else
    echo $(( $(fibonacci $(($1 - 1))) + $(fibonacci $(($1 - 2))) ))
  fi
}
$ time fibonacci 30

real	27m37.401s
user	23m54.411s
sys	4m47.767s

Notice it’s just 30 here! 40 (as in the following tests) would have taken much too long to be practical. Even though Bash can do slightly better, shells are obviously not made for computation. It’s an embarrassing limitation, because it means that even a simple computation can block a script forever when it would have taken a few seconds in a higher-performance programming language.

Now to the Lisp measurements.

(defun fibonacci (n)
  (if (< n 2)
      n
      (+ (fibonacci (- n 1)) (fibonacci (- n 2)))))

;; SBCL
(time (fibonacci 40))

Evaluation took:
  1.915 seconds of real time

;; CCL
(time (fibonacci 40))

took 753,161 microseconds (0.753161 seconds) to run.

;; ABCL
(time (fibonacci 40))

75.654 seconds real time

;; ECL
(time (fibonacci 40))

real time : 61.357 secs
run time  : 77.137 secs

;; CLISP
(time (fibonacci 40))

Real time: 117.89895 sec.
Run time: 116.89732 sec.

Informal conclusion

Both SBCL and CCL are performing really well here, so it’s unlikely that performance is ever going to be a problem when used as a shell.

CCL is both the fastest (here) and a good middle ground in terms of memory usage.

If memory is the bottleneck, CLISP or ECL are good alternatives.

Guix environments and containers

Multiple profiles, multiple environments

In Performance, memory usage and startup time, I talked about multiple Lisp implementations.

SLY lets you use any of these implementations and you can quickly jump between them. Here is my setup which leverages Guix to create independent environments:

(setq sly-lisp-implementations
      (let ((maybe-core-dumper (when-let ((exec (executable-find "lisp-repl-core-dumper")))
				 (list exec))))
	`((sbcl-ambrevar ("lisp-repl-core-dumper" "-p" "ambrevar" "sbcl"
			  "--eval" "(in-package :ambrevar/all)"
			  "--eval" "(named-readtables:in-readtable ambrevar/syntax:readtable)"))

	  (sbcl (,@maybe-core-dumper "sbcl"))

	  (sbcl-failsafe ("sbcl"))

	  (sbcl-nyxt (lambda () (ambrevar/sbcl-for-nyxt :no-grafts? t)))

	  (sbcl-nyxt-guix ; Entry name reconstructed.
	   (,(expand-file-name "~/projects/guix/pre-inst-env")
	    "guix" "environment" "-l"
	    ,(expand-file-name "~/common-lisp/nyxt/build-scripts/guix.scm")
	    "--ad-hoc" "glib" "glib-networking" "gsettings-desktop-schemas"
	    "--" "sbcl"))

	  (sbcl-nyxt-site ("guix" "environment" "--pure"
			   "-m" ,(expand-file-name "~/common-lisp/nyxt-site/guix-manifest.scm")
			   "--" "sbcl"))

	  (ccl (,@maybe-core-dumper "ccl"))

	  (clisp (,@maybe-core-dumper "clisp"))

	  (ecl ("ecl")))))

This allows me to easily jump between different Lisp implementations, with different startup options.

When an implementation is started with guix environment, the environment includes the specified packages, such as extra libraries and tools (Common Lisp or not), but these packages are not seen by other REPLs! I can pass the --pure option to further isolate the environment, i.e. not inherit any existing environment variable.

In other words: these various environments don’t spill on each other, they remain clearly separated. This gives many guarantees in terms of setup and reproducibility.

Reduced-privilege, containerized shells

While not exclusive to Common Lisp, it’s important to note that SLY can connect to a remote Common Lisp process, including a process running in a container.

I made this little wrapper script to quickly start SBCL in a Guix container:


#!/bin/sh
port=4005 # Slynk's usual default port (assumed here).
[ -n "$1" ] && port=$1

guix environment --network --container --manifest=$HOME/guix-manifests/common-lisp-manifest.scm -- \
     sbcl --eval "(require :asdf)" \
     --eval '(dolist (p (list "" "sly/contrib/" "sly/slynk/")) (push (pathname (format nil "~a/share/emacs/site-lisp/~a" (uiop:getenv "GUIX_ENVIRONMENT") p)) asdf:*central-registry*))' \
     --eval "(asdf:load-system :slynk)" \
     --eval "(slynk:create-server :port $port)" \
     --eval "(asdf:load-system :cmd)"

Then I can connect to this instance with M-x sly-connect. The result is a usual shell, but with limited file system access and all the limitations that I can configure with a Linux container.

Portable scripts

In the introduction we mentioned that one of the reasons behind the popularity of POSIX sh is that it is particularly good to exchange code between friends, since it’s one of the few languages that’s almost guaranteed to work anywhere (well, on any Unix-type system at least).

Being a script, it’s also inspectable by the user, which is a necessary security requirement. Indeed, writing a script in your favourite programming language and then shipping a Docker image or the like to your friends is impolite: they can’t trust its content. (See https://www.omgubuntu.co.uk/2018/05/ubuntu-snap-malware and https://lwn.net/Articles/752982/.)

Is there a way out? As of January 2021 it’s unclear, but if the functional package management model boasted by Guix or Nix were to take over, this could put an end to the problem.

So if your friend has Guix installed, you can ship a portable, fast starting Common Lisp script (not a binary!) by using the following preamble.

SYSTEMS="sbcl-alexandria sbcl-cl-str"

name="$(basename "$0")"

mtime () { ## TODO: Portable version?
  stat --printf=%Y "$1"
}

# $root (the cached environment) and $guix_checkout are set earlier (not shown).
if [ ! -e "$root" ] || \
     [ $(mtime "$guix_checkout") -gt $(mtime "$root") ]; then
  echo build
  mkdir -p "$(dirname "$root")"
  exec guix environment --root="$root" --ad-hoc sbcl $SYSTEMS lisp-repl-core-dumper -- \
       lisp-repl-core-dumper sbcl --script "$0" "$@"
else
  exec "$root"/bin/lisp-repl-core-dumper sbcl --script "$0" "$@"
fi

;; (require :asdf) ; No need if using lisp-repl-core-dumper.
(asdf:load-system :alexandria)
(format t "Args ~s!~%" (uiop:command-line-arguments))
(format t "Hello ~a!~%" (alexandria:iota 1))

;; Rest of your script follows...

Notice how you can access the whole Common Lisp ecosystem by specifying which Common Lisp library to use with the SYSTEMS variable.

The script might take some time to cache the first time, but then:

$ time ./portable-sbcl-script-test Guix rocks!
Running '/home/ambrevar/.cache/lisp-repl-core-directory/sbcl-2.1.0.image'.
Args ("Guix" "rocks!")!
Hello (0)!

real	0m0.027s
user	0m0.021s
sys	0m0.010s

Language and interface alternatives

As mentioned in the introduction, much of what I’ve presented here does not have to be an exclusivity of the Common Lisp language or Emacs. In fact, Emacs itself is limited when it comes to interactive visualization widgets.

So what about the alternatives?

Here I’ve collected a list for me to explore of other options that may be inspiring with regard to the language capabilities, the interface, or the whole new paradigm they are experimenting with.

I haven’t tried them much or at all, so take my comments with a grain of salt.

  • Racket, being the “programmable programming language”, boasts high performance and a rather extensive ecosystem.

    DrRacket, the Racket IDE, has nice interactive features, although the interface might not be very suited for a shell. That said, it shows the possibilities that the language can offer.

    Racket offers programmable, interactive widgets for visualizing and manipulating data. See VideoLang (some code here and a paper there) for example.

    Finally, Racket also has its own shell, Rash!

  • Clojure has Babashka.

    I’ve never used Babashka, but I’ve used Clojure, and CIDER draws heavily from SLIME, so it boasts interactive power similar to SLY’s, though some components such as back-references may still have to be implemented.

In terms of alternative interfaces to the traditional way of thinking about a shell:

  • Org Babel

    Still Emacs, so this does not fix the widget issue. However, what’s interesting with Org Babel is that you no longer go by the (archaic) ordered sequence of prompts. Instead, you write documents of commands and their associated results, which you can reorganize the way you like. All commands are run asynchronously, thus there is no such thing as a “blocking prompt”.

    A major drawback, I believe, is that you can’t see the output live, as it is written. (Correct me if I’m wrong.)

  • Jupyter

    Like Org Babel, the big novelty here is that the prompts are first class widgets which you can move around, fold, run in the background, etc. It has a Common Lisp kernel.

    Drawback: I’m not very familiar with Jupyter, but I’ve heard that the kernel design prevents the interface from accessing the language internals that would enable advanced debugging tools. Should this be true (please let me know if you know better), it could severely restrict Jupyter as a shell.

  • Glamorous Toolkit

    I have never tried it, but it seems to be on the same page as Jupyter (?), and the website shows off fancy visualization widgets.

    I don’t know how good it is with job management and how practical it would be as a shell.

    (Thanks to u/ram535 for the link.)

  • Xiki

    Xiki takes yet another approach to the problem; some parts are reminiscent of a notebook, others of shell helpers.

    The video on the front page is very inspiring.

    (Thanks to @trantorvega from FOSDEM for the link.)

  • Nyxt

    The Nyxt browser, beyond being a web browser, is at its core a development environment built on a web renderer. It even embeds a REPL (rather primitive as of February 2021), but it would technically be possible to re-implement all the features of SLY in Nyxt and then leverage the web rendering to visualize all kinds of data.

  • scsh and Commander S

    scsh stands for the “scheme shell”, so it’s Lispy by definition. It’s one of the rare shell projects that was designed together (?) with a graphical user interface, Commander S, which apparently (I haven’t tried it) departed enough from the terminal / readline paradigm that it allowed for many of the features I’ve presented here.

    (Thanks to u/PropagandaOfTheDude for the link.)

  • Elvish

    A shell with some good ideas about the language and structured data passing in the pipeline.

    Sadly the shell is stuck in the terminal, it seems. (Correct me if I’m wrong.)

    (Thanks to @trantorvega from FOSDEM for the link.)

  • NGS

    A project with ideas similar to the thesis presented here: it challenges the readline-shell paradigm, and the authors are working on a different kind of interface (video).

    They have an extensive list of design ideas. Some points are reminiscent of SLY and its inspector.

    While the libraries and the interface make sense to me, I believe that (re-)inventing a language for this purpose is unnecessary when you could build on a programmable language like Racket.

    One of the authors summed up the problem and their approach very well in this blog post.

    (Thanks to @trantorvega from FOSDEM for the link.)


References

  • Death to the Shell. Special thanks to Howard for this very inspiring talk!
  • Emacs
  • SLY
  • Common Lisp cookbook. A good resource if you want to learn more about Common Lisp.
  • Awesome CL. A curated catalogue of Common Lisp libraries and tools.
  • Awesome Shell. A curated list of command-line frameworks.

    Most elements from this list start off from the “readline shell” paradigm.
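To illustrate what the readline-shell paradigm means in practice, here is a minimal, generic POSIX pipeline (my own example, not taken from any of the tools above): every stage receives and emits flat text, so any structure, such as a per-item count, exists only as whitespace-padded lines that the next stage must re-parse.

```shell
# Classic readline-shell pipeline: each stage sees only unstructured text.
# uniq -c prepends a space-padded count to every distinct line.
printf 'lisp\nshell\nlisp\n' | sort | uniq -c | sort -rn
# → "   2 lisp" then "   1 shell" (counts padded with spaces)
```

Projects like Elvish or NGS replace this with pipelines of structured values, while the REPL approach advocated in this article sidesteps the serialize-and-re-parse step entirely by passing live objects.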


Date: 2021-02-07 (Last update: 2021-02-21)

Made with Emacs 26.1 (Org mode 9.1.9)

Creative Commons License