OCaml Planet
The OCaml Planet aggregates various blogs from the OCaml community. If you would like to be added, read the Planet syndication HOWTO.
New Try-Alt-Ergo — OCamlPro, Mar 29, 2021
Have you heard about our Try-Alt-Ergo website? Created in 2014 (see our blog post), its first objective was to facilitate access to our performant SMT solver Alt-Ergo. Try-Alt-Ergo allows you to write and run your problems in your browser, without any server computation.
This playground website has been maintained by OCamlPro for many years, and it's high time to bring it back to life with new updates. We are therefore pleased to announce the new version of the Try-Alt-Ergo website! In this article, we will first explain what has changed in the back end, and what you can use if you are interested in running your own version of Alt-Ergo on a website or in an application. We will then focus on the new front end of our website, from its interface to its features, including its tutorial about the program.
Try-Alt-Ergo 2014
Try-Alt-Ergo was designed to be a powerful yet simple tool, with a minimalist interface. It offered three panels: a left panel with a text area containing the problem to prove; a center panel with buttons to run Alt-Ergo, load examples, and set options; and a right panel showing those options, examples, and other information. This design lacked some features that had been added to our solver over the years, such as models (counter-examples), unsat cores, more options, and debug information.
Try-Alt-Ergo did not offer a proper editor (with syntax highlighting), a way to save the problem file, nor an option to limit the solver's running time. Another issue was threading: when the solver was called, the web page froze, which was problematic on long runs because there was no way to stop the solver.
Alt-Ergo 1.30
Version 1.30 of Alt-Ergo was the version used in the back end to prove problems. Since that release, many improvements have been made to Alt-Ergo; to learn more about them, see the changelog in the documentation.
Over the years we encountered difficulties updating the Alt-Ergo version used in Try-Alt-Ergo. We used Js_of_ocaml to compile the OCaml code of our solver to JavaScript. Some libraries were not available in JavaScript and we needed to disable them manually. This lack of automation meant the JavaScript version of Alt-Ergo in Try-Alt-Ergo was rarely updated.
In 2019 we switched our build system to dune, which made it much easier to cross-compile Alt-Ergo to JavaScript.
New back-end
With a few simple modifications, we were able to compile Alt-Ergo to JavaScript. The process is simple enough that it is now automated in our continuous integration, which will let us easily provide a JavaScript version of our solver for each future release.
Two ways of using our solver in JavaScript are available:
- alt-ergo.js, a JavaScript version of the Alt-Ergo CLI. It can be run with node: node alt-ergo.js <options> <file>. Note that this code is slower than the natively compiled Alt-Ergo CLI. In our effort to open the SMT world to more people, an npm package is the next step of this work.
- alt-ergo-worker.js, a web worker for Alt-Ergo. The worker uses JSON to pass the problem file and options to Alt-Ergo and to return its answers:
- Options are sent as a list of name:value pairs, like:
{"debug":true,
"input_format":"Native",
"steps_bound":100,
"sat_solver": "Tableaux",
"file":"test-file"}
You can specify all options used in Alt-Ergo. If some options are missing, the worker uses their default values. For example, if debug is not specified, the worker uses its default value: false.
- The input file is sent as a list of strings, in the following format:
{ "content": [ "goal g: true"] }
- Alt-Ergo's answers can be composed of its results, debug information, errors, warnings, etc.:
{ "results": [ "File \"test-file\", line 1, characters 9-13: Valid (0.2070) (0 steps) (goal g)" ],
  "debugs": [ "[Debug][Sat_solver]", "use Tableaux-like solver"] }
As with the options, if a result value such as debugs does not contain anything, "debugs": [...] is not returned.
- See the Alt-Ergo web-worker documentation to learn more on how to use it.
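For instance, a web page could drive the worker along these lines (a hypothetical sketch: the message fields follow the JSON structures above, but the real protocol is the one specified in the web-worker documentation):

    const worker = new Worker("alt-ergo-worker.js");
    // The worker answers with the JSON structure shown above.
    worker.onmessage = (event) => {
      console.log(event.data.results);
    };
    // Send the options and the input file to the worker.
    worker.postMessage({
      file: "test-file",
      sat_solver: "Tableaux",
      content: ["goal g: true"]
    });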
New Front-end
Try-Alt-Ergo has been completely reworked, and we added some features:
- The left panel is still composed of the editor and answers areas.
- The Ace editor, with custom syntax highlighting (for both the native and smt-lib2 formats), is now used to make it more pleasant to write your problems.
- A top panel contains the following buttons:
  - Ask Alt-Ergo, which retrieves the content of the editor and the options, launches the web worker, and prints the answers in the defined areas.
  - Load and Save, for files.
  - Documentation, which sends users to the newly added native syntax documentation of Alt-Ergo.
  - Tutorial, which opens an interactive tutorial introducing Alt-Ergo's native syntax and program verification.
- A right panel composed of tabs:
  - Start and About, which contain general information about Alt-Ergo, Try-Alt-Ergo, and how to use them.
  - Outputs, which prints more information than the basic answers area under the editor; here you can find (long) debug outputs, unsat cores, or models (counter-examples) generated by Alt-Ergo.
  - Options, which contains every option you can use, such as the time limit, the steps limit, or the format of the input file to prove.
  - Statistics, still a basic tab that only outputs the axioms used to prove the input problem.
  - Examples, which contains some basic examples showing the capabilities of our solver.
We hope you will enjoy this new version of Try-Alt-Ergo, we can’t wait to read your feedback!
This work was done at OCamlPro.
TZComet's New Token Viewer — Sebastien Mondet, Mar 26, 2021
Release of Frama-Clang 0.0.10 — Frama-C, Mar 08, 2021
Qubes-lite with KVM and Wayland — Thomas Leonard, Mar 07, 2021
I’ve been running QubesOS as my main desktop since 2015. It provides good security, by running applications in different Xen VMs. However, it is also quite slow and has some hardware problems. I’ve recently been trying out NixOS, KVM, Wayland and SpectrumOS, and attempting to create something similar with more modern/compatible/faster technology.
This post gives my initial impressions of these tools and describes my current setup.
(This post also appeared on Hacker News and Lobsters.)
QubesOS
QubesOS aims to provide “a reasonably secure operating system”. It does this by running multiple virtual machines under the Xen hypervisor. Each VM’s windows have a different colour and tag, but they appear together as a single desktop. The VMs I run include:
- com, for email and similar (the only VM that sees my email password).
- dev, for software development.
- shopping (the only VM that sees my card number).
- personal (with no Internet access).
- untrusted (general browsing).
The desktop environment itself is another Linux VM (dom0), used for managing the other VMs.
Most of the VMs are running Fedora (the default for Qubes), although I run Debian in dev.
There are also a couple of system VMs; one for dealing with the network hardware,
and one providing a firewall between the VMs.
You can run qvm-copy in a VM to copy a file to another VM. dom0 pops up a dialog box asking which VM should receive the file, and it arrives there as ~/QubesIncoming/$source_vm/$file.
You can also press Ctrl-Shift-C to copy a VM’s clipboard to the global clipboard, and then
press Ctrl-Shift-V in a window of the target VM to copy to that VM’s clipboard,
ready for pasting into an application.
I think Qubes does a very good job at providing a secure environment.
However, it has poor hardware compatibility and it feels sluggish, even on a powerful machine. I bought a new machine a while ago and found that the motherboard only provided a single video output, limited to 30Hz. This meant I had to buy a discrete graphics card. With the card enabled, the machine fails to resume from suspend, and locks up from time to time (it’s completely stable with the card removed or disabled). I spent some time trying to understand the driver code, but I didn’t know enough about graphics, the Linux kernel, PCI suspend, or Xen to fix it.
I was also having some other problems with QubesOS:
- Graphics performance is terrible (especially on a 4k monitor). Qubes disables graphics acceleration in VMs for security reasons, but it was slow even for software rendering.
- It recently started freezing for a couple of seconds from time to time - annoying when you’re trying to type.
- It uses LVM thin-pools for VM storage, which I don’t understand, and which sometimes need repairing (haven’t lost any data, though).
- dom0 is out-of-date and generally not usable. This is intentional (you should be using VMs), but my security needs aren’t that high and it would be nice to be able to do video conferencing these days. Also, being able to print over USB and use bluetooth would be handy.
Anyway, I decided it was time to try something new. Linux now has its own built-in hypervisor (KVM), and I thought that would probably work better with my hardware. I was also keen to try out Wayland, which is built around shared memory, and I thought it might therefore work better with VMs. How easy would it be to recreate a Qubes-like environment directly on Linux?
NixOS
I’ve been meaning to try NixOS properly for some time. Ever since I started using Linux, its package management has struck me as absurd. On Debian, Fedora, etc, installing a package means letting it put files wherever it likes; which effectively gives the package author root on your system. Not a good base for sandboxing!
Also, they make it difficult to try out 3rd-party software, or to test newer versions of just some packages.
In 2003 I created 0install to address these problems, and Nix has very similar goals. I thought Nix was a few years younger, but looking at its Git history the first commit was on Mar 12, 2003. I announced the first preview of 0install just two days later, so both projects must have started writing code within a few days of each other!
NixOS is made up of quite a few components. Here is what I’ve learned so far:
nix-store
The store holds the files of all the programs, and is the central component of the system.
Each version of a package goes in its own directory (or file), at /nix/store/$HASH.
You can add data to the store directly, like this:
    $ echo hello > myfile
    $ nix-store --add myfile
    /nix/store/$HASH-myfile

Here, the store location is calculated from the hash of the contents of the file we added (as with 0install store add or git hash-object).
However, you can also add things to the store by asking Nix to run a build script. For example, to compile some source code:
- You add the source code and some build instructions (a “derivation” file) to the store.
- You ask the store to build the derivation. It runs your build script in a container sandbox.
- The results are added to the store, using the hash of the build instructions (not the hash of the result) as the directory name.
If a package in the store depends on another one (at build time or run time), it just refers to it by its full path. For example, a bash script in the store will start something like:

    #! /nix/store/$HASH-bash/bin/bash
    export PATH=/nix/store/$HASH-coreutils/bin
If two users want to use the same build instructions, the second one will see that the hash already exists and can just reuse that. This allows users to compile software from source and share the resulting binaries, without having to trust each other.
Ideally, builds should be reproducible. To encourage this, builds which use the hash of the build instructions for the result path are built in a sandbox without network access. So, you can’t submit a build job like “Download and compile whatever is the latest version of Vim”. But you can discover the latest version yourself and then submit two separate jobs to the store:
- “Download Vim 8.2, with hash XXX” (a fixed-output job, which therefore has network access)
- “Build Vim from hash XXX”
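To make this concrete, here is a sketch of what the first, fixed-output job could look like as a raw derivation (the attribute values are illustrative, and fetch.sh stands for a hypothetical download script):

    derivation {
      name = "vim-8.2.tar.gz";
      system = builtins.currentSystem;
      builder = ./fetch.sh;   # hypothetical script that downloads the source
      # Declaring the expected output hash up front is what makes this a
      # fixed-output derivation, and therefore grants it network access.
      outputHashMode = "flat";
      outputHashAlgo = "sha256";
      outputHash = "XXX";
    }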
You can run nix-collect-garbage to delete everything from the store that isn't reachable via the symlinks under /nix/var/nix/gcroots/. Users can put symlinks to things they care about keeping in /nix/var/nix/gcroots/per-user/$USER/.
By default, the store is also configured with a trusted binary cache service, and will try to download build results from there instead of compiling locally when possible.
nix-instantiate
Writing derivation files by hand is tedious, so Nix provides a templating language to create them easily.
The Nix language is dynamically typed and based around maps/dictionaries (which it confusingly refers to as “sets”).
nix-instantiate file.nix will generate a derivation from file.nix and add it to the store. A Nix file looks like this:

    derivation { name = "foo"; system = builtins.currentSystem; builder = ./myfile; }
Running nix-instantiate on this will:
- Add myfile to the store.
- Add the generated foo.drv to the store, including the full store path of myfile.
nixpkgs
Writing Nix expressions for every package you want would also be tedious. The nixpkgs Git repository contains a Nix expression that evaluates to a set of derivations, one for each package in the distribution. It also contains a library of useful helper functions for packages (e.g. it knows how to handle GNU autoconf packages automatically).
Rather than evaluating the whole lot, you use -A to ask for a single package. For example, you can use nix-instantiate ./nixpkgs/default.nix -A firefox to generate a derivation for Firefox.
nix-build is a quick way to create a derivation with nix-instantiate and build it with nix-store. It will also create a ./result symlink pointing to its path in the store, as well as registering ./result with the garbage collector under /nix/var/nix/gcroots/auto/.
For example, to build and run Firefox:

    $ nix-build ./nixpkgs/default.nix -A firefox
    $ ./result/bin/firefox
If you use nixpkgs without making any changes, it will be able to download a pre-built binary from the cache service.
nix-env
Keeping track of all these symlinks would be tedious too,
but you can collect them all together by making a package that depends on every application you want.
Its build script will produce a bin directory full of symlinks to the applications. Then you could just point your $PATH variable at that bin directory in the store.
To make updating easier, you will actually add ~/.nix-profile/bin/ to $PATH and update .nix-profile to point at the latest build of your environment package.
This is essentially what nix-env does, except with yet more symlinks to allow for switching between multiple profiles, and to allow rolling back to previous environments if something goes wrong.
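nixpkgs even ships a helper that builds exactly this kind of symlink-farm package; a minimal sketch (the package list is illustrative):

    pkgs.buildEnv {
      name = "my-applications";
      # Every application you want on $PATH.
      paths = with pkgs; [ firefox vim ];
    }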
For example, to install Firefox so you can run it via $PATH:

    $ nix-env -f ./nixpkgs -iA firefox
NixOS
Finally, just as nix-env can create a user environment with bin, man, etc., a similar process can create a root filesystem for a Linux distribution.
nixos-rebuild reads the /etc/nixos/configuration.nix configuration file, generates a system environment, and then updates grub and the /run/current-system symlink to point to it.
In fact, it also lists previous versions of the system environment in the grub file, so if you mess up the configuration you can just choose an earlier one from the boot menu to return to that version.
Installing NixOS
To install NixOS you boot one of the live images at https://nixos.org. Which you use only affects the installation UI, not the system you end up with.
The manual walks you through the installation process, showing how to partition the disk, format and mount the partitions, and how to edit the configuration file. I like this style of installation, where it teaches you things instead of just doing it for you. Most of the effort in switching to a new system is learning about it, so I’d rather spend 3 hours learning stuff following an installation guide than use a 15-minute single-click installer that teaches me nothing.
The configuration file (/etc/nixos/configuration.nix) is just another Nix expression. Most things are set to off by default (I approve), but can be changed easily. For example, if you want sound support you set sound.enable = true, and if you also want to use PulseAudio then you set hardware.pulseaudio.enable = true too.
Every system service supported by NixOS is controlled from here, with all kinds of options, from programs.vim.defaultEditor = true (so you don't get trapped in nano) to services.factorio.autosave-interval. Use man configuration.nix to see the available settings.
NixOS defaults to an X11 desktop, but I wanted to try Wayland (and Sway), so, following the NixOS wiki instructions, I enabled Sway in my configuration. The xwayland bit is important; without that you can't run any X11 applications.
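A minimal configuration along those lines (a sketch; the package choices are illustrative — the point is that xwayland has to be pulled in somewhere):

    programs.sway = {
      enable = true;
      extraPackages = with pkgs; [
        xwayland            # without this, no X11 applications
        swaylock swayidle wl-clipboard
      ];
    };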
My only complaint with the NixOS installation instructions is that following them will leave you with an unencrypted system,
which isn’t very useful.
When partitioning, you have to skip ahead to the LUKS section of the manual, which just gives some options but no firm advice.
I created two primary partitions: a 1G unencrypted /boot, and a LUKS partition for the rest of the disk. Then I created an LVM volume group from the /dev/mapper/crypted device and added the other partitions in that.
Once the partitions are mounted and the configuration file is complete, nixos-install downloads everything and configures grub. Then you reboot into the new system.
Once running the new system, you can make further edits to the configuration file there in the same way, and use nixos-rebuild switch to generate a new system. It seems to be pretty good at updating the running system to the new settings, so you don't normally need to reboot after making changes.
The big mistake I made was forgetting to add /boot to fstab. When I ran nixos-rebuild it put all the grub configuration on the encrypted partition, rendering the system unbootable. I fixed that with chattr +i /boot on the unmounted partition. That way, trying to rebuild with /boot unmounted will just give an error message.
Thoughts on NixOS
I’ve been using the system for a few weeks now and I’ve had no problems with Nix so far. Nix has been fast and reliable and there were fairly up-to-date packages for everything I wanted (I’m using the stable release). There is a lot to learn, but plenty of documentation.
When I wanted a newer package (socat with vsock support, only just released) I just told Nix to install it from the latest Git checkout of nixpkgs. Unlike on Debian and similar systems, doing this doesn't interfere with any other packages (such as forcing a system-wide upgrade of libc).
I think Nix does download more data than most other systems, but networks are fast enough now that it doesn’t seem to matter. For example, let’s say you’re running Python 3.9.0 and you want to update to 3.9.1:
- With Debian: apt-get upgrade downloads the new version, which gets unpacked over the old one. As the files are unpacked, the system moves through an exciting series of intermediate states no-one has thought about. Running programs may crash as they find their library versions changing under them (though it's usually OK). Only root can update software.
- With 0install: 0install update downloads the new version, unpacking it to a new directory. Running programs continue to use the old version. When a new program is started, 0install notices the update and runs the solver again. If the program is compatible with the new Python then it uses that. If not, it continues with the old one. You can run any previous version if there is a problem.
- With Nix: nix-env -u downloads the new version, unpacking it to a new directory. It also downloads (or rebuilds) every package depending on Python, creating new directories for each of them. It then creates a new environment with symlinks to the latest version of everything. Running programs continue to use the old version. Starting a new program will use the new version. You can revert the whole environment back to the previous version if there is a problem.
- With Docker: docker pull downloads the new version of a single application, downloading most or all of the application's packages, whether Python related or not. Existing containers continue running with the old version. New containers will default to using the new version. You can specify which version to use when starting a program. Other applications continue using the old version of Python until their authors update them (you must update each application individually, rather than just updating Python itself).
The main problem with NixOS is that it’s quite different to other Linux systems, so there’s a lot to relearn.
Also, existing knowledge about how to edit fstab, sudoers, etc., isn't so useful, as you have to provide all configuration in Nix syntax.
However, having a single (fairly sane) syntax for everything is a nice bonus, and being able to generate things using the templating language is useful.
For example, for my network setup I use a bunch of tap devices (one for each of my VMs). It was easy to write a little Nix function (mktap) to generate them all from a simple list in my configuration.nix.
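A sketch of the idea, using the VM list from earlier and NixOS's networking.interfaces options:

    let
      mktap = vm: {
        name = "tap-${vm}";
        value = { virtual = true; virtualType = "tap"; };
      };
      vms = [ "com" "dev" "shopping" "personal" "untrusted" ];
    in {
      # One tap device per VM, e.g. tap-dev, tap-shopping, ...
      networking.interfaces = builtins.listToAttrs (map mktap vms);
    }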
Overall, I’m very happy with NixOS so far.
Why use virtual machines?
With NixOS I had a nice host environment, but after using Qubes I wanted to run my applications in VMs.
The basic problem is that Linux is the only thing that knows how to drive all the hardware, but Linux security is not ideal. There are several problems:
- Linux is written in C. This makes security bugs rather common and, more importantly, means that a bug in one part of the code can impact any other part of the code. Nothing is secure unless everything is secure.
- Linux has a rather large API (hundreds of syscalls).
- The Linux (Unix) design predates the Internet, and security has been somewhat bolted on afterwards.
For example, imagine that we want to run a program with access to the network, but not to the graphical display. We can create a new Linux container for it using bubblewrap.
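A sketch of such an invocation (the read-only binds are illustrative; a real sandbox would bind whatever the program actually needs):

    bwrap \
      --ro-bind /nix /nix \
      --ro-bind /etc /etc \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --tmpfs "$HOME" \
      --unshare-all \
      --share-net \
      firefox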
The container has an empty home directory, an empty /tmp, and no access to the display sockets.
If we run Firefox in this environment then… it opens its window just fine!
How? strace showed what happened: after failing to connect to the Wayland socket, Firefox tried using X11 (via Xwayland) instead, and that connection succeeded. Why did that work?
If the first byte of the socket pathname is \0 then Linux instead interprets it as an “abstract” socket address, not subject to the usual filesystem permission rules.
Trying to anticipate these kinds of special cases is just too much work. Linux really wants everything on by default, and you have to find and disable every feature individually. By contrast, virtual machines tend to have integrations with the host off by default. They also tend to have much smaller APIs (e.g. just reading and writing disk blocks or network frames), with the rich Unix API entirely inside the VM, provided by a separate instance of Linux.
SpectrumOS
I was able to set up a qemu guest and restore my dev Qubes VM in that, but it didn't integrate nicely with the rest of the desktop. Installing ssh allowed me to connect in with ssh -Y dev, allowing apps in the VM to open an X connection to Xwayland on the host.
That was somewhat usable, but still a bit slower than Qubes had been (which was already a bit too slow).
Searching for a way to forward the Wayland connection directly, I came across the SpectrumOS project. SpectrumOS aims to use one virtual machine per application, using shared directories so that VM files are stored on the host, simplifying management. It uses crosvm from the ChromiumOS project instead of qemu, because it has a driver that allows forwarding Wayland connections (and also because it’s written in Rust rather than C). The project’s single developer is currently taking a break from the project, and says “I’m currently working towards a proof of concept”.
However, there is some useful stuff in the SpectrumOS repository (which is a fork of nixpkgs). In particular, it contains:
- A version of Linux with the virtwl kernel module, which connects to crosvm's Wayland driver.
- A package for sommelier, which connects applications to virtwl.
- A Nix expression to build a root filesystem for the VM.
Building that, I was able to run the project’s demo, which runs the Wayfire compositor inside the VM, appearing in a window on the host. Dragging the nested window around, the pixels flowed smoothly across my screen in exactly the way that pixels on QubesOS don’t.
This was encouraging, but I didn’t want to run a nested window manager. I tried running Firefox directly (without Wayfire), but it complained that sommelier didn’t provide a new enough version of something, and running weston-terminal immediately segfaulted sommelier.
Why do we need the sommelier process anyway?
The problem is that, while virtwl mostly proxies Wayland messages directly, it can't send arbitrary FDs to the host. For example, if you want to forward a writable stream from an application to virtwl, you must first create a pipe from the host using a special virtwl ioctl, then read from that and copy the data to the application's regular Linux pipe.
With help from the mailing list, I managed to get it somewhat usable:
- I enabled VIRTIO_FS, allowing me to mount a host directory into the VM (for sharing files).
- I created some tap devices (as mentioned above) to get guest networking going.
- Adding ext4 to the kernel image allowed me to mount the VM's LVM partition.
- Setting FONTCONFIG_FILE got some usable fonts (otherwise, there was no monospace font for the terminal).
- I hacked sommelier to claim it supported the latest protocols, which got Firefox running.
- Configuring sommelier for Xwayland let X applications run.
- I replaced the non-interactive bash shell with fish so I could edit commands.
- I ran (while true; do socat vsock-listen:5000 exec:dash; done) at the end of the VM's boot script. Then I could start e.g. the VM's Firefox with echo 'firefox&' | socat stdin vsock-connect:7:5000 on the host, allowing me to add launchers for guest applications.
Making changes to the root filesystem was fairly easy once I’d read the Nix manuals.
To add an application (e.g. libreoffice), you import it at the start of rootfs/default.nix and add it to the path variable. The Nix expression gets the transitive dependencies of path from the Nix store and packs them into a squashfs image.
True, my squashfs image is getting a bit big.
Maybe I should instead make a minimal squashfs boot image, plus a shared directory of hard links to the required files.
That would allow sharing the data with the host.
I could also just share the whole /nix/store directory, if I wanted to make all host software available to guests.
I made another Nix script to add various VM boot commands to my host environment. For example, running qvm-start-shopping boots my shopping VM using crosvm, with the appropriate LVM data partition, network settings, and shared host directory.
I think, ideally, this would be a systemd socket-activated user service rather than a shell script. Then attempting to run Firefox by sending a command to the VM socket would cause systemd to boot the VM (if not already running). For now, I boot each VM manually in a terminal and then press Win-Shift-2 to banish it to workspace 2, with all the other VM root consoles.
The virtwl Wayland forwarding feels pretty fast (much faster than Qubes' X graphics).
Wayland
I now had a mostly functional Qubes-like environment, running most of my applications in VMs, with their windows appearing on the host desktop like any other application. However, I also had some problems:
- A stated goal of Wayland is “every frame is perfect”. However, applications generally seemed to open at the wrong size and then jump to their correct size, which was a bit jarring.
- Vim opened its window with the scrollbar at the far left of the window, making the text invisible until you resized the window.
- Wayland is supposed to have better support for high-DPI displays. However, this doesn't work with Xwayland, which turns everything blurry, and the recommended work-around is to use a scale-factor of 1 and configure each application to use bigger fonts. This is easy enough with X applications (e.g. set Xft.dpi: 150 with xrdb), but Wayland apps must be configured individually.
- Wayland doesn't have cursor themes, and you have to configure every application individually to use a larger cursor too.
- Copying text didn’t seem to work reliably. Sometimes there would be a long delay, after which the text might or might not appear. More often, it would just paste something completely different and unexpected. Even when it did paste the right text, it would often have ^M characters inserted into it.
I decided it was time to learn more about Wayland. I discovered wayland-book.com, which does a good job of introducing it (though the book is only half finished at the moment).
Protocol
One very nice feature of Wayland is that you can run any Wayland application with WAYLAND_DEBUG=1 and it will display a fairly readable trace of all the Wayland messages it sends and receives. Let's look at a simple application that just connects to the server (compositor) and opens a window.
The client connects to the server's socket at /run/user/1000/wayland-0 and sends two messages to object 1 (of type wl_display), which is the only object available in a new connection. The get_registry request asks the server to add the registry to the conversation and call it object 2. The sync request just asks the server to confirm it got it, using a new callback object (with ID 3).
Both clients and servers can add objects to the conversation. To avoid numbering conflicts, clients assign low numbers and servers pick high ones.
On the wire, each message gives the object ID, the operation ID, the length in bytes, and then the arguments. Objects are thought of as being at the server, so the client sends request messages to objects, while the server emits event messages from objects. At the wire level there's no difference, though.
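As a sketch of that framing, assuming the usual Wayland wire layout (a 32-bit object ID followed by a 32-bit word packing the 16-bit message size and the 16-bit opcode; a real connection uses host byte order):

    (* Encode the 8-byte header of a Wayland message; the arguments follow it. *)
    let encode_header ~object_id ~opcode ~size =
      let header = Bytes.create 8 in
      Bytes.set_int32_le header 0 (Int32.of_int object_id);
      Bytes.set_int32_le header 4 (Int32.of_int ((size lsl 16) lor opcode));
      header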
When the server gets the get_registry request it adds the registry, which immediately emits one event for each available service, giving the maximum supported version. The client receives these messages, followed by the callback notification from the sync message.
The callback tells the client it has seen all the available services, and so it now picks the ones it wants.
It has to choose a version no higher than the one offered by the server.
Protocols starting with wl_ are from the core Wayland protocol; the others are extensions. The leading z in zxdg_output_manager_v1 indicates that the protocol is “unstable” (under development).
The protocols are defined in various XML files, which are scattered over the web. The core protocol is defined in wayland.xml. These XML files can be used to generate typed bindings for your programming language of choice.
Here, the application picks wl_compositor (for managing drawing surfaces), wl_shm (for sharing memory with the server), and xdg_wm_base (for desktop windows).
The bind message is unusual in that the client gives the interface and version of the object it is creating. For other messages, both sides know the type from the schema, and the version is always the same as the parent object. Because the client chose the new IDs, it doesn't need to wait for the server; it continues by using the new objects to create a top-level window.
This API is pretty strange.
The core Wayland protocol says how to make generic drawing surfaces, but not how to make windows, so the application is using the xdg_wm_base extension to do that. Logically, there's only one object here (a toplevel window), but it ends up making three separate Wayland objects representing the different aspects of it. The commit tells the server that the client has finished setting up the window and the server should now do something with it.
The above was all in response to the callback firing. The client now processes the last message in that batch, which is the server destroying the callback.
Object destruction is a bit strange in Wayland.
Normally, clients ask for things to be destroyed (by sending a “destructor” message) and the server confirms by sending delete_id from object 1. But this isn't symmetrical: there is no standard way for a client to confirm deletion when the server calls a destructor (such as the callback's done), so these have to be handled on a case-by-case basis. Since callbacks don't accept any messages, there is no need for the client to confirm that it got the done message, and the server just sends a delete message immediately.
The client now waits for the server to respond to all the messages it sent about the new window, and gets a bunch of replies.
It gets some messages telling it what pixel formats are supported, a ping message (which the server sends from time to time to check the client is still alive), and a configure message giving the size for the new window. Oddly, Sway has set the size to 0x0, which means the client should choose whatever size it likes.
The client picks a suitable default size, allocates some shared memory (by opening a tmpfs file and immediately unlinking it),
shares the file descriptor with the server (create_pool), and then carves out a portion of the memory to use as a buffer for the pixel data.
In this case it used the whole memory region. It could also have allocated two buffers for double-buffering.
The client then draws whatever it wants into the buffer (mapping the file into its memory and writing to it directly),
attaches the buffer to the window's surface, marks the whole area as “damaged” (in need of being redrawn), and calls commit, telling the server the surface is ready for display.
At this point the window appears on the screen! The server lets the client know it has finished with the buffer, and the client destroys it.
Although the window is visible, the content is the wrong size.
Sway now suddenly remembers that it’s a tiling window manager.
It sends another configure event with the correct size, causing the client to allocate a fresh memory pool of the correct size, allocate a fresh buffer from it, redraw everything at the new size, and tell the server to draw it.
This process of telling the client to pick a size and then overruling it explains why Firefox draws itself incorrectly at first and then flickers into position a moment later. It probably also explains why Vim tries to open a 0x0 window.
Copying text
A bit of searching revealed that the ^M problem is a known Sway bug. However, the main reason copying text wasn't working turned out to be a limitation in the design of the core wl_data_device_manager protocol.
The normal way to copy text on X11 is to select the text you want to copy,
then click the middle mouse button where you want it (or press Shift-Insert).
X also supports a clipboard mechanism, where you select text, then press Ctrl-C, then click at the destination, then press Ctrl-V. The original Wayland protocol only supports the clipboard system, not the selection, and so Wayland compositors have added selection support through extensions. Sommelier didn’t proxy these extensions, leading to failure when copying in or out of VMs.
I also found that the reason weston-terminal wouldn’t start was because I didn’t have anything in my clipboard, and sommelier was trying to dereference a null pointer.
One problem with the Wayland protocol is that it’s very hard to proxy. Although the wire protocol gives the length in bytes of each message, it doesn’t say how many file descriptors it has. This means that you can’t just pass through messages you don’t understand, because you don’t know which FDs go with which message. Also, the wire protocol doesn’t give types for FDs (nor does the schema), which is a problem for anything that needs to proxy across a VM boundary or over a network.
This all meant that VMs could only use protocols explicitly supported by sommelier, and sommelier limited the version too, which meant that supporting extra extensions or new versions required writing (and debugging) loads of C++ code.
I didn’t have time to write and debug C++ code for every missing Wayland protocol, so I took a short-cut: I wrote my own Wayland library, ocaml-wayland, and then used that to write my own version of sommelier. With that, adding support for copying text was fairly easy.
For each Wayland interface we need to handle each incoming message from the client and forward it to the host,
and also forward each message from the host to the client.
Here's the code to handle the “selection” event in OCaml, which we receive from the host and send to the client (c); it comes down to a single line.
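Roughly (a sketch; the exact method and argument names generated by ocaml-wayland may differ):

    method on_selection ~offer =
      c#selection ~offer:(Option.map to_client offer)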
The host passes us an “offer” argument, which is a previously-created host offer object. We look up the corresponding client object with to_client and pass that as the argument to the client.
For comparison, sommelier's equivalent of this single line is around ten lines of C++.
I think this is a great demonstration of the difference between “type safety” and “type ceremony”. The C++ code is covered in types, making it very hard to read, yet it crashes at runtime because it fails to consider that data_offer can be NULL. By contrast, the OCaml version has no type annotations, but the compiler would reject the code if I forgot to handle this case (hence the Option.map).
Security
According to the GNOME wiki, the original justification for not supporting selection copies was “security concerns with unexpected data stealing if the mere act of selecting a text fragment makes it available to all running applications”. The implication is that applications stealing data instead from the clipboard is OK, and that you should therefore never put anything confidential on the clipboard.
This seemed a bit odd, so I read the security section of the Wayland specification to learn more about its security model. That section of the specification is fairly short, so I’ll reproduce it here in full:
Security and Authentication
- mostly about access to underlying buffers, need new drm auth mechanism (the grant-to ioctl idea), need to check the cmd stream?
- getting the server socket depends on the compositor type, could be a system wide name, through fd passing on the session dbus. or the client is forked by the compositor and the fd is already opened.
It looks like implementations have to figure things out for themselves.
The main advantage of Wayland over X11 here is that Wayland mostly isolates applications from each other. In X11 applications collaborate together to manage a tree of windows, and any application can access any window. In the Wayland protocol, each application’s connection only includes that application’s objects. Applications only get events relevant to their own windows (for example, you only get pointer motion events while the pointer is over your window). Communication between applications (e.g. copy-and-paste or drag-and-drop) is all handled though the compositor.
Also, to request the contents of the clipboard you need to quote the serial number of the mouse click or key press that triggered it. If it’s too far in the past, the compositor can ignore the request.
I’ve also heard people say that security is the reason you can’t take screenshots with Wayland. However, Sway lets you take screenshots, and this worked even from inside a VM through virtwl. I didn’t add screenshot support to the proxy, because I don’t want VMs to be able to take screenshots, but the proxy isn’t a security tool (it runs inside the VM, which isn’t trusted).
Clearly, the way to fix this was with a new compositor. One that would offer a different Wayland socket to each VM, tag the windows with the VM name, colour the frames, confirm copies across VM boundaries, and work with Vim. Luckily, I already had a handy pure-OCaml Wayland protocol library available. Unluckily, at this point I ran out of holiday.
Future work
There are quite a few things left to do here:
- One problem with virtwl is that, while we can receive shared memory FDs from the host, we can't export guest memory to the host. This is unfortunate, because in Wayland the shared memory for window contents is allocated by the application from guest memory, and the proxy therefore has to copy each frame. If the host provided the memory to the guest, this wouldn't be needed. There is a wl_drm protocol for allocating video memory, which might help here, but I don't know how that works and, like many Wayland specifications, it seems to be in the process of being replaced by something else. Also, if we're going to copy the memory, we should at least only copy the damaged region, not the whole thing. I only got this code working just far enough to run the Wayland applications I use (mainly Firefox and Evince).
- I'm still using ssh to proxy X11 connections (mainly for Vim and gitk). I'd prefer to run Xwayland in the VM, but it seems you need to provide a bit of extra support for that, which I haven't implemented yet. Sommelier can do this, but then copying doesn't work.
- The host Wayland compositor needs to be aware of VMs, so it can colour the titles appropriately and limit access to privileged operations.
- For the full Qubes experience, the network card should be handled by a VM, with another VM managing the firewall. Perhaps the Mirage unikernel firewall could be made to work on KVM too. I'm not sure how guest-to-guest communication works with KVM.
However, because the host NixOS environment is a fully-working Linux system, I can always trade off some security to get things working (e.g. by doing video conferencing directly on the host).
I hope the SpectrumOS project will resume at some point, or that Qubes will find a solution to its hardware compatibility and performance problems.
Florence and beyond: the future of Tezos storage — Tarides, Mar 04, 2021
In collaboration with Nomadic Labs, Marigold and DaiLambda, we're happy to announce the completion of the next Tezos protocol proposal: Florence.
Tezos is an open-source decentralised blockchain network providing a platform for smart contracts and digital assets. A crucial feature of Tezos is self-amendment: the network protocol can be upgraded dynamically by the network participants themselves. This amendment process is initiated when a participant makes a proposal, which is then subject to a vote. After several years working on the Tezos storage stack, this is our first contribution to a proposal; we hope that it will be the first of many!
As detailed in today's announcement from Nomadic Labs, the Florence proposal contains several important changes, from the introduction of Baking Accounts to major quality-of-life improvements for smart contract developers. Of all of these changes, we're especially excited about the introduction of sub-trees to the blockchain context API. In this post, we'll give a brief tour of what these sub-trees will bring for the future of Tezos. But first, what are they?
Merkle sub-trees
The Tezos protocol runs on top of a versioned tree called the “context”, which holds the chain state (balances, contracts etc.). Ever since the pre-Alpha era, the Tezos context has been implemented using Irmin – an open-source Merkle tree database originally written for use by MirageOS unikernels.
For MirageOS, Irmin’s key strength is flexibility: it can run over arbitrary
backends. This is a perfect fit for Tezos, which must be agile and
widely-deployable. Indeed, the Tezos shell has already leveraged this agility
many times, all the way from initial prototypes using a Git backend to the
optimised irmin-pack
implementation used today.
But Irmin can do more than just swapping backends! It also allows users to manipulate the underlying Merkle tree structure of the store with a high-level API. This “Tree” API enables lots of interesting use-cases of Irmin, from mergeable data types (MRDTs) to zero-knowledge proofs. Tezos doesn't use these more powerful features directly yet; that’s where Merkle proofs come in!
Proofs and lightweight Tezos clients
Since the Tezos context keeps track of the current "state" of the blockchain,
each participant needs their own copy of the tree to run transactions against.
This context can grow to be very large, so it's important that it be stored as
compactly as possible: this goal shaped the design of irmin-pack, our latest Irmin backend.
However, it's possible to reduce the storage requirements even further via the magic of Merkle trees: individuals only need to store a fragment of the root tree, provided they can demonstrate that this fragment is valid by sending “proofs” of its membership to the other participants.
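To illustrate the idea with a generic sketch (this is not the Irmin API): a membership proof for a leaf is the list of sibling hashes along the path up to the root, and verifying it means rehashing upwards and comparing against the trusted root hash.

    (* A minimal binary-Merkle-tree membership proof, using the standard
       library's Digest (MD5) purely for illustration. *)
    type side = Left | Right

    let hash2 a b = Digest.string (a ^ b)

    (* Recompute the root hash from a leaf and its sibling path. *)
    let root_of_proof leaf proof =
      List.fold_left
        (fun acc (side, sibling) ->
          match side with
          | Left -> hash2 sibling acc   (* the sibling is the left child *)
          | Right -> hash2 acc sibling) (* the sibling is the right child *)
        (Digest.string leaf) proof

    let verify ~trusted_root leaf proof =
      Digest.equal (root_of_proof leaf proof) trusted_root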
This property can be used to support ultra-lightweight Tezos clients, a feature currently being developed by TweagIO. To make this a reality, the Tezos protocol needs fine-grained access to context sub-trees in order to build Merkle proofs out of them. Fortunately, Irmin already supports this! We extended the protocol to understand sub-trees, lifting the power of Merkle trees to the user.
We’re excited to work with TweagIO and Nomadic Labs on lowering the barriers to entering the Tezos ecosystem and look forward to seeing what they achieve with sub-trees!
Efficient Merkle proof representations
Simply exposing sub-trees in the Tezos context API isn’t quite enough: lightweight clients will also need to serialize them efficiently, since proofs must be exchanged over the network to establish trust between collaborating nodes. Enter Plebeia.
Plebeia is an alternative Tezos storage layer – developed by DaiLambda – with strengths that complement those of Irmin. In particular, Plebeia is capable of generating very compact Merkle proofs. This is partly due to its specialized store structure, and partly due to clever optimizations such as path compression and inlining.
We’re working with the DaiLambda team to unite the strengths of Irmin and Plebeia, which will bring built-in Merkle proof support to the Tezos storage stack. The future is bright for Merkle proofs in Tezos!
Baking account migrations
Trees don’t just enable new features; they have a big impact on performance too! Currently, indexing into the context always happens from its root, which duplicates effort when accessing adjacent values deep in the tree. Fortunately, the new sub-trees provide a natural representation for “cursors” into the context, allowing the protocol to optimize its interactions with the storage layer.
To take just one example, DaiLambda recently exploited this feature to reduce the migration time necessary to introduce Baking Accounts to the network by a factor of 15! We’ll be teaming up with Nomadic Labs and DaiLambda to ensure that Tezos extracts every bit of performance from its storage.
It's especially exciting to have access to lightning-fast storage migrations, since this enables Tezos to evolve rapidly even as the ecosystem expands.
Storage in other languages
Of course, Tezos isn’t just an OCaml project: the storage layer also has a performant Rust implementation as part of TezEdge. We’re working with Simple Staking to bring Irmin to the Rust community via an FFI toolchain, enabling closer alignment between the different Tezos shell implementations.
Conclusion
All in all, it’s an exciting time to work on Tezos storage, with many open-source collaborators from around the world. We’re especially happy to see Tezos taking greater advantage of Irmin’s features, which will strengthen both projects and help them grow together.
If all of this sounds interesting, you can play with it yourself using the recently-released Irmin 2.5.0. Thanks for reading, and stay tuned for future Tezos development updates!
The ReScript Association — Reason Documentation Blog, Mar 03, 2021
Coq Platform 2021.02.0 is out — Coq, Feb 26, 2021
The Coq development team is proud to announce the immediate availability of the Coq Platform 2021.02.0.
The Coq Platform is a distribution of the Coq proof assistant together with a selection of Coq libraries. It provides a set of scripts to compile and install OPAM, Coq, Coq libraries and Coq plugins on macOS, Windows and many Linux distributions in a reliable way with consistent results.
This release is based on Coq 8.13.1 and provides binary installers for Windows and Linux. We plan to release 2021.02.1 as soon as the macOS installer is ready. The following point releases, starting with 2021.02.2, will focus on including more Coq libraries (with no breaking changes).
Coq 8.13.1 is out — Coq, Feb 22, 2021
The Coq development team is proud to announce the immediate availability of Coq 8.13.1.
Hotfix:
- Fix arities of VM opcodes for some floating-point operations that could cause memory corruption.
Please see the changelog to learn more about this release.
The Burali-Forti argument in HoTT/UF — Andrej Bauer (Martin Escardo), Feb 21, 2021
This is joint work with Marc Bezem, Thierry Coquand, Peter Dybjer.
We use the Burali-Forti argument to show that, in homotopy type theory and univalent foundations, the embedding $$ \mathcal{U} \to \mathcal{U}^+$$ of a universe $\mathcal{U}$ into its successor $\mathcal{U}^+$ is not an equivalence. We also establish this for the types of sets, magmas, monoids and groups. The arguments in this post are also written in Agda.
Ordinals in univalent type theory
The Burali-Forti paradox is about the collection of all ordinals. In set theory, this collection cannot be a set, because it is too big, and this is what the Burali-Forti argument shows. This collection is a proper class in set theory.
In univalent type theory, we can collect all ordinals of a universe $\mathcal{U}$ in a type $\operatorname{Ordinal}\,\mathcal{U}$ that lives in the successor universe $\mathcal{U}^+$: $$ \operatorname{Ordinal}\,\mathcal{U} : \mathcal{U}^+.$$ See Chapter 10.3 of the HoTT book, which uses univalence to show that this type is a set in the sense of univalent foundations (meaning that its equality is proposition valued).
The analogue in type theory of the notion of proper class in set theory is that of large type, that is, a type in a successor universe $\mathcal{U}^+$ that doesn't have a copy in the universe $\mathcal{U}$. In this post we show that the type of ordinals is large and derive some consequences from this.
We have two further uses of univalence, at least:
- to adapt the Burali-Forti argument from set theory to our type theory, and
- to resize down the values of the order relation of the ordinal of ordinals, to conclude that the ordinal of ordinals is large.
There are also a number of uses of univalence via functional and propositional extensionality. Propositional resizing rules or axioms are not needed, thanks to (2).
An ordinal in a universe $\mathcal{U}$ is a type $X : \mathcal{U}$ equipped with a relation $$ - \prec - : X \to X \to \mathcal{U}$$ required to be
- proposition valued,
- transitive,
- extensional (any two points with the same lower set are the same),
- well founded (every element is accessible, or, equivalently, the principle of transfinite induction holds).
The HoTT book additionally requires $X$ to be a set, but this follows automatically from the above requirements for the order.
The underlying type of an ordinal $\alpha$ is denoted by $\langle \alpha \rangle$ and its order relation is denoted by $\prec_{\alpha}$ or simply $\prec$ when we believe the reader will be able to infer the missing subscript.
Equivalence of ordinals in universes $\mathcal{U}$ and $\mathcal{V}$, $$ -\simeq_o- : \operatorname{Ordinal}\,\mathcal{U} \to \operatorname{Ordinal}\,\mathcal{V} \to \mathcal{U} \sqcup \mathcal{V},$$ means that there is an equivalence of the underlying types that preserves and reflects order. Here we denote by $\mathcal{U} \sqcup \mathcal{V}$ the least upper bound of the two universes $\mathcal{U}$ and $\mathcal{V}$. The precise definition of the type theory we adopt here, including the handling of universes, can be found in Section 2 of this paper and also in our Midlands Graduate School 2019 lecture notes in Agda form.
For ordinals $\alpha$ and $\beta$ in the same universe, their identity type $\alpha = \beta$ is canonically equivalent to the ordinal-equivalence type $\alpha \simeq_o \beta$, by univalence.
The lower set of a point $x : \langle \alpha \rangle$ is written $\alpha \downarrow x$, and is itself an ordinal under the inherited order. The ordinals in a universe $\mathcal{U}$ form an ordinal in the successor universe $\mathcal{U}^+$, denoted by $$ \operatorname{OO}\,\mathcal{U} : \operatorname{Ordinal}\,\mathcal{U}^+,$$ for ordinal of ordinals.
Its underlying type is $\operatorname{Ordinal}\,\mathcal{U}$ and its order relation is denoted by $-\triangleleft-$ and is defined by $$\alpha \triangleleft \beta = \Sigma b : \langle \beta \rangle , \alpha = (\beta \downarrow b).$$
This order has type $$-\triangleleft- : \operatorname{Ordinal}\,\mathcal{U} \to \operatorname{Ordinal}\,\mathcal{U} \to \mathcal{U}^+,$$ as required for it to make the type $\operatorname{\operatorname{Ordinal}} \mathcal{U}$ into an ordinal in the next universe.
By univalence, this order is equivalent to the order defined by $$\alpha \triangleleft^- \beta = \Sigma b : \langle \beta \rangle , \alpha \simeq_o (\beta \downarrow b).$$ This has the more general type $$ -\triangleleft^-- : \operatorname{\operatorname{Ordinal}}\,\mathcal{U} \to \operatorname{\operatorname{Ordinal}}\,\mathcal{V} \to \mathcal{U} \sqcup \mathcal{V},$$ so that we can compare ordinals in different universes. But also when the universes $\mathcal{U}$ and $\mathcal{V}$ are the same, this order has values in $\mathcal{U}$ rather than $\mathcal{U}^+$. The existence of such a resized-down order is crucial for our corollaries of Burali-Forti, but not for Burali-Forti itself.
For any $\alpha : \operatorname{Ordinal}\,\mathcal{U}$ we have $$ \alpha \simeq_o (\operatorname{OO}\,\mathcal{U} \downarrow \alpha),$$ so that $\alpha$ is an initial segment of the ordinal of ordinals, and hence $$ \alpha \triangleleft^- \operatorname{OO}\,\mathcal{U}.$$
The Burali-Forti theorem in HoTT/UF
We adapt the original formulation and argument from set theory.
Theorem. No ordinal in a universe $\mathcal{U}$ can be equivalent to the ordinal of all ordinals in $\mathcal{U}$.
Proof. Suppose, for the sake of deriving absurdity, that there is an ordinal $\alpha \simeq_o \operatorname{OO}\,\mathcal{U}$ in the universe $\mathcal{U}$. By the above discussion, $\alpha \simeq_o \operatorname{OO}\,\mathcal{U} \downarrow \alpha$, and, hence, by symmetry and transitivity, $\operatorname{OO}\,\mathcal{U} \simeq_o \operatorname{OO}\,\mathcal{U} \downarrow \alpha$. Therefore, by univalence, $\operatorname{OO}\,\mathcal{U} = \operatorname{OO}\,\mathcal{U} \downarrow \alpha$. But this means that $\operatorname{OO}\,\mathcal{U} \triangleleft \operatorname{OO}\,\mathcal{U}$, which is impossible, as a well-founded relation is irreflexive. $\square$
Some corollaries follow.
The type of ordinals is large
We say that a type in the successor universe $\mathcal{U}^+$ is small if it is equivalent to some type in the universe $\mathcal{U}$, and large otherwise.
Theorem. The type of ordinals of any universe is large.
Proof. Suppose the type of ordinals in the universe $\mathcal{U}$ is small, so that there is a type $X : \mathcal{U}$ equivalent to the type $\operatorname{Ordinal}\, \mathcal{U} : \mathcal{U}^+$. We can then transport the ordinal structure from the type $\operatorname{Ordinal}\, \mathcal{U}$ to $X$ along this equivalence to get an ordinal in $\mathcal{U}$ equivalent to the ordinal of ordinals in $\mathcal{U}$, which is impossible by the Burali-Forti theorem.
But the proof is not concluded yet, because we have to say how we transport the ordinal structure. At first sight we should be able to simply apply univalence. However, this is not possible because the types $X : \mathcal{U}$ and $\operatorname{Ordinal}\,\mathcal{U} :\mathcal{U}^+$ live in different universes. The problem is that only elements of the same type can be compared for equality.
- In the cumulative universe hierarchy of the HoTT book, we automatically have that $X : \mathcal{U}^+$ and hence, being equivalent to the type $\operatorname{Ordinal}\,\mathcal{U} : \mathcal{U}^+$, the type $X$ is equal to the type $\operatorname{Ordinal}\,\mathcal{U}$ by univalence. But this equality is an element of an identity type of the universe $\mathcal{U}^+$. Therefore when we transport the ordinal structure on the type $\operatorname{Ordinal}\,\mathcal{U}$ to the type $X$ along this equality and equip $X$ with it, we get an ordinal in the successor universe $\mathcal{U}^+$. But, in order to get the desired contradiction, we need to get an ordinal in $\mathcal{U}$.
- In the non-cumulative universe hierarchy we adopt here, we face essentially the same difficulty. We cannot assert that $X : \mathcal{U}^+$, but we can promote $X$ to an equivalent type in the universe $\mathcal{U}^+$, and from this point on we reach the same obstacle as in the cumulative case.
So we have to transfer the ordinal structure from $\operatorname{Ordinal}\,\mathcal{U}$ to $X$ manually along the given equivalence, call it $$f : X \to \operatorname{Ordinal}\,\mathcal{U}.$$ We define the order of $X$ from that of $\operatorname{Ordinal}\,\mathcal{U}$ by $$ x \prec y = f(x) \triangleleft f(y). $$ It is laborious but not hard to see that this order satisfies the required axioms for making $X$ into an ordinal, except that it has values in $\mathcal{U}^+$ rather than $\mathcal{U}$. But this problem is solved by instead using the resized-down relation $\triangleleft^-$ discussed above, which is equivalent to $\triangleleft$ by univalence. $\square$
There are more types and sets in $\mathcal{U}^+$ than in $\mathcal{U}$
By a universe embedding we mean a map $$f : \mathcal{U} \to \mathcal{V}$$ of universes such that, for all $X : \mathcal{U}$, $$f(X) \simeq X.$$ Of course, any two universe embeddings of $\mathcal{U}$ into $\mathcal{V}$ are equal, by univalence, so that there is at most one universe embedding between any two universes. Moreover, universe embeddings are automatically type embeddings (meaning that they have propositional fibers).
So the following says that the universe $\mathcal{U}^+$ is strictly larger than the universe $\mathcal{U}$:
Theorem. The universe embedding $\mathcal{U} \to \mathcal{U}^+$ doesn't have a section and therefore is not an equivalence.
Proof. A section would give a type in the universe $\mathcal{U}$ equivalent to the type of ordinals in $\mathcal{U}$, but we have seen that there is no such type. $\square$
(However, by Theorem 29 of Injective types in univalent mathematics, if propositional resizing holds then the universe embedding $\mathcal{U} \to \mathcal{U}^+$ is a section.)
The same argument as in the above theorem shows that there are more sets in $\mathcal{U}^+$ than in $\mathcal{U}$, because the type of ordinals is a set. For a universe $\mathcal{U}$ define the type $$\operatorname{hSet}\,\mathcal{U} : \mathcal{U}^+$$ by $$ \operatorname{hSet}\,\mathcal{U} = \Sigma A : \mathcal{U} , \text{$A$ is a set}.$$ By an hSet embedding we mean a map $$f : \operatorname{hSet}\,\mathcal{U} \to \operatorname{hSet}\,\mathcal{V}$$ such that the underlying type of $f(\mathbb{X})$ is equivalent to the underlying type of $\mathbb{X}$ for every $\mathbb{X} : \operatorname{hSet}\,\mathcal{U}$, that is, $$ \operatorname{pr}_1 (f (\mathbb{X})) \simeq \operatorname{pr}_1(\mathbb{X}). $$ Again there is at most one hSet-embedding between any two universes, hSet-embeddings are type embeddings, and we have:
Theorem. The hSet-embedding $\operatorname{hSet}\,\mathcal{U} \to \operatorname{hSet}\,\mathcal{U}^+$ doesn't have a section and therefore is not an equivalence.
There are more magmas and monoids in $\mathcal{U}^+$ than in $\mathcal{U}$
This is because the type of ordinals is a monoid under addition with the ordinal zero as its neutral element, and hence also a magma. If the inclusion of the type of magmas (respectively monoids) of one universe into that of the next were an equivalence, then we would have a small copy of the type of ordinals.
Theorem. The canonical embeddings $\operatorname{Magma}\,\mathcal{U} \to \operatorname{Magma}\,\mathcal{U}^+$ and $\operatorname{Monoid}\,\mathcal{U} \to \operatorname{Monoid}\,\mathcal{U}^+$ don't have sections and hence are not equivalences.
There are more groups in $\mathcal{U}^+$ than in $\mathcal{U}$
This case is more interesting.
The axiom of choice is equivalent to the statement that any non-empty set can be given the structure of a group. So if we assumed the axiom of choice we would be done. But we are brave and work without assuming excluded middle, and hence without choice.
It is also the case that the type of propositions can be given the structure of a group if and only if the principle of excluded middle holds. And the type of propositions is a retract of the type of ordinals, which makes it unlikely that the type of ordinals can be given the structure of a group without excluded middle.
So our strategy is to embed the type of ordinals into a group, and the free group does the job.
- First we need to show that the inclusion of generators, or the universal map into the free group, is an embedding.
- But having a large type $X$ embedded into a type $Y$ is not enough to conclude that $Y$ is also large. For example, if $P$ is a proposition then the unique map $P \to \mathbb{1}$ is an embedding, and the unit type $\mathbb{1}$ is small but $P$ may be large.
- So more work is needed to show that the group freely generated by the type of ordinals is large. We say that a map is small if each of its fibers is small, and large otherwise (the fiber of a map is spelled out right after this list). De Jong and Escardó showed that if a map $X \to Y$ is small and the type $Y$ is small, then so is the type $X$, and hence if $X$ is large then so is $Y$. Therefore our approach is to show that the universal map into the free group is small. To do this, we exploit the fact that the type of ordinals is locally small (its identity types are all equivalent to small types).
But we want to be as general as possible, and hence work with a spartan univalent type theory which doesn't include higher inductive types other than propositional truncation. We include the empty type, the unit type, natural numbers, list types (which can actually be constructed from the other type formers), coproduct types, $\Sigma$-types, $\Pi$-types, identity types and a sequence of universes. We also assume the univalence axiom (from which we automatically get functional and propositional extensionality) and the axiom of existence of propositional truncations.
- We construct the free group as a quotient of a type of words following Mines, Richman and Ruitenburg. To prove that the universal map is an embedding, one first proves a Church-Rosser property for the equivalence relation on words. It is remarkable that this can be done without assuming that the set of generators has decidable equality.
- Quotients can be constructed from propositional truncation. This construction increases the universe level by one, but eliminates into any universe.
- To resize the quotient used to construct the group freely generated by the type of ordinals back to the original universe, we exploit the fact that the type of ordinals is locally small.
- As above, we have to transfer group structures manually between equivalent types of different universes, because univalence can't be applied.
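For reference, the smallness of a map used above can be spelled out with the usual fiber construction (our notation, consistent with the definitions in this post): the fiber of $f : X \to Y$ over $y : Y$ is $$\operatorname{fib}_f (y) = \Sigma x : X , f(x) = y,$$ and $f$ is small precisely when every fiber $\operatorname{fib}_f(y)$ is equivalent to a type in the universe $\mathcal{U}$. The cited result then says that a small map with a small codomain has a small domain, and hence a large domain forces a large codomain.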
Putting the above together, and leaving many steps to the Agda code, we get the following in our spartan univalent type theory.
Theorem. For any large, locally small set, the free group is also large and locally small.
Corollary. In any successor universe $\mathcal{U}^+$ there is a group which is not isomorphic to any group in the universe $\mathcal{U}$.
Corollary. The canonical embedding $\operatorname{Group}\,\mathcal{U} → \operatorname{Group}\,\mathcal{U}^+$ doesn't have a section and hence is not an equivalence.
Can we formulate and prove a general theorem of this kind that specializes to a wide variety of mathematical structures that occur in practice?



Partnering for more diversity in Tech — Tarides, Feb 15, 2021
Tarides is very glad to announce our partnership with Ada Tech School.
Founded in 2019 and based in Paris (France), Ada Tech School, named for pioneer computer scientist Ada Lovelace, is a programming school designed for women but open to all. The program is driven by three values: feminism, empathy and singularity. Its mission is to facilitate access to programming positions and promote the feminization of tech, by creating training that tackles the gender and cultural biases of IT.
Unfortunately, when a company tries to fill positions, the diversity of the candidate pool is very limited: barely 10% of computer science students in France are women. Ada Tech School is an excellent initiative to democratise software education amongst women. The school was created so that women can land a job in the IT industry easily through rigorous training; it then offers ongoing coaching and support as they ascend the professional ladder within tech companies.
At Tarides, we believe that a healthy team is a diverse one; and that trust, fairness and inclusion are values needed to build a strong company.
We are committed to doing better, not only by hiring a diverse team and providing a welcoming work environment, but also by putting people first at every stage. This means providing fair and equitable compensation as well as meaningful career advancement opportunities for every employee.
We believe that a great technology always derives from great people, regardless of their background. Head here to see our currently-open positions.



opam 2.0.8 release — OCamlPro, Feb 09, 2021
We are pleased to announce the minor release of opam 2.0.8.
This new version contains some backported fixes:
- Critical for fish users! Don’t add `.` to `PATH`. [#4078]
- Fix sandbox script for newer `ccache` versions. [#4079 and #4087]
- Fix sandbox crash when `~/.cache` is a symlink. [#4068]
- User modifications to the sandbox script are no longer overwritten by `opam init`. [#4020 & #4092]
- macOS sandbox script always mounts `/tmp` read-write, regardless of `TMPDIR`. [#3742, addressing ocaml/opam-repository#13339]
- `pre-` and `post-session` hooks can now print to the console. [#4359]
- Switch-specific pre/post session hooks are now actually run. [#4472]
- Standalone `opam-installer` now correctly builds from sources. [#4173]
- Fix `arch` variable detection when using 32bit mode on ARM64 and i486. [#4462]
A more complete release note is available.
Note: To homogenise the macOS name in system detection, we decided to keep `macos`, and convert `darwin` to `macos` in opam. For the moment, in order to avoid breaking jobs & CIs, we keep uploading both `darwin` & `macos` binaries, but from the 2.1.0 release, only `macos` ones will be kept.
Installation instructions (unchanged):
- From binaries: run `sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)` or download manually from the GitHub “Releases” page to your PATH. In this case, don’t forget to run `opam init --reinit -ni` to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script.
- From source, using opam: `opam update; opam install opam-devel` (then copy the opam binary to your PATH as explained, and don’t forget to run `opam init --reinit -ni` to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script).
- From source, manually: see the instructions in the README.
We hope you enjoy this new minor version, and remain open to bug reports and suggestions.
NOTE: this article is cross-posted on opam.ocaml.org and ocamlpro.com, and published in discuss.ocaml.org. Please head to the latter for the comments!
About OCamlPro:
OCamlPro is an R&D lab founded in 2011, with the mission to help industrial users benefit from state-of-the-art programming languages like OCaml and Rust. We design, create and implement custom ad-hoc software for our clients. We also have long experience in developing and maintaining open-source tooling for OCaml, such as Opam, TryOCaml, ocp-indent, ocp-index and ocp-browser, and we contribute to the core development of OCaml, notably with our work on the Flambda optimizer branch. Another area of expertise is that of Formal Methods, with tools such as our SMT Solver Alt-Ergo (check our Alt-Ergo Users’ Club). We also provide vocational trainings in OCaml and Rust, and we can build courses on formal methods on demand. Do not hesitate to reach out by email: contact@ocamlpro.com.




ReScript 9.0 — Reason Documentation Blog, Feb 09, 2021




opam 2.0.8 release — OCaml Platform (Raja Boujbel - OCamlPro, Louis Gesbert - OCamlPro), Feb 08, 2021
We are pleased to announce the minor release of opam 2.0.8.
This new version contains some backported fixes:
- Critical for fish users! Don't add `.` to `PATH`. [#4078]
- Fix sandbox script for newer `ccache` versions. [#4079 and #4087]
- Fix sandbox crash when `~/.cache` is a symlink. [#4068]
- User modifications to the sandbox script are no longer overwritten by `opam init`. [#4020 & #4092]
- macOS sandbox script always mounts `/tmp` read-write, regardless of `TMPDIR`. [#3742, addressing ocaml/opam-repository#13339]
- `pre-` and `post-session` hooks can now print to the console. [#4359]
- Switch-specific pre/post session hooks are now actually run. [#4472]
- Standalone `opam-installer` now correctly builds from sources. [#4173]
- Fix `arch` variable detection when using 32bit mode on ARM64 and i486. [#4462]
A more complete release note is available.
Note: To homogenise the macOS name in system detection, we decided to keep `macos`, and convert `darwin` to `macos` in opam. For the moment, in order to avoid breaking jobs & CIs, we keep uploading both `darwin` & `macos` binaries, but from the 2.1.0 release, only `macos` ones will be kept.
Installation instructions (unchanged):
- From binaries: run `sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)` or download manually from the GitHub "Releases" page to your PATH. In this case, don't forget to run `opam init --reinit -ni` to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script.
- From source, using opam: `opam update; opam install opam-devel` (then copy the opam binary to your PATH as explained, and don't forget to run `opam init --reinit -ni` to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script).
- From source, manually: see the instructions in the README.
We hope you enjoy this new minor version, and remain open to bug reports and suggestions.
NOTE: this article is cross-posted on opam.ocaml.org and ocamlpro.com, and published in discuss.ocaml.org. Please head to the latter for the comments!




opam 2.1.0~beta4 released — OCaml Platform (David Allsopp), Feb 08, 2021
Feedback on this post is welcomed on Discuss!
On behalf of the opam team, it gives me great pleasure to announce the third beta release of opam 2.1. Don’t worry, you didn’t miss beta3 - we had an issue with a configure script that caused beta2 to report as beta3 in some instances, so we skipped to beta4 to avoid any further confusion!
We encourage you to try out this new beta release: there are instructions for doing so in our wiki. The instructions include taking a backup of your `~/.opam` root as part of the process, which can be restored in order to wind back. Please note that local switches which are written to by opam 2.1 are upgraded and will need to be rebuilt if you go back to opam 2.0. This can either be done by removing `_opam` and repeating whatever you use in your build process to create the switch, or you can use `opam switch export switch.export` to back up the switch to a file before installing new packages. Note that opam 2.1 shouldn’t upgrade a local switch unless you upgrade the base packages (i.e. the compiler).
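As a concrete sketch of that backup advice (the opam commands are real, but the exact switch-recreation step depends on your own build process, and `opam switch import` is our suggested counterpart to the export above):

cd my-project                      # a local switch lives in ./_opam
opam switch export switch.export   # back up the current package selection
# ... experiment with the opam 2.1 beta ...
# To wind back to opam 2.0: remove the upgraded switch, recreate it the
# way your build process usually does, then restore the selection:
rm -rf _opam
opam switch import switch.export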
What’s new in opam 2.1?
- Switch invariants
- Improved options configuration (see the new `option` and expanded `var` sub-commands)
- Integration of system dependencies (formerly the opam-depext plugin), increasing their reliability as it integrates the solving step
- Creation of lock files for reproducible installations (formerly the opam-lock plugin)
- CLI versioning, allowing cleaner deprecations for opam now and also improvements to semantics in future without breaking backwards-compatibility
- Performance improvements to `opam update`, conflict messages, and many other areas
- New plugins: opam-compiler and opam-monorepo
Switch invariants
In opam 2.0, when a switch is created the packages selected are put into the “base” of the switch. These packages are not normally considered for upgrade, in order to ease pressure on opam’s solver. This was a much bigger concern early on in opam 2.0’s development, but is less of a problem with the default mccs solver.
However, it’s a problem for system compilers. opam would detect that your system compiler version had changed, but be unable to upgrade the ocaml-system package unless you went through a slightly convoluted process with `--unlock-base`.
In opam 2.1, base packages have been replaced by switch invariants. The switch invariant is a package formula which must be satisfied on every upgrade and install. All existing switches’ base packages could just be expressed as `package1 & package2 & package3` etc., but opam 2.1 recognises many existing patterns and simplifies them, so in most cases the invariant will be `"ocaml-base-compiler" {= 4.11.1}`, etc. This means that `opam switch create my_switch ocaml-system` now creates a switch invariant of `"ocaml-system"` rather than a specific version of the ocaml-system package. If your system OCaml package is updated, `opam upgrade` will seamlessly switch to the new package.
This also allows you to have switches which automatically install new point releases of OCaml. For example:
opam switch create ocaml-4.11 --formula='"ocaml-base-compiler" {>= "4.11.0" & < "4.12.0~"}' --repos=old=git+https://github.com/ocaml/opam-repository#a11299d81591
opam install utop
This creates a switch with OCaml 4.11.0 (the `--repos=` was just to select a version of opam-repository from before 4.11.1 was released). Now issue:
opam repo set-url old git+https://github.com/ocaml/opam-repository
opam upgrade
and opam 2.1 will automatically offer to upgrade OCaml 4.11.1 along with a rebuild of the switch. There’s not yet a clean CLI for specifying the formula, but we intend to iterate further on this with future opam releases so that there is an easier way of saying “install OCaml 4.11.x”.
opam depext integration
opam has long included the ability to install system dependencies automatically via the depext plugin. This plugin has been promoted to a native feature of opam 2.1.0 onwards, giving the following benefits:
- You no longer have to remember to run `opam depext`; opam always checks depexts (there are options to disable this or automate it for CI use). Installation of an opam package in a CI system is now as easy as `opam install .`, without having to do the dance of `opam pin add -n` / `opam depext` / `opam install` (see the sketch after this list). Just one command now for the common case!
- The solver is only called once, which both saves time and also stabilises the behaviour of opam in cases where the solver result is not stable. It was possible to get one package solution for the `opam depext` stage and a different solution for the `opam install` stage, resulting in some depexts missing.
- opam now has full knowledge of depexts, which means that packages can be automatically selected based on whether a system package is already installed. For example, if you have neither MariaDB nor MySQL dev libraries installed, `opam install mysql` will offer to install `conf-mysql` and `mysql`, but if you have the MariaDB dev libraries installed, opam will offer to install `conf-mariadb` and `mysql`.
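Concretely, the old multi-step CI recipe collapses into a single command; a sketch, with `<pkg>` standing for your package name:

# opam 2.0, with the separate depext plugin:
opam pin add -n .
opam depext <pkg>
opam install <pkg>

# opam 2.1, with depext handling built in (one solver call, one command):
opam install .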
opam lock files and reproducibility
When opam was first released, it had the mission of gathering together scattered OCaml source code to build a community repository. As time marches on, the size of the opam repository has grown tremendously, to over 3000 unique packages with over 18000 unique versions. opam looks at all these packages and is designed to solve for the best constraints for a given package, so that your project can keep up with releases of your dependencies.
While this works well for libraries, we need a different strategy for projects that need to test and ship using a fixed set of dependencies. To satisfy this use-case, opam 2.0.0 shipped with support for using `project.opam.locked` files. These are normal opam files but with exact versions of dependencies. The lock file can be used as simply as `opam install . --locked` to have a reproducible package installation.
With opam 2.1.0, the creation of lock files is also now integrated into the client:
- `opam lock` will create a `.locked` file for your current switch and project, that you can check into the repository.
- `opam switch create . --locked` can be used by users to reproduce your dependencies in a fresh switch.
This lets a project simultaneously keep up with the latest dependencies (without lock files) while providing a stricter set for projects that need it (with lock files).
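Putting those commands together, a typical workflow might look like this sketch:

# in the project directory, after developing against the current switch:
opam lock                       # writes a .locked file for the project
# a collaborator (or CI) then reproduces the exact dependency set:
opam switch create . --locked   # in a fresh switch
opam install . --locked         # or in an existing one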
CLI Versioning
A new `--cli` switch was added to the first beta release, but it’s only now that it’s being widely used. opam is a complex enough system that sometimes bug fixes need to change the semantics of some commands. For example:
- `opam show --file` needed to change behaviour
- The addition of new controls for setting global variables means that the `opam config` command was becoming cluttered and some things want to move to `opam var`
- `opam switch create 4.11.1` still works in opam 2.0, but it’s really an OPAM 1.2.2 syntax.
Changing the CLI is exceptionally painful since it can break scripts and tools which themselves need to drive `opam`. CLI versioning is our attempt to solve this. The feature is inspired by the `(lang dune ...)` stanza in `dune-project` files, which has allowed the Dune project to rename variables and alter semantics without requiring every single package using Dune to upgrade their `dune` files on each release.
Now you can specify which version of opam you expect the command to be run against. In day-to-day use of opam at the terminal, you wouldn’t specify it, and you’ll get the latest version of the CLI. For example: `opam var --global` is the same as `opam var --cli=2.1 --global`. However, if you issue `opam var --cli=2.0 --global`, you will be told that `--global` was added in 2.1 and so is not available to you. You can see similar things with the renaming of `opam upgrade --unlock-base` to `opam upgrade --update-invariant`.
The intention is that `--cli` should be used in scripts, user guides (e.g. blog posts), and in software which calls opam. The only decision you have to take is the oldest version of opam which you need to support. If your script is using a new opam 2.1 feature (for example `opam switch create --formula=`) then you simply don’t support opam 2.0. If you need to support opam 2.0, then you can’t use `--formula` and should use `--packages` instead. opam 2.0 does not have the `--cli` option, so for opam 2.0, instead of `--cli=2.0` you should set the environment variable `OPAMCLI` to `2.0`. As with all opam command-line switches, `OPAMCLI` is simply the equivalent of `--cli`, which opam 2.1 will pick up but opam 2.0 will quietly ignore (and, as with other options, the command line takes precedence over the environment).
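To summarise the paragraph above as invocations (a sketch; the error message is paraphrased):

opam var --global                 # day-to-day use: you get the newest CLI
opam var --cli=2.1 --global       # the same invocation, pinned for scripts
opam var --cli=2.0 --global       # refused: --global was only added in 2.1
OPAMCLI=2.0 opam <command>        # environment form for scripts that must
                                  # also run under opam 2.0 (which has no --cli)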
Note that opam 2.1 sets `OPAMCLI=2.0` when building packages, so in the rare instances where you need to use the `opam` command in a package `build:` command (or in your build system), you must specify `--cli=2.1` if you’re using new features.
There’s even more detail on this feature in our wiki. We’re still finalising some details on exactly how `opam` behaves when `--cli` is not given, but we’re hoping that this feature will make it much easier in future releases for opam to make required changes and improvements to the CLI without breaking existing set-ups and tools.
What’s new since the last beta?
- opam now uses CLI versioning (#4385)
- opam now exits with code 31 if all failures were during fetch operations (#4214)
- `opam install` now has a `--download-only` flag (#4036), allowing opam’s caches to be primed
- `opam init` now advises the correct shell-specific command for `eval $(opam env)` (#4427)
- `post-install` hooks are now allowed to modify or remove installed files (#4388)
- New package variable `opamfile-loc` with the location of the installed package opam file (#4402)
- `opam update` now has a `--depexts` flag (#4355), allowing the system package manager to update too
- depext support for NetBSD and DragonFlyBSD added (#4396)
- The format-preserving opam file printer has been overhauled (#3993, #4298 and #4302)
- pins are now fetched in parallel (#4315)
- `os-family=ubuntu` is now treated as `os-family=debian` (#4441)
- `opam lint` now checks that strings in filtered package formulae are booleans or variables (#4439)
and many other bug fixes as listed on the release page.
New Plugins
Several features that were formerly plugins have been integrated into opam 2.1.0. We have also developed some new plugins that satisfy emerging workflows from the community and the core OCaml team. They are available for use with the opam 2.1 beta as well, and feedback on them should be directed to the respective GitHub trackers for those plugins.
opam compiler
The `opam compiler` plugin can be used to create switches from various sources such as the main opam repository, the ocaml-multicore fork, or a local development directory. It can use Git tag names, branch names, or PR numbers to specify what to install.
Once installed, these are normal opam switches, and one can install packages in them. To iterate on a compiler feature and try opam packages at the same time, it supports two ways to reinstall the compiler: either a safe and slow technique that will reinstall all packages, or a quick way that will just overwrite the compiler in place.
opam monorepo
The `opam monorepo` plugin lets you assemble standalone dune workspaces with your projects and all of their opam dependencies, letting you build it all from scratch using only Dune and OCaml. This satisfies the “monorepo” workflow which is commonly requested by large projects that need all of their dependencies in one place. It is also being used by projects that need global cross-compilation for all aspects of a codebase (including C stubs in packages), such as the MirageOS unikernel framework.
Next Steps
This is anticipated to be the final beta in the 2.1 series, and we will be moving to release candidate status after this. We could really use your help with testing this release in your infrastructure and projects; please let us know if you run into any blockers. If you have feature requests, please also report them on our issue tracker -- we will be planning the next release cycle once we ship opam 2.1.0 shortly.



Synthetic mathematics with an excursion into computability theory — Andrej Bauer, Feb 02, 2021
It is my pleasure to have the opportunity to speak at the University of Wisconsin Logic seminar. The hosts are graciously keeping the seminar open for everyone. I will speak about a favorite topic of mine:
Synthetic mathematics with an excursion into computability theory
Speaker: Andrej Bauer (University of Ljubljana)
Location: University of Wisconsin Logic seminar
Time: February 8th 2021, 21:30 UTC (3:30pm CST in Wisconsin, 22:30 CET in Ljubljana)
Video link: the Zoom link is available at the seminar page
Abstract:
According to Felix Klein, “synthetic geometry is that which studies figures as such, without recourse to formulae, whereas analytic geometry consistently makes use of such formulae as can be written down after the adoption of an appropriate system of coordinates”. To put it less eloquently, the synthetic method axiomatizes geometry directly by construing points and lines as primitive notions, whereas the analytic method builds a model, the Euclidean plane, from the real numbers.
Do other branches of mathematics posses the synthetic method, too? For instance, what would “synthetic topology” look like? To build spaces out of sets, as topologists usually do, is the analytic way. The synthetic approach must construe spaces as primitive and axiomatize them directly, without any recourse to sets. It cannot introduce continuity as a desirable property of functions, for that would be like identifying straight lines as the non-bending curves.
It is indeed possible to build the synthetic worlds of topology, smooth analysis, measure theory, and computability. In each of them, the basic structure – topological, smooth, measurable, computable – is implicit by virtue of permeating everything, even logic itself. The synthetic worlds demand an economy of thought that the unaccustomed mind finds frustrating at first, but eventually rewards it with new elegance and conceptual clarity. The synthetic method is still fruitfully related to the analytic method by interpretation of the former in models provided by the latter.
We demonstrate the approach by taking a closer look at synthetic computability, whose central axiom states that there are countably many countable subsets of the natural numbers. The axiom is validated and explained by its interpretation in the effective topos, where it corresponds to the familiar fact that the computably enumerable sets may be computably enumerated. Classic theorems of computability may be proved in a straightforward manner, without reference to any notion of computation. We end by showing that in synthetic computability Turing reducibility is expressed in terms of sequential continuity of maps between directed-complete partial orders.
The slides and the video recording of the talk are now available.



2020 at OCamlPro — OCamlPro, Feb 02, 2021
OCamlPro was created in 2011 to advocate the adoption of the OCaml language and formal methods in general in the industry. While building a team of highly-skilled engineers, we navigated through our expertise domains, delivering works on the OCaml language and tooling, training companies to the use of strongly-typed languages like OCaml but also Rust, tackling formal verification challenges with formal methods, maintaining the SMT solver Alt-Ergo, designing languages and tools for smart contracts and blockchains, and more!
In this article, as every year, we review some of the work we did during 2020, in many different worlds.
Table of contents
- In the World of OCaml
- In the World of Formal Methods
- In the World of Rust
- In the World of Blockchain Languages
We warmly thank all our partners, clients and friends for their support and collaboration during this peculiar year!
The first lockdown was a surprise and we took advantage of this special moment to go over our past contributions and sum them up in a timeline that gives an overview of the key events that made OCamlPro over the years. The timeline format is a great way to reconnect with our history and to take stock of our accomplishments.
This will now turn into a generic timeline editing tool on the Web; stay tuned if you are interested in this internal project becoming available to the general public! If you think that a timeline would fit your needs and audience, we designed a simplistic tool, tailored for users who want complete control over their data.
In the World of OCaml
Flambda & Compilation Team 
* Work by Pierre Chambart, Vincent Laviron, Guillaume Bury, Pierrick Couderc and Louis Gesbert

OCamlPro is proud to be working on Flambda2, an ambitious work on an OCaml optimizing compiler, in close collaboration with Mark Shinwell from our long-term partner and client Jane Street. Flambda focuses on reducing the runtime cost of abstractions and removing as many short-lived allocations as possible. In 2020, the Flambda team worked on a considerable number of fixes and improvements, transforming Flambda2 from an experimental prototype to a version ready for testing in production!
This year also marked the conclusion of our work on the rehabilitation of packs (see our two recent posts, Part1 and Part2, and a much simpler version in 2011). Our work aimed to give packs a new youth and utility by adding the possibility to generate functors or recursive packs. This improvement allows programmers to define big functors that are split among multiple files, resulting in what we can view as a way to implement some form of parameterized libraries.
This work is allowed thanks to Jane Street’s funding.
Opam, the OCaml Package Manager 
* Work by Raja Boujbel, Louis Gesbert and Thomas Blanc

Opam is OCamlPro’s source-based package manager. The first specification draft was written in early 2012 and went on to become OCaml’s official package manager — though it may be used for other languages and projects, since Opam is language-agnostic! If you need to install, upgrade and manage your compiler(s), tools and libraries easily, Opam is meant for you. It supports multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow.
Our 2020 work on Opam led to the release of two versions of opam 2.0 with small fixes, and the release of three alphas and two betas of Opam 2.1!
Opam 2.1.0 will soon go to release candidate and will introduce a seamless integration of depexts (system dependencies handling), dependency locking, pinning sub-directories, invariant-based definition for Opam switches, the configuration of Opam from the command-line without the need for a manual edition of the configuration files, and the CLI versioning for better handling of CLI evolutions.
This work is greatly helped by Jane Street’s funding and support.
Encouraging OCaml Adoption: Trainings and Resources for OCaml 
* Work by Pierre Chambart, Vincent Laviron, Adrien Champion, Mattias, Louis Gesbert and Thomas Blanc

OCamlPro is also a training centre. We organise yearly training sessions for programmers from multiple companies in our offices: from OCaml to OCaml tooling to Rust! We can also design custom and on-site trainings to meet specific needs.
We released a brand new version of TryOCaml, a tool born from our work on Learn-OCaml! TryOCaml was highly praised by professors at the beginning of the Covid lockdown. Even if it can be used as a personal sandbox, it’s also possible to adapt its usage for classes. TryOCaml is a hassle-free tool that significantly lowers the barriers to start coding in OCaml, as no installation is required.
We regularly release cheat sheets for developers: in 2020, we shared the long-awaited Opam 2.0 cheat sheet, with a new theme! In just two pages, you’ll have in one place the everyday commands you need as an Opam user. We also shine some light on unsung features which may just change your coding life.
2020 was also an important year for the OCaml language itself: we were pleased to welcome OCaml 4.10! One of the highlights of the release was the “Best-fit” Garbage Collector Strategy. We had an in-depth look at this exciting change.
This work is self-funded by OCamlPro as part of its effort to ease the adoption of OCaml.
Open Source Tooling and Libraries for OCaml
* Work by Fabrice Le Fessant, Léo Andrès and David Declerck

OCamlPro has a long history of developing open source tooling and libraries for the community. 2020 was no exception!
drom is a simple tool to create new OCaml projects that follow best OCaml practices, i.e. Opam, Dune and tests. Its goal is to provide a cargo-like user experience and to help onboard new developers into the community. drom is available in the official opam repository.
directories is a new OCaml Library that provides configuration, cache and data paths (and more!). The library follows the suitable conventions on Linux, MacOS and Windows.
opam-bin is a framework to create and use binary packages with Opam. It enables you to create, use and share binary packages easily with opam, and to create as many local switches as you want spending no time, no disk space! If you often use Opam, opam-bin is a must-have!
We also released a number of libraries, focused on making things easy for developers… so we named them with an `ez_` prefix: ez_cmdliner provides an Arg-like interface for cmdliner, ez_file provides simple functions to read and write files, ez_subst provides easily configurable string substitutions for shell-like variable syntax, and ez_config provides abstract options stored in configuration files with an OCaml syntax. There are also a lot of ezjs-* libraries, which are bindings to JavaScript libraries that we used in some of our js_of_ocaml projects.
This work was self-funded by OCamlPro as part of its effort to improve the OCaml ecosystem.
Supporting the OCaml Software Foundation 
OCamlPro was proud and happy to initiate the OCaml User Survey 2020 as part of the mission of the OCaml Software Foundation. The goal of the survey was to better understand the community and its needs. The final results have not yet been published by the Foundation; we are looking forward to reading them soon!
Events 
Though the year took its toll on our usual tour of world conferences and events, OCamlPro members still took part in the annual 72-hour team programming competition organised by the International Conference on Functional Programming (ICFP). Our joint team “crapo on acid” made it through to the final!
In the World of Formal Methods
* Work by Albin Coquereau, Mattias, Sylvain Conchon, Guillaume Bury and Louis Rustenholz

Sylvain Conchon joined us as Formal Methods Chief Scientific Officer in 2020!
Alt-Ergo Development 
OCamlPro develops and maintains Alt-Ergo, an automatic solver of mathematical formulas designed for program verification and based on Satisfiability Modulo Theories (SMT). Alt-Ergo was initially created within the VALS team at University of Paris-Saclay.
In 2020, we focused on the maintainability of our solver. The first part of this work was to maintain and fix issues within the already released version: 2.3.0 (released in 2019) had some issues that needed to be fixed in minor releases.
The second part of this maintainability work covered more major features, all released in the new version 2.4.0 of Alt-Ergo. The main goal of this release was to focus on the user experience and the documentation. This release also contains bug fixes and many other improvements. Alt-Ergo has a new documentation, and in particular a documentation of its native syntax.
We also tried to improve the command-line experience of our tools with the use of the cmdliner library to parse Alt-Ergo's options. This library allows us to improve the manpage of our tool. We tried to harmonise the debug messages and to improve all of Alt-Ergo's outputs to make them clearer for users.
Alt-Ergo Users’ Club and R&D Projects 
We thank our partners from the Alt-Ergo Users’ Club, Adacore, CEA List, MERCE (Mitsubishi Electric R&D Centre Europe) and Trust-In-Soft, for their trust. Their support allows us to maintain our tool.
The club was launched in 2019 and the second annual meeting of the Alt-Ergo Users’ Club was held in mid-February 2020. Our annual meeting is the perfect place to review each partner’s needs regarding Alt-Ergo. This year, we had the pleasure of receiving our partners to discuss the roadmap for future Alt-Ergo developments and enhancements. If you want to join us for the next meeting (coming soon), contact us!
We also want to thank our partners from the FUI R&D Project LCHIP. Thanks to this project, we were able to add a new major feature in Alt-Ergo: the support for incremental commands (`push`, `pop` and `check-sat-assuming`) from the smt-lib2 standard.
Alt-Ergo’s Roadmap 
Some of the work we did in 2020 is not yet available. Thanks to our partner MERCE (Mitsubishi Electric R&D Centre Europe), we worked on the SMT model generation. Alt-Ergo is now (partially) able to output a model in the smt-lib2 format. Thanks to the Why3 team from University of Paris-Saclay, we hope that this work will be available in the Why3 platform to help users in their program verification efforts.
Another project was launched in 2020 but is still in early development: the complete rework of our Try-Alt-Ergo website with new features such as model generation. Try Alt-Ergo (current version) allows users to use Alt-Ergo directly from their browsers (Firefox, Chromium) without the need of a server for computations.
This work required a JavaScript-compatible version of Alt-Ergo. We have done some work to build our solver in two versions, one compatible with Node.js and another as a webworker. We hope that this work makes it easier to use our SMT solver in web applications.
This work is funded in part by the FUI R&D Project LCHIP, MERCE, Adacore and with the support of the Alt-Ergo Users’ Club.
In the World of Rust
* Work by Adrien Champion

As OCaml-ians, we naturally saw in the Rust language a beautiful complement to our approach. One opportunity to explore this state-of-the-art language has been to pursue our work on ocp-memprof and build Memthol, a visualizer and analyzer for profiling OCaml programs. It works on memory dumps containing information about the size and (de)allocation date of part of the allocations performed by some execution of a program.
Between lockdowns, we’ve also been able to hold our Rust training. It’s designed as a highly-modular vocational course, from 1 to 4 days. The training covers a beginner’s introduction to Rust’s basic features, crucial features and libraries for real-life development, and advanced features, all through complex use-cases one would find in real life.
This work was self-funded by OCamlPro as part of our exploration of other statically and strongly typed functional languages.
In the World of Blockchain Languages
* Work by David Declerck and Steven de Oliveira

One of our favourite activities is to develop new programming languages, specialized for specific domains, but with nice properties like clear semantics, strong typing, static typing and functional features. In 2020, we applied our skills in the domain of blockchains and smart contracts, with the creation of a new language, Love, and work on a well-known language, Solidity.
In 2020, our blockchain experts released Love, a type-safe language with an ML syntax, suited for formal verification. In a few words, Love is designed to be expressive for fast development, efficient in execution time, cheap in storage, and readable in terms of smart-contract auditability. Yet, it has a clear and formal semantics and a strong type system to detect bugs. It allows contracts to use other contracts as libraries, and to call viewers on other contracts. Contracts developed in Love can also be formally verified.
We also released a Solidity parser and printer written in OCaml using Menhir, and used it to implement a full interpreter directly in a blockchain. Solidity is probably the most-used language for smart contracts. It was born on Ethereum, but many other blockchains provide it as a way to easily onboard new developers coming from the Ethereum ecosystem. In the future, we plan to extend this work with formal verification of Solidity smart contracts.
This is a joint effort with Origin Labs, the company created to tackle blockchain-related challenges.
Towards 2021

Adaptability and continuous improvement, that’s what 2020 brought to OCamlPro! We will remember 2020 as a complicated year, but one that allowed us to surpass ourselves and challenge our projects. We are very proud of our team who all continued to grow, learn, and develop our projects in this particular context. We are more motivated than ever for the coming year, which marks our tenth year anniversary! We’re excited to continue sharing our knowledge of the OCaml world and to accompany you in your own projects.
Drop us a line or call us if you’d like to collaborate with us, we’d love to hear from you!



Recent and upcoming changes to Merlin — Tarides, Jan 26, 2021
Merlin is a language server for the OCaml programming language; that is, a daemon that connects to your favourite text editor and provides the usual services of an IDE: instant feedback on warnings and errors, autocompletion, "type of the code under the cursor", "go to definition", etc. As we (Frédéric Bour, Ulysse Gérard and I) are about to do a new major release, we thought now would be a good time to talk a bit about some of the changes that are going into this release.
Project configuration
Since its very first release, Merlin has been getting information about the project being worked on through a `.merlin` file, which used to be written by the user, but is now often generated by build systems.
This had the advantage of being fairly simple: Merlin would just look in the current directory to see if such a file existed, otherwise it would look in the parent directories until it found one, and then read it. But there were also some sore points: the granularity of the configuration is the directory, not the file, and this information is duplicated from the build system configuration (be it dune, Makefiles or, back in the day, ocamlbuild).
After years of thinking about it, we've finally decided to make some light changes to this process. Since version 3.4, when it scans the filesystem Merlin now looks for either a `.merlin` file or a dune (or dune-project) file. And when it finds one of those, it starts an external process in the directory where that file lives, and asks that process for the configuration of the ml(i) file being edited.
The process in charge of communicating the configuration to Merlin will either be a specific dune subcommand (when a dune file is found), or a dedicated `.merlin` reader program.
We see several advantages in doing things this way (rather than, for instance, changing the format of `.merlin` files):
- this change is entirely backward compatible, and indeed the transition has already happened silently; although dune is still emitting `.merlin` files, this will only stop with dune 2.8.
- externalizing the reading of `.merlin` files and simply requiring a "normalized" version of the config (i.e. with no mention of packages, just of flags and paths) allowed us to simplify the internals of Merlin.
- talking to the build system directly not only gets us a much finer-grained configuration (which is important when you build different executables with different flags in the same directory, or if you apply different ppxes to different files of a library), it opens the door to a nicer behavior of Merlin in some circumstances. For instance, the build system can (and does) tell Merlin when the project isn't built. Currently we only report that information to the user when they ask for errors, alongside all the other (mostly rubbish) errors. That is already helpful in itself. But in the future we can start filtering the other errors to only report those that would remain even after building the project (e.g. parse errors).
files and simply requiring a "normalized" version of the config (i.e. with no mention of packages, just of flags and paths) allowed us to simplify the internals of Merlin. - talking to the build system directly not only gets us a much finer grained configuration (which is important when you build different executables with different flags in the same directory, or if you apply different ppxes to different files of a library), it opens the door to getting a nicer behavior of Merlin in some circumstances. For instance, the build system can (and does) tell Merlin when the project isn't built. Currently we only report that information to the user when he asks for errors, alongside all the other (mostly rubbish) errors. Which is already helpful in itself. But in the future we can start filtering the other errors to only report those that would remain even after building the project (e.g. parse errors).
There are however some changes to look out for:
- people who still use
.merlin
files but do not install Merlin using opam need to make sure to also have thedot-merlin-reader
binary in their PATH (it is available as an opam package, but is also buildable from Merlin's git repository) - vim and emacs users who could previously load packages interactively (by
calling
M-x merlin-use
or:MerlinUse
) cannot do that anymore, since Merlin itself stopped linking with findlib. They'll have to write a.merlin
file.
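For reference, a minimal hand-written `.merlin` might look like this (directory and package names are purely illustrative):

S src
B _build/**
PKG lwt cmdliner
FLG -w +a-4-42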
Dropping support for old versions of OCaml
Until now, every release of Merlin has kept support from OCaml 4.02 to the latest version of OCaml available at the time of that release.
We have done this by having one version of "the frontend" (i.e. handling of buffer state, project configuration; analyses like jump-to-definition, prefix-completion, etc.), but several versions of "the backend" (OCaml's ASTs, parser and typechecker), and choosing at build time which one to use. The reason for doing this instead of having, for instance, one branch of Merlin per version of OCaml, is that while the backends are fairly stable once released, Merlin's frontend keeps evolving. Having just one version of it makes it easier to add features and fix bugs (patches don't need to be duplicated), whilst ensuring that Merlin's behavior is consistent across every version of OCaml that we support.
For this to work, however, one needs a well-defined API between the frontend and all the versions of the backend. This implies mapping every version of OCaml's internal ASTs (which receive modifications from one version to the next) to a unified one, so as to keep Merlin's various features version-agnostic. But it also means being resilient to OCaml's internal API changes. For instance, between 4.02 and 4.11 there were big refactorings impacting: the way one accesses the typing environment, the way one accesses the "load path" (the part of the file system the compiler/Merlin is aware of), the way error messages are produced, ...
The rate of change on the compiler is a lot higher than it was when we first started Merlin (7 years ago now!), which doesn't just mean that we have to spend more and more time on updating the common interface, but also that the interface is getting harder to define. Recently (with the 4.11 release) some of the changes were significant enough that for some parts of the backend we just didn't manage to produce a single interface to access old and new versions, so instead we had to start duplicating and specializing parts of the frontend. And we don't expect things to get much better in the near future.
Furthermore, Merlin's backends are patched to be more resilient to parsing and typing errors in the user's code. Those patches also need to be evolved at each new release of the compiler. The work required to keep the "unified interface" working was taking time away from updating our patches properly, and our support of user errors has slowly been getting worse over the past few years, resulting in less precise type information when asked, incomplete results when asking for auto-completion, etc.
Therefore we have decided to stop dragging older versions of OCaml along. We plan to switch to a system where we have one branch of Merlin per version of OCaml, and each opam release of Merlin will only be buildable with one version of OCaml. We will keep maintaining all the relatively recent branches (that is: 4.02 definitely will not get fixes, but 4.06 is still in the clear). However, all the new features will be developed against the latest version of OCaml and cherry-picked to older branches if, and only if, there are no merge conflicts and they work as expected without changes.
We hope that this will make it easier for us to update to new versions of OCaml (actually, we already know it does, working on adding support for 4.12 was easier than for any of the other recent versions), will allow us to clean up Merlin's codebase (let's call that a work in progress), and will free some time to work on new features.
You might wonder what all this changes for you, as a user, in practice. Well, it depends:
- if you install Merlin from opam: nothing, or almost nothing. Everything that you currently do with Merlin will keep working. In the future, perhaps some new feature will appear that won't work on all versions. But that day hasn't come yet.
- if you install Merlin some other way (manually?): you can't just fetch master and build it anymore. You have to pick the appropriate branch for your version of OCaml.
- if you're reusing Merlin's codebase as part of another project and (even worse) have patches on it: come and talk to us if you haven't done so already! We can try and integrate your patches, so that you only need to worry about vendoring the right version(s) for your needs.
Over the years, Merlin has received bugfixes and improvements from a long list of people, but for the upcoming release Frédéric and I are particularly grateful to Rudi Grinberg, a long time and regular contributor who also maintains the OCaml LSP project, as well as Ulysse Gérard, who joined our team a year ago now. They are in particular the main authors of the work to improve the handling of projects' configuration.
We hope you'll be as excited as us by all these changes!



The road ahead for MirageOS in 2021 — Hannes Mehnert (hannes), Jan 25, 2021
Introduction
2020 was an intense year. I hope you're healthy and keep being healthy. I am privileged (as lots of software engineers and academics are) to be able to work from home during the pandemic. Let's not forget people in less privileged situations, and let’s try to give them as much practical, psychological and financial support as we can these days. And as much joy as possible to everyone around :)
I cancelled the autumn MirageOS retreat due to the pandemic. Instead I collected donations for our hosts in Marrakech; they were very happy to receive our financial support, since they had a difficult year: their income is based on tourism. I hope that in autumn 2021 we'll have an on-site retreat again.
For 2021, we (at robur) got a grant from the EU (via NGI pointer) for "Deploying MirageOS" (more details below), and another grant from the OCaml Software Foundation for securing the opam supply chain (using conex). Some long-awaited releases of MirageOS libraries, namely an SSH implementation and a rewrite of our git implementation, have already been published.
From my MirageOS point of view, 2020 was a pretty successful year, in which we managed to add more features, fix lots of bugs, and pave the road ahead. I want to thank OCamlLabs for funding work on MirageOS maintenance.
Recap 2020
Here is a very subjective collection of 2020 accomplishments in which I was involved to some degree.
NetHSM
NetHSM is a hardware security module in software. It is a product that uses MirageOS for security and is based on the muen separation kernel. We at robur were heavily involved in this product. It has already been security audited by an external team. You can pre-order it from Nitrokey.
TLS 1.3
Dating back to 2016, at TRON (TLS 1.3 Ready Or NOt), we developed a first draft of a TLS 1.3 implementation for OCaml-TLS. Finally, in May 2020, we got our act together, including ECC (ECDH P256 from fiat, X25519 from hacl) and testing with tlsfuzzer, and released tls 0.12.0 with TLS 1.3 support. Later we added ECC ciphersuites to TLS 1.2, implemented ChaCha20/Poly1305, and fixed an interoperability issue with Go's implementation.
Mirage-crypto provides the underlying cryptographic primitives; it was initially released in March 2020 as a fork of nocrypto -- huge thanks to pqwy for his great work. Mirage-crypto detects CPU features at runtime (thanks to Julow; bugfix for bswap), uses constant-time modular exponentiation (powm_sec), hardens against Lenstra's CRT attack, supports compilation on Windows (thanks to avsm), async entropy harvesting (thanks to seliopou), 32-bit support, chacha20/poly1305 (thanks to abeaumont), and cross-compilation (thanks to EduardoRFS), plus various bug fixes, even for memory leaks (thanks to talex5 for reporting several of these issues), and RSA interoperability (thanks to psafont for investigation and mattjbray for reporting). This library feels very mature now -- it is used by multiple stakeholders, and lots of issues were fixed in 2020.
Qubes Firewall
The MirageOS based Qubes firewall is the most widely used MirageOS unikernel. And it got major updates: in May Steffi announced her and Mindy's work on improving it for Qubes 4.0 - including dynamic firewall rules via QubesDB. Thanks to prototypefund for sponsoring.
In October 2020, we released Mirage 3.9 with PVH virtualization mode (thanks to mato). There's still a memory leak to be investigated and fixed.
IPv6
In December, with Mirage 3.10, we got the IPv6 code up and running. Now MirageOS unikernels have a dual stack available, besides IPv4-only and IPv6-only network stacks. Thanks to nojb for the initial code and to MagnusS.
It turns out this blog, and also robur services, are now available via IPv6 :)
Albatross
Also in December, I pushed an initial release of albatross, a unikernel orchestration system with remote access. Deploy your unikernel via a TLS handshake -- the unikernel image is embedded in the TLS client certificates.
Thanks to reynir for statistics support on Linux and improvements of the systemd service scripts. Also thanks to cfcs for the initial Linux port.
CA certs
For several years I postponed the problem of how to actually use the operating system's trust anchors for OCaml-TLS connections. Thanks to emillon for the initial code, there are now ca-certs and ca-certs-nss opam packages (see the release announcement) which fill this gap.
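As a rough illustration (a minimal sketch assuming the API shown in the ca-certs documentation), deriving an OCaml-TLS authenticator from the system trust anchors looks like this:

(* Build a TLS authenticator from the operating system's trust anchors.
   The resulting value can then be passed to OCaml-TLS client configuration. *)
let authenticator =
  match Ca_certs.authenticator () with
  | Ok a -> a
  | Error (`Msg msg) -> failwith ("ca-certs: " ^ msg)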
Unikernels
I developed several useful unikernels in 2020, and also pushed a unikernel gallery to the Mirage website:
Traceroute in MirageOS
I already wrote about traceroute, which traces the routing to a given remote host.
Unipi - static website hosting
Unipi is a static site webserver which retrieves its content from a remote git repository, with Let's Encrypt certificate provisioning and dynamic updates via a webhook executed on every push.
TLSTunnel - TLS demultiplexing
The physical machine this blog and other robur infrastructure runs on has been relocated from Sweden to Germany mid-December. Thanks to UPS! Fewer IPv4 addresses are available in the new data center, which motivated me to develop tlstunnel.
The new behaviour is as follows (see the monitoring branch):
- a listener on TCP port 80 replies with a permanent redirect to https
- a listener on TCP port 443 forwards to a backend host if the requested server name is configured (a minimal dispatch sketch follows this list)
- its configuration is stored on a block device, and can be changed dynamically (with a custom protocol authenticated with an HMAC)
- it is set up to hold a wildcard TLS certificate, and a DNS wildcard entry points to it
- setting up a new service is very straightforward: only the new name needs to be registered with tlstunnel together with the TCP backend, and everything will just work
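As a rough illustration of the forwarding rule, here is a minimal OCaml sketch (hypothetical types and names, not tlstunnel's actual code): the requested server name is looked up in the configuration, and the connection is only forwarded when a backend is registered for it.

type backend = { host : string; port : int }

let backends : (string, backend) Hashtbl.t = Hashtbl.create 7

(* e.g. register "blog.example.org" { host = "10.0.0.2"; port = 8080 } *)
let register name b = Hashtbl.replace backends name b

(* Called once the requested server name is known from the client hello. *)
let dispatch ~server_name =
  match Hashtbl.find_opt backends server_name with
  | Some b -> Ok b
  | None -> Error (`Unknown_server_name server_name)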
2021
The year started with a release of awa, an SSH implementation in OCaml (thanks to haesbaert for the initial code). This was followed by a git 3.0 release (thanks to dinosaure).
Deploying MirageOS - NGI Pointer
For 2021 we at robur received funding from the EU (via NGI pointer) for "Deploying MirageOS", which boils down to three parts:
- reproducible binary releases of MirageOS unikernels,
- monitoring (and other devops features: profiling) and integration into existing infrastructure,
- and further documentation and advertisement.
Of course this will all be available open source. Please get in touch via eMail (team aT robur dot coop) if you're eager to integrate MirageOS unikernels into your infrastructure.
We discovered at an initial meeting with an infrastructure provider that a DNS resolver is of interest - even more now that dnsmasq suffered from dnspooq. We are already working on an implementation of DNSSec.
MirageOS unikernels are binary reproducible, and infrastructure tools are available. We are working hard on a web interface (and REST API - think of it as "Docker Hub for MirageOS unikernels"), and more tooling to verify reproducibility.
Conex - securing the supply chain
Another grant, from the OCSF, is to continue development and deployment of conex, to bring trust into opam-repository. This is a great combination with the reproducible build efforts, and will bring much more trust into retrieving OCaml packages and using MirageOS unikernels.
MirageOS 4.0
Mirage so far still uses ocamlbuild and ocamlfind for compiling the virtual machine binary. But the switch to dune is close; a lot of effort has already gone into it. This will make the developer experience of MirageOS much smoother, with a per-unikernel monorepo workflow where you can push your changes to the individual libraries.
Footer
If you want to support our work on MirageOS unikernels, please donate to robur. I'm interested in feedback, either via twitter, hannesm@mastodon.social or an issue on the data repository.



Release of Alt-Ergo 2.4.0 — OCamlPro, Jan 22, 2021
A new release of Alt-Ergo (version 2.4.0) is available.
You can get it from Alt-Ergo’s website. The associated opam package will be published in the next few days.
This release contains some major novelties:
- Alt-Ergo supports incremental commands (push/pop) from the smt-lib standard.
- We switched command line parsing to use cmdliner. You will need to use --<option name> instead of -<option name>. Some options have also been renamed; see the manpage or the documentation.
- We improved the online documentation of our solver, available here.
This release also contains some minor novelties:
- The .mlw and .why extensions are deprecated; the use of the .ae extension is advised.
- Add --input (resp. --output) option to manually set the input (resp. output) file format.
- Add --pretty-output option to add better debug formatting and to add colors.
- Add exponentiation operation, **, in native Alt-Ergo syntax. The operator is fully interpreted when applied to constants.
- Fix --steps-count and improve the way steps are counted (AdaCore contribution).
- Add --instantiation-heuristic option that can enable lighter or heavier instantiation.
- Reduce the instantiation context (considered foralls / exists) in CDCL-Tableaux to better mimic the Tableaux-like SAT solver.
- Multiple bugfixes.
The full list of changes is available here. As usual, do not hesitate to report bugs, to ask questions, or to give your feedback!
About OCamlPro:
OCamlPro SAS is an INRIA spin-off company created in April 2011 to promote the use of the OCaml programming language in the industrial sector. It actively participates in research and development aimed at improving the safety and security of computer applications in general. You will find more information on our website: http://www.ocamlpro.com.




Coq 8.13.0 is out — Coq, Jan 07, 2021
The Coq development team is proud to announce the immediate availability of Coq 8.13.0.
Highlights:
- Introduction of primitive persistent arrays in the core language, implemented using imperative persistent arrays.
- Introduction of definitional proof irrelevance for the equality type defined in the SProp sort.
- Many improvements to the handling of notations, including number notations, recursive notations and notations with bindings. A new algorithm chooses the most precise notation available to print an expression, which might introduce changes in printing behavior.
Please see the changelog to learn more about this release.



How We Lost at The Delphi Oracle Challenge — Sebastien Mondet, Dec 23, 2020




Tarides sponsors the Oxbridge Women in Computer Science Conference 2020 — Tarides, Dec 14, 2020
The Oxbridge Women in Computer Science conference is an annual one-day event hosted by the Universities of Oxford and Cambridge (UK). The conference is free and open to everyone from any discipline, regardless of gender identity. Its purpose is to spotlight the successes of women within computer science and strengthen the network of women in computer science within a supportive and friendly environment.
This year, the conference was organised by the University of Cambridge and was held virtually on December 7th.
Tarides is very glad to sponsor this event, as we strongly believe that diversity and an inclusive culture are key factors in building a competitive and innovative company. Our employees come from 8 different countries and are 1⁄3 women. Tarides promotes transparency, openness and autonomy, creating a work atmosphere in which employees can thrive while solving novel, impactful technical challenges. Working on open-source projects makes collaboration possible with worldwide experts from both academia and industry, encouraging continuous training and education; in this context, it is very important to have teams with diverse backgrounds and experience.
The underrepresentation of women in tech, and particularly in computer science, is not a new problem, and gender equality remains a major issue in the corporate world. Celebrating female computer scientists, as the Oxbridge Women in Computer Science Conference does, will hopefully encourage more women to pursue their interests and careers in the tech field. Head here to see our currently-open positions.
For the event, we made a short video to experience a day in the life of a software engineer:




Coq 8.12.2 is out — Coq, Dec 11, 2020
We are happy to announce the release of Coq 8.12.2.
This release fixes two impactful regressions introduced in 8.12 (in notations and in the implicit argument inference of the exists tactic). See the changelog for details.




First release of MetAcsl plugin — Frama-C, Dec 09, 2020




Announcing MirageOS 3.10 — MirageOS (Hannes Mehnert), Dec 08, 2020
MirageOS 3.10 release - IPv6
IPv6 and dual (IPv4 and IPv6) stack support https://github.com/mirage/mirage/pull/1187 https://github.com/mirage/mirage/issues/1190
IPv6 code has been around in our TCP/IP stack for a long time (thanks to @nojb who developed it in 2014). Some months ago, @hannesm and @MagnusS got excited to use it. After we managed to fix some bugs, add some test cases, and write more code to set up IPv6-only and dual stacks, we are eager to share this support for MirageOS in a released version. We expect there are still bugs lingering around, but duplicate address detection (neighbour solicitation and advertisements) has been implemented, and (unless "--accept-router-advertisement=false" is given) router advertisements are decoded and used to configure the IPv6 part of the stack. Configuring a static IPv6 address is also possible (with "--ipv6=2001::42/64").
While at it, we unified the boot arguments between the different targets: namely, on Unix (when using the socket stack), you can now pass "--ipv4=127.0.0.1/24" to the same effect as the direct stack: only listen on 127.0.0.1 (the subnet mask is ignored for the Unix socket stack).
A dual stack unikernel has "--ipv4-only=BOOL" and "--ipv6-only=BOOL" parameters, so a unikernel binary can support both Internet Protocol versions, while the operator decides which protocol version to use. That is, there is now both a development-time choice (stackv4 vs stackv6 vs stackv4v6) and a run-time choice (via boot parameter).
I'm keen to remove the stackv4 & stackv6 in future versions, and always develop with dual stack (leaving it to configuration & startup time to decide whether to enable ipv4 and ipv6).
Please also note that the default IPv4 network configuration no longer uses 10.0.0.1 as default gateway (since there was no way to unset the default gateway https://github.com/mirage/mirage/issues/1147).
For unikernel developers, there are some API changes in the Mirage module:
- New "v4v6" types for IP protocols and stacks
- The ipv6_config record was adjusted in the same fashion as the ipv4_config type: it is now a record of a network (V6.Prefix.t) and gateway (V6.t option)
Some parts of the Mirage_key module were unified as well:
- Arp.ip_address is available (for a dual Ipaddr.t)
- Arg.ipv6_address replaces Arg.ipv6 (for an Ipaddr.V6.t)
- Arg.ipv6 replaces Arg.ipv6_prefix (for an Ipaddr.V6.Prefix.t)
- V6.network and V6.gateway are available, mirroring the V4 submodule
If you're ready to experiment with the dual stack: below is a diff for our basic network example (from mirage-skeleton/device-usage/network) replacing IPv4 with a dual stack, and the tlstunnel unikernel commit https://github.com/roburio/tlstunnel/commit/2cb3e5aa11fca4b48bb524f3c0dbb754a6c8739b changed tlstunnel from IPv4 stack to dual stack.
diff --git a/device-usage/network/config.ml b/device-usage/network/config.ml
index c425edb..eabc9d6 100644
--- a/device-usage/network/config.ml
+++ b/device-usage/network/config.ml
@@ -4,9 +4,9 @@ let port =
   let doc = Key.Arg.info ~doc:"The TCP port on which to listen for incoming connections." ["port"] in
   Key.(create "port" Arg.(opt int 8080 doc))
-let main = foreign ~keys:[Key.abstract port] "Unikernel.Main" (stackv4 @-> job)
+let main = foreign ~keys:[Key.abstract port] "Unikernel.Main" (stackv4v6 @-> job)
-let stack = generic_stackv4 default_network
+let stack = generic_stackv4v6 default_network
 let () =
   register "network" [
diff --git a/device-usage/network/unikernel.ml b/device-usage/network/unikernel.ml
index 5d29111..1bf1228 100644
--- a/device-usage/network/unikernel.ml
+++ b/device-usage/network/unikernel.ml
@@ -1,19 +1,19 @@
 open Lwt.Infix
-module Main (S: Mirage_stack.V4) = struct
+module Main (S: Mirage_stack.V4V6) = struct
   let start s =
     let port = Key_gen.port () in
-    S.listen_tcpv4 s ~port (fun flow ->
-      let dst, dst_port = S.TCPV4.dst flow in
+    S.listen_tcp s ~port (fun flow ->
+      let dst, dst_port = S.TCP.dst flow in
       Logs.info (fun f -> f "new tcp connection from IP %s on port %d"
-        (Ipaddr.V4.to_string dst) dst_port);
-      S.TCPV4.read flow >>= function
+        (Ipaddr.to_string dst) dst_port);
+      S.TCP.read flow >>= function
       | Ok `Eof -> Logs.info (fun f -> f "Closing connection!"); Lwt.return_unit
-      | Error e -> Logs.warn (fun f -> f "Error reading data from established connection: %a" S.TCPV4.pp_error e); Lwt.return_unit
+      | Error e -> Logs.warn (fun f -> f "Error reading data from established connection: %a" S.TCP.pp_error e); Lwt.return_unit
       | Ok (`Data b) ->
         Logs.debug (fun f -> f "read: %d bytes:\n%s" (Cstruct.len b) (Cstruct.to_string b));
-        S.TCPV4.close flow
+        S.TCP.close flow
     );
     S.listen s
Other bug fixes include https://github.com/mirage/mirage/issues/1188 (fixed in https://github.com/mirage/mirage/pull/1201) and adaptations to the charrua 1.3.0 and arp 2.3.0 changes (https://github.com/mirage/mirage/pull/1199).



ReScript 8.4 — Reason Documentation Blog, Dec 07, 2020




Coq 8.13+beta1 is out — Coq, Dec 07, 2020
The Coq development team is proud to announce the immediate availability of Coq 8.13+beta1.
Here is the full list of changes.
We encourage our users to test this beta release, in particular:
- The Windows installer is now based on the Coq Platform: this greatly simplifies its build process and makes it easy to add more packages. At the same time, this new installer was only tested by two people, so if you use Windows, please give us feedback on any problems you may encounter.
- The notation system received many fixes and improvements; in particular, the way notations are selected for printing changed: Coq now prefers notations which match a larger part of the term to abbreviate, and only secondarily takes into account the order in which notations are imported in the current scope. The new rules were designed together with power users, and tested by some of them, but our automatic testing infrastructure for regressions in notation printing is still weak. If your Coq library makes heavy use of notations, please give us feedback on any regression.
The 8.13.0 release is expected to occur about one month from now.



Memthol: exploring program profiling — OCamlPro, Dec 01, 2020
Memthol is a visualizer and analyzer for program profiling. It works on memory dumps containing information about the size and (de)allocation date of part of the allocations performed by some execution of a program.
For information regarding building memthol, features, browser compatibility… refer to the memthol github repository.
Please note that Memthol, as a side project, is a work in progress that remains in beta status for now.

Memthol’s background
Work on Memthol started more than a year ago (we published a short introductory paper at JFLA 2020). The whole idea was to build on the previous work done on ocp-memprof, and to look for extra funding to produce a usable, industrial version.
Then came the excellent memtrace profiler by Jane Street's team (congrats!).
Memthol is a self-funded side project that we think is still worth giving to the OCaml community. Its approach is valuable and complementary. It is released under the free GPL licence v3.
Memthol’s versatility: supporting memtrace’s dump format
The memtrace format is nicely designed and polished enough to be considered a future standard for other tools.
This is why Memthol supports Jane Street’s dumper format, instead of our own dumper library’s.
Why choose Rust to implement Memthol?
We’ve been exploring the Rust language for more than a year now.
The Memthol work was the opportunity to further explore this state-of-the-art language.
We are open to extra funding, to deepen the Memthol work should industrial users be interested.
Memthol’s How-to
The following steps are from the Memthol Github howto.
https://ocamlpro.github.io/memthol/mini_tutorial/
1. Introduction
2. Basics
3. Charts
4. Global Settings
5. Callstack Filters
Introduction
This tutorial deals with the BUI (Browser User Interface) aspect of profiling. How the dumps are generated is outside the scope of this document. Currently, memthol accepts memory dumps produced by Memtrace (github repository here). A memtrace dump for a program execution is a single Common Trace Format (CTF) file.
This tutorial uses CTF files from the memthol repository. All paths mentioned in the examples are from its root.
Memthol is written in Rust and is composed of
- a server, written in pure Rust, and
- a client, written in Rust and compiled to web assembly.
The server contains the client, which it will serve at some address on some port when launched.
Running Memthol
Memthol must be given a path to a CTF file generated by memtrace.
> ls rsc/dumps/ctf/flamba.ctf
rsc/dumps/ctf/flamba.ctf
> memthol rsc/dumps/ctf/flamba.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/flamba.ctf`
|===|
Basics
Our running example in this section will be rsc/dumps/ctf/mini_ae.ctf:

❯ memthol --filter_gen none rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|

Notice the odd --filter_gen none passed to memthol. Ignore it for now; it will be discussed later in this section.
Once memthol is running, http://localhost:7878/ will lead you to memthol's BUI, which should look something like this:

Click on the orange everything tab at the bottom left of the screen.

Memthol’s interface is split in three parts:
- the central, main part displays charts. There is only one here, showing the evolution of the program’s total memory size over time based on the memory dump.
- the header gives statistics about the memory dump and handles general settings. There is currently only one, the time window.
- the footer controls your filters (there is only one here), which we are going to discuss right now.
Filters
Filters allow you to split allocations and display them separately. A filter is essentially a set of allocations. Memthol has two built-in filters. The first one is the everything filter. You cannot really do anything with it except change its name and color using the filter settings in the footer.

Notice that when a filter is modified, two buttons appear in the top-left part of the footer. The first reverts the changes while the second one saves them. Let’s save these changes.

The everything filter always contains all allocations in the memory dump. It cannot be changed, apart from the cosmetic changes we just made. These changes are reverted in the rest of the section.
Custom Filters
Let’s create a new filter using the +
add button in the top-right part of the footer.

Notice that, unlike everything, the settings for our new filter have a Catch allocation if … (empty) section with a + add button. Let's click on that.

This adds a criterion to our filter. Let's modify it so that our filter catches everything of size greater than zero machine words, rename the filter, and save these changes.

The tab for our filter now shows (3) next to its name, indicating that this filter catches 3 allocations, which is all the allocations of the (tiny) dump.
Now, create a new filter and modify it so that it catches allocations made in file weak.ml. This requires
- creating a filter,
- adding a criterion to that filter,
- switching it from size to callstack,
- removing the trailing ** (anything) by erasing it,
- writing weak.ml as the last file that should appear in the callstack.
After saving it, you should get the following.

Sadly, this filter does not match anything, although some allocations fit it. This is because a custom filter F "catches" an allocation if
- all of the criteria of F are true for this allocation, and
- the allocation is not caught by any custom filter to the left of F (note that the everything filter is not a custom filter).
In other words, all allocations go through the list of custom filters from left to right, and are caught by the first filter such that all of its criteria are true for this allocation. As such, it is similar to switch/case and pattern matching.
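To make the rule concrete, here is a minimal OCaml sketch of this first-match semantics (hypothetical types for illustration; memthol itself is written in Rust):

type alloc = { size : int; callstack : string list }
type filter = { name : string; criteria : (alloc -> bool) list }

(* Example criteria, mirroring the ones built in the UI above. *)
let size_positive a = a.size > 0
let in_weak_ml a = List.exists (String.equal "weak.ml") a.callstack

(* A filter catches an allocation when all of its criteria hold. *)
let catches f a = List.for_all (fun criterion -> criterion a) f.criteria

(* An allocation is assigned to the first custom filter, scanning left to
   right, that catches it; otherwise it falls into catch-all. *)
let assign custom_filters a =
  match List.find_opt (fun f -> catches f a) custom_filters with
  | Some f -> f.name
  | None -> "catch-all"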
Let’s move our new filter to the left by clicking the left arrow next to it, and save the change.

Nice.
You can remove a filter by selecting it and clicking the - remove button in the top-right part of the footer, next to the + add filter button. This only works for custom filters; you cannot remove built-in filters.
Now, remove the first filter we created (size ≥ 0), which should give you this:

Out of nowhere, we get the second and last built-in filter: catch-all. When some allocations are not caught by any of your filters, they end up in this filter. Catch-all is not visible when it does not catch any allocation, which is why it was (mostly) not visible until now: the filters we wrote previously were catching all the allocations.
In the switch/case analogy, catch-all is the else/default branch. In pattern matching, it would be a trailing wildcard _.
So, weak.ml only catches one of the three allocations: catch-all appears and indicates it matches the remaining two.
It is also possible to write filter criteria over allocations’ callstacks. This is discussed in the Callstack Filters Section.
Filter Generation
When we launched this section's running example, we passed --filter_gen none to memthol. This is because, by default, memthol runs automatic filter generation, which scans allocations and generates filters. The default (and currently only) generator creates one filter per allocation-site file.
For more details, in particular filter generation customization, run memthol --filter_gen help.
If we relaunch the example without --filter_gen none:

❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
we get something like this (actual colors may vary):

Charts
This section uses the same running example as the last section.
❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
Filter Toggling
The first way to interact with a chart is to (de)activate filters. Each chart has its own filter tabs allowing you to toggle filters on and off.
From the initial settings

click on all filters but everything to toggle them off.

Let’s create a new chart. The only kind of chart that can be constructed currently is total size over time, so click on create chart below our current, lone chart.

Deactivate everything in the second chart.

Nice. We now have the overall total size over time in the first chart, and the details for each filter in the second one.
Next, notice that both charts have, on the left of their title, a down (first chart) or up (second chart) arrow. These arrows move the charts up and down.
On the right of the title, we have a settings ... button, which is discussed below. The next button collapses the chart. If we click on the collapse button of the first chart, it collapses and the button turns into an expand button.

The last button in the chart header removes the chart.
Chart Settings
Clicking the settings ... button in the header of any chart displays its settings. (Clicking the button again hides them.)

Currently, these chart settings only allow renaming the chart and changing its display mode.
Display Mode
In memthol, a chart can be displayed in one of three ways:
- normal, the one we used so far,
- stacked area, where the values of each filter are displayed on top of each other, and
- stacked area percent, same as stacked area but values are displayed as percents of the total.
For instance, here is the second chart from our example displayed as stacked area:

Global Settings
This section uses the same running example as the last section.
❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
There is currently only one global setting: the time window.
Time Window
The time window global setting controls the time interval displayed by all the charts.
In our example, not much is happening before (roughly) 0.065 seconds. Let's have the time window start at that point:

Similar to filter editing, we can apply or cancel this change using the two buttons that appear in the bottom-left corner of the header.
Saving these changes yields

Here is the same chart but with the time window upper bound set at 0.074.

Callstack Filters
Callstack filters are filters operating over allocation properties that are sequences of strings (potentially with some other data). Currently, this means allocation callstacks, where the strings are file names with line/column information.
String Filters
A string filter can have three shapes: an actual string value, a regex, or a match-anything / wildcard filter represented by the string "...". This wildcard filter is discussed in its own section below.
A string value is simply given as a value. To match precisely the string "file_name", one only needs to write file_name. So, a filter that matches precisely the list of strings [ "file_name_1", "file_name_2" ] will be written
string list | contains | [ file_name_1 file_name_2 ] |
A regex, on the other hand, has to be written between #" and "#. If we want the same filter as above, but want to relax the first string description to be file_name_<i> where <i> is a single digit, we write the filter as
string list | contains | [ #"file_name_[0-9]"# file_name_2 ] |
The Wildcard Filter
The wildcard filter, written ..., lazily (in general, see below) matches a repetition of any string-like element of the list. To break this definition down, let us separate two cases: the first one is when ... is not followed by another string-like filter, and the second one is when it is followed by another filter.
In the first case, ... simply matches everything. Consider for instance the filter
string list | contains | [ #"file_name_[0-9]"# ... ] |
This filter matches any list of strings that starts with a string accepted by the first regex filter. The following lists of strings are all accepted by the filter above.
[ file_name_0 ]
[ file_name_7 anything at all ]
[ file_name_3 file_name_7 ]
Now, there is one case where ... is not actually lazy: when the n string-filters after it are not .... In this case, all elements of the list but the n last ones will be skipped, leaving them for the n last string filters.
For this reason,
string list | contains | [ ... #"file_name_[0-9]"# ] |
does work as expected. For example, consider the string list
[ "some_file_name" "file_name_7" "another_file_name" "file_name_0" ]
A lazy behavior would not match. First, ... would match anything up to and excluding a string recognized by #"file_name_[0-9]"#. So ... would match some_file_name, but that's it, since file_name_7 is a match for #"file_name_[0-9]"#. Hence the filter would reject this list of strings, because there should be nothing left after the match for #"file_name_[0-9]"#, yet another_file_name and file_name_0 are still left.
Instead, the filter works as expected: ... discards all elements but the last one, file_name_0, which is accepted by #"file_name_[0-9]"#.
Callstack (Location) Filters
Allocation callstack information is a list of tuples containing:
- the name of the file,
- the line in the file,
- a column range.
Currently, the range information is ignored. The line in the file is not, and one can specify a line constraint while writing a callstack filter. The normal syntax is
<string-filter>:<line-filter>
Now, a line filter has two basic shapes:
- _ : anything,
- <number> : an actual value.
It can also be a range:
- [<basic-line-filter>, <basic-line-filter>] : a potentially open range.
(A small sketch of these line filters follows the examples below.)
Line Filter Examples
_ | matches any line at all
7 | matches line 7
[50, 102] | matches any line between 50 and 102
[50, _] | matches any line greater than 50
[_, 102] | matches any line less than 102
[_, _] | same as _ (matches any line)
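As a rough illustration, here is a minimal OCaml sketch of these line filters (hypothetical types mirroring the syntax above; Any stands for _):

type basic = Any | Line of int
type line_filter = Basic of basic | Range of basic * basic

let matches_basic b line =
  match b with Any -> true | Line n -> n = line

(* Range bounds are inclusive; Any leaves the corresponding bound open. *)
let matches_line f line =
  match f with
  | Basic b -> matches_basic b line
  | Range (lo, hi) ->
    (match lo with Any -> true | Line n -> n <= line)
    && (match hi with Any -> true | Line n -> line <= n)

(* matches_line (Range (Line 50, Any)) 107 = true, like [50, _] above *)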
Callstack Filter Examples
Whitespaces are inserted for readability but are not needed:
src/main.ml : _ | matches any line of src/main.ml
#".*/main.ml"# : 107 | matches line 107 of any main.ml file regardless of its path
About OCamlPro:
OCamlPro is a company founded in 2011, with the mission to help industrial users benefit from state-of-the-art programming languages like OCaml and Rust. We design, create and implement custom ad-hoc software for our clients. We also have long experience in developing and maintaining open-source tooling for OCaml, such as Opam, TryOCaml, ocp-indent, ocp-index, and ocp-browser, and we contribute to the core development of OCaml, notably with our work on the Flambda optimizer branch. Another area of expertise is that of Formal Methods, with tools such as our SMT solver Alt-Ergo (check our Alt-Ergo Users' Club). We also provide vocational trainings in OCaml and Rust, and we can build courses on formal methods on demand. Do not hesitate to reach out by email: contact@ocamlpro.com.



Growing the Hardcaml toolset — Jane Street, Dec 01, 2020
I am pleased to announce that we have recently released a slew of new Hardcaml libraries!




Editor Plugin for VSCode and Vim Officially Released! — Reason Documentation Blog, Nov 26, 2020
Syndications


- Ahrefs
- Andrej Bauer
- Andy Ray
- Ashish Agarwal
- CUFP
- Cameleon news
- Caml INRIA
- Caml Spotting
- Coherent Graphics
- Coq
- Cranial Burnout
- Cryptosense
- Daniel Bünzli
- Daniel Bünzli (log)
- Daniil Baturin
- Dario Teixeira
- David Baelde
- David Mentré
- David Teller
- Eray Özkural
- Erik de Castro Lopo
- Etienne Millon
- Frama-C
- GaGallium
- Gabriel Radanne
- Gaius Hammond
- Gemma Gordon (OCaml Labs)
- Gerd Stolpmann
- GitHub Jobs
- Grant Rettke
- Hannes Mehnert
- Hong bo Zhang
- Jake Donham
- Jane Street
- KC Sivaramakrishnan
- Leo White
- Magnus Skjegstad
- Marc Simpson
- Matthias Puech
- Matías Giovannini
- Mike Lin
- Mike McClurg
- Mindy Preston
- MirageOS
- OCaml Book
- OCaml Labs compiler hacking
- OCaml Platform
- OCaml-Java
- OCamlCore.com
- OCamlPro
- ODNS project
- Ocaml XMPP project
- Ocsigen project
- Opa
- Orbitz
- Paolo Donadeo
- Perpetually Curious
- Peter Zotov
- Psellos
- Reason Documentation Blog
- Richard Jones
- Rudenoise
- Rudi Grinberg
- Sebastien Mondet
- Shayne Fletcher
- Stefano Zacchiroli
- Tarides
- The BAP Blog
- Thomas Leonard
- Till Varoquaux
- Xinuo Chen
- Yan Shvartzshnaider