path: root/_posts
author EuAndreh <eu@euandre.org> 2020-10-05 21:27:57 -0300
committer EuAndreh <eu@euandre.org> 2020-10-10 14:31:24 -0300
commit a7c4db7e9215694ef6c50debcc0b4e7402265687 (patch)
tree d45b89f1758b34416e5157cb0e7404276c6fa49a /_posts
parent cargo2nix: Use overlay on example (diff)
download euandre.org-a7c4db7e9215694ef6c50debcc0b4e7402265687.tar.gz
         euandre.org-a7c4db7e9215694ef6c50debcc0b4e7402265687.tar.xz
Refactor i18n structure, remove layouts, add slides
Yep, this commit is too big, but I didn't want to take the trouble of
splitting it now.

- _config.yml translation keys are now symmetrical on the entity:
  articles, pastebins, tils, slides, etc.;
- _posts were moved to _articles: the _posts collection had special
  treatment in Jekyll which I wanted to avoid;
- the filtering of entries for the Atom feed is now done inside the
  _includes/feed.atom file instead of in every feed file;
- all entities are now dealt with using the pluralized name: articles,
  pastebins, tils, slides. No more inconsistencies in the key names;
  they should now only make sense as the translation value in the
  dictionary;
- add base reveal.js infrastructure, with Jekyll generating the listing
  page and nothing else.
Diffstat (limited to '_posts')
-rw-r--r--_posts/2018-07-17-running-guix-on-nixos.md196
-rw-r--r--_posts/2018-08-01-verifying-npm-ci-reproducibility.md147
-rw-r--r--_posts/2018-12-21-using-youtube-dl-to-manage-youtube-subscriptions.md273
-rw-r--r--_posts/2019-06-02-stateless-os.md144
-rw-r--r--_posts/2020-08-10-guix-inside-sourcehut-builds-sr-ht-ci.md128
-rw-r--r--_posts/2020-08-31-the-database-i-wish-i-had.md295
-rw-r--r--_posts/2020-10-05-cargo2nix-dramatically-simpler-rust-in-nix.md76
-rw-r--r--_posts/2020-10-05-swift2nix-run-swift-inside-nix-builds.md197
8 files changed, 0 insertions, 1456 deletions
diff --git a/_posts/2018-07-17-running-guix-on-nixos.md b/_posts/2018-07-17-running-guix-on-nixos.md
deleted file mode 100644
index e409f3c..0000000
--- a/_posts/2018-07-17-running-guix-on-nixos.md
+++ /dev/null
@@ -1,196 +0,0 @@
----
-title: Running Guix on NixOS
-date: 2018-07-17
-layout: post
-lang: en
-ref: running-guix-on-nixos
----
-I wanted to run
-Guix on a NixOS machine. Even though the Guix manual explains how to do it
-[step by step][0], I needed a few extra steps to make it work properly.
-
-[0]: https://www.gnu.org/software/guix/manual/en/html_node/Binary-Installation.html#Binary-Installation
-
-I couldn't just install GuixSD because my wireless network card
-doesn't have any free/libre drivers (yet).
-
-## Creating `guixbuilder` users
-
-Guix requires you to create non-root users that will be used to perform
-the builds in the isolated environments.
-
-The [manual][1] already provides you with a ready-to-run (as root) command for
-creating the build users:
-
-[1]: https://www.gnu.org/software/guix/manual/en/html_node/Build-Environment-Setup.html#Build-Environment-Setup
-
-```bash
-groupadd --system guixbuild
-for i in `seq -w 1 10`;
-do
- useradd -g guixbuild -G guixbuild \
- -d /var/empty -s `which nologin` \
- -c "Guix build user $i" --system \
- guixbuilder$i;
-done
-```
-
-However, in my personal NixOS I have disabled [`users.mutableUsers`][2], which
-means that even if I run the above command, the users will be removed once
-I rebuild my OS:
-
-[2]: https://nixos.org/nixos/manual/index.html#sec-user-management
-
-```shell
-$ sudo nixos-rebuild switch
-(...)
-removing user ‘guixbuilder7’
-removing user ‘guixbuilder3’
-removing user ‘guixbuilder10’
-removing user ‘guixbuilder1’
-removing user ‘guixbuilder6’
-removing user ‘guixbuilder9’
-removing user ‘guixbuilder4’
-removing user ‘guixbuilder2’
-removing user ‘guixbuilder8’
-removing user ‘guixbuilder5’
-(...)
-```
-
-Instead of enabling `users.mutableUsers`, I could create the Guix users by
-adding them to my system configuration:
-
-```nix
-{ config, pkgs, ...}:
-
-{
-
-  # ... NixOS usual config elided ...
-
- users = {
- mutableUsers = false;
-
- extraUsers =
- let
- andrehUser = {
- andreh = {
- # my custom user config
- };
- };
- buildUser = (i:
- {
- "guixbuilder${i}" = { # guixbuilder$i
- group = "guixbuild"; # -g guixbuild
- extraGroups = ["guixbuild"]; # -G guixbuild
- home = "/var/empty"; # -d /var/empty
- shell = pkgs.nologin; # -s `which nologin`
-          description = "Guix build user ${i}"; # -c "Guix build user $i"
- isSystemUser = true; # --system
- };
- }
- );
- in
- # merge all users
- pkgs.lib.fold (str: acc: acc // buildUser str)
- andrehUser
- # for i in `seq -w 1 10`
- (map (pkgs.lib.fixedWidthNumber 2) (builtins.genList (n: n+1) 10));
-
- extraGroups.guixbuild = {
- name = "guixbuild";
- };
- };
-}
-```
-
-Here I used `fold` and the `//` operator to merge all of the
-configuration sets into a single `extraUsers` value.
-
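-To make the merge semantics concrete, here is what `//` and `fold` do in
-a `nix repl` session (an illustrative sketch; the exact REPL invocation
-may vary with your Nix version):
-
-```shell
-$ nix repl '<nixpkgs>'
-nix-repl> { a = 1; } // { b = 2; }
-{ a = 1; b = 2; }
-
-nix-repl> lib.fold (set: acc: acc // set) { } [ { a = 1; } { b = 2; } ]
-{ a = 1; b = 2; }
-```
-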
-## Creating the `systemd` service
-
-One other thing missing was the `systemd` service.
-
-First, I couldn't just copy the `.service` file to `/etc`, since in NixOS
-that folder isn't writable. But also I wanted the service to be better
-integrated with the OS.
-
-That was a little easier than creating the users: all I had to do was translate
-the provided [`guix-daemon.service.in`][3] configuration into an equivalent Nix
-expression:
-
-[3]: https://git.savannah.gnu.org/cgit/guix.git/tree/etc/guix-daemon.service.in?id=00c86a888488b16ce30634d3a3a9d871ed6734a2
-
-```ini
-# This is a "service unit file" for the systemd init system to launch
-# 'guix-daemon'. Drop it in /etc/systemd/system or similar to have
-# 'guix-daemon' automatically started.
-
-[Unit]
-Description=Build daemon for GNU Guix
-
-[Service]
-ExecStart=/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild
-Environment=GUIX_LOCPATH=/root/.guix-profile/lib/locale
-RemainAfterExit=yes
-StandardOutput=syslog
-StandardError=syslog
-
-# See <https://lists.gnu.org/archive/html/guix-devel/2016-04/msg00608.html>.
-# Some package builds (for example, go@1.8.1) may require even more than
-# 1024 tasks.
-TasksMax=8192
-
-[Install]
-WantedBy=multi-user.target
-```
-
-This sample `systemd` configuration file became:
-
-```nix
-guix-daemon = {
- enable = true;
- description = "Build daemon for GNU Guix";
- serviceConfig = {
- ExecStart = "/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild";
- Environment="GUIX_LOCPATH=/root/.guix-profile/lib/locale";
- RemainAfterExit="yes";
- StandardOutput="syslog";
- StandardError="syslog";
- TaskMax= "8192";
- };
- wantedBy = [ "multi-user.target" ];
-};
-```
-
-There you go! After running `sudo nixos-rebuild switch` I could get Guix
-up and running:
-
-```bash
-$ guix package -i hello
-The following package will be installed:
- hello 2.10 /gnu/store/bihfrh609gkxb9dp7n96wlpigiv3krfy-hello-2.10
-
-substitute: updating substitutes from 'https://mirror.hydra.gnu.org'... 100.0%
-The following derivations will be built:
- /gnu/store/nznmdn6inpwxnlkrasydmda4s2vsp9hg-profile.drv
- /gnu/store/vibqrvw4c8lacxjrkqyzqsdrmckv77kq-fonts-dir.drv
- /gnu/store/hi8alg7wi0wgfdi3rn8cpp37zhx8ykf3-info-dir.drv
- /gnu/store/cvkbp378cvfjikz7mjymhrimv7j12p0i-ca-certificate-bundle.drv
- /gnu/store/d62fvxymnp95rzahhmhf456bsf0xg1c6-manual-database.drv
-Creating manual page database...
-1 entries processed in 0.0 s
-2 packages in profile
-$ hello
-Hello, world!
-```
-
-Some improvements to this approach are:
-
-1. looking into [NixOS modules][4] and trying to bundle everything together
- into a single logical unit;
-2. [building Guix from source][5] and sharing the Nix store and daemon with Guix.
-
-Happy Guix/Nix hacking!
-
-[4]: https://nixos.org/nixos/manual/index.html#sec-writing-modules
-[5]: https://www.gnu.org/software/guix/manual/en/html_node/Requirements.html#Requirements
diff --git a/_posts/2018-08-01-verifying-npm-ci-reproducibility.md b/_posts/2018-08-01-verifying-npm-ci-reproducibility.md
deleted file mode 100644
index ff983e5..0000000
--- a/_posts/2018-08-01-verifying-npm-ci-reproducibility.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Verifying "npm ci" reproducibility
-date: 2018-08-01
-layout: post
-lang: en
-ref: veryfing-npm-ci-reproducibility
-updated_at: 2019-05-22
----
-When [npm@5](https://blog.npmjs.org/post/161081169345/v500) came bringing
-[package-locks](https://docs.npmjs.com/files/package-locks) with it, I was
-confused about the benefits it provided, since running `npm install` more than
-once could resolve all the dependencies again and yield yet another fresh
-`package-lock.json` file. The message saying "you should add this file to
-version control" left me hesitant on what to do[^package-lock-message].
-
-However the [addition of `npm ci`](https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable)
-filled this gap: it's a stricter variation of `npm install` which
-guarantees that "[subsequent installs are able to generate identical trees](https://docs.npmjs.com/files/package-lock.json)". But are they
-really identical? I could see that I didn't have the same problems of
-different installation outputs, but I didn't know for **sure** if it
-was really identical.
-
-## Computing the hash of a directory's content
-
-I quickly searched for a way to check for the hash signature of an
-entire directory tree, but I couldn't find one. I've made a poor
-man's [Merkle tree](https://en.wikipedia.org/wiki/Merkle_tree)
-implementation using `sha256sum` and a few piped commands at the
-terminal:
-
-```bash
-merkle-tree () {
- dirname="${1-.}"
- pushd "$dirname"
- find . -type f | \
- sort | \
- xargs -I{} sha256sum "{}" | \
- sha256sum | \
- awk '{print $1}'
- popd
-}
-```
-
-Going through it line by line:
-
-- #1 we define a Bash function called `merkle-tree`;
-- #2 it accepts a single argument: the directory to compute the
- merkle tree from. If nothing is given, it runs on the current
- directory (`.`);
-- #3 we go to the directory, so we don't get different prefixes in
- `find`'s output (like `../a/b`);
-- #4 we get all files from the directory tree. Since we're using
- `sha256sum` to compute the hash of the file contents, we need to
- filter out folders from it;
-- #5 we need to sort the output, since different file systems and
- `find` implementations may return files in different orders;
-- #6 we use `xargs` to compute the hash of each file individually
-  through `sha256sum`. Since a file name may contain spaces, we need to
-  escape it with quotes;
-- #7 we compute the hash of the combined hashes. Since `sha256sum`
-  output is formatted like `<hash> <filename>`, it produces a
-  different final hash if a file ever changes name without changing
-  its content;
-- #8 we get the final hash output, excluding the `<filename>` (which
- is `-` in this case, aka `stdin`).
-
-### Positive points:
-
-1. it ignores timestamps: running it more than once on different
-   installations yields the same hash;
-2. the name of the file is included in the final hash computation.
-
-### Limitations:
-
-1. it ignores empty folders from the hash computation;
-2. the implementation's only goal is to represent using a digest
- whether the content of a given directory is the same or not. Leaf
- presence checking is obviously missing from it.
-
-### Testing locally with sample data
-
-```bash
-mkdir /tmp/merkle-tree-test/
-cd /tmp/merkle-tree-test/
-mkdir -p a/b/ a/c/ d/
-echo "one" > a/b/one.txt
-echo "two" > a/c/two.txt
-echo "three" > d/three.txt
-merkle-tree . # output is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-merkle-tree . # output still is be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-echo "four" > d/four.txt
-merkle-tree . # output is now b5464b958969ed81815641ace96b33f7fd52c20db71a7fccc45a36b3a2ae4d4c
-rm d/four.txt
-merkle-tree . # output back to be343bb01fe00aeb8fef14a3e16b1c3d1dccbf86d7e41b4753e6ccb7dc3a57c3
-echo "hidden-five" > a/b/one.txt
-merkle-tree . # output changed 471fae0d074947e4955e9ac53e95b56e4bc08d263d89d82003fb58a0ffba66f5
-```
-
-It seems to work for this simple test case.
-
-You can try copying and pasting it to verify the hash signatures.
-
-## Using `merkle-tree` to check the output of `npm ci`
-
-*I've done all of the following using Node.js v8.11.3 and npm@6.1.0.*
-
-In this test case I'll take the main repo of
-[Lerna](https://lernajs.io/)[^lerna-package-lock]:
-
-```bash
-cd /tmp/
-git clone https://github.com/lerna/lerna.git
-cd lerna/
-git checkout 57ff865c0839df75dbe1974971d7310f235e1109
-npm ci
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-rm -rf node_modules/
-npm ci
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-npm ci # test if it also works with an existing node_modules/ folder
-merkle-tree node_modules/ # outputs 11e218c4ac32fac8a9607a8da644fe870a25c99821167d21b607af45699afafa
-```
-
-Good job `npm ci` :)
-
-#6 and #9 take some time to run (21 seconds on my machine), but this
-specific use case isn't performance sensitive. The slowest step is
-computing the hash of each individual file.
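-
-If that ever became a problem, the per-file hashing could be
-parallelized with `xargs -P` (a sketch, not part of the original
-function; note the extra `sort` needed to keep the combined output
-deterministic, which also means its digests differ from the serial
-version's):
-
-```bash
-merkle-tree-parallel () {
-  dirname="${1-.}"
-  pushd "$dirname"
-  find . -type f | \
-    sort | \
-    xargs -P "$(nproc)" -I{} sha256sum "{}" | \
-    sort | \
-    sha256sum | \
-    awk '{print $1}'
-  popd
-}
-```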
-
-## Conclusion
-
-`npm ci` really "generates identical trees".
-
-I'm not aware of any other existing solution for verifying the hash
-signature of a directory. If you know any I'd [like to know](mailto:eu@euandre.org).
-
-## *Edit*
-
-2019/05/22: Fix spelling.
-
-[^package-lock-message]: The
- [documentation](https://docs.npmjs.com/cli/install#description) claims `npm
- install` is driven by the existing `package-lock.json`, but that's actually
- [a little bit tricky](https://github.com/npm/npm/issues/17979#issuecomment-332701215).
-
-[^lerna-package-lock]: Finding a big known repo that actually committed the
- `package-lock.json` file was harder than I expected.
diff --git a/_posts/2018-12-21-using-youtube-dl-to-manage-youtube-subscriptions.md b/_posts/2018-12-21-using-youtube-dl-to-manage-youtube-subscriptions.md
deleted file mode 100644
index a9a605b..0000000
--- a/_posts/2018-12-21-using-youtube-dl-to-manage-youtube-subscriptions.md
+++ /dev/null
@@ -1,273 +0,0 @@
----
-title: Using "youtube-dl" to manage YouTube subscriptions
-date: 2018-12-21
-layout: post
-lang: en
-ref: using-youtube-dl-to-manage-youtube-subscriptions
----
-I've recently read the
-[announcement](https://www.reddit.com/r/DataHoarder/comments/9sg8q5/i_built_a_selfhosted_youtube_subscription_manager/)
-of a very nice [self-hosted YouTube subscription
-manager](https://github.com/chibicitiberiu/ytsm). I haven't used
-YouTube's built-in subscriptions for a while now, and haven't missed
-it at all. When I saw the announcement, I considered writing about the
-solution I've built on top of [youtube-dl](https://youtube-dl.org/).
-
-## Background: the problem with YouTube
-
-In many ways, I agree with [André Staltz's view on data ownership and
-privacy](https://staltz.com/what-happens-when-you-block-internet-giants.html):
-
-> I started with the basic premise that "I want to be in control of my
-> data". Sometimes that meant choosing when to interact with an internet
-> giant and how much I feel like revealing to them. Most of times it
-> meant not interacting with them at all. I don't want to let them be in
-> full control of how much they can know about me. I don't want to be in
-> autopilot mode. (...) Which leads us to YouTube. While I was able to
-> find alternatives to Gmail (Fastmail), Calendar (Fastmail), Translate
-> (Yandex Translate), etc, YouTube remains as the most indispensable
-> Google-owned web service. It is really really hard to avoid consuming
-> YouTube content. It was probably the smartest startup acquisition
-> ever. My privacy-oriented alternative is to watch YouTube videos
-> through Tor, which is technically feasible but not polite to use the
-> Tor bandwidth for these purposes. I'm still scratching my head with
-> this issue.
-
-Even though I don't use most alternative services he mentions, I do
-watch videos from YouTube. But I also feel uncomfortable logging in to
-YouTube with a Google account, watching videos, creating playlists and
-similar things.
-
-Using the mobile app is worse: you can't even block ads in there,
-and you have even less control over what you share with YouTube and Google.
-
-## youtube-dl
-
-youtube-dl is a command-line tool for downloading videos, from YouTube
-and [many other sites](https://rg3.github.io/youtube-dl/supportedsites.html):
-
-```shell
-$ youtube-dl https://www.youtube.com/watch?v=rnMYZnY3uLA
-[youtube] rnMYZnY3uLA: Downloading webpage
-[youtube] rnMYZnY3uLA: Downloading video info webpage
-[download] Destination: A Origem da Vida _ Nerdologia-rnMYZnY3uLA.mp4
-[download] 100% of 32.11MiB in 00:12
-```
-
-It can be used to download individual videos as shown above, but it
-also has some interesting flags that we can use:
-
-- `--output`: use a custom template to create the name of the
- downloaded file;
-- `--download-archive`: use a text file for recording and remembering
- which videos were already downloaded;
-- `--prefer-free-formats`: prefer free video formats, like `webm`,
- `ogv` and Matroska `mkv`;
-- `--playlist-end`: how many videos to download from a "playlist" (a
- channel, a user or an actual playlist);
-- `--write-description`: write the video description to a
- `.description` file, useful for accessing links and extra content.
-
-Putting it all together:
-
-```shell
-$ youtube-dl "https://www.youtube.com/channel/UClu474HMt895mVxZdlIHXEA" \
- --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
- --prefer-free-formats \
- --playlist-end 20 \
- --write-description \
- --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
-```
-
-This will download the latest 20 videos from the selected channel, and
-write down the video IDs in the `youtube-dl-seen.conf` file. Running it
-again immediately afterwards won't have any effect.
-
-If the channel posts one more video, running the same command again will
-download only the last video, since the other 19 were already
-downloaded.
-
-With this basic setup you have a minimal subscription system at work,
-and you can create some functions to help you manage that:
-
-```shell
-#!/bin/sh
-
-export DEFAULT_PLAYLIST_END=15
-
-download() {
- youtube-dl "$1" \
- --download-archive ~/Nextcloud/cache/youtube-dl-seen.conf \
- --prefer-free-formats \
- --playlist-end $2 \
- --write-description \
- --output "~/Downloads/yt-dl/%(uploader)s/%(upload_date)s - %(title)s.%(ext)s"
-}
-export -f download
-
-
-download_user() {
- download "https://www.youtube.com/user/$1" ${2-$DEFAULT_PLAYLIST_END}
-}
-export -f download_user
-
-
-download_channel() {
- download "https://www.youtube.com/channel/$1" ${2-$DEFAULT_PLAYLIST_END}
-}
-export -f download_channel
-
-
-download_playlist() {
- download "https://www.youtube.com/playlist?list=$1" ${2-$DEFAULT_PLAYLIST_END}
-}
-export -f download_playlist
-```
-
-With these functions, you can now have a subscription fetching script to
-download the latest videos from your favorite channels:
-
-```shell
-#!/bin/sh
-
-download_user ClojureTV 15
-download_channel "UCmEClzCBDx-vrt0GuSKBd9g" 100
-download_playlist "PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX" 15
-```
-
-Now, whenever you want to watch the latest videos, just run the above
-script and you'll get all of them on your local machine.
-
-## Tradeoffs
-
-### I've made it for myself, with my use case in mind
-
-1. Offline
-
-   My internet speed is somewhat reasonable[^internet-speed], but it is really
-   unstable. Either at work or at home, it's not uncommon to lose internet
-   access for 2 minutes 3~5 times every day, and to stay completely offline for a
- couple of hours once every week.
-
-   Working through the hassle of keeping a playlist on disk has paid
-   off many, many times. Sometimes I don't even notice when the
-   connection drops for a few minutes, because I'm watching a video and
- working on some document, all on my local computer.
-
-   There's also no quality adjustment like in YouTube's web player: I
-   always pick the highest quality and it doesn't change during the
- video. For some types of content, like a podcast with some tiny
- visual resources, this doesn't change much. For other types of
- content, like a keynote presentation with text written on the
- slides, watching on 144p isn't really an option.
-
- If the internet connection drops during the video download,
- youtube-dl will resume from where it stopped.
-
-   This is an offline-first benefit that I really like, and works well
- for me.
-
-2. Sync the "seen" file
-
- I already have a running instance of Nextcloud, so just dumping the
- `youtube-dl-seen.conf` file inside Nextcloud was a no-brainer.
-
- You could try putting it in a dedicated git repository, and wrap the
- script with an autocommit after every run. If you ever had a merge
- conflict, you'd simply accept all changes and then run:
-
- ```shell
-    $ sort -u youtube-dl-seen.conf > youtube-dl-seen.conf.tmp
-    $ mv youtube-dl-seen.conf.tmp youtube-dl-seen.conf
- ```
-
-    to tidy up the file (going through a temporary file matters: redirecting
-    the output directly onto the input would truncate it before it is read).
-
-3. Doesn't work on mobile
-
-   The primary device I use every day is my laptop, not my phone. It
- works well for me this way.
-
- Also, it's harder to add ad-blockers to mobile phones, and most
- mobile software still depends on Google's and Apple's blessing.
-
- If you wish, you can sync the videos to the SD card periodically,
- but that's a bit of extra manual work.
-
-### The Good
-
-1. Better privacy
-
- We don't even have to configure the ad-blocker to keep ads and
- trackers away!
-
- YouTube still has your IP address, so using a VPN is always a good
- idea. However, a timing analysis would be able to identify you
- (considering the current implementation).
-
-2. No need to self-host
-
- There's no host that needs maintenance. Everything runs locally.
-
- As long as you keep youtube-dl itself up to date and sync your
- "seen" file, there's little extra work to do.
-
-3. Track your subscriptions with git
-
- After creating a `subscriptions.sh` executable that downloads all
- the videos, you can add it to git and use it to track metadata about
- your subscriptions.
-
-### The Bad
-
-1. Maximum playlist size is your disk size
-
- This is a good thing for getting a realistic view on your actual
- "watch later" list. However I've run out of disk space many
- times, and now I need to be more aware of how much is left.
-
-### The Ugly
-
-We can only avoid all the bad parts of YouTube with youtube-dl as long
-as YouTube keeps the videos public and programmatically accessible. If
-YouTube ever blocks that, we'd lose the ability to consume content this
-way, but also lose confidence in YouTube as a healthy
-repository of videos on the internet.
-
-## Going beyond
-
-Since you're running everything locally, here are some possibilities to
-be explored:
-
-### A playlist that is too long for being downloaded all at once
-
-You can wrap the `download_playlist` function (let's call the wrapper
-`inc_download`) and instead of passing it a fixed number to the
-`--playlist-end` parameter, you can store the `$n` in a folder
-(something like `$HOME/.yt-db/$PLAYLIST_ID`) and increment it by `$step`
-every time you run `inc_download`.
-
-This way you can incrementally download videos from a huge playlist
-without filling your disk with gigabytes of content all at once.
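-
-Here's a minimal sketch of such a wrapper, building on the functions
-above (the default `$step` and the error handling are arbitrary choices
-of mine):
-
-```shell
-inc_download() {
-  playlist_id="$1"
-  step="${2-5}"
-  mkdir -p "$HOME/.yt-db/"
-  # read the previous --playlist-end value, defaulting to 0 on the first run
-  n="$(cat "$HOME/.yt-db/$playlist_id" 2>/dev/null || echo 0)"
-  n=$((n + step))
-  echo "$n" > "$HOME/.yt-db/$playlist_id"
-  download_playlist "$playlist_id" "$n"
-}
-```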
-
-### Multiple computer scenario
-
-The `download_playlist` function could be aware of the specific machine
-that it is running on and apply specific policies depending on the
-machine: always download everything; only download videos that aren't
-present anywhere else; etc.
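-
-As an illustration, the policy could be as simple as a `case` on the
-hostname (a sketch; the machine names and numbers are made up):
-
-```shell
-machine_playlist_end() {
-  case "$(hostname)" in
-    desktop) echo 100 ;;                     # plenty of disk: download a lot
-    laptop)  echo "$DEFAULT_PLAYLIST_END" ;; # default behaviour
-    *)       echo 5 ;;                       # unknown machine: be conservative
-  esac
-}
-
-download_playlist "PLqG7fA3EaMRPzL5jzd83tWcjCUH9ZUsbX" "$(machine_playlist_end)"
-```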
-
-## Conclusion
-
-youtube-dl is a great tool to keep at hand. It covers a really large
-range of video websites and works robustly.
-
-Feel free to copy and modify this code, and [send me](mailto:eu@euandre.org)
-suggestions of improvements or related content.
-
-## *Edit*
-
-2019/05/22: Fix spelling.
-
-[^internet-speed]: Considering how expensive it is and the many ways it could be
- better, but also how much it has improved over the last years, I say it's
- reasonable.
diff --git a/_posts/2019-06-02-stateless-os.md b/_posts/2019-06-02-stateless-os.md
deleted file mode 100644
index 2614b44..0000000
--- a/_posts/2019-06-02-stateless-os.md
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Using NixOS as a stateless workstation
-date: 2019-06-02
-layout: post
-lang: en
-ref: stateless-os
----
-Last week[^last-week] I changed back to an old[^old-computer] Samsung laptop, and installed
-[NixOS](https://nixos.org/) on it.
-
-After using NixOS on another laptop for around two years, I wanted to
-verify how reproducible my desktop environment was, and how far NixOS
-can actually go in recreating my whole OS from my configuration
-files and personal data. I gravitated towards NixOS after trying (and
-failing) to create an `install.sh` script that would imperatively
-install and configure my whole OS using apt-get. When I found a
-GNU/Linux distribution that was built on top of the idea of
-declaratively specifying the whole OS I was automatically convinced[^convinced-by-declarative-aspect].
-
-I was impressed. Even though I've been experiencing the benefits of Nix
-isolation daily, I always felt skeptical that something would be
-missing, because the devil is always in the details. But the result was
-much better than expected!
-
-There were only 2 missing configurations:
-
-1. tap-to-click on the touchpad wasn't enabled by default;
-2. the default theme from the gnome-terminal is "Black on white"
- instead of "White on black".
-
-That's all.
-
-I haven't checked if I can configure those in the NixOS GNOME module, but I
-guess both are scriptable and could be set in a fictional `setup.sh`
-run.
-
-This makes me really happy, actually. Happier than I anticipated.
-
-Having such a powerful declarative OS makes me feel like my data is the
-really important stuff (as it should be), and I can interact with it on
-any workstation. All I need is an internet connection and a few hours to
-download everything. It feels like my physical workstation and the
-installed OS are serving me and my data, instead of me feeling
-hostage to the specific OS configuration of the moment. Having a few
-backup copies of everything important extends such peacefulness.
-
-After this positive experience with recreating my OS from simple Nix
-expressions, I started to wonder how far I could go with this, and
-started considering other areas of improvement:
-
-### First run on a fresh NixOS installation
-
-Right now the initial setup relies on non-declarative manual tasks, like
-decrypting some credentials, or manually downloading **this** git
-repository with specific configurations before **that** one.
-
-I wonder what some areas of improvement are on this topic, and if
-investing in it is worth it (both time-wise and happiness-wise).
-
-### Emacs
-
-Right now I'm using [Spacemacs](http://spacemacs.org/), which is a
-community package curation and configuration on top of
-[Emacs](https://www.gnu.org/software/emacs/).
-
-Spacemacs does support the notion of
-[layers](http://spacemacs.org/doc/LAYERS.html), which you can
-declaratively specify and let Spacemacs do the rest.
-
-However this solution isn't nearly as robust as Nix: being purely
-functional, Nix does describe everything required to build a derivation,
-and knows how to do so. Spacemacs is closer to more traditional package
-managers: even though the layers list is declarative, the installation
-is still very much imperative. I've had trouble with Spacemacs not
-behaving the same on different computers, both with identical
-configurations, only brought to convergence back again after a
-`git clean -fdx` inside `~/.emacs.d/`.
-
-The ideal solution would be managing Emacs packages with Nix itself.
-After a quick search I found that [there is support for Emacs
-packages in
-Nix](https://nixos.org/nixos/manual/index.html#module-services-emacs-adding-packages).
-So far I was only aware of [Guix support for Emacs packages](https://www.gnu.org/software/guix/manual/en/html_node/Application-Setup.html#Emacs-Packages).
-
-This isn't a trivial change because Spacemacs does include extra
-curation and configuration on top of Emacs packages. I'm not sure the
-best way to improve this right now.
-
-### myrepos
-
-I'm using [myrepos](https://myrepos.branchable.com/) to manage all my
-git repositories, and the general rule I apply is to add any
-repository-specific configuration in myrepos' `checkout` phase:
-
-```shell
-# sample ~/.mrconfig file snippet
-[dev/guix/guix]
-checkout =
- git clone https://git.savannah.gnu.org/git/guix.git guix
- cd guix/
- git config sendemail.to guix-patches@gnu.org
-```
-
-This way when I clone this repo again the email sending is already
-pre-configured.
-
-This works well enough, but the solution is too imperative, and my
-`checkout` phases tend to become brittle over time if not enough care is
-taken.
-
-### GNU Stow
-
-For my home profile and personal configuration I already have a few
-dozen symlinks that I manage manually. This has worked so far, but
-the solution is sometimes fragile and [not declarative at
-all](https://git.sr.ht/~euandreh/dotfiles/tree/316939aa215181b1d22b69e94241eef757add98d/bash/symlinks.sh#L14-75).
-I wonder if something like [GNU
-Stow](https://www.gnu.org/software/stow/) can help me simplify this.
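-
-For reference, the basic Stow workflow looks roughly like this (a
-sketch; the `~/dotfiles/` layout and the `bash` package name are
-hypothetical):
-
-```shell
-# ~/dotfiles/bash/ mirrors the target tree, e.g. ~/dotfiles/bash/.bashrc
-cd ~/dotfiles/
-stow --verbose --target="$HOME" bash  # creates ~/.bashrc -> dotfiles/bash/.bashrc
-stow --delete --target="$HOME" bash   # removes the managed symlinks
-```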
-
-## Conclusion
-
-I'm really satisfied with NixOS, and I intend to keep using it. If what
-I've said interests you, maybe try tinkering with the [Nix package
-manager](https://nixos.org/nix/) (not the whole NixOS) on your current
-distribution (it can live alongside any other package manager).
-
-If you have experience with declarative Emacs package management, GNU
-Stow or any similar tool, etc., [I'd like some
-tips](mailto:eu@euandre.org). If you don't have any experience at all,
-[I'd still love to hear from you](mailto:eu@euandre.org).
-
-[^last-week]: "Last week" as of the start of this writing, so around the end of
- May 2019.
-
-[^old-computer]: I was using a 32GB RAM, i7 and 250GB SSD Samsung laptop. The
- switch was back to a 8GB RAM, i5 and 500GB HDD Dell laptop. The biggest
- difference I noticed was on faster memory, both RAM availability and the
- disk speed, but I had 250GB less local storage space.
-
-[^convinced-by-declarative-aspect]: The declarative configuration aspect is
- something that I now completely take for granted, and wouldn't consider
- using something which isn't declarative. A good metric to show this is me
- realising that I can't pinpoint the moment when I decided to switch to
- NixOS. It's like I had a distant past when this wasn't true.
diff --git a/_posts/2020-08-10-guix-inside-sourcehut-builds-sr-ht-ci.md b/_posts/2020-08-10-guix-inside-sourcehut-builds-sr-ht-ci.md
deleted file mode 100644
index 3ce2acf..0000000
--- a/_posts/2020-08-10-guix-inside-sourcehut-builds-sr-ht-ci.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-title: Guix inside sourcehut builds.sr.ht CI
-date: 2020-08-10
-updated_at: 2020-08-19
-layout: post
-lang: en
-ref: guix-sourcehut-ci
----
-After the release of the [NixOS images in builds.sr.ht][0] and much
-usage of it, I also started looking at [Guix][1] and
-wondered if I could get it on the awesome builds.sr.ht service.
-
-[0]: https://man.sr.ht/builds.sr.ht/compatibility.md#nixos
-[1]: https://guix.gnu.org/
-
-The Guix manual section on the [binary installation][2] is very thorough, and
-even a [shell installer script][3] is provided, but it is geared towards someone
-installing Guix on their personal computer, and relies heavily on interactive
-input.
-
-[2]: https://guix.gnu.org/manual/en/guix.html#Binary-Installation
-[3]: https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
-
-I developed the following set of scripts that I have been using for some time to
-run Guix tasks inside builds.sr.ht jobs. First, `install-guix.sh`:
-
-```shell
-#!/usr/bin/env bash
-set -x
-set -Eeuo pipefail
-
-VERSION='1.0.1'
-SYSTEM='x86_64-linux'
-BINARY="guix-binary-${VERSION}.${SYSTEM}.tar.xz"
-
-cd /tmp
-wget "https://ftp.gnu.org/gnu/guix/${BINARY}"
-tar -xf "${BINARY}"
-
-sudo mv var/guix /var/
-sudo mv gnu /
-sudo mkdir -p ~root/.config/guix
-sudo ln -fs /var/guix/profiles/per-user/root/current-guix ~root/.config/guix/current
-
-GUIX_PROFILE="$(echo ~root)/.config/guix/current"
-source "${GUIX_PROFILE}/etc/profile"
-
-groupadd --system guixbuild
-for i in $(seq -w 1 10);
-do
- useradd -g guixbuild \
- -G guixbuild \
- -d /var/empty \
- -s "$(command -v nologin)" \
- -c "Guix build user ${i}" --system \
- "guixbuilder${i}";
-done
-
-mkdir -p /usr/local/bin
-cd /usr/local/bin
-ln -s /var/guix/profiles/per-user/root/current-guix/bin/guix .
-ln -s /var/guix/profiles/per-user/root/current-guix/bin/guix-daemon .
-
-guix archive --authorize < ~root/.config/guix/current/share/guix/ci.guix.gnu.org.pub
-```
-
-Almost all of it is taken directly from the [binary installation][2] section
-from the manual, with the interactive bits stripped out: after downloading and
-extracting the Guix tarball, we create some symlinks, add guixbuild users and
-authorize the `ci.guix.gnu.org.pub` signing key.
-
-After installing Guix, we perform a `guix pull` to update Guix inside
-`start-guix.sh`:
-
-```shell
-#!/usr/bin/env bash
-set -x
-set -Eeuo pipefail
-
-sudo guix-daemon --build-users-group=guixbuild &
-guix pull
-guix package -u
-guix --version
-```
-
-Then we can put it all together in a sample `.build.yml` configuration file I'm
-using myself:
-
-```yaml
-image: debian/stable
-packages:
- - wget
-sources:
- - https://git.sr.ht/~euandreh/songbooks
-tasks:
- - install-guix: |
- cd ./songbooks/
- ./scripts/install-guix.sh
- ./scripts/start-guix.sh
- echo 'sudo guix-daemon --build-users-group=guixbuild &' >> ~/.buildenv
- echo 'export PATH="${HOME}/.config/guix/current/bin${PATH:+:}$PATH"' >> ~/.buildenv
- - tests: |
- cd ./songbooks/
- guix environment -m build-aux/guix.scm -- make check
- - docs: |
- cd ./songbooks/
- guix environment -m build-aux/guix.scm -- make publish-dist
-```
-
-We have to add the `guix-daemon` to `~/.buildenv` so it can be started on every
-following task run. Also, since we used `wget` inside `install-guix.sh`, we had
-to add it to the image's package list.
-
-After the `install-guix` task, you can use Guix to build and test your project,
-or run any `guix environment --ad-hoc my-package -- my script` :)
-
-## Improvements
-
-When I originally created this code I had a reason to have both a `sudo`
-call for `sudo ./scripts/install-guix.sh` and `sudo` usages inside
-`install-guix.sh` itself. I can no longer figure out what that reason was
-(it feels like my past self was a bit smarter 😬), but it feels ugly now. If
-it is truly required I could
-add an explanation for it, or remove this entirely in favor of a more elegant solution.
-
-I could also contribute the Guix image upstream to builds.sr.ht, but there
-weren't any build or smoke tests in the original [repository][4], so I wasn't
-inclined to make something that just "works on my machine" or add a maintenance
-burden to the author. I didn't look at it again recently, though.
-
-[4]: https://git.sr.ht/~sircmpwn/builds.sr.ht
diff --git a/_posts/2020-08-31-the-database-i-wish-i-had.md b/_posts/2020-08-31-the-database-i-wish-i-had.md
deleted file mode 100644
index 004a558..0000000
--- a/_posts/2020-08-31-the-database-i-wish-i-had.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-title: The database I wish I had
-date: 2020-08-31
-updated_at: 2020-09-03
-layout: post
-lang: en
-ref: the-database-i-wish-i-had
-category: mediator
----
-
-I watched the talk
-"[Platform as a Reflection of Values: Joyent, Node.js and beyond][platform-values]"
-by Bryan Cantrill, and I think he was able to put into words something I already
-felt for some time: if there's no piece of software out there that reflects your
-values, it's time for you to build that software[^talk-time].
-
-[platform-values]: https://vimeo.com/230142234
-[^talk-time]: At the very end, at time 29:49. When talking about the draft of
- this article with a friend, he noted that Bryan O'Sullivan (a different
- Bryan) says a similar thing on his talk
- "[Running a startup on Haskell](https://www.youtube.com/watch?v=ZR3Jirqk6W8)",
- at time 4:15.
-
-I kind of agree with what he said, because this is already happening to me. I
-long for a database with a certain set of values, and for a few years I was just
-waiting for someone to finally write it. After watching his talk, Bryan is
-saying to me: "time to stop waiting, and start writing it yourself".
-
-So let me try to give an overview of such a database, and go over its values.
-
-## Overview
-
-I want a database that allows me to create decentralized client-side
-applications that can sync data.
-
-The best one-line description I can give right now is:
-
-> It's sort of like PouchDB, Git, Datomic, SQLite and Mentat.
-
-A more descriptive version could be:
-
-> An embedded, immutable, syncable relational database.
-
-Let's go over what I mean by each of those aspects one by one.
-
-### Embedded
-
-I think the server-side database landscape is diverse and mature enough for
-my needs (even though I end up choosing SQLite most of the time), and what I'm
-after is a database to be embedded in client-side applications themselves, be it
-desktop, browser, mobile, etc.
-
-The purpose of such a database is not to keep some local cache of data in case of
-lost connectivity: we have good solutions for that already. It should serve as
-the source of truth, and allow the application to work on top of it.
-
-[**SQLite**][sqlite] is a great example of that: it is a very powerful
-relational database that runs [almost anywhere][sqlite-whentouse]. What I miss
-is the ability to run it in the browser:
-even though you could compile it to WebAssembly, ~~it assumes a POSIX filesystem
-that would have to be emulated~~[^posix-sqlite].
-
-[sqlite]: https://sqlite.org/index.html
-[sqlite-whentouse]: https://sqlite.org/whentouse.html
-[^posix-sqlite]: It was [pointed out to me](https://news.ycombinator.com/item?id=24338881)
- that SQLite doesn't assume the existence of a POSIX filesystem, as I wrongly
- stated. Thanks for the correction.
-
- This makes me consider it as a storage backend all by itself. I
- initially considered having an SQLite storage backend as one implementation
- of the POSIX filesystem storage API that I mentioned. My goal was to rely on
- it so I could validate the correctness of the actual implementation, given
- SQLite's robustness.
-
-   However it may be even better to just use SQLite, and get an ACID backend
-   without recreating a big part of SQLite from scratch. In fact, neither
-   Datomic nor PouchDB created a storage backend for themselves; they just
-   plugged into what already existed and worked. I'm beginning to think
-   that it would be wiser to just do the same, and entirely drop the
-   from-scratch implementation that I mentioned.
-
- That's not to say that adding an IndexedDB compatibility layer to SQLite
- would be enough to make it fit the other requirements I mention on this
-   page. SQLite still is an implementation of an update-in-place, SQL,
- table-oriented database. It is probably true that cherry-picking the
- relevant parts of SQLite (like storage access, consistency, crash recovery,
- parser generator, etc.) and leaving out the unwanted parts (SQL, tables,
- threading, etc.) would be better than including the full SQLite stack, but
- that's simply an optimization. Both could even coexist, if desired.
-
- SQLite would have to be treated similarly to how Datomic treats SQL
-   databases: instead of having a table for each entity, spreading attributes
-   over the tables, etc., it treats SQL databases as a key-value storage, so it
-   doesn't have to re-implement the interaction with the disk that other
-   databases already do well.
-
-   The tables would contain blocks of binary data, so there isn't a difference
-   between how the SQLite storage backend behaves and how the IndexedDB storage
- backend behaves, much like how Datomic works the same regardless of the
- storage backend, same for PouchDB.
-
- I welcome corrections on what I said above, too.
-
-[**PouchDB**][pouchdb] is another great example: it's a full reimplementation of
-[CouchDB][couchdb] that targets JavaScript environments, mainly the browser and
-Node.js. However I want a tool that can be deployed anywhere, and not limit its
-applications to places that already have a JavaScript runtime environment, or
-force the developer to bundle a JavaScript runtime environment with their
-application. This is true for GTK+ applications, command line programs, Android
-apps, etc.
-
-[pouchdb]: https://pouchdb.com/
-[couchdb]: https://couchdb.apache.org/
-
-[**Mentat**][mentat] was an interesting project, but its reliance on SQLite
-makes it inherit most of the downsides (and benefits too) of SQLite itself.
-
-[mentat]: https://github.com/mozilla/mentat
-
-Having such a requirement imposes a different approach to storage: we have to
-decouple the knowledge about the intricacies of storage from the usage of
-storage itself, so that a module (say query processing) can access storage
-through an API without needing to know about its implementation. This allows
-the database to target a POSIX filesystem storage API and an IndexedDB storage
-API, and make the rest of the code agnostic about storage. PouchDB has such a
-mechanism (called [adapters][pouchdb-adapters]) and Datomic has one too (called
-[storage services][datomic-storage-services]).
-
-[pouchdb-adapters]: https://pouchdb.com/adapters.html
-[datomic-storage-services]: https://docs.datomic.com/on-prem/storage.html
-
-This would allow the database to adapt to where it is embedded: when targeting
-the browser the IndexedDB storage API would provide the persistence layer
-that the database requires, and similarly the POSIX filesystem storage API would
-provide the persistence layer when targeting POSIX systems (like desktops,
-mobile, etc.).
-
-But there's also an extra restriction that comes from being embedded: it
-needs to provide an embeddable artifact, most likely a binary library object
-that exposes a C compatible FFI, similar to
-[how SQLite does][sqlite-amalgamation]. Bundling a full runtime environment is
-possible, but doesn't make it a compelling solution for embedding. This rules
-out most languages, and leaves us with C, Rust, Zig, and similar options that
-can target POSIX systems and WebAssembly.
-
-[sqlite-amalgamation]: https://www.sqlite.org/amalgamation.html
-
-### Immutable
-
-Being immutable means that only new information is added, no in-place update
-ever happens, and nothing is ever deleted.
-
-Having an immutable database presents us with trade-offs similar to those found
-in persistent data structures, like no need for coordination when doing reads,
-caches being always coherent, and higher usage of space.
-
-[**Datomic**][datomic] is the go-to database example of this: it will only add
-information (datoms) and allows you to query them in a multitude of ways. Stuart
-Halloway calls it "accumulate-only" over "append-only"[^accumulate-only]:
-
-> It's accumulate-only, it is not append-only. So append-only, most people when
-> they say that they're implying something physical about what happens.
-
-[datomic]: https://www.datomic.com/
-[^accumulate-only]: Video "[Day of Datomic Part 2](https://vimeo.com/116315075)"
- on Datomic's information model, at time 12:28.
-
-Also, a database can be append-only and still overwrite existing information with new
-information, by doing clean-ups of "stale" data. I prefer to adopt the
-"accumulate-only" naming and approach.
-
-[**Git**][git] is another example of this: new commits are always added on top
-of the previous data, and it grows by adding commits instead of replacing
-existing ones.
-
-[git]: https://git-scm.com/
-
-Git repositories can only grow in size, and that is not only an acceptable
-condition, but also one of the reasons to use it.
-
-All this means that no in-place updates happen on data, and the database will
-be much more concerned about how compact and efficiently it stores data than how
-fast it does writes to disk. Being embedded, the storage limitation is either a)
-how much storage the device has or b) how much storage was designed for the
-application to consume. So even though the database could theoretically operate
-with hundreds of TBs, a browser page or mobile application wouldn't have access
-to this amount of storage. SQLite even [says][sqlite-limits] that it does
-support approximately 280 TBs of data, but those limits are untested.
-
-The upside of keeping everything is that you can have historical views of your
-data, which is very powerful. This also means that applications should turn this
-off when not relevant[^no-history].
-
-[sqlite-limits]: https://sqlite.org/limits.html
-[^no-history]: Similar to
- [Datomic's `:db/noHistory`](https://docs.datomic.com/cloud/best.html#nohistory-for-high-churn).
-
-### Syncable
-
-This is a frequent topic when talking about offline-first solutions. When
-building applications that:
-
-- can fully work offline,
-- store data,
-- propagate that data to other application instances,
-
-then you'll need a conflict resolution strategy to handle all the situations
-where different application instances disagree. Those application instances
-could be a desktop and a browser version of the same application, or the same
-mobile app in different devices.
-
-A three-way merge seems to be the best approach, on top of which you could add
-application specific conflict resolution functions, like:
-
-- pick the change with higher timestamp;
-- if one change is a delete, pick it;
-- present the diff on the screen and allow the user to merge them.
-
-Some databases try to make this "easy", by choosing a strategy for you, but I've
-found that different applications require different conflict resolution
-strategies. Instead, the database should leave this up to the user to decide,
-and provide tools for them to do it.
-
-[**Three-way merges in version control**][3-way-merge] are the best example,
-performing automatic merges when possible and asking the user to resolve
-conflicts when they appear.
-
-The unit of conflict for a version control system is a line of text. The
-database equivalent would probably be a single attribute, not a full entity or a
-full row.
-
-Making all the conflict resolution logic local should allow the database to
-have encrypted remotes similar to how [git-remote-gcrypt][git-remote-gcrypt]
-adds this functionality to Git. This would enable users to sync the application
-data across devices using an untrusted intermediary.
-
-[3-way-merge]: https://en.wikipedia.org/wiki/Merge_(version_control)
-[git-remote-gcrypt]: https://spwhitton.name/tech/code/git-remote-gcrypt/
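-
-On the Git side, that workflow looks roughly like this (a sketch based
-on git-remote-gcrypt's documented usage; the remote URL is
-hypothetical):
-
-```shell
-# a remote whose URL is prefixed with gcrypt:: is transparently encrypted
-git remote add cryptremote gcrypt::rsync://example.com:repos/notes
-git push cryptremote master
-```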
-
-### Relational
-
-I want the power of relational queries on the client applications.
-
-Most of the arguments against traditional table-oriented relational databases
-are related to write performance, but those don't apply here. The bottleneck
-for client applications usually isn't write throughput. Nobody is interested in
-differentiating between 1 MB/s or 10 MB/s when you're limited to 500 MB total.
-
-The relational model of the database could either be based on SQL and tables
-like in SQLite, or maybe [datalog][datalog] and [datoms][datoms] like in
-Datomic.
-
-[datalog]: https://docs.datomic.com/on-prem/query.html
-[datoms]: https://docs.datomic.com/cloud/whatis/data-model.html#datoms
-
-## From aspects to values
-
-Now let's try to translate the aspects above into values, as suggested by Bryan
-Cantrill.
-
-### Portability
-
-Being able to target so many different platforms is a bold goal, and the
-embedded nature of the database demands portability to be a core value.
-
-### Integrity
-
-When the local database becomes the source of truth of the application, it must
-provide consistency guarantees that enable applications to rely on it.
-
-### Expressiveness
-
-The database should empower applications to slice and dice the data in any
-way they want.
-
-## Next steps
-
-Since I can't find any database that fits these requirements, I've finally come
-to terms with doing it myself.
-
-It's probably going to take me a few years to do it, and making it portable
-between POSIX and IndexedDB will probably be the biggest challenge. I got myself
-a few books on databases to start.
-
-I wonder if I'll ever be able to get this done.
-
-## External links
-
-See discussions on [Reddit][reddit], [lobsters][lobsters], [HN][hn] and
-[a lengthy email exchange][lengthy-email].
-
-[reddit]: https://www.reddit.com/r/programming/comments/ijwz5b/the_database_i_wish_i_had/
-[lobsters]: https://lobste.rs/s/m9vkg4/database_i_wish_i_had
-[hn]: https://news.ycombinator.com/item?id=24337244
-[lengthy-email]: https://lists.sr.ht/~euandreh/public-inbox/%3C010101744a592b75-1dce9281-f0b8-4226-9d50-fd2c7901fa72-000000%40us-west-2.amazonses.com%3E
diff --git a/_posts/2020-10-05-cargo2nix-dramatically-simpler-rust-in-nix.md b/_posts/2020-10-05-cargo2nix-dramatically-simpler-rust-in-nix.md
deleted file mode 100644
index 4d4a8d5..0000000
--- a/_posts/2020-10-05-cargo2nix-dramatically-simpler-rust-in-nix.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: "cargo2nix: Dramatically simpler Rust inside Nix"
-date: 2020-10-05
-layout: post
-lang: en
-ref: cargo2nix-dramatically-simpler-rust-in-nix
----
-
-In the same vein as my earlier post on
-[swift2nix]({% post_url 2020-10-05-swift2nix-run-swift-inside-nix-builds %}), I
-was able to quickly prototype a Rust and Cargo variation of it:
-[cargo2nix][cargo2nix].
-
-The initial prototype is even smaller than swift2nix: it has only
-[37 lines of code][37-lines].
-
-[cargo2nix]: https://git.euandreh.xyz/cargo2nix/about/
-[37-lines]: https://git.euandreh.xyz/cargo2nix/tree/default.nix?id=472dde8898296c8b6cffcbd10b3b2c3ba195846d
-
-Here's how to use it (snippet taken from the repo's README):
-
-```nix
-let
- niv-sources = import ./nix/sources.nix;
- mozilla-overlay = import niv-sources.nixpkgs-mozilla;
- pkgs = import niv-sources.nixpkgs { overlays = [ mozilla-overlay ]; };
- src = pkgs.nix-gitignore.gitignoreSource [ ] ./.;
- cargo2nix = pkgs.callPackage niv-sources.cargo2nix {
- lockfile = ./Cargo.lock;
- };
-in pkgs.stdenv.mkDerivation {
- inherit src;
- name = "cargo-test";
- buildInputs = [ pkgs.latest.rustChannels.nightly.rust ];
- phases = [ "unpackPhase" "buildPhase" ];
- buildPhase = ''
- # Setup dependencies path to satisfy Cargo
- mkdir .cargo/
- ln -s ${cargo2nix.env.cargo-config} .cargo/config
- ln -s ${cargo2nix.env.vendor} vendor
-
- # Run the tests
- cargo test
- touch $out
- '';
-}
-```
-
-That `cargo test` part on line 20 is what I have been fighting for with every
-"\*2nix" tool available for Rust out there. I don't want to bash any of them. All I
-want is to have full control of what Cargo commands to run, and the "*2nix" tool
-should only set up the environment for me. Let me drive Cargo myself; no need to
-parameterize how the tool runs it for me, or even replicate its internal
-behaviour by calling the Rust compiler directly.
-
-Sure it doesn't support private registries or Git dependencies, but how much
-bigger does it have to be to support them? Also, it doesn't support those **yet**;
-there's no reason it can't be extended. I just haven't needed it yet, so I
-haven't added it. Patches welcome.
-
-The layout of the `vendor/` directory is more explicit and public than what
-swift2nix does: it is whatever the command `cargo vendor` returns. However I
-haven't checked if the shape of the `.cargo-checksum.json` is specified, or
-internal to Cargo.
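-
-You can inspect that layout yourself with stock Cargo (a sketch:
-`cargo vendor` vendors the dependencies into `./vendor/` and prints the
-`.cargo/config` snippet it expects on stdout):
-
-```shell
-# inside any project that has a Cargo.lock:
-mkdir -p .cargo/
-cargo vendor > .cargo/config
-ls vendor/*/.cargo-checksum.json
-```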
-
-Try out the demo (also taken from the repo's README):
-
-```shell
-pushd "$(mktemp -d)"
-git clone https://git.euandreh.xyz/cargo2nix-demo
-cd cargo2nix-demo/
-nix-build
-```
-
-Report back if you wish. Again, patches welcome.
diff --git a/_posts/2020-10-05-swift2nix-run-swift-inside-nix-builds.md b/_posts/2020-10-05-swift2nix-run-swift-inside-nix-builds.md
deleted file mode 100644
index 7960b8b..0000000
--- a/_posts/2020-10-05-swift2nix-run-swift-inside-nix-builds.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: "swift2nix: Run Swift inside Nix builds"
-date: 2020-10-05
-layout: post
-lang: en
-ref: swift2nix-run-swift-inside-nix-builds
-category: mediator
----
-
-While working on a Swift project, I didn't find any tool that would allow Swift
-to run inside [Nix][nix] builds. Even though you *can* run Swift, the real
-problem arises when using the package manager. It has many of the same problems
-that other package managers have when trying to integrate with Nix; more on
-this below.
-
-I wrote a simple little tool called [swift2nix][swift2nix] that allows you to trick
-Swift's package manager into assuming everything is set up. Here's the example
-from swift2nix's README file:
-
-```nix
-let
- niv-sources = import ./nix/sources.nix;
- pkgs = import niv-sources.nixpkgs { };
- src = pkgs.nix-gitignore.gitignoreSource [ ] ./.;
- swift2nix = pkgs.callPackage niv-sources.swift2nix {
- package-resolved = ./Package.resolved;
- };
-in pkgs.stdenv.mkDerivation {
- inherit src;
- name = "swift-test";
- buildInputs = with pkgs; [ swift ];
- phases = [ "unpackPhase" "buildPhase" ];
- buildPhase = ''
- # Setup dependencies path to satisfy SwiftPM
- mkdir .build
- ln -s ${swift2nix.env.dependencies-state-json} .build/dependencies-state.json
- ln -s ${swift2nix.env.checkouts} .build/checkouts
-
- # Run the tests
- swift test
- touch $out
- '';
-}
-```
-
-The key parts are lines 15~17: we just fake enough files inside `.build/` that
-Swift believes it has already downloaded and checked out all dependencies, and
-just moves on to building them.
-
-I've worked on it just enough to make it usable for myself, so beware of
-unimplemented cases. Patches welcome.
-
-[nix]: https://nixos.org/
-[swift2nix]: https://git.euandreh.xyz/swift2nix/about/
-[actual-code]: https://git.euandreh.xyz/swift2nix/tree/default.nix?id=2af83ffe43fac631a8297ffaa8be3ff93b2b9e7c
-
-## Design
-
-What swift2nix does is just provide you with the bare minimum that Swift
-requires, and readily get out of the way:
-
-1. I explicitly did not want to generate a `Package.nix` file, since
- `Package.resolved` already exists and contains the required information;
-2. I didn't want to have an "easy" interface right out of the gate, after
- fighting with "*2nix" tools that focus too much on that.
-
-The final [actual code][actual-code] was so small (46 lines) that it made me
-think about package managers, "*2nix" tools and some problems with many of them.
-
-## Problems with package managers
-
-I'm going to talk solely about language package managers. Think npm and cargo,
-not apt-get.
-
-Package managers want to do too much, or assume too much, or just want to take
-control of the entire build of the dependencies.
-
-This is a recurrent problem in package managers, but I don't see it as an
-inherent one. There's nothing about a "package manager" that prevents it from
-*declaring* what it expects to encounter and in which format. The *declaring*
-part is important: it should be data, not code, otherwise you're back in the
-same problem. Lockfiles, for instance, are just data: they work in any
-language, and tools can cooperate happily.
-
-There's no need for this declarative expectation to be standardized, or be made
-compatible across languages. That would lead to a poor format that no package
-manager really likes. Instead, if every package manager could say out loud what
-it wants to see exactly, then more tools like swift2nix could exist, and they
-would be more reliable.
-
-This could even work fully offline, and be simply a mapping from the lockfile
-(the `Package.resolved` in Swift's case) to the filesystem representation. For
-Swift, the `.build/dependencies-state.json` comes very close, but it is internal
-to the package manager.
-
-Even though this pain only exists when trying to use Swift inside Nix, it sheds
-light on this common implicit coupling that package managers have. They
-usually have fuzzy boundaries and tight coupling between:
-
-1. resolving the dependency tree and using some heuristic to pick a package
- version;
-2. generating a lockfile with the exact pinned versions;
-3. downloading the dependencies present on the lockfile into some local cache;
-4. arranging the dependencies from the cache in a meaningful way for itself inside
- the project;
-5. working with the dependencies while *assuming* that step 4 was done.
-
-When you run `npm install` in a repository with no lockfile, it does 1~4. If you
-do the same with `cargo build`, it does 1~5. That's too much: many of those
-assumptions are implicit and internal to the package manager, and if you ever
-need to rearrange them, you're on your own. Even though you can perform some of
-those steps, you can't compose or rearrange them.
-
-Instead, a much saner approach could be:
-
-1. this stays the same;
-2. this also stays the same;
-3. be able to generate some JSON/TOML/edn which represents the local expected
- filesystem layout with dependencies (i.e. exposing what the package manager
- expects to find), let's call it `local-registry.json`;
-4. if a `local-registry.json` was provided, do a build using that. Otherwise
- generate its own, by downloading the dependencies, arranging them, etc.
-
-The point is just making what the package manager requires visible to the
-outside world via some declarative data. If this data wasn't provided, it can
-move on to doing its own automatic things.
-
-By making the expectation explicit and public, one can plug tools *à la carte*
-if desired, but doesn't prevent the default code path of doing things the exact
-same way they are now.
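-
-As a purely hypothetical illustration (the file name comes from this
-article; every field below is invented), a package manager could emit
-something like:
-
-```shell
-# sketch of a generated local-registry.json; all fields are invented
-cat > local-registry.json <<'EOF'
-{
-  "version": 1,
-  "dependencies": {
-    "left-pad": {
-      "resolved": "1.3.0",
-      "layout": "node_modules/left-pad"
-    }
-  }
-}
-EOF
-```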
-
-## Problems with "*2nix" tools
-
-I have to admit: I'm unhappy with most of them.
-
-They conflate "using Nix" with "replicating every command of the package manager
-inside Nix".
-
-The avoidance of an "easy" interface that I mentioned above comes from me
-fighting with some of the "\*2nix" tools much like I have to fight with package
-managers: I don't want to offload all build responsibilities to the "*2nix"
-tool; I just want to let it download some of the dependencies and get out of the
-way. I want to stick with `npm test` or `cargo build`, and Nix should only
-provide the environment.
-
-This is something that [node2nix][node2nix] does right. It allows you to build
-the Node.js environment to satisfy NPM, and you can keep using NPM for
-everything else:
-
-```shell
-ln -s ${node2nix-package.shell.nodeDependencies}/lib/node_modules ./node_modules
-npm test
-```
-
-It's natural to want to put as many things into Nix as possible to benefit from
-Nix's advantages. Isn't that how NixOS itself was born?
-
-But a "*2nix" tool should leverage Nix, not be coupled with it. The above
-example lets you run any arbitrary NPM command while profiting from isolation
-and reproducibility that Nix provides. It is even less brittle: any changes to
-how NPM runs some things will be future-compatible, since node2nix isn't trying
-to replicate what NPM does, or fiddling with NPM's internals.
-
-**A "*2nix" tool should build the environment, preferably from the lockfile
-directly and offload everything else to the package manager**. The rest is just
-nice-to-have.
-
-swift2nix itself could provide an "easy" interface, something that allows you to
-write:
-
-```shell
-nix-build -A swift2nix.release
-nix-build -A swift2nix.test
-```
-
-The implementation of those would be obvious: create a new
-`pkgs.stdenv.mkDerivation` and call `swift build -c release` and `swift test`
-while using `swift2nix.env` under the hood.
-
-[node2nix]: https://github.com/svanderburg/node2nix
-
-## Conclusion
-
-Package managers should provide exact dependencies via a data representation,
-i.e. lockfiles, and expose via another data representation how they expect those
-dependencies to appear on the filesystem, i.e. `local-registry.json`. This
-allows package managers to provide an API so that external tools can create
-mirrors, offline builds, other registries, isolated builds, etc.
-
-"\*2nix" tools should build simple functions that leverage that
-`local-registry.json`[^local-registry] data and offload all the rest back to the
-package manager itself. This allows the "\*2nix" tool to not keep chasing the
-package manager's evolution, always trying to duplicate its behaviour.
-
-[^local-registry]: This `local-registry.json` file doesn't have to be checked
-    in to the repository at all. It could always be generated on the fly, much like
- how Swift's `dependencies-state.json` is.