path: root/src/content/tils
author     EuAndreh <eu@euandre.org>  2024-11-18 08:21:58 -0300
committer  EuAndreh <eu@euandre.org>  2024-11-18 08:44:57 -0300
commit     960e4410f76801356ebd42801c914b2910a302a7 (patch)
tree       615d379416f72956d0c1666c63ce062859041fbe /src/content/tils
parent     Remove jekyll infrastructure setup (diff)
v0 migration to mkwb (HEAD, main)
Diffstat (limited to 'src/content/tils')
-rw-r--r--  src/content/tils/2020/08/12/filename-timestamp.adoc        44
-rw-r--r--  src/content/tils/2020/08/13/code-jekyll.adoc              155
-rw-r--r--  src/content/tils/2020/08/14/browse-git.adoc                84
-rw-r--r--  src/content/tils/2020/08/16/git-search.adoc                59
-rw-r--r--  src/content/tils/2020/08/28/grep-online.adoc              139
-rw-r--r--  src/content/tils/2020/09/04/email-cli-fun-profit.adoc      80
-rw-r--r--  src/content/tils/2020/09/05/oldschool-pr.adoc             118
-rw-r--r--  src/content/tils/2020/10/11/search-git-history.adoc        41
-rw-r--r--  src/content/tils/2020/11/08/find-broken-symlink.adoc       36
-rw-r--r--  src/content/tils/2020/11/12/diy-nix-bash-ci.adoc           74
-rw-r--r--  src/content/tils/2020/11/12/git-bisect-automation.adoc     35
-rw-r--r--  src/content/tils/2020/11/12/useful-bashvars.adoc           72
-rw-r--r--  src/content/tils/2020/11/14/gpodder-media.adoc             33
-rw-r--r--  src/content/tils/2020/11/30/git-notes-ci.adoc             122
-rw-r--r--  src/content/tils/2020/12/15/shellcheck-repo.adoc          171
-rw-r--r--  src/content/tils/2020/12/29/svg.adoc                      134
-rw-r--r--  src/content/tils/2021/01/12/curl-awk-emails.adoc          142
-rw-r--r--  src/content/tils/2021/01/17/posix-shebang.adoc             55
-rw-r--r--  src/content/tils/2021/04/24/cl-generic-precedence.adoc    137
-rw-r--r--  src/content/tils/2021/04/24/clojure-autocurry.adoc        135
-rw-r--r--  src/content/tils/2021/04/24/scm-nif.adoc                   63
-rw-r--r--  src/content/tils/2021/07/23/git-tls-gpg.adoc               56
-rw-r--r--  src/content/tils/2021/08/11/js-bigint-reviver.adoc        100
-rw-r--r--  src/content/tils/index.adoc                                 1
24 files changed, 2086 insertions, 0 deletions
diff --git a/src/content/tils/2020/08/12/filename-timestamp.adoc b/src/content/tils/2020/08/12/filename-timestamp.adoc
new file mode 100644
index 0000000..7495fc9
--- /dev/null
+++ b/src/content/tils/2020/08/12/filename-timestamp.adoc
@@ -0,0 +1,44 @@
+---
+
+title: Simple filename timestamp
+
+date: 2020-08-12
+
+updated_at:
+
+layout: post
+
+lang: en
+
+ref: simple-filename-timestamp
+
+eu_categories: shell
+
+---
+
+When writing Jekyll posts or creating log files with dates on them, I usually
+struggle with finding a direct way of accomplishing that. There's a simple
+solution: `date -I`.
+
+```shell
+./my-program.sh > my-program.$(date -I).log
+cp post-template.md _posts/$(date -I)-post-slug.md
+```
+
+Using this built-in GNU/Linux tool allows you to `touch $(date -I).md` to readily
+create a `2020-08-12.md` file.
+
+I always had to read `man date` or search the web over and over, and after doing
+this repeatedly it became clear that both `date -I` and `date -Is` (`s` here
+stands for seconds) are what I'm looking for 95% of the time:
+
+```shell
+# inside my-program.sh
+echo "Program started at $(date -Is)"
+# output is:
+# Program started at 2020-08-12T09:04:58-03:00
+```
+
+Both date formats are hierarchical, having the bigger time intervals to the
+left. This means that you can easily sort them (and even tab-complete them) with
+no extra effort or tool required.
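+
+As a quick, hypothetical illustration of that (the filenames below are made up),
+plain lexicographic sorting is enough to get chronological order:
+
+```shell
+$ ls *.log
+my-program.2020-08-10.log
+my-program.2020-08-11.log
+my-program.2020-08-12.log
+$ ls *.log | sort | tail -n 1   # lexicographic order == chronological order for this format
+my-program.2020-08-12.log
+```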
diff --git a/src/content/tils/2020/08/13/code-jekyll.adoc b/src/content/tils/2020/08/13/code-jekyll.adoc
new file mode 100644
index 0000000..6566928
--- /dev/null
+++ b/src/content/tils/2020/08/13/code-jekyll.adoc
@@ -0,0 +1,155 @@
+---
+title: Anchor headers and code lines in Jekyll
+date: 2020-08-13
+layout: post
+lang: en
+ref: anchor-headers-and-code-lines-in-jekyll
+---
+The default Jekyll toolbox ([Jekyll][0], [kramdown][1] and [rouge][2]) doesn't
+provide a configuration option to add anchors to headers and code blocks.
+
+[0]: https://jekyllrb.com/
+[1]: https://kramdown.gettalong.org/
+[2]: http://rouge.jneen.net/
+
+The best way I found of doing this is by creating a simple Jekyll plugin, more
+specifically, a [Jekyll hook][3]. These allow you to jump into the Jekyll build
+and add a processing stage before or after Jekyll performs something.
+
+[3]: https://jekyllrb.com/docs/plugins/hooks/
+
+All you have to do is add the code to `_plugins/my-jekyll-plugin-code.rb`, and
+Jekyll knows to pick it up and call your code at the appropriate time.
+
+## Anchor on headers
+
+Since I wanted to add anchors to headers in all documents, this Jekyll hook
+works on `:documents` after they have been transformed into HTML, the
+`:post_render` phase:
+
+```ruby
+Jekyll::Hooks.register :documents, :post_render do |doc|
+ if doc.output_ext == ".html"
+ doc.output =
+ doc.output.gsub(
+ /<h([1-6])(.*?)id="([\w-]+)"(.*?)>(.*?)<\/h[1-6]>/,
+ '<a href="#\3"><h\1\2id="\3"\4>\5</h\1></a>'
+ )
+ end
+end
+```
+
+I've derived my implementations from two "official"[^official] hooks,
+[jemoji][4] and [jekyll-mentions][5].
+
+[4]: https://github.com/jekyll/jemoji
+[5]: https://github.com/jekyll/jekyll-mentions
+[^official]: I don't know how official they are; I just assumed so because they
+    live in the same GitHub organization as Jekyll.
+
+All I did was to wrap the header tag inside an `<a>`, and set the `href` of that
+`<a>` to the existing id of the header. Before the hook the HTML looks like:
+
+```html
+...some unmodified text...
+<h2 id="my-header">
+ My header
+</h2>
+...more unmodified text...
+```
+
+And the hook should turn that into:
+
+```html
+...some unmodified text...
+<a href="#my-header">
+ <h2 id="my-header">
+ My header
+ </h2>
+</a>
+...more unmodified text...
+```
+
+The regexp used tries to match only h1-h6 tags and to keep the rest of the HTML
+attributes untouched. This isn't a general HTML parser, but the generated HTML
+is somewhat under your control. Use it at your own risk, because
+[you shouldn't parse HTML with regexps][6]. Also, I used this strategy in my
+environment, where no other plugins are installed; I haven't considered how this
+approach may conflict with other Jekyll plugins.
+
+[6]: https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
+
+In the new anchor tag you can add your custom CSS class to style it as you wish.
+
+## Anchor on code blocks
+
+Adding anchors to code blocks needs a little bit of extra work, because line
+numbers themselves don't have preexisting ids, so we need to generate them
+without duplications between multiple code blocks in the same page.
+
+Similarly, this Jekyll hook also works on `:documents` in the `:post_render`
+phase:
+
+```ruby
+PREFIX = '<pre class="lineno">'
+POSTFIX = '</pre>'
+Jekyll::Hooks.register :documents, :post_render do |doc|
+ if doc.output_ext == ".html"
+ code_block_counter = 1
+ doc.output = doc.output.gsub(/<pre class="lineno">[\n0-9]+<\/pre>/) do |match|
+ line_numbers = match
+ .gsub(/<pre class="lineno">([\n0-9]+)<\/pre>/, '\1')
+ .split("\n")
+
+ anchored_line_numbers_array = line_numbers.map do |n|
+ id = "B#{code_block_counter}-L#{n}"
+ "<a id=\"#{id}\" href=\"##{id}\">#{n}</a>"
+ end
+ code_block_counter += 1
+
+ PREFIX + anchored_line_numbers_array.join("\n") + POSTFIX
+ end
+ end
+end
+```
+
+This solution assumes the default Jekyll toolbox with code line numbers turned
+on in `_config.yml`:
+
+```yaml
+kramdown:
+ syntax_highlighter_opts:
+ span:
+ line_numbers: false
+ block:
+ line_numbers: true
+```
+
+The anchors go from B1-L1 to BN-LN, using the `code_block_counter` to track
+which code block we're in and avoid duplicate anchor ids. Before the hook the
+HTML looks like:
+
+```html
+...some unmodified text...
+<pre class="lineno">1
+2
+3
+4
+5
+</pre>
+...more unmodified text...
+```
+
+And the hook should turn that into:
+
+```html
+...some unmodified text...
+<pre class="lineno"><a id="B1-L1" href="#B1-L1">1</a>
+<a id="B1-L2" href="#B1-L2">2</a>
+<a id="B1-L3" href="#B1-L3">3</a>
+<a id="B1-L4" href="#B1-L4">4</a>
+<a id="B1-L5" href="#B1-L5">5</a></pre>
+...more unmodified text...
+```
+
+Happy writing :)
diff --git a/src/content/tils/2020/08/14/browse-git.adoc b/src/content/tils/2020/08/14/browse-git.adoc
new file mode 100644
index 0000000..d06f0c1
--- /dev/null
+++ b/src/content/tils/2020/08/14/browse-git.adoc
@@ -0,0 +1,84 @@
+---
+
+title: Browse a git repository at a specific commit
+
+date: 2020-08-14
+
+layout: post
+
+lang: en
+
+ref: browse-a-git-repository-at-a-specific-commit
+
+eu_categories: git
+
+---
+
+I commonly use tools like `git log` together with `git show` when inspecting
+past changes in a repository:
+
+```shell
+git log
+# search for the commit I'm looking for
+git show <my-commit>
+# see the diff for the commit
+```
+
+But I wanted not only to be able to look at the diff of a specific commit, but
+also to browse the whole repository at that specific commit.
+
+I used to accomplish that the "brute force" way: clone the whole repository into
+another folder and check out the commit there:
+
+```shell
+git clone <original-repo> /tmp/tmp-repo-clone
+cd /tmp/tmp-repo-clone
+git checkout <my-commit>
+```
+
+But Git itself allows us to specify the directory of the checkout by using the
+`--work-tree` global flag. This is what `man git` says about it:
+
+```txt
+--work-tree=<path>
+ Set the path to the working tree. It can be an absolute path or a path relative to the current working
+ directory. This can also be controlled by setting the GIT_WORK_TREE environment variable and the
+ core.worktree configuration variable (see core.worktree in git-config(1) for a more detailed
+ discussion).
+```
+
+It allows us to set the desired path of the working tree. So if we want to
+copy the contents of the current working tree into `copy/`:
+
+```shell
+mkdir copy
+git --work-tree=copy/ checkout .
+```
+
+After that `copy/` will contain a replica of the code in `HEAD`. But to check
+out a specific commit, we need some extra parameters:
+
+```shell
+git --work-tree=<dir> checkout <my-commit> -- .
+```
+
+There's an extra `-- .` at the end, which initially looks like we're sending
+Morse signals to git, but we're actually telling `git-checkout` which
+subdirectory of `<my-commit>` we want to look at. Which means we can do
+something like:
+
+```shell
+git --work-tree=<dir> checkout <my-commit> -- src/
+```
+
+And with that `<dir>` will only contain what was inside `src/` at `<my-commit>`.
+
+After any of those checkouts, you have to `git reset .` to reset your current
+staging area back to what it was before the checkout.
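+
+Putting the pieces together, a typical session might look like this (the
+directory and commit below are placeholders):
+
+```shell
+mkdir /tmp/repo-at-commit
+git --work-tree=/tmp/repo-at-commit checkout <my-commit> -- .
+# ...browse /tmp/repo-at-commit/ at will...
+git reset .    # undo the index changes made by the checkout above
+```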
+
+
+## References
+
+1. [GIT: Checkout to a specific folder][0] (StackOverflow)
+
+[0]: https://stackoverflow.com/a/16493707
diff --git a/src/content/tils/2020/08/16/git-search.adoc b/src/content/tils/2020/08/16/git-search.adoc
new file mode 100644
index 0000000..f3ae6f0
--- /dev/null
+++ b/src/content/tils/2020/08/16/git-search.adoc
@@ -0,0 +1,59 @@
+---
+
+title: Search in git
+
+date: 2020-08-16
+
+layout: post
+
+lang: en
+
+ref: search-in-git
+
+eu_categories: git
+
+---
+
+Here's a useful trio to know about to help you search things in git:
+
+1. `git show <commit>`
+2. `git log --grep='<regexp>'`
+3. `git grep '<regexp>' [commit]`
+
+## 1. `git show <commit>`
+
+Show a specific commit and its diff:
+
+```shell
+git show
+# shows the latest commit
+git show <commit>
+# shows a specific <commit>
+git show v1.2
+# shows commit tagged with v1.2
+```
+
+## 2. `git log --grep='<regexp>'`
+
+Search through the commit messages:
+
+```shell
+git log --grep='refactor'
+```
+
+## 3. `git grep '<regexp>' [commit]`
+
+Search content in git history:
+
+```shell
+git grep 'TODO'
+# search the repository for the "TODO" string
+git grep 'TODO' $(git rev-list --all)
+# search the whole history for "TODO" string
+```
+
+And if you find an occurrence of the regexp in a specific commit and you want to
+browse the repository at that point in time, you can
+[use git checkout for that][0].
+
+[0]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
diff --git a/src/content/tils/2020/08/28/grep-online.adoc b/src/content/tils/2020/08/28/grep-online.adoc
new file mode 100644
index 0000000..8b3b63f
--- /dev/null
+++ b/src/content/tils/2020/08/28/grep-online.adoc
@@ -0,0 +1,139 @@
+---
+
+title: Grep online repositories
+
+date: 2020-08-28
+
+layout: post
+
+lang: en
+
+ref: grep-online-repositories
+
+eu_categories: git
+
+---
+
+I often find interesting source code repositories online that I want to grep for
+some pattern, but I can't, because either:
+
+- the repository is on [cgit][cgit] or a similar code host that doesn't
+  allow searching in files, or;
+- the search function is really bad, and doesn't allow me to use regular
+  expressions for searching patterns in the code.
+
+[cgit]: https://git.zx2c4.com/cgit/
+
+Here's a simple script that allows you to overcome that problem easily:
+
+```shell
+#!/usr/bin/env bash
+set -eu
+
+end="\033[0m"
+red="\033[0;31m"
+red() { echo -e "${red}${1}${end}"; }
+
+usage() {
+ red "Missing argument $1.\n"
+ cat <<EOF
+Usage:
+ $0 <REGEX_PATTERN> <REPOSITORY_URL>
+
+ Arguments:
+ REGEX_PATTERN Regular expression that "git grep" can search
+ REPOSITORY_URL URL address that "git clone" can download the repository from
+
+Examples:
+ Searching "make get-git" in cgit repository:
+ git search 'make get-git' https://git.zx2c4.com/cgit/
+ git search 'make get-git' https://git.zx2c4.com/cgit/ -- \$(git rev-list --all)
+EOF
+ exit 2
+}
+
+
+REGEX_PATTERN="${1:-}"
+REPOSITORY_URL="${2:-}"
+[[ -z "${REGEX_PATTERN}" ]] && usage 'REGEX_PATTERN'
+[[ -z "${REPOSITORY_URL}" ]] && usage 'REPOSITORY_URL'
+
+mkdir -p /tmp/git-search
+DIRNAME="$(echo "${REPOSITORY_URL%/}" | rev | cut -d/ -f1 | rev)"
+if [[ ! -d "/tmp/git-search/${DIRNAME}" ]]; then
+ git clone "${REPOSITORY_URL}" "/tmp/git-search/${DIRNAME}"
+fi
+pushd "/tmp/git-search/${DIRNAME}"
+
+shift 3 || shift 2 # when "--" is missing
+git grep "${REGEX_PATTERN}" "${@}"
+```
+
+It is a wrapper around `git grep` that downloads the repository when missing.
+Save it in a file called `git-search`, make the file executable and add it to
+your path.
+
+Overview:
+
+- *lines 1~2*:
+
+ Bash shebang and the `set -eu` options to exit on error or undefined
+ variables.
+
+- *lines 4~30*:
+
+ Usage text to be printed when providing less arguments than expected.
+
+- *line 33*:
+
+ Extract the repository name from the URL, removing trailing slashes.
+
+- *lines 34~37*:
+
+ Download the repository when missing and go to the folder.
+
+- *line 39*:
+
+ Make the variable `$@` contain the rest of the unused arguments.
+
+- *line 40*:
+
+ Perform `git grep`, forwarding the remaining arguments from `$@`.
+
+Example output:
+```shell
+$ git search 'make get-git' https://git.zx2c4.com/cgit/
+Cloning into '/tmp/git-search/cgit'...
+remote: Enumerating objects: 542, done.
+remote: Counting objects: 100% (542/542), done.
+remote: Compressing objects: 100% (101/101), done.
+warning: object 51dd1eff1edc663674df9ab85d2786a40f7ae3a5: gitmodulesParse: could not parse gitmodules blob
+remote: Total 7063 (delta 496), reused 446 (delta 441), pack-reused 6521
+Receiving objects: 100% (7063/7063), 8.69 MiB | 5.39 MiB/s, done.
+Resolving deltas: 100% (5047/5047), done.
+/tmp/git-search/cgit ~/dev/libre/songbooks/docs
+README: $ make get-git
+
+$ git search 'make get-git' https://git.zx2c4.com/cgit/
+/tmp/git-search/cgit ~/dev/libre/songbooks/docs
+README: $ make get-git
+```
+
+Subsequent greps on the same repository are faster because no download is needed.
+
+When no argument is provided, it prints the usage text:
+```shell
+$ git search
+Missing argument REGEX_PATTERN.
+
+Usage:
+ /home/andreh/dev/libre/dotfiles/scripts/ad-hoc/git-search <REGEX_PATTERN> <REPOSITORY_URL>
+
+ Arguments:
+ REGEX_PATTERN Regular expression that "git grep" can search
+ REPOSITORY_URL URL address that "git clone" can download the repository from
+
+Examples:
+ Searching "make get-git" in cgit repository:
+ git search 'make get-git' https://git.zx2c4.com/cgit/
+ git search 'make get-git' https://git.zx2c4.com/cgit/ -- $(git rev-list --all)
+```
diff --git a/src/content/tils/2020/09/04/email-cli-fun-profit.adoc b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
new file mode 100644
index 0000000..320f3ab
--- /dev/null
+++ b/src/content/tils/2020/09/04/email-cli-fun-profit.adoc
@@ -0,0 +1,80 @@
+---
+title: Send emails using the command line for fun and profit!
+date: 2020-09-04
+layout: post
+lang: en
+ref: send-emails-using-the-command-line-for-fun-and-profit
+---
+Here are a few reasons why:
+
+1. send yourself and other people notifications of cronjobs, script runs, CI
+   jobs, *etc.*
+
+2. leverage the POSIX pipe `|`, and pipe emails away!
+
+3. because you can.
+
+Reason 3 is the fun part, reasons 1 and 2 are the profit part.
+
+First [install and configure SSMTP][ssmtp] for using, say, Gmail as the email
+server:
+
+```shell
+# file /etc/ssmtp/ssmtp.conf
+FromLineOverride=YES
+MailHub=smtp.gmail.com:587
+UseSTARTTLS=YES
+UseTLS=YES
+rewriteDomain=gmail.com
+root=username@gmail.com
+AuthUser=username
+AuthPass=password
+```
+
+Now install [GNU Mailutils][gnu-mailutils] (`sudo apt-get install mailutils` or the
+equivalent on your OS), and send yourself your first email:
+
+```shell
+echo body | mail -aFrom:email@example.com email@example.com -s subject
+```
+
+And that's about it, you've got mail. Here are some more places where it might
+be applicable:
+
+```shell
+# report a backup cronjob, attaching logs
+set -e
+
+finish() {
+ status=$?
+ if [[ $status = 0 ]]; then
+ STATUS="SUCCESS (status $status)"
+ else
+ STATUS="FAILURE (status $status)"
+ fi
+
+ mail user@example.com \
+ -s "Backup job report on $(hostname): ${STATUS}" \
+ --content-type 'text/plain; charset=utf-8' \
+ -A"$LOG_FILE" <<< 'The log report is in the attachment.'
+}
+trap finish EXIT
+
+do-long-backup-cmd-here
+```
+
+```shell
+# share the output of a cmd with someone
+some-program | mail someone@example.com -s "The weird logs that I was talking about"
+```
+
+...and so on.
+
+You may consider adding an `alias mail='mail -aFrom:email@example.com'` so you
+don't keep re-entering the "From: " part.
+
+Send yourself some emails to see it working!
+
+[ssmtp]: https://wiki.archlinux.org/index.php/SSMTP
+[gnu-mailutils]: https://mailutils.org/
+[forwarding-wiki-section]: https://wiki.archlinux.org/index.php/SSMTP#Forward_to_a_Gmail_mail_server
diff --git a/src/content/tils/2020/09/05/oldschool-pr.adoc b/src/content/tils/2020/09/05/oldschool-pr.adoc
new file mode 100644
index 0000000..5b4e445
--- /dev/null
+++ b/src/content/tils/2020/09/05/oldschool-pr.adoc
@@ -0,0 +1,118 @@
+---
+
+title: Pull requests with Git, the old school way
+
+date: 2020-09-05
+
+layout: post
+
+lang: en
+
+ref: pull-requests-with-git-the-old-school-way
+
+eu_categories: git
+
+---
+It might be news to you, as it was to me, that the "pull requests" you can
+create on a Git hosting provider's web UI[^pr-webui] like
+GitLab/Bitbucket/GitHub actually come from Git itself: `git request-pull`.
+
+[^pr-webui]: And maybe even using the Git hosting provider's API from the
+ command line!
+
+At the very core, they accomplish the same thing: both the original and the web
+UI ones are ways for you to request the project maintainers to pull in your
+changes from your fork. It's like saying: "hi there, I did some changes on my
+clone of the repository, what do you think about bringing those in?".
+
+The only difference is that you're working with only Git itself, so you're not
+tied to any Git hosting provider: you can send pull requests across them
+transparently! You could even use your own [cgit][cgit] installation. No need to
+be locked in by any of them, putting the "D" back in "DVCS": it's a
+**distributed** version control system.
+
+[cgit]: https://git.zx2c4.com/cgit/
+
+## `git request-pull` introduction
+
+Here's the raw output of a `git request-pull`:
+
+```shell
+$ git request-pull HEAD public-origin
+The following changes since commit 302c9f2f035c0360acd4e13142428c100a10d43f:
+
+ db post: Add link to email exchange (2020-09-03 21:23:55 -0300)
+
+are available in the Git repository at:
+
+ https://euandre.org/git/euandre.org/
+
+for you to fetch changes up to 524c646cdac4153e54f2163e280176adbc4873fa:
+
+ db post: better pinpoint sqlite unsuitability (2020-09-03 22:08:56 -0300)
+
+----------------------------------------------------------------
+EuAndreh (1):
+ db post: better pinpoint sqlite unsuitability
+
+ _posts/2020-08-31-the-database-i-wish-i-had.md | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+```
+
+That very first line is saying: "create me a pull request with only a single
+commit, defined by `HEAD`, and use the URL defined by `public-origin`".
+
+Here's a pitfall: you may try using your `origin` remote at first where I put
+`public-origin`, but that often points to something like
+`git@example.com`, or `git.example.com:repo.git` (check that with
+`git remote -v | grep origin`). In both cases those are addresses available for
+interaction via SSH, and it would be better if your pull requests used an
+address ready for public consumption.
+
+A simple solution for that is for you to add the `public-origin` alias as the
+HTTPS alternative to the SSH version:
+
+```shell
+$ git remote add public-origin https://example.com/user/repo
+```
+
+Every Git hosting provider exposes repositories via HTTPS.
+
+Experiment with it yourself, and get acquainted with the CLI.
+
+## Delivering decentralized pull requests
+
+Now that you can create the content of a pull request, you can just
+[deliver it][cli-email] to the interested parties' email:
+
+```shell
+# send a PR with your last commit to the author's email
+git request-pull HEAD public-origin | mail author@example.com -s "PR: Add thing to repo"
+
+# send a PR with your last 5 commits to the project's mailing
+# list, including the patch
+git request-pull -p HEAD~5 public-origin | \
+ mail list@example.com -s "PR: Add another thing to repo"
+
+# send every commit that is new in "other-branch"
+git request-pull master public-origin other-branch | \
+  mail list@example.com -s 'PR: All commits from my "other-branch"'
+```
+
+[cli-email]: {% link _tils/2020-09-04-send-emails-using-the-command-line-for-fun-and-profit.md %}
+
+## Conclusion
+
+In practice, I've never used or seen anyone use pull requests this way:
+everybody is just [sending patches via email][decentralized-git].
+
+If you stop to think about this model, the problem of "Git hosting providers
+becoming too centralized" is a non-issue, and "Git federation" proposals are
+less attractive than they may sound initially.
+
+Using Git this way is not as scary or weird as the first impression may suggest.
+It is actually how Git was designed to be used.
+
+Check `git help request-pull` for more info.
+
+[decentralized-git]: https://drewdevault.com/2018/07/23/Git-is-already-distributed.html
diff --git a/src/content/tils/2020/10/11/search-git-history.adoc b/src/content/tils/2020/10/11/search-git-history.adoc
new file mode 100644
index 0000000..251abe9
--- /dev/null
+++ b/src/content/tils/2020/10/11/search-git-history.adoc
@@ -0,0 +1,41 @@
+---
+
+title: Search changes to a filename pattern in Git history
+
+date: 2020-10-11
+
+layout: post
+
+lang: en
+
+ref: search-changes-to-a-filename-pattern-in-git-history
+
+eu_categories: git
+
+---
+
+This is [yet][git-til-1] [another][git-til-2] ["search in Git"][git-til-3] TIL
+entry. You could say that Git has an unintuitive CLI, or that it is very
+powerful.
+
+I wanted to search for an old file that I knew was in the
+history of the repository, but was deleted some time ago. So I didn't really
+remember the name, only bits of it.
+
+I immediately went to the list of TILs I had written on searching in Git, but
+it wasn't readily obvious how to do it, so here it goes:
+
+```shell
+git log -- *pattern*
+```
+
+You could add globs before the pattern to match things on any directory, and add
+our `-p` friend to promptly see the diffs:
+
+```shell
+git log -p -- **/*pattern*
+```
+
+[git-til-1]: {% link _tils/2020-08-14-browse-a-git-repository-at-a-specific-commit.md %}
+[git-til-2]: {% link _tils/2020-08-16-search-in-git.md %}
+[git-til-3]: {% link _tils/2020-08-28-grep-online-repositories.md %}
diff --git a/src/content/tils/2020/11/08/find-broken-symlink.adoc b/src/content/tils/2020/11/08/find-broken-symlink.adoc
new file mode 100644
index 0000000..bc97fc6
--- /dev/null
+++ b/src/content/tils/2020/11/08/find-broken-symlink.adoc
@@ -0,0 +1,36 @@
+---
+
+title: Find broken symlinks with "find"
+
+date: 2020-11-08
+
+layout: post
+
+lang: en
+
+ref: find-broken-symlinks-with-find
+
+eu_categories: shell
+
+---
+
+The `find` command knows how to show broken symlinks:
+
+```shell
+find . -xtype l
+```
+
+This was useful to me when combined with [Git Annex][git-annex]. Its
+[`wanted`][git-annex-wanted] option allows you to have a "sparse" checkout of
+the content, and save space by not having to copy every annexed file locally:
+
+```shell
+git annex wanted . 'exclude=Music/* and exclude=Videos/*'
+```
+
+You can `find` any broken symlinks outside those directories by querying with
+Git Annex itself, but `find . -xtype l` works on other places too, where broken
+symlinks might be a problem.
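+
+Without relying on Git Annex, a plain `find` can do a similar exclusion; here's
+a sketch, assuming the same `Music/` and `Videos/` directories:
+
+```shell
+# skip the excluded directories, print any remaining broken symlinks
+find . -path ./Music -prune -o -path ./Videos -prune -o -xtype l -print
+```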
+
+[git-annex]: https://git-annex.branchable.com/
+[git-annex-wanted]: https://git-annex.branchable.com/git-annex-wanted/
diff --git a/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
new file mode 100644
index 0000000..3336482
--- /dev/null
+++ b/src/content/tils/2020/11/12/diy-nix-bash-ci.adoc
@@ -0,0 +1,74 @@
+---
+
+title: DIY bare bones CI server with Bash and Nix
+
+date: 2020-11-12 3
+
+layout: post
+
+lang: en
+
+ref: diy-bare-bones-ci-server-with-bash-and-nix
+
+eu_categories: ci
+
+---
+
+With a server with Nix installed (no need for NixOS), you can leverage its build
+isolation for running CI jobs by adding a [post-receive][post-receive] Git hook
+to the server.
+
+In most of my projects I like to keep a `test` attribute which runs the tests
+with `nix-build -A test`. This way, a post-receive hook could look like:
+
+```shell
+#!/usr/bin/env bash
+set -Eeuo pipefail
+set -x
+
+LOGS_DIR="/data/static/ci-logs/libedn"
+mkdir -p "$LOGS_DIR"
+LOGFILE="${LOGS_DIR}/$(date -Is)-$(git rev-parse master).log"
+exec &> >(tee -a "${LOGFILE}")
+
+unset GIT_DIR
+CLONE="$(mktemp -d)"
+git clone . "$CLONE"
+pushd "$CLONE"
+
+finish() {
+ printf "\n\n>>> exit status was %s\n" "$?"
+}
+trap finish EXIT
+
+nix-build -A test
+```
+
+We initially (lines #5 to #8) create a log file, named after *when* the run is
+happening and *which* commit it is running for. The `exec` and `tee` combo
+allows the output of the script to go both to `stdout` *and* the log file. This
+makes the log output show up when you do a `git push`.
+
+Lines #10 to #13 create a fresh clone of the repository and line #20 runs the
+test command.
+
+After using a similar post-receive hook for a while, I now even generate a
+simple HTML file to make the logs available ([example project][ci-logs])
+through the browser.
+
+[post-receive]: https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
+[ci-logs]: https://euandreh.xyz/remembering/ci.html
+
+## Upsides
+
+No vendor lock-in, as all you need is a server with Nix installed.
+
+And if you pin the Nixpkgs version you're using, this very simple setup yields
+extremely sandboxed runs in a very hermetic environment.
+
+## Downsides
+
+Besides the many missing shiny features of this very simplistic CI, `nix-build`
+can be very resource intensive. Specifically, it consumes too much memory. So if
+it has to download too many things, or the build closure gets too big, the
+server might very well run out of memory.
diff --git a/src/content/tils/2020/11/12/git-bisect-automation.adoc b/src/content/tils/2020/11/12/git-bisect-automation.adoc
new file mode 100644
index 0000000..9c34b2a
--- /dev/null
+++ b/src/content/tils/2020/11/12/git-bisect-automation.adoc
@@ -0,0 +1,35 @@
+---
+
+title: Git bisect automation
+
+date: 2020-11-12 2
+
+layout: post
+
+lang: en
+
+ref: git-bisect-automation
+
+eu_categories: git
+
+---
+
+It is good to have a standardized way to run builds and tests on the repository
+of a project, so that you can find when a bug was introduced by using
+`git bisect run`.
+
+I've already been in the situation where a bug was introduced and I didn't even
+know how it was occurring, and running Git bisect over hundreds of commits to
+pinpoint the failing commit was very empowering:
+
+```shell
+$ GOOD_COMMIT_SHA=e1fd0a817d192c5a5df72dd7422e36558fa78e46
+$ git bisect start HEAD $GOOD_COMMIT_SHA
+$ git bisect run sh -c './build.sh && ./run-failing-case.sh'
+```
+
+Git will then do a binary search between the commits, and run the commands you
+provide it with to find the failing commit.
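+
+Once the culprit is found, leave the bisection and go back to where you were
+with:
+
+```shell
+git bisect reset
+```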
+
+Instead of being afraid of doing a bisect, you should instead leverage it, and
+make Git help you dig through the history of the repository to find the bad code.
diff --git a/src/content/tils/2020/11/12/useful-bashvars.adoc b/src/content/tils/2020/11/12/useful-bashvars.adoc
new file mode 100644
index 0000000..33a072e
--- /dev/null
+++ b/src/content/tils/2020/11/12/useful-bashvars.adoc
@@ -0,0 +1,72 @@
+---
+
+title: Useful Bash variables
+
+date: 2020-11-12 1
+
+layout: post
+
+lang: en
+
+ref: useful-bash-variables
+
+eu_categories: shell
+
+---
+
+[GNU Bash][gnu-bash] has a few two-character variables that may be useful when
+typing on the terminal.
+
+[gnu-bash]: https://www.gnu.org/software/bash/
+
+## `!!`: the text of the last command
+
+The [`!!` variable][previous-command] refers to the previous command, and I find
+it useful when following chains of symlinks:
+
+[previous-command]: https://www.gnu.org/software/bash/manual/bash.html#Event-Designators
+
+```shell
+$ which git
+/run/current-system/sw/bin/git
+$ readlink $(!!)
+readlink $(which git)
+/nix/store/5bgr1xpm4m0r72h9049jbbhagxdyrnyb-git-2.28.0/bin/git
+```
+
+It is also useful when you forget to prefix `sudo` to a command that requires
+it:
+
+```shell
+$ ./requires-sudo.sh
+./requires-sudo.sh: Permission denied
+$ sudo !!
+sudo ./requires-sudo.sh
+# all good
+```
+
+Bash prints the command expansion before executing it, so you can follow along
+with what it is doing.
+
+## `$_`: most recent parameter
+
+The [`$_` variable][recent-parameter] gives you the last argument of the
+previous command, which can save you some typing:
+
+```shell
+# instead of...
+$ mkdir -p a/b/c/d/
+$ cd a/b/c/d/
+
+# ...you can:
+$ mkdir -p a/b/c/d/
+$ cd $_
+```
+
+[recent-parameter]: https://www.gnu.org/software/bash/manual/bash.html#Special-Parameters
+
+## Conclusion
+
+I wouldn't use those in a script, as it would make it harder to read; but I
+find these shortcuts handy when typing at the interactive terminal.
diff --git a/src/content/tils/2020/11/14/gpodder-media.adoc b/src/content/tils/2020/11/14/gpodder-media.adoc
new file mode 100644
index 0000000..a74b225
--- /dev/null
+++ b/src/content/tils/2020/11/14/gpodder-media.adoc
@@ -0,0 +1,33 @@
+---
+
+title: gPodder as a media subscription manager
+
+date: 2020-11-14
+
+layout: post
+
+lang: en
+
+ref: gpodder-as-a-media-subscription-manager
+
+---
+
+As we [re-discover][rss] the value of Atom/RSS feeds, most useful feed clients I
+know of don't support media, specifically audio and video.
+
+[gPodder][gpodder] does.
+
+It is mostly known as a desktop podcatcher. But the thing about podcasts is that
+the content is delivered through an RSS/Atom feed. So you can just use gPodder as
+your media feed client, where you have control of what you look at.
+
+Most audio and video providers I know of offer an RSS/Atom view of their content,
+so you can, say, treat any YouTube channel like a feed on its own.
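+
+For example, subscribing to a YouTube channel through its feed can be done with
+the `gpo` command-line tool that comes with gPodder (the channel id below is a
+placeholder):
+
+```shell
+gpo subscribe "https://www.youtube.com/feeds/videos.xml?channel_id=<channel-id>"
+gpo update
+gpo download
+```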
+
+gPodder will then manage your feeds, watched/unwatched status, queue downloads, etc.
+
+It seems obvious now, but it was a big finding for me. If it got you interested, I
+recommend giving gPodder a try.
+
+[rss]: https://www.charlieharrington.com/unexpected-useless-and-urgent
+[gpodder]: https://gpodder.github.io/
diff --git a/src/content/tils/2020/11/30/git-notes-ci.adoc b/src/content/tils/2020/11/30/git-notes-ci.adoc
new file mode 100644
index 0000000..f8dd063
--- /dev/null
+++ b/src/content/tils/2020/11/30/git-notes-ci.adoc
@@ -0,0 +1,122 @@
+---
+
+title: Storing CI data on Git notes
+
+date: 2020-11-30
+
+layout: post
+
+lang: en
+
+ref: storing-ci-data-on-git-notes
+
+eu_categories: git,ci
+
+---
+
+Extending the bare bones CI server I've [talked about before][previous-article],
+divoplade on Freenode suggested storing CI artifacts, such as tarballs,
+binaries, logs, *etc.*, on [Git notes][git-notes].
+
+I've written a small script that puts log files and CI job data on Git notes,
+and makes them visible on the porcelain log. It is a simple extension of the
+previous article:
+
+```shell
+#!/usr/bin/env bash
+set -Eeuo pipefail
+set -x
+
+PREFIX='/srv/ci/vps'
+mkdir -p "$PREFIX"
+read -r _ SHA _ # oldrev newrev refname
+FILENAME="$(date -Is)-$SHA.log"
+LOGFILE="$PREFIX/$FILENAME"
+exec &> >(tee -a "$LOGFILE")
+
+echo "Starting CI job at: $(date -Is)"
+
+finish() {
+ STATUS="$?"
+ printf "\n\n>>> exit status was %s\n" "$STATUS"
+ echo "Finishing CI job at: $(date -Is)"
+ popd
+ NOTE=$(cat <<EOF
+See CI logs with:
+ git notes --ref=refs/notes/ci-logs show $SHA
+ git notes --ref=refs/notes/ci-data show $SHA
+EOF
+)
+ git notes --ref=refs/notes/ci-data add -f -m "$STATUS $FILENAME"
+ git notes --ref=refs/notes/ci-logs add -f -F "$LOGFILE"
+ git notes add -f -m "$NOTE"
+ printf "\n\n>>> CI logs added as Git note."
+}
+trap finish EXIT
+
+unset GIT_DIR
+CLONE="$(mktemp -d)"
+git clone . "$CLONE"
+pushd "$CLONE"
+git config --global user.email git@euandre.org
+git config --global user.name 'EuAndreh CI'
+
+./container make check site
+./container make publish
+```
+
+The important part is in the `finish()` function:
+- #25 stores the exit status and the generated filename, separated by a space;
+- #26 adds the log file in a note using the `refs/notes/ci-logs` ref;
+- #27 adds a note to the commit saying how to see the logs.
+
+A commit now has an attached note, and shows it whenever you look at it:
+
+```diff
+$ git show 87c57133abd8be5d7cc46afbf107f59b26066575
+commit 87c57133abd8be5d7cc46afbf107f59b26066575
+Author: EuAndreh <eu@euandre.org>
+Date: Wed Feb 24 21:58:28 2021 -0300
+
+ vps/machines.scm: Change path to cronjob files
+
+Notes:
+ See CI logs with:
+ git notes --ref=refs/notes/ci-logs show 87c57133abd8be5d7cc46afbf107f59b26066575
+ git notes --ref=refs/notes/ci-data show 87c57133abd8be5d7cc46afbf107f59b26066575
+
+diff --git a/servers/vps/machines.scm b/servers/vps/machines.scm
+index d1830ca..a4ccde7 100644
+--- a/servers/vps/machines.scm
++++ b/servers/vps/machines.scm
+@@ -262,8 +262,8 @@ pki " mail-domain " key \"" (tls-priv-for mail-domain) "\""))
+ (service mcron-service-type
+ (mcron-configuration
+ (jobs
+- (list #~(job "30 1 * * 1" "guix gc -d")
+- #~(job "30 0 * * *" "/var/lib/euandreh/backup.sh")))))
++ (list #~(job "30 1 * * 1" "/opt/bin/gc.sh")
++ #~(job "30 0 * * *" "/opt/bin/backup.sh")))))
+ (service dhcp-client-service-type)
+ #;
+ (service opensmtpd-service-type
+```
+
+Other tools such as [cgit][cgit] will also show notes on the web interface:
+<https://euandre.org/git/servers/commit?id=87c57133abd8be5d7cc46afbf107f59b26066575>.
+
+You can go even further: since cgit can serve raw blobs directly, you can even
+serve such artifacts (log files, release artifacts, binaries) from cgit itself:
+
+```shell
+$ SHA="$(git notes --ref=refs/notes/ci-logs list 87c57133abd8be5d7cc46afbf107f59b26066575)"
+$ echo "https://euandre.org/git/servers/blob?id=$SHA"
+https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c
+```
+
+And like that you'll have cgit serving the artifacts for you:
+<https://euandre.org/git/servers/blob?id=1707a97bae24e3864fe7943f8dda6d01c294fb5c>.
+
+[previous-article]: {% link _tils/2020-11-12-diy-bare-bones-ci-server-with-bash-and-nix.md %}
+[git-notes]: https://git-scm.com/docs/git-notes
+[cgit]: https://git.zx2c4.com/cgit/
diff --git a/src/content/tils/2020/12/15/shellcheck-repo.adoc b/src/content/tils/2020/12/15/shellcheck-repo.adoc
new file mode 100644
index 0000000..71d10a3
--- /dev/null
+++ b/src/content/tils/2020/12/15/shellcheck-repo.adoc
@@ -0,0 +1,171 @@
+---
+
+title: 'Awk snippet: ShellCheck all scripts in a repository'
+
+date: 2020-12-15
+
+updated_at: 2020-12-16
+
+layout: post
+
+lang: en
+
+ref: awk-snippet-shellcheck-all-scripts-in-a-repository
+
+eu_categories: shell
+
+---
+
+Inspired by Fred Herbert's "[Awk in 20 Minutes][awk-20min]", here's a problem I
+just solved with a line of Awk: run ShellCheck in all scripts of a repository.
+
+In my repositories I usually have Bash and POSIX scripts, which I want to keep
+tidy with [ShellCheck][shellcheck]. Here's the first version of
+`assert-shellcheck.sh`:
+
+```shell
+#!/bin/sh -eux
+
+find . -type f -name '*.sh' -print0 | xargs -0 shellcheck
+```
+
+This is the type of script that I copy around to all repositories, and I want it
+to be capable of working on any repository, without requiring a list of files to
+run ShellCheck on.
+
+This first version worked fine, as all my scripts had the '.sh' ending. But I
+recently added some scripts without any extension, so `assert-shellcheck.sh`
+called for a second version. The first attempt was to try grepping the shebang
+line:
+
+```shell
+$ grep '^#!/' assert-shellcheck.sh
+#!/usr/sh
+```
+
+Good, we have a grep pattern on the first try. Let's try to find all the
+matching files:
+
+```shell
+$ find . -type f | xargs grep -l '^#!/'
+./TODOs.org
+./.git/hooks/pre-commit.sample
+./.git/hooks/pre-push.sample
+./.git/hooks/pre-merge-commit.sample
+./.git/hooks/fsmonitor-watchman.sample
+./.git/hooks/pre-applypatch.sample
+./.git/hooks/pre-push
+./.git/hooks/prepare-commit-msg.sample
+./.git/hooks/commit-msg.sample
+./.git/hooks/post-update.sample
+./.git/hooks/pre-receive.sample
+./.git/hooks/applypatch-msg.sample
+./.git/hooks/pre-rebase.sample
+./.git/hooks/update.sample
+./build-aux/with-guile-env.in
+./build-aux/test-driver
+./build-aux/missing
+./build-aux/install-sh
+./build-aux/install-sh~
+./bootstrap
+./scripts/assert-todos.sh
+./scripts/songbooks
+./scripts/compile-readme.sh
+./scripts/ci-build.sh
+./scripts/generate-tasks-and-bugs.sh
+./scripts/songbooks.in
+./scripts/with-container.sh
+./scripts/assert-shellcheck.sh
+```
+
+This approach has a problem, though: it includes files ignored by Git, such as
+`build-aux/install-sh~`, and even goes into the `.git/` directory and finds
+sample hooks in `.git/hooks/*`.
+
+To list the files that Git is tracking we'll try `git ls-files`:
+
+```shell
+$ git ls-files | xargs grep -l '^#!/'
+TODOs.org
+bootstrap
+build-aux/with-guile-env.in
+old/scripts/assert-docs-spelling.sh
+old/scripts/build-site.sh
+old/scripts/builder.bats.sh
+scripts/assert-shellcheck.sh
+scripts/assert-todos.sh
+scripts/ci-build.sh
+scripts/compile-readme.sh
+scripts/generate-tasks-and-bugs.sh
+scripts/songbooks.in
+scripts/with-container.sh
+```
+
+It looks to be almost there, but the `TODOs.org` entry shows a flaw in it: grep
+is looking for a `'^#!/'` pattern on any part of the file. In my case,
+`TODOs.org` had a snippet in the middle of the file where a line started with
+`#!/bin/sh`.
+
+So what we actually want is to match the **first** line against the pattern. We
+could loop through each file, get the first line with `head -n 1` and grep
+against that, but this is starting to look messy. I bet there is another way of
+doing it concisely...
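+
+For comparison, the messy loop I was trying to avoid would look something like
+this (a sketch, not what ended up in the script):
+
+```shell
+# check only the first line of each tracked file for a shebang
+git ls-files | while read -r f; do
+    head -n 1 "$f" | grep -q '^#!/' && echo "$f"
+done
+```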
+
+Let's try Awk. I need a way to select the line numbers to replace `head -n 1`,
+and to stop processing the file if the pattern matches. A quick search points me
+to using `FNR` for the former, and `{ nextfile }` for the latter. Let's try it:
+
+```shell
+$ git ls-files | xargs awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }'
+bootstrap
+build-aux/with-guile-env.in
+old/scripts/assert-docs-spelling.sh
+old/scripts/build-site.sh
+old/scripts/builder.bats.sh
+scripts/assert-shellcheck.sh
+scripts/assert-todos.sh
+scripts/ci-build.sh
+scripts/compile-readme.sh
+scripts/generate-tasks-and-bugs.sh
+scripts/songbooks.in
+scripts/with-container.sh
+```
+
+Great! Only `TODOs.org` is missing, but the script is much better: instead of
+matching against any part of the file that may have a shebang-like line, we only
+look at the first. Let's put it back into the `assert-shellcheck.sh` file and
+use NUL for separators to accommodate files with spaces in the name:
+
+```shell
+#!/bin/sh -eux
+
+git ls-files -z | \
+ xargs -0 awk 'FNR>1 { nextfile } /^#!\// { print FILENAME; nextfile }' | \
+ xargs shellcheck
+```
+
+This is where I've stopped, but I imagine a likely improvement: match against
+only `#!/bin/sh` and `#!/usr/bin/env bash` shebangs (the ones I use most), to
+avoid running ShellCheck on Perl files, or other shebangs.
+
+Also when reviewing the text of this article, I found that `{ nextfile }` is a
+GNU Awk extension. It would be an improvement if `assert-shellcheck.sh` relied
+on the POSIX subset of Awk for working correctly.
+
+## *Update*
+
+After publishing, I could remove `{ nextfile }` and even make the script
+simpler:
+
+```shell
+#!/bin/sh -eux
+
+git ls-files -z | \
+ xargs -0 awk 'FNR==1 && /^#!\// { print FILENAME }' | \
+ xargs shellcheck
+```
+
+Now both the shell and Awk usage are POSIX compatible.
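+
+And the shebang-filtering improvement mentioned above could be sketched as
+follows (untested, matching only the two shebangs I care about):
+
+```shell
+#!/bin/sh -eux
+
+git ls-files -z | \
+  xargs -0 awk 'FNR==1 && (/^#!\/bin\/sh/ || /^#!\/usr\/bin\/env bash/) { print FILENAME }' | \
+  xargs shellcheck
+```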
+
+[awk-20min]: https://ferd.ca/awk-in-20-minutes.html
+[shellcheck]: https://www.shellcheck.net/
diff --git a/src/content/tils/2020/12/29/svg.adoc b/src/content/tils/2020/12/29/svg.adoc
new file mode 100644
index 0000000..54cca9a
--- /dev/null
+++ b/src/content/tils/2020/12/29/svg.adoc
@@ -0,0 +1,134 @@
+---
+
+title: SVG favicon
+
+date: 2020-12-29
+
+updated_at: 2021-01-12
+
+layout: post
+
+lang: en
+
+ref: svg-favicon
+
+---
+
+I've wanted to change this website's favicon from a plain `.ico` file to a
+proper SVG. The problem I was trying to solve was to reuse the same image on
+other places, such as avatars.
+
+Generating a PNG from the existing 16x16 icon was possible but bad: the final
+image was blurry. Converting the `.ico` to an SVG was possible, but sub-optimal:
+tools try to guess some vector paths, and the final SVG didn't match the
+original.
+
+Instead I used a tool to draw the "vector pixels" as black squares, and after
+getting the final result I manually cleaned up the generated XML:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
+ <path d="M 0 8 L 1 8 L 1 9 L 0 9 L 0 8 Z" />
+ <path d="M 0 13 L 1 13 L 1 14 L 0 14 L 0 13 Z" />
+ <path d="M 1 8 L 2 8 L 2 9 L 1 9 L 1 8 Z" />
+ <path d="M 1 13 L 2 13 L 2 14 L 1 14 L 1 13 Z" />
+ <path d="M 2 8 L 3 8 L 3 9 L 2 9 L 2 8 Z" />
+ <path d="M 2 13 L 3 13 L 3 14 L 2 14 L 2 13 Z" />
+ <path d="M 3 8 L 4 8 L 4 9 L 3 9 L 3 8 Z" />
+ <path d="M 3 13 L 4 13 L 4 14 L 3 14 L 3 13 Z" />
+ <path d="M 4 7 L 5 7 L 5 8 L 4 8 L 4 7 Z" />
+ <path d="M 4 8 L 5 8 L 5 9 L 4 9 L 4 8 Z" />
+ <path d="M 4 13 L 5 13 L 5 14 L 4 14 L 4 13 Z" />
+ <path d="M 5 6 L 6 6 L 6 7 L 5 7 L 5 6 Z" />
+ <path d="M 5 7 L 6 7 L 6 8 L 5 8 L 5 7 Z" />
+ <path d="M 5 13 L 6 13 L 6 14 L 5 14 L 5 13 Z" />
+ <path d="M 6 5 L 7 5 L 7 6 L 6 6 L 6 5 Z" />
+ <path d="M 6 6 L 7 6 L 7 7 L 6 7 L 6 6 Z" />
+ <path d="M 6 14 L 7 14 L 7 15 L 6 15 L 6 14 Z" />
+ <path d="M 7 1 L 8 1 L 8 2 L 7 2 L 7 1 Z" />
+ <path d="M 7 14 L 8 14 L 8 15 L 7 15 L 7 14 Z" />
+ <path d="M 7 15 L 8 15 L 8 16 L 7 16 L 7 15 Z" />
+ <path d="M 7 2 L 8 2 L 8 3 L 7 3 L 7 2 Z" />
+ <path d="M 7 3 L 8 3 L 8 4 L 7 4 L 7 3 Z" />
+ <path d="M 7 4 L 8 4 L 8 5 L 7 5 L 7 4 Z" />
+ <path d="M 7 5 L 8 5 L 8 6 L 7 6 L 7 5 Z" />
+ <path d="M 8 1 L 9 1 L 9 2 L 8 2 L 8 1 Z" />
+ <path d="M 8 15 L 9 15 L 9 16 L 8 16 L 8 15 Z" />
+ <path d="M 9 1 L 10 1 L 10 2 L 9 2 L 9 1 Z" />
+ <path d="M 9 2 L 10 2 L 10 3 L 9 3 L 9 2 Z" />
+ <path d="M 9 6 L 10 6 L 10 7 L 9 7 L 9 6 Z" />
+ <path d="M 9 15 L 10 15 L 10 16 L 9 16 L 9 15 Z" />
+ <path d="M 10 2 L 11 2 L 11 3 L 10 3 L 10 2 Z" />
+ <path d="M 10 3 L 11 3 L 11 4 L 10 4 L 10 3 Z" />
+ <path d="M 10 4 L 11 4 L 11 5 L 10 5 L 10 4 Z" />
+ <path d="M 10 5 L 11 5 L 11 6 L 10 6 L 10 5 Z" />
+ <path d="M 10 6 L 11 6 L 11 7 L 10 7 L 10 6 Z" />
+ <path d="M 11 6 L 12 6 L 12 7 L 11 7 L 11 6 Z" />
+ <path d="M 11 8 L 12 8 L 12 9 L 11 9 L 11 8 Z" />
+ <path d="M 10 15 L 11 15 L 11 16 L 10 16 L 10 15 Z" />
+ <path d="M 11 10 L 12 10 L 12 11 L 11 11 L 11 10 Z" />
+ <path d="M 11 12 L 12 12 L 12 13 L 11 13 L 11 12 Z" />
+ <path d="M 11 14 L 12 14 L 12 15 L 11 15 L 11 14 Z" />
+ <path d="M 11 15 L 12 15 L 12 16 L 11 16 L 11 15 Z" />
+ <path d="M 12 6 L 13 6 L 13 7 L 12 7 L 12 6 Z" />
+ <path d="M 12 8 L 13 8 L 13 9 L 12 9 L 12 8 Z" />
+ <path d="M 12 10 L 13 10 L 13 11 L 12 11 L 12 10 Z" />
+ <path d="M 12 12 L 13 12 L 13 13 L 12 13 L 12 12 Z" />
+ <path d="M 12 14 L 13 14 L 13 15 L 12 15 L 12 14 Z" />
+ <path d="M 13 6 L 14 6 L 14 7 L 13 7 L 13 6 Z" />
+ <path d="M 13 8 L 14 8 L 14 9 L 13 9 L 13 8 Z" />
+ <path d="M 13 10 L 14 10 L 14 11 L 13 11 L 13 10 Z" />
+ <path d="M 13 12 L 14 12 L 14 13 L 13 13 L 13 12 Z" />
+ <path d="M 13 13 L 14 13 L 14 14 L 13 14 L 13 13 Z" />
+ <path d="M 13 14 L 14 14 L 14 15 L 13 15 L 13 14 Z" />
+ <path d="M 14 7 L 15 7 L 15 8 L 14 8 L 14 7 Z" />
+ <path d="M 14 8 L 15 8 L 15 9 L 14 9 L 14 8 Z" />
+ <path d="M 14 9 L 15 9 L 15 10 L 14 10 L 14 9 Z" />
+ <path d="M 14 10 L 15 10 L 15 11 L 14 11 L 14 10 Z" />
+ <path d="M 14 11 L 15 11 L 15 12 L 14 12 L 14 11 Z" />
+ <path d="M 14 12 L 15 12 L 15 13 L 14 13 L 14 12 Z" />
+</svg>
+```
+
+The good thing about this new favicon
+(at [`/static/lord-favicon.svg`](/static/lord-favicon.svg)) is that
+a) it is simple enough that I feel
+comfortable editing it manually, and b) it is an SVG, which means I can generate
+it at any desired size.
+
+With the new favicon file, I now had to add to the templates' `<head>` a
+`<link>` to this icon:
+```html
+<head>
+ <meta charset="UTF-8" />
+ <link rel="icon" type="image/svg+xml" href="/static/favicon.svg">
+ ...
+```
+
+Still missing is a bitmap image for places that can't handle vector images. I
+used a Jekyll generator to create a PNG from the existing SVG:
+
+```ruby
+module Jekyll
+ class FaviconGenerator < Generator
+ safe true
+ priority :high
+
+ SIZE = 420
+
+ def generate(site)
+ svg = 'static/favicon.svg'
+ png = 'static/favicon.png'
+ unless File.exist? png then
+ puts "Missing '#{png}', generating..."
+ puts `inkscape -o #{png} -w #{SIZE} -h #{SIZE} #{svg}`
+ end
+ end
+ end
+end
+```
+
+I had to increase the priority of the generator so that it would run before
+other pages that use a `{% link /static/lord-favicon.png %}`, otherwise
+the file would be considered missing.
diff --git a/src/content/tils/2021/01/12/curl-awk-emails.adoc b/src/content/tils/2021/01/12/curl-awk-emails.adoc
new file mode 100644
index 0000000..880ddf1
--- /dev/null
+++ b/src/content/tils/2021/01/12/curl-awk-emails.adoc
@@ -0,0 +1,142 @@
+---
+
+title: 'Awk snippet: send email to multiple recipients with cURL'
+
+date: 2021-01-12
+
+layout: post
+
+lang: en
+
+ref: awk-snippet-send-email-to-multiple-recipients-with-curl
+
+---
+
+As I experiment with [Neomutt][neomutt], I wanted to keep being able to enqueue emails for sending later, like in my previous setup, so that I don't rely on having an internet connection.
+
+My requirements for the `sendmail` command were:
+1. store the email in a file, and send it later;
+1. send from different addresses, using different SMTP servers.
+
+I couldn't find an MTA that could accomplish that, but I was able to quickly write a solution.
+
+The first part was the easiest: store the email in a file:
+
+```shell
+# ~/.config/mutt/muttrc:
+set sendmail=~/bin/enqueue-email.sh
+
+# ~/bin/enqueue-email.sh:
+#!/bin/sh -eu
+
+cat - > "$HOME/mbsync/my-queued-emails/$(date -Is)"
+```
+
+Now that I had the email file stored locally, I needed a program to send the email from the file, so that I could create a cronjob like:
+
+```shell
+for f in ~/mbsync/my-queued-emails/*; do
+ ~/bin/dispatch-email.sh "$f" && rm "$f"
+done
+```
+
+The `dispatch-email.sh` would have to look at the `From: ` header and decide which SMTP server to use.
+As I [found out][curl-email] that [curl][curl] supports SMTP and is able to send emails, this is what I ended up with:
+
+```shell
+#!/bin/sh -eu
+
+F="$1"
+
+rcpt="$(awk '
+ match($0, /^(To|Cc|Bcc): (.*)$/, m) {
+ split(m[2], tos, ",")
+ for (i in tos) {
+ print "--mail-rcpt " tos[i]
+ }
+ }
+' "$F")"
+
+if grep -qE '^From: .*<addr@server1\.org>$' "$F"; then
+ curl \
+ -s \
+ --url smtp://smtp.server1.org:587 \
+ --ssl-reqd \
+ --mail-from addr@server1.org \
+ $rcpt \
+ --user 'addr@server1.org:my-long-and-secure-passphrase' \
+ --upload-file "$F"
+elif grep -qE '^From: .*<addr@server2\.org>$' "$F"; then
+ curl \
+ -s \
+ --url smtp://smtp.server2.org:587 \
+ --ssl-reqd \
+ --mail-from addr@server2.org \
+ $rcpt \
+ --user 'addr@server2.org:my-long-and-secure-passphrase' \
+ --upload-file "$F"
+else
+ echo 'Bad "From: " address'
+ exit 1
+fi
+```
+
+Most of the curl flags used are self-explanatory, except for `$rcpt`.
+
+curl connects to the SMTP server, but doesn't set the recipient address by looking at the message.
+My solution was to generate the curl flags, store them in `$rcpt` and use it unquoted to leverage shell word splitting.
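+
+A tiny illustration of the word splitting at play, with made-up values:
+
+```shell
+rcpt='--mail-rcpt to@example.com
+--mail-rcpt cc@example.com'
+printf '%s\n' $rcpt     # unquoted: four separate arguments, one per word
+printf '%s\n' "$rcpt"   # quoted: a single argument, which curl wouldn't understand
+```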
+
+To me, the most interesting part was building the `$rcpt` flags.
+My first instinct was to try grep, but it alone couldn't extract and transform the matched addresses into the flags I needed.
+As I started to turn towards sed, I envisioned needing something else to loop over the sed output, and I then moved to Awk.
+
+In the short Awk snippet, 3 things were new to me: `match(...)`, `split(...)` and the `for () {}` loop.
+The only other function I had ever used was `gsub(...)`, but these new ones felt similar enough that I could almost guess their behaviour and arguments.
+`match(...)` stores the matches of a regex on the given array positionally, and `split(...)` stores the chunks in the given array.
+
+I even did it incrementally:
+
+```shell
+$ H='To: to@example.com, to2@example.com\nCc: cc@example.com, cc2@example.com\nBcc: bcc@example.com,bcc2@example.com\n'
+$ printf "$H" | awk '/^To: .*$/ { print $0 }'
+To: to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m }'
+awk: cmd. line:1: (FILENAME=- FNR=1) fatal: attempt to use array `m' in a scalar context
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m[0] }'
+To: to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { print m[1] }'
+to@example.com, to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos }'
+awk: cmd. line:1: (FILENAME=- FNR=1) fatal: attempt to use array `tos' in a scalar context
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[0] }'
+
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[1] }'
+to@example.com,
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[2] }'
+to2@example.com
+$ printf "$H" | awk 'match($0, /^To: (.*)$/, m) { split(m[1], tos, " "); print tos[3] }'
+
+```
+
+(This isn't the verbatim interactive session, but a cleaned version to make it more readable.)
+
+At this point, I realized I needed a for loop over the `tos` array, and I moved the Awk snippet into the `~/bin/dispatch-email.sh`.
+I liked the final thing:
+
+```awk
+match($0, /^(To|Cc|Bcc): (.*)$/, m) {
+ split(m[2], tos, ",")
+ for (i in tos) {
+ print "--mail-rcpt " tos[i]
+ }
+}
+```
+
+As I learn more about Awk, I feel that it is too undervalued, as many people turn to Perl or other programming languages when Awk suffices.
+The advantage is pretty clear: writing programs that run on any POSIX system, without extra dependencies required.
+
+Coding to the standards is underrated.
+
+[neomutt]: https://neomutt.org/
+[curl-email]: https://blog.edmdesigner.com/send-email-from-linux-command-line/
+[curl]: https://curl.se/
diff --git a/src/content/tils/2021/01/17/posix-shebang.adoc b/src/content/tils/2021/01/17/posix-shebang.adoc
new file mode 100644
index 0000000..5f5b897
--- /dev/null
+++ b/src/content/tils/2021/01/17/posix-shebang.adoc
@@ -0,0 +1,55 @@
+---
+
+title: POSIX sh and shebangs
+
+date: 2021-01-17
+
+layout: post
+
+lang: en
+
+ref: posix-sh-and-shebangs
+
+---
+
+As I [keep moving][posix-awk-0] [towards POSIX][posix-awk-1], I'm in the process of migrating all my Bash scripts to POSIX sh.
+
+As I dropped `[[`, arrays and other Bashisms, I was left staring at the first line of every script, wondering what to do: what is the POSIX sh equivalent of `#!/usr/bin/env bash`?
+I already knew that POSIX says nothing about shebangs, and that the portable way to call a POSIX sh script is `sh script.sh`, but I didn't know what to do with that first line.
+
+What I had previously was:
+```shell
+#!/usr/bin/env bash
+set -Eeuo pipefail
+cd "$(dirname "${BASH_SOURCE[0]}")"
+```
+
+Obviously, the `$BASH_SOURCE` would be gone, and I would have to adapt some of my scripts to not rely on the script location.
+The `-E` and `-o pipefail` options were also gone, and would be replaced by nothing.
+
+I converted all of them to:
+```shell
+#!/bin/sh -eu
+```
+
+I moved the `-eu` options to the shebang line itself, striving for conciseness.
+But as I changed callers from `./script.sh` to `sh script.sh`, things started to fail.
+Some tests that should fail reported errors, but didn't return 1.
+
+My first reaction was to revert back to `./script.sh`, but the POSIX bug I caught is a strong strain, and when I went back to it, I figured out that the callers were missing some flags.
+Specifically, `sh -eu script.sh`.
+
+Then it clicked: when running with `sh script.sh`, the shebang line with the sh options is ignored, as it is a comment!
+
+Which means that the shebang most friendly with POSIX is:
+
+```shell
+#!/bin/sh
+set -eu
+```
+
+1. when running via `./script.sh`, if the system has an executable at `/bin/sh`, it will be used to run the script;
+2. when running via `sh script.sh`, the options aren't ignored, since they now live in the script body rather than only in the shebang line (see the sketch below).
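+
+A hypothetical way to see the difference is to print the shell's active options
+from inside a script (the exact output varies by implementation):
+
+```shell
+$ cat opts.sh
+#!/bin/sh -eu
+echo "$-"
+
+$ ./opts.sh    # the kernel passes "-eu" from the shebang to /bin/sh
+$ sh opts.sh   # the shebang is just a comment here; -e and -u are not set
+```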
+
+TIL.
+
+[posix-awk-0]: {% link _tils/2020-12-15-awk-snippet-shellcheck-all-scripts-in-a-repository.md %}
+[posix-awk-1]: {% link _tils/2021-01-12-awk-snippet-send-email-to-multiple-recipients-with-curl.md %}
diff --git a/src/content/tils/2021/04/24/cl-generic-precedence.adoc b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
new file mode 100644
index 0000000..8051232
--- /dev/null
+++ b/src/content/tils/2021/04/24/cl-generic-precedence.adoc
@@ -0,0 +1,137 @@
+---
+
+title: Common Lisp argument precedence order parameterization of a generic function
+
+date: 2021-04-24 2
+
+layout: post
+
+lang: en
+
+ref: common-lisp-argument-precedence-order-parameterization-of-a-generic-function
+
+---
+
+When CLOS dispatches a method, it picks the most specific method definition for the argument list:
+
+```lisp
+
+* (defgeneric a-fn (x))
+#<STANDARD-GENERIC-FUNCTION A-FN (0) {5815ACB9}>
+
+* (defmethod a-fn (x) :default-method)
+#<STANDARD-METHOD A-FN (T) {581DB535}>
+
+* (defmethod a-fn ((x number)) :a-number)
+#<STANDARD-METHOD A-FN (NUMBER) {58241645}>
+
+* (defmethod a-fn ((x (eql 1))) :number-1)
+#<STANDARD-METHOD A-FN ((EQL 1)) {582A7D75}>
+
+* (a-fn nil)
+:DEFAULT-METHOD
+
+* (a-fn "1")
+:DEFAULT-METHOD
+
+* (a-fn 0)
+:A-NUMBER
+
+* (a-fn 1)
+:NUMBER-1
+```
+
+CLOS uses a similar logic when choosing the method from parent classes, when multiple ones are available:
+
+```lisp
+* (defclass class-a () ())
+
+#<STANDARD-CLASS CLASS-A {583E0B25}>
+* (defclass class-b () ())
+
+#<STANDARD-CLASS CLASS-B {583E7F6D}>
+* (defgeneric another-fn (obj))
+
+#<STANDARD-GENERIC-FUNCTION ANOTHER-FN (0) {583DA749}>
+* (defmethod another-fn ((obj class-a)) :class-a)
+; Compiling LAMBDA (.PV-CELL. .NEXT-METHOD-CALL. OBJ):
+; Compiling Top-Level Form:
+
+#<STANDARD-METHOD ANOTHER-FN (CLASS-A) {584523C5}>
+* (defmethod another-fn ((obj class-b)) :class-b)
+; Compiling LAMBDA (.PV-CELL. .NEXT-METHOD-CALL. OBJ):
+; Compiling Top-Level Form:
+
+#<STANDARD-METHOD ANOTHER-FN (CLASS-B) {584B8895}>
+```
+
+Given the above definitions, when inheriting from `class-a` and `class-b`, the order of inheritance matters:
+
+```lisp
+* (defclass class-a-coming-first (class-a class-b) ())
+#<STANDARD-CLASS CLASS-A-COMING-FIRST {584BE6AD}>
+
+* (defclass class-b-coming-first (class-b class-a) ())
+#<STANDARD-CLASS CLASS-B-COMING-FIRST {584C744D}>
+
+* (another-fn (make-instance 'class-a-coming-first))
+:CLASS-A
+
+* (another-fn (make-instance 'class-b-coming-first))
+:CLASS-B
+```
+
+When combining the order of inheritance with generic functions that take multiple arguments, CLOS has to choose between competing method definitions, and its default strategy is to prioritize the arguments from left to right:
+
+```lisp
+* (defgeneric yet-another-fn (obj1 obj2))
+#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (0) {584D9EC9}>
+
+* (defmethod yet-another-fn ((obj1 class-a) obj2) :first-arg-specialized)
+#<STANDARD-METHOD YET-ANOTHER-FN (CLASS-A T) {5854269D}>
+
+* (defmethod yet-another-fn (obj1 (obj2 class-b)) :second-arg-specialized)
+#<STANDARD-METHOD YET-ANOTHER-FN (T CLASS-B) {585AAAAD}>
+
+* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
+:FIRST-ARG-SPECIALIZED
+```
+
+CLOS has to make a choice between the first and the second definition of `yet-another-fn`, but its choice is just a heuristic.
+What if we want the choice to be based on the second argument, instead of the first?
+
+For that, we use the `:argument-precedence-order` option when declaring a generic function:
+
+```lisp
+* (defgeneric yet-another-fn (obj1 obj2) (:argument-precedence-order obj2 obj1))
+#<STANDARD-GENERIC-FUNCTION YET-ANOTHER-FN (2) {584D9EC9}>
+
+* (yet-another-fn (make-instance 'class-a) (make-instance 'class-b))
+:SECOND-ARG-SPECIALIZED
+```
+
+I like that the `:argument-precedence-order` option exists.
+We shouldn't have to change the arguments from `(obj1 obj2)` to `(obj2 obj1)` just to make CLOS pick the method that we want.
+We can override the default behaviour when desired, and keep whatever argument order best fits the generic function.
+
+## Comparison with Clojure
+
+Clojure has an equivalent mechanism in `defmulti`.
+
+When declaring a multi-method with `defmulti`, we must provide the dispatch function, and Clojure uses its return value to pick the method definition.
+Since the dispatch function is always explicit, there is no need for a default behaviour such as left-to-right precedence.
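+
+As a minimal sketch of what that looks like (the names below are made up for illustration), dispatching on the second argument is just a matter of writing the dispatch function that way:
+
+```clojure
+;; Dispatch solely on the second argument, mirroring the
+;; :argument-precedence-order example above.
+(defmulti yet-another-fn (fn [obj1 obj2] (:kind obj2)))
+
+(defmethod yet-another-fn :class-b [obj1 obj2] :second-arg-specialized)
+(defmethod yet-another-fn :default [obj1 obj2] :default-method)
+
+(yet-another-fn {:kind :class-a} {:kind :class-b})
+;; => :second-arg-specialized
+```
+
+The trade-off is that every `defmulti` has to spell out its dispatch logic explicitly, while CLOS only asks for it when the default left-to-right precedence isn't what we want.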
+
+## Conclusion
+
+Making the argument precedence order configurable for generic functions but not for class definitions makes a lot of sense.
+
+When declaring a class, we can choose the precedence order by ordering the parent classes, and that is about it.
+But when defining a generic function, the order of arguments matters for the function's semantics, and the left-to-right argument precedence is just a default behaviour.
+
+One shouldn't change the order of arguments of a generic function just to tailor it to CLOS's priority ranking algorithm, but reordering the parent classes of a class definition is just fine.
+
+TIL.
+
+## References
+
+1. [Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS](https://en.wikipedia.org/wiki/Object-Oriented_Programming_in_Common_Lisp), by Sonja E. Keene
diff --git a/src/content/tils/2021/04/24/clojure-autocurry.adoc b/src/content/tils/2021/04/24/clojure-autocurry.adoc
new file mode 100644
index 0000000..c1e277f
--- /dev/null
+++ b/src/content/tils/2021/04/24/clojure-autocurry.adoc
@@ -0,0 +1,135 @@
+---
+
+title: Clojure auto curry
+
+date: 2021-04-24 1
+
+updated_at: 2021-04-27
+
+layout: post
+
+lang: en
+
+ref: clojure-auto-curry
+
+---
+
+Here's a simple macro defined by [Loretta He][lorettahe] to create Clojure functions that are curried on all arguments, relying on Clojure's multi-arity support:
+
+```clojure
+(defmacro defcurry
+ [name args & body]
+ (let [partials (map (fn [n]
+ `(~(subvec args 0 n) (partial ~name ~@(take n args))))
+ (range 1 (count args)))]
+ `(defn ~name
+ (~args ~@body)
+ ~@partials)))
+```
+
+A naive `add` definition, alongside its usage and macroexpansion:
+
+```clojure
+user=> (defcurry add
+ [a b c d e]
+         (+ a b c d e))
+#'user/add
+
+user=> (add 1)
+#object[clojure.core$partial$fn__5857 0x2c708440 "clojure.core$partial$fn__5857@2c708440"]
+
+user=> (add 1 2 3 4)
+#object[clojure.core$partial$fn__5863 0xf4c0e4e "clojure.core$partial$fn__5863@f4c0e4e"]
+
+user=> ((add 1) 2 3 4 5)
+15
+
+user=> (((add 1) 2 3) 4 5)
+15
+
+user=> (use 'clojure.pprint)
+nil
+
+user=> (pprint
+ (macroexpand
+ '(defcurry add
+ [a b c d e]
+            (+ a b c d e))))
+(def
+ add
+ (clojure.core/fn
+  ([a b c d e] (+ a b c d e))
+ ([a] (clojure.core/partial add a))
+ ([a b] (clojure.core/partial add a b))
+ ([a b c] (clojure.core/partial add a b c))
+ ([a b c d] (clojure.core/partial add a b c d))))
+nil
+```
+
+This simplistic `defcurry` definition doesn't support optional parameters, multi-arity, `&` rest arguments, docstrings, etc., but it could certainly evolve to do so.
+
+I like how short `defcurry` is, and how it hands off the responsibility for the multi-arity logic to Clojure's built-in multi-arity support.
+Simple and elegant.
+
+It's the same Clojure as before, now with auto-currying added via a macro.
+
+[lorettahe]: http://lorettahe.github.io/clojure/2016/09/22/clojure-auto-curry
+
+## Comparison with Common Lisp
+
+My attempt at writing an equivalent for Common Lisp gives me:
+
+```lisp
+(defun partial (fn &rest args)
+ (lambda (&rest args2)
+ (apply fn (append args args2))))
+
+(defun curry-n (n func)
+ (cond ((< n 0) (error "Too many arguments"))
+ ((zerop n) (funcall func))
+ (t (lambda (&rest rest)
+ (curry-n (- n (length rest))
+ (apply #'partial func rest))))))
+
+(defmacro defcurry (name args &body body)
+ `(defun ,name (&rest rest)
+ (let ((func (lambda ,args ,@body)))
+ (curry-n (- ,(length args) (length rest))
+ (apply #'partial func rest)))))
+```
+
+Without built-in multi-arity support, we have to do more work, like tracking the number of arguments consumed so far.
+We also have to write `#'partial` ourselves.
+That is, if we don't want to depend on any library and stick to ANSI Common Lisp.
+
+The usage is pretty similar:
+
+```lisp
+* (defcurry add (a b c d e)
+ (+ a b c d e))
+ADD
+
+* (add 1)
+#<FUNCTION (LAMBDA (&REST REST) :IN CURRY-N) {100216419B}>
+
+* (funcall (add 1) 2 3 4)
+#<FUNCTION (LAMBDA (&REST REST) :IN CURRY-N) {100216537B}>
+
+* (funcall (add 1) 2 3 4 5)
+15
+
+* (funcall (funcall (add 1) 2 3) 4 5)
+15
+
+* (macroexpand-1
+ '(defcurry add (a b c d e)
+ (+ a b c d e)))
+(DEFUN ADD (&REST REST)
+ (LET ((FUNC (LAMBDA (A B C D E) (+ A B C D E))))
+ (CURRY-N (- 5 (LENGTH REST)) (APPLY #'PARTIAL FUNC REST))))
+T
+```
+
+This also requires `funcall`s, since we return a `lambda` that doesn't live in the function namespace.
+
+Like the Clojure one, it doesn't support optional parameters, `&rest` arguments, docstrings, etc., but it could also evolve to do so.
diff --git a/src/content/tils/2021/04/24/scm-nif.adoc b/src/content/tils/2021/04/24/scm-nif.adoc
new file mode 100644
index 0000000..f53451b
--- /dev/null
+++ b/src/content/tils/2021/04/24/scm-nif.adoc
@@ -0,0 +1,63 @@
+---
+
+title: Three-way conditional for number signs on Lisp
+
+date: 2021-04-24 3
+
+updated_at: 2021-08-14
+
+layout: post
+
+lang: en
+
+ref: three-way-conditional-for-number-signs-on-lisp
+
+---
+
+A useful macro from Paul Graham's [On Lisp][on-lisp] book:
+
+```lisp
+(defmacro nif (expr pos zero neg)
+ (let ((g (gensym)))
+ `(let ((,g ,expr))
+ (cond ((plusp ,g) ,pos)
+ ((zerop ,g) ,zero)
+ (t ,neg)))))
+```
+
+After I looked at this macro, I started seeing opportunities to use it in many places, and yet I didn't see anyone else using it.
+
+The latest example I can think of is section 1.3.3 of [Structure and Interpretation of Computer Programs][sicp], which I was reading recently:
+
+```scheme
+(define (search f neg-point pos-point)
+ (let ((midpoint (average neg-point pos-point)))
+    (if (close-enough? neg-point pos-point)
+ midpoint
+ (let ((test-value (f midpoint)))
+ (cond ((positive? test-value)
+ (search f neg-point midpoint))
+ ((negative? test-value)
+ (search f midpoint pos-point))
+ (else midpoint))))))
+```
+
+Not that the book should introduce such a macro this early, but I couldn't help feeling bothered by not using the `nif` macro, which could even remove the need for the intermediate `test-value` variable:
+
+```scheme
+(define (search f neg-point pos-point)
+ (let ((midpoint (average neg-point pos-point)))
+    (if (close-enough? neg-point pos-point)
+ midpoint
+ (nif (f midpoint)
+ (search f neg-point midpoint)
+         midpoint
+ (search f midpoint pos-point)))))
+```
+
+It also avoids `cond`'s extra clunky parentheses for grouping, which are unnecessary here but come built into its syntax.
+
+As a macro, I personally feel it tilts the balance towards expressiveness, despite the extra cognitive load it adds.
+
+[on-lisp]: http://www.paulgraham.com/onlisptext.html
+[sicp]: https://mitpress.mit.edu/sites/default/files/sicp/index.html
diff --git a/src/content/tils/2021/07/23/git-tls-gpg.adoc b/src/content/tils/2021/07/23/git-tls-gpg.adoc
new file mode 100644
index 0000000..fd42c1c
--- /dev/null
+++ b/src/content/tils/2021/07/23/git-tls-gpg.adoc
@@ -0,0 +1,56 @@
+---
+
+title: GPG verification of Git repositories without TLS
+
+date: 2021-07-23
+
+layout: post
+
+lang: en
+
+ref: gpg-verification-of-git-repositories-without-tls
+
+---
+
+For online Git repositories that use the [Git Protocol] for serving code, you
+can use GPG to handle authentication, if you have the committer's public key.
+
+Here's how I'd verify that I've cloned an authentic version of
+[remembering][remembering][^not-available]:
+
+[^not-available]: Funnily enough, not available anymore via the Git Protocol, now only with HTTPS.
+
+```shell
+$ wget -qO- https://euandre.org/public.asc | gpg --import -
+gpg: key 81F90EC3CD356060: "EuAndreh <eu@euandre.org>" not changed
+gpg: Total number processed: 1
+gpg:              unchanged: 1
+$ pushd `mktemp -d`
+$ git clone git://euandreh.xyz/remembering .
+$ git verify-commit HEAD
+gpg: Signature made Sun 27 Jun 2021 16:50:21 -03
+gpg:                using RSA key 5BDAE9B8B2F6C6BCBB0D6CE581F90EC3CD356060
+gpg: Good signature from "EuAndreh <eu@euandre.org>" [ultimate]
+```
+
+On the first line we import the public key (funnily enough, available via
+HTTPS), and after cloning the code via the insecure `git://` protocol, we use
+`git verify-commit` to check the signature.
+
+The verification is successful, and we can see that the key fingerprint in the
+signature matches the fingerprint of the imported key. However,
+`git verify-commit` doesn't have an option to say which public key the commit
+should be verified against. This means that if a MITM attack happens, the
+attacker could very easily serve a malicious repository with commits signed by
+a different key, and you'd have to check the key fingerprint by yourself. The
+same would need to happen for subsequent fetches, too.
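+
+One way of doing that check by hand could be something like the sketch below
+(just an illustration, assuming a reasonably recent Git): it uses the `%GF`
+format placeholder of `git log` to get the fingerprint of the key that signed
+`HEAD`, and compares it against the fingerprint we expect, which is the one
+imported above:
+
+```shell
+# fingerprint of the key we imported and trust
+expected=5BDAE9B8B2F6C6BCBB0D6CE581F90EC3CD356060
+# fingerprint of the key that actually signed HEAD
+actual="$(git log -1 --format='%GF' HEAD)"
+if [ "$actual" != "$expected" ]; then
+  echo 'HEAD was signed by an unexpected key!' >&2
+  exit 1
+fi
+```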
+
+Even though this is possible, it is not very convenient, and it is certainly
+brittle. The Git Protocol is much faster, but being so much harder to secure is
+a big downside.
+
+
+
+[Git Protocol]: https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols#_the_git_protocol
+[remembering]: https://euandreh.xyz/remembering/
diff --git a/src/content/tils/2021/08/11/js-bigint-reviver.adoc b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
new file mode 100644
index 0000000..d71174d
--- /dev/null
+++ b/src/content/tils/2021/08/11/js-bigint-reviver.adoc
@@ -0,0 +1,100 @@
+---
+
+title: Encoding and decoding JavaScript BigInt values with reviver
+
+date: 2021-08-11
+
+updated_at: 2021-08-13
+
+layout: post
+
+lang: en
+
+ref: encoding-and-decoding-javascript-bigint-values-with-reviver
+
+---
+
+`JSON.parse()` accepts a second parameter: a [`reviver()` function][reviver],
+which can be used to transform the JSON values as they're being parsed.
+
+[reviver]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#using_the_reviver_parameter
+
+As it turns out, when combined with JavaScript's [`BigInt`] type, it lets you
+parse and encode JavaScript `BigInt` numbers via JSON:
+
+[`BigInt`]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt
+
+```javascript
+const bigIntReviver = (_, value) =>
+ typeof value === "string" && value.match(/^-?[0-9]+n$/)
+ ? BigInt(value.slice(0, value.length - 1))
+ : value;
+```
+
+I chose to interpret strings that contain only digits and an ending `n` suffix
+as `BigInt` values, similar to how JavaScript interprets `123` (a number)
+differently from `123n` (a `bigint`).
+
+We do those checks before constructing the `BigInt` to avoid throwing needless
+exceptions and catching them in the parsing function, as this could easily
+become a bottleneck when parsing large JSON values.
+
+In order to do the full roundtrip, we now only need the `toJSON()` counterpart:
+
+```javascript
+BigInt.prototype.toJSON = function() {
+ return this.toString() + "n";
+};
+```
+
+With both `bigIntReviver` and `toJSON` defined, we can now successfully parse
+and encode JavaScript objects with `BigInt` values transparently:
+
+```javascript
+const s = `[
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ "a string",
+ { "a-number": "-123" },
+ { "a-bigint": "-123n" }
+]`;
+
+const parsed = JSON.parse(s, bigIntReviver);
+const s2 = JSON.stringify(parsed);
+
+console.log(parsed);
+console.log(s2);
+
+console.log(typeof parsed[6]["a-number"])
+console.log(typeof parsed[7]["a-bigint"])
+```
+
+The output of the above is:
+
+```
+[
+ null,
+ true,
+ false,
+ -1,
+ 3.14,
+ 'a string',
+ { 'a-number': '-123' },
+ { 'a-bigint': -123n }
+]
+[null,true,false,-1,3.14,"a string",{"a-number":"-123"},{"a-bigint":"-123n"}]
+string
+bigint
+```
+
+If you're on a web browser, you can probably try copying and pasting the above
+code into the console right now, as is.
+
+Even though [`JSON`] doesn't include a `BigInt` number type, encoding and
+decoding them as strings is quite trivial in JavaScript.
+
+[`JSON`]: https://datatracker.ietf.org/doc/html/rfc8259
diff --git a/src/content/tils/index.adoc b/src/content/tils/index.adoc
new file mode 100644
index 0000000..4ae3b92
--- /dev/null
+++ b/src/content/tils/index.adoc
@@ -0,0 +1 @@
+= TIL